dnl  AMD K7 mpn_divrem_1 -- mpn by limb division.
dnl
dnl  K7: 17.0 cycles/limb integer part, 15.0 cycles/limb fraction part.

dnl  Copyright (C) 1999, 2000 Free Software Foundation, Inc.
dnl
dnl  This file is part of the GNU MP Library.
dnl
dnl  The GNU MP Library is free software; you can redistribute it and/or
dnl  modify it under the terms of the GNU Lesser General Public License as
dnl  published by the Free Software Foundation; either version 2.1 of the
dnl  License, or (at your option) any later version.
dnl
dnl  The GNU MP Library is distributed in the hope that it will be useful,
dnl  but WITHOUT ANY WARRANTY; without even the implied warranty of
dnl  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
dnl  Lesser General Public License for more details.
dnl
dnl  You should have received a copy of the GNU Lesser General Public
dnl  License along with the GNU MP Library; see the file COPYING.LIB.  If
dnl  not, write to the Free Software Foundation, Inc., 59 Temple Place -
dnl  Suite 330, Boston, MA 02111-1307, USA.

include(`../config.m4')


C mp_limb_t mpn_divrem_1 (mp_ptr dst, mp_size_t xsize,
C                         mp_srcptr src, mp_size_t size,
C                         mp_limb_t divisor);
C mp_limb_t mpn_divrem_1c (mp_ptr dst, mp_size_t xsize,
C                          mp_srcptr src, mp_size_t size,
C                          mp_limb_t divisor, mp_limb_t carry);
C
C The method and nomenclature follow part 8 of "Division by Invariant
C Integers using Multiplication" by Granlund and Montgomery, reference in
C gmp.texi.
C
C The "and"s shown in the paper are done here with "cmov"s.  "m" is written
C for m', and "d" for d_norm, which won't cause any confusion since it's
C only the normalized divisor that's of any use in the code.  "b" is written
C for 2^N, the size of a limb, N being 32 here.
C
C mpn_divrem_1 avoids one division if the src high limb is less than the
C divisor.  mpn_divrem_1c doesn't check for a zero carry, since in normal
C circumstances that will be a very rare event.
C
C There's a small bias towards expecting xsize==0, by having code for
C xsize==0 in a straight line and xsize!=0 under forward jumps.


dnl  MUL_THRESHOLD is the value of xsize+size at which the multiply by
dnl  inverse method is used, rather than plain "divl"s.  Minimum value 1.
dnl
dnl  The inverse takes about 50 cycles to calculate, but after that the
dnl  multiply is 17 c/l versus division at 42 c/l.
dnl
dnl  At 3 limbs the mul is a touch faster than div on the integer part, and
dnl  even more so on the fractional part.
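
C For reference, the following plain C fragment computes the same thing as
C these routines (an illustrative sketch only, not part of the build or the
C GMP sources; it uses the gmp.h types from the prototypes above, with
C unsigned long long standing in for the edx:eax two-limb values):
C
C	mp_limb_t
C	divrem_1_ref (mp_ptr dst, mp_size_t xsize,
C	              mp_srcptr src, mp_size_t size, mp_limb_t divisor)
C	{
C	  mp_limb_t           r = 0;  /* the carry parameter, for _1c */
C	  unsigned long long  n;
C	  mp_size_t           i;
C
C	  for (i = size-1; i >= 0; i--)     /* integer part */
C	    {
C	      n = ((unsigned long long) r << 32) | src[i];
C	      dst[xsize+i] = n / divisor;
C	      r = n % divisor;
C	    }
C	  for (i = xsize-1; i >= 0; i--)    /* fraction part, zero limbs */
C	    {
C	      n = (unsigned long long) r << 32;
C	      dst[i] = n / divisor;
C	      r = n % divisor;
C	    }
C	  return r;
C	}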
deflit(MUL_THRESHOLD, 3)

defframe(PARAM_CARRY,  24)
defframe(PARAM_DIVISOR,20)
defframe(PARAM_SIZE,   16)
defframe(PARAM_SRC,    12)
defframe(PARAM_XSIZE,   8)
defframe(PARAM_DST,     4)

defframe(SAVE_EBX,     -4)
defframe(SAVE_ESI,     -8)
defframe(SAVE_EDI,    -12)
defframe(SAVE_EBP,    -16)
defframe(VAR_NORM,    -20)
defframe(VAR_INVERSE, -24)
defframe(VAR_SRC,     -28)
defframe(VAR_DST,     -32)
defframe(VAR_DST_STOP,-36)

deflit(STACK_SPACE, 36)

	.text
	ALIGN(32)

PROLOGUE(mpn_divrem_1c)
deflit(`FRAME',0)
	movl	PARAM_CARRY, %edx
	movl	PARAM_SIZE, %ecx
	subl	$STACK_SPACE, %esp
deflit(`FRAME',STACK_SPACE)

	movl	%ebx, SAVE_EBX
	movl	PARAM_XSIZE, %ebx

	movl	%edi, SAVE_EDI
	movl	PARAM_DST, %edi

	movl	%ebp, SAVE_EBP
	movl	PARAM_DIVISOR, %ebp

	movl	%esi, SAVE_ESI
	movl	PARAM_SRC, %esi

	leal	-4(%edi,%ebx,4), %edi
	jmp	LF(mpn_divrem_1,start_1c)

EPILOGUE()


	C offset 0x31, close enough to aligned
PROLOGUE(mpn_divrem_1)
deflit(`FRAME',0)

	movl	PARAM_SIZE, %ecx
	movl	$0, %edx		C initial carry (if can't skip a div)
	subl	$STACK_SPACE, %esp
deflit(`FRAME',STACK_SPACE)

	movl	%ebp, SAVE_EBP
	movl	PARAM_DIVISOR, %ebp

	movl	%ebx, SAVE_EBX
	movl	PARAM_XSIZE, %ebx

	movl	%esi, SAVE_ESI
	movl	PARAM_SRC, %esi
	orl	%ecx, %ecx

	movl	%edi, SAVE_EDI
	movl	PARAM_DST, %edi
	leal	-4(%edi,%ebx,4), %edi	C &dst[xsize-1]

	jz	L(no_skip_div)
	movl	-4(%esi,%ecx,4), %eax	C src high limb

	cmpl	%ebp, %eax		C one less div if high<divisor

dnl  ...

	C ... xsize+size>=MUL_THRESHOLD, so with size==0 then
	C must have xsize!=0
	jmp	L(fraction_some)


C -----------------------------------------------------------------------------
C
C The multiply by inverse loop is 17 cycles, and relies on some out-of-order
C execution.  The instruction scheduling is important, with various
C apparently equivalent forms running 1 to 5 cycles slower.
C
C A lower bound for the time would seem to be 16 cycles, based on the
C following successive dependencies.
C
C			cycles
C	n2+n1		  1
C	mul		  6
C	q1+1		  1
C	mul		  6
C	sub		  1
C	addback		  1
C			 ---
C			 16
C
C This chain is what the loop has already, but 16 cycles isn't achieved.
C K7 has enough decode, and probably enough execute (depending maybe on what
C a mul actually consumes), but nothing running under 17 has been found.
C
C In theory n2+n1 could be done in the sub and addback stages (by
C calculating both n2 and n2+n1 there), but lack of registers makes this an
C unlikely proposition.
C
C The jz in the loop keeps the q1+1 stage to 1 cycle.  Handling an overflow
C from q1+1 with an "sbbl $0, %ebx" would add a cycle to the dependent
C chain, and nothing better than 18 cycles has been found when using it.
C The jump is taken only when q1 is 0xFFFFFFFF, and on random data this will
C be an extremely rare event.
C
C Branch mispredictions will hit random occurrences of q1==0xFFFFFFFF, but
C if some special data is coming out with this always, the q1_ff special
C case actually runs at 15 c/l.  0x2FFF...FFFD divided by 3 is a good way to
C induce the q1_ff case, for speed measurements or testing.  Note that
C 0xFFF...FFF divided by 1 or 2 doesn't induce it.
C
C The instruction groupings and empty comments show the cycles for a naive
C in-order view of the code (conveniently ignoring the load latency on
C VAR_INVERSE).  This shows some of where the time is going, but is nonsense
C to the extent that out-of-order execution rearranges it.  In this case
C there's 19 cycles shown, but it executes at 17.
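
C As a plain C sketch, one trip around the loop below computes the
C following (illustrative only, transcribing the part 8 formulas and the
C register comments; mp_limb_t values wrap mod b, and unsigned long long
C stands in for the edx:eax pairs):
C
C	mp_limb_t           n1, nadj, q1, q, r;
C	unsigned long long  n, t;
C
C	n1   = n10 >> 31;                  /* high bit of n10 */
C	nadj = n10 + (n1 ? d : 0);         /* n10 + (-n1 & d), mod b */
C	q1   = n2 + (mp_limb_t)
C	       (((unsigned long long) m * (n2 + n1) + nadj) >> 32);
C
C	/* q1+1 wrapping to 0 is the rare jz L(q1_ff) case */
C	n = ((unsigned long long) n2 << 32) | n10;
C	t = (unsigned long long) (q1 + 1) * d;
C	if (n < t)                 /* underflow from using q1+1 */
C	  { q = q1;      r = (mp_limb_t) (n - t) + d; }
C	else
C	  { q = q1 + 1;  r = (mp_limb_t) (n - t); }
C
C	*dst-- = q;
C	n2 = r;                    /* remainder -> n2 for the next limb */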
	ALIGN(16)
L(integer_top):
	C eax	scratch
	C ebx	scratch (nadj, q1)
	C ecx	scratch (src, dst)
	C edx	scratch
	C esi	n10
	C edi	n2
	C ebp	divisor
	C
	C mm0	scratch (src qword)
	C mm7	rshift for normalization

	cmpl	$0x80000000, %esi  C n1 as 0=c, 1=nc
	movl	%edi, %eax         C n2
	movl	VAR_SRC, %ecx

	leal	(%ebp,%esi), %ebx
	cmovc(	%esi, %ebx)	   C nadj = n10 + (-n1 & d), ignoring overflow
	sbbl	$-1, %eax          C n2+n1

	mull	VAR_INVERSE        C m*(n2+n1)

	movq	(%ecx), %mm0       C next limb and the one below it
	subl	$4, %ecx
	movl	%ecx, VAR_SRC

	C

	addl	%ebx, %eax         C m*(n2+n1) + nadj, low giving carry flag
	leal	1(%edi), %ebx      C n2<<32 + m*(n2+n1)
	movl	%ebp, %eax	   C d

	C

	adcl	%edx, %ebx         C 1 + high(n2<<32 + m*(n2+n1) + nadj) = q1+1
	jz	L(q1_ff)
	movl	VAR_DST, %ecx

	mull	%ebx		   C (q1+1)*d

	psrlq	%mm7, %mm0

	leal	-4(%ecx), %ecx

	C

	subl	%eax, %esi
	movl	VAR_DST_STOP, %eax

	C

	sbbl	%edx, %edi	   C n - (q1+1)*d
	movl	%esi, %edi	   C remainder -> n2
	leal	(%ebp,%esi), %edx

	movd	%mm0, %esi
	cmovc(	%edx, %edi)	   C n - q1*d if underflow from using q1+1
	sbbl	$0, %ebx	   C q
	cmpl	%eax, %ecx

	movl	%ebx, (%ecx)
	movl	%ecx, VAR_DST
	jne	L(integer_top)


L(integer_loop_done):


C -----------------------------------------------------------------------------
C
C Here, and in integer_one_left below, an sbbl $0 is used rather than a jz
C q1_ff special case.  This makes the code a bit smaller and simpler, and
C costs only 1 cycle (each).

L(integer_two_left):
	C eax	scratch
	C ebx	scratch (nadj, q1)
	C ecx	scratch (src, dst)
	C edx	scratch
	C esi	n10
	C edi	n2
	C ebp	divisor
	C
	C mm0	src limb, shifted
	C mm7	rshift

	cmpl	$0x80000000, %esi  C n1 as 0=c, 1=nc
	movl	%edi, %eax         C n2
	movl	PARAM_SRC, %ecx

	leal	(%ebp,%esi), %ebx
	cmovc(	%esi, %ebx)	   C nadj = n10 + (-n1 & d), ignoring overflow
	sbbl	$-1, %eax          C n2+n1

	mull	VAR_INVERSE        C m*(n2+n1)

	movd	(%ecx), %mm0       C src low limb
	movl	VAR_DST_STOP, %ecx

	C

	addl	%ebx, %eax         C m*(n2+n1) + nadj, low giving carry flag
	leal	1(%edi), %ebx      C n2<<32 + m*(n2+n1)
	movl	%ebp, %eax	   C d

	adcl	%edx, %ebx         C 1 + high(n2<<32 + m*(n2+n1) + nadj) = q1+1

	sbbl	$0, %ebx

	mull	%ebx		   C (q1+1)*d

	psllq	$32, %mm0

	psrlq	%mm7, %mm0

	C

	subl	%eax, %esi

	C

	sbbl	%edx, %edi	   C n - (q1+1)*d
	movl	%esi, %edi	   C remainder -> n2
	leal	(%ebp,%esi), %edx

	movd	%mm0, %esi
	cmovc(	%edx, %edi)	   C n - q1*d if underflow from using q1+1
	sbbl	$0, %ebx	   C q

	movl	%ebx, -4(%ecx)


C -----------------------------------------------------------------------------
L(integer_one_left):
	C eax	scratch
	C ebx	scratch (nadj, q1)
	C ecx	dst
	C edx	scratch
	C esi	n10
	C edi	n2
	C ebp	divisor
	C
	C mm0	src limb, shifted
	C mm7	rshift

	movl	VAR_DST_STOP, %ecx

	cmpl	$0x80000000, %esi  C n1 as 0=c, 1=nc
	movl	%edi, %eax         C n2

	leal	(%ebp,%esi), %ebx
	cmovc(	%esi, %ebx)	   C nadj = n10 + (-n1 & d), ignoring overflow
	sbbl	$-1, %eax          C n2+n1

	mull	VAR_INVERSE        C m*(n2+n1)

	C

	C

	C

	addl	%ebx, %eax         C m*(n2+n1) + nadj, low giving carry flag
	leal	1(%edi), %ebx      C n2<<32 + m*(n2+n1)
	movl	%ebp, %eax	   C d

	C

	adcl	%edx, %ebx         C 1 + high(n2<<32 + m*(n2+n1) + nadj) = q1+1

	sbbl	$0, %ebx	   C q1 if q1+1 overflowed

	mull	%ebx

	C

	C

	C

	subl	%eax, %esi

	C

	sbbl	%edx, %edi	   C n - (q1+1)*d
	movl	%esi, %edi	   C remainder -> n2
	leal	(%ebp,%esi), %edx

	cmovc(	%edx, %edi)	   C n - q1*d if underflow from using q1+1
	sbbl	$0, %ebx	   C q

	movl	%ebx, -8(%ecx)
	subl	$8, %ecx


L(integer_none):
	cmpl	$0, PARAM_XSIZE
	jne	L(fraction_some)

	movl	%edi, %eax
L(fraction_done):
	movl	VAR_NORM, %ecx
	movl	SAVE_EBP, %ebp

	movl	SAVE_EDI, %edi
	movl	SAVE_ESI, %esi

	movl	SAVE_EBX, %ebx
	addl	$STACK_SPACE, %esp

	shrl	%cl, %eax
	emms

	ret
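
C In C terms, the q1_ff special case below amounts to the following
C (illustrative sketch, same conventions as the fragment before the loop;
C n10+d is taken mod b, which the leal does naturally):
C
C	if ((mp_limb_t) (q1 + 1) == 0)    /* q1 == 0xFFFFFFFF */
C	  {
C	    *dst-- = 0xFFFFFFFF;          /* q = q1 */
C	    n2 = n10 + d;                 /* n - q*d, since low(q*d) = -d */
C	  }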
C -----------------------------------------------------------------------------
C
C Special case for q1=0xFFFFFFFF, giving q=0xFFFFFFFF meaning the low dword
C of q*d is simply -d and the remainder n-q*d = n10+d

L(q1_ff):
	C eax	(divisor)
	C ebx	(q1+1 == 0)
	C ecx
	C edx
	C esi	n10
	C edi	n2
	C ebp	divisor

	movl	VAR_DST, %ecx
	movl	VAR_DST_STOP, %edx
	subl	$4, %ecx

	psrlq	%mm7, %mm0
	leal	(%ebp,%esi), %edi	C n-q*d remainder -> next n2
	movl	%ecx, VAR_DST

	movd	%mm0, %esi		C next n10

	movl	$-1, (%ecx)
	cmpl	%ecx, %edx
	jne	L(integer_top)

	jmp	L(integer_loop_done)


C -----------------------------------------------------------------------------
C
C In the fractional part the "source" limbs are all zero, meaning n10=0,
C n1=0, and hence nadj=0, allowing many instructions to be eliminated.
C
C The loop runs at 15 cycles.  The dependent chain is the same as in the
C general case above, but without the n2+n1 stage (due to n1==0), so 15
C would seem to be the lower bound.
C
C A not entirely obvious simplification is that q1+1 never overflows a limb,
C and so there's no need for the sbbl $0 or jz q1_ff from the general case.
C q1 is the high word of m*n2+b*n2 and the following shows q1<=b-2 always.
C rnd() means rounding down to a multiple of d.
C
C	m*n2 + b*n2 <= m*(d-1) + b*(d-1)
C		     = m*d + b*d - m - b
C		     = floor((b(b-d)-1)/d)*d + b*d - m - b
C		     = rnd(b(b-d)-1) + b*d - m - b
C		     = rnd(b(b-d)-1 + b*d) - m - b
C		     = rnd(b*b-1) - m - b
C		     <= (b-2)*b
C
C Unchanged from the general case is that the final quotient limb q can be
C either q1 or q1+1, and the q1+1 case occurs often.  This can be seen from
C equation 8.4 of the paper, which simplifies as follows when n1==0 and
C n0==0.
C
C	n-q1*d = (n2*k+q0*d)/b <= d + (d*d-2d)/b
C
C As before, the instruction groupings and empty comments show a naive
C in-order view of the code, which is made nonsense by out-of-order
C execution.  There's 17 cycles shown, but it executes at 15.
C
C Rotating the store of q and the remainder->n2 instructions up to the top
C of the loop gets the run time down from 16 to 15.

	ALIGN(16)
L(fraction_some):
	C eax
	C ebx
	C ecx
	C edx
	C esi
	C edi	carry
	C ebp	divisor

	movl	PARAM_DST, %esi
	movl	VAR_DST_STOP, %ecx
	movl	%edi, %eax

	subl	$8, %ecx

	jmp	L(fraction_entry)


	ALIGN(16)
L(fraction_top):
	C eax	n2 carry, then scratch
	C ebx	scratch (nadj, q1)
	C ecx	dst, decrementing
	C edx	scratch
	C esi	dst stop point
	C edi	(will be n2)
	C ebp	divisor

	movl	%ebx, (%ecx)	C previous q
	movl	%eax, %edi	C remainder->n2

L(fraction_entry):
	mull	VAR_INVERSE	C m*n2

	movl	%ebp, %eax	C d
	subl	$4, %ecx	C dst
	leal	1(%edi), %ebx

	C

	C

	C

	C

	addl	%edx, %ebx	C 1 + high(n2<<32 + m*n2) = q1+1

	mull	%ebx		C (q1+1)*d

	C

	C

	C

	negl	%eax		C low of n - (q1+1)*d

	C

	sbbl	%edx, %edi	C high of n - (q1+1)*d, caring only about carry
	leal	(%ebp,%eax), %edx

	cmovc(	%edx, %eax)	C n - q1*d if underflow from using q1+1

	sbbl	$0, %ebx	C q

	cmpl	%esi, %ecx
	jne	L(fraction_top)

	movl	%ebx, (%ecx)
	jmp	L(fraction_done)

EPILOGUE()
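
C One step of the fraction loop above, as a plain C sketch (illustrative
C only, same conventions as the earlier fragments; n10 and nadj are zero
C here, so the n2+n1 and nadj stages drop out):
C
C	mp_limb_t           q1p1, q, r;   /* q1p1 holds q1+1 */
C	unsigned long long  n, t;
C
C	q1p1 = n2 + 1 + (mp_limb_t) (((unsigned long long) m * n2) >> 32);
C	                                  /* never wraps, since q1 <= b-2 */
C	n = (unsigned long long) n2 << 32;        /* n10 == 0 */
C	t = (unsigned long long) q1p1 * d;
C	if (n < t)                        /* underflow from using q1+1 */
C	  { q = q1p1 - 1;  r = (mp_limb_t) (n - t) + d; }
C	else
C	  { q = q1p1;      r = (mp_limb_t) (n - t); }
C
C	*dst-- = q;
C	n2 = r;                           /* feeds the next fraction limb */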