Monthly Archives: November 2013

Adventures of a Programmer: Parser Writing Peril XVI

Back to the experiment with balancing the multiplicands, which failed despite the advantage the theory promised. And if the margin of this blog post were wider, I could even prove that it is better!
*stomps foot*
And it is better! All I had done was comment out the wrong line: instead of getting two distinct benchmarks, the second benchmark was the sum of both, which made me think that balancing needs about twice the time of normal multiplication.
Moral of the story: if some results look suspicious, they mostly are. Or would you buy a machine that generates energy for free?

So, how does the real benchmark look?
Well, different 😉

The method is as described before, but let me talk a bit about the numbers to be tested.
It should be obvious that balancing makes sense only for numbers large enough to pass the cut-off points of the Toom-Cook algorithms (T-C 2 {Karatsuba} and T-C 3 are implemented in libtommath); otherwise it would just slow the multiplication down: costly overhead without any effect. The cut-off points differ from architecture to architecture; mine are (in mp_digits): TC2 = 48, TC3 = 190, and for the very big numbers FFT = 4000 (which is oversimplified, but I am working on it).
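For orientation, this is roughly the shape of the size dispatch, a sketch with my measured cut-offs hard-coded; the real libtommath keeps them in the globals KARATSUBA_MUL_CUTOFF and TOOM_MUL_CUTOFF, so take this as an illustration, not the library’s code:

static int mul_dispatch(mp_int *a, mp_int *b, mp_int *c)
{
  int min_used = MIN(a->used, b->used);

  if (min_used >= 190) {          /* TC3 cut-off on my machine */
    return mp_toom_mul(a, b, c);
  }
  if (min_used >= 48) {           /* TC2 (Karatsuba) cut-off */
    return mp_karatsuba_mul(a, b, c);
  }
  return s_mp_mul(a, b, c);       /* schoolbook multiplication */
}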
The tests for the small numbers run 1,000 times each. Time is in seconds.


Number Pair Normal Multiplication Balanced Multiplication
50 * 100 0.04 0.07
100 * 150 0.14 0.15
100 * 200 0.18 0.19
150 * 300 0.35 0.34
100 * 400 0.37 0.44
200 * 400 0.79 0.61
300 * 400 0.98 0.91
150 * 500 0.62 0.63
250 * 500 1.13 0.81
350 * 500 1.38 1.19
400 * 500 1.35 1.28
450 * 500 1.30 1.30
50 * 600 0.53 0.54
100 * 600 1.00 0.60
150 * 600 1.42 0.75
200 * 600 1.31 1.12
250 * 600 1.44 1.14
300 * 600 1.78 1.37
350 * 600 1.81 1.45
400 * 600 1.83 1.78
450 * 600 1.86 1.57
500 * 600 1.74 1.68
550 * 600 1.62 1.88
50 * 700 0.61 0.63
100 * 700 1.18 1.21
150 * 700 1.69 0.97
200 * 700 2.83 1.39
250 * 700 3.23 1.55
300 * 700 2.16 1.91
350 * 700 2.22 1.44
400 * 700 2.34 1.87
450 * 700 2.35 2.06
500 * 700 2.32 2.27
550 * 700 2.26 2.09
600 * 700 2.68 2.76
650 * 700 2.39 2.53
50 * 800 0.69 0.74
100 * 800 1.36 1.42
150 * 800 1.95 1.91
200 * 800 3.34 1.70
250 * 800 3.84 1.90
300 * 800 4.46 2.35
350 * 800 3.18 2.25
400 * 800 2.85 1.70
450 * 800 2.91 2.22
500 * 800 2.93 2.60
550 * 800 2.88 2.70
600 * 800 3.73 3.07
650 * 800 3.51 3.51
700 * 800 3.16 3.37
750 * 800 2.82 2.96
50 * 900 0.78 0.83
100 * 900 1.54 1.57
150 * 900 2.21 2.15
200 * 900 3.94 3.43
250 * 900 4.48 2.20
300 * 900 5.29 2.64
350 * 900 5.72 2.42
400 * 900 6.00 2.29
450 * 900 6.20 2.01
500 * 900 3.58 2.64
550 * 900 3.55 2.99
600 * 900 4.91 3.56
650 * 900 4.79 3.76
700 * 900 4.33 5.07
750 * 900 4.07 4.21
800 * 900 3.73 4.07
850 * 900 3.68 3.89
50 * 1000 0.87 0.93
100 * 1000 1.72 1.78
150 * 1000 2.50 2.49
200 * 1000 4.35 4.01
250 * 1000 5.12 4.38
300 * 1000 5.99 3.03
350 * 1000 6.55 3.07
400 * 1000 6.96 2.80
450 * 1000 7.25 2.62
500 * 1000 7.44 2.32
550 * 1000 7.57 2.96
600 * 1000 5.86 3.64
650 * 1000 5.74 3.99
700 * 1000 5.55 4.40
750 * 1000 5.35 6.00
800 * 1000 5.08 6.12
850 * 1000 5.08 5.26
900 * 1000 4.87 5.01
950 * 1000 4.28 4.51

Some points are more off than others; the reason might lie in the actual numbers, which are produced with a cheap PRNG and reused over the whole loop. Let me repeat the last round ([50,950] * 1,000) with a different number each time. (Generating thousands of large numbers takes some time, but we can ignore it; it is the same for both.)

Number Pair Normal Multiplication Balanced Multiplication
50 * 1000 5.03 3.87
100 * 1000 5.05 3.84
150 * 1000 4.99 3.85
200 * 1000 4.99 3.86
250 * 1000 5.00 3.89
300 * 1000 5.00 3.85
350 * 1000 5.00 3.86
400 * 1000 5.00 3.92
450 * 1000 4.99 3.85
500 * 1000 5.05 3.87
550 * 1000 4.99 3.89
600 * 1000 5.05 3.86
650 * 1000 4.98 3.88
700 * 1000 5.08 3.86
750 * 1000 5.05 3.88
800 * 1000 4.98 3.84
850 * 1000 5.01 3.94
900 * 1000 5.12 3.86
950 * 1000 5.03 3.85

Oh?
Let’s use libtommath’s own tool mp_rand, too:

Number Pair Normal Multiplication Balanced Multiplication
50 * 1000 8.50 7.34
100 * 1000 8.50 7.37
150 * 1000 8.49 7.38
200 * 1000 8.47 7.35
250 * 1000 8.48 7.34
300 * 1000 8.50 7.39
350 * 1000 8.48 7.36
400 * 1000 8.49 7.35
450 * 1000 8.47 7.30
500 * 1000 8.47 7.33
550 * 1000 8.49 7.37
600 * 1000 8.47 7.33
650 * 1000 8.47 7.32
700 * 1000 8.47 7.34
750 * 1000 8.51 7.34
800 * 1000 8.46 7.36
850 * 1000 8.49 7.37
900 * 1000 8.47 7.33
950 * 1000 8.46 7.38

The function mp_rand is more exact in that it makes sure the MSD is always different from zero, so both multiplicands are guaranteed to have the requested sizes. This gives an interesting effect: the balanced version is even faster under that condition. So let me get something to read while the following script runs:

for i in `seq 50 50 1000`; do
  for j in `seq 50 50 $i`; do
    ./testbalancing $i $j
  done
done

The last round is the most significant:

Number Pair Normal Multiplication Balanced Multiplication
1000 * 50 1.31 1.32
1000 * 100 1.72 1.67
1000 * 150 2.03 1.90
1000 * 200 2.75 2.30
1000 * 250 2.99 2.35
1000 * 300 3.26 2.60
1000 * 350 3.37 2.66
1000 * 400 3.49 2.79
1000 * 450 3.57 2.98
1000 * 500 3.66 3.31
1000 * 550 3.73 3.63
1000 * 600 4.26 4.20
1000 * 650 4.49 4.41
1000 * 700 4.85 4.86
1000 * 750 5.15 5.18
1000 * 800 5.63 5.40
1000 * 850 6.28 5.88
1000 * 900 6.92 6.30
1000 * 950 7.67 6.80
1000 * 1000 8.47 7.36

As I said at the start of this post: “If some results look suspicious, they mostly are.” So let’s do a check:
Mmh…

  n = strtoul(argv[1],NULL,10);
  m = strtoul(argv[2],NULL,10);
---
  /* the loop counter reuses n and silently overwrites the size parsed above */
  for(n=0;n<1000;n++){
...
    mp_rand(&a,n);  /* so a gets n = 0,1,2,... digits instead of argv[1] */
    mp_rand(&b,m);
  }

Ouch!
Now if that’s not embarrassing, I don’t know what is 😉
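For the record, a minimal sketch of the corrected loop; the names of the surrounding harness (a, b, and the counter i) are my assumptions, not the original code:

  n = strtoul(argv[1], NULL, 10);
  m = strtoul(argv[2], NULL, 10);
---
  for (i = 0; i < 1000; i++) {
    mp_rand(&a, n);   /* now really n digits */
    mp_rand(&b, m);   /* and m digits */
    /* benchmarked multiplication goes here */
  }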
Ok, now on with the real one. The round with one thousand first to see if the results are reasonable now.

Number Pair Normal Multiplication Balanced Multiplication
50 * 1000 0.920000 1.060000
100 * 1000 1.770000 1.930000
150 * 1000 2.400000 2.460000
200 * 1000 4.640000 4.070000
250 * 1000 5.140000 4.560000
300 * 1000 6.380000 2.880000
350 * 1000 7.080000 2.980000
400 * 1000 7.420000 2.860000
450 * 1000 7.210000 2.520000
500 * 1000 7.360000 2.410000
550 * 1000 7.560000 2.980000
600 * 1000 5.940000 3.630000
650 * 1000 5.910000 3.600000
700 * 1000 5.760000 4.270000
750 * 1000 5.440000 6.060000
800 * 1000 5.210000 5.880000
850 * 1000 5.460000 5.380000
900 * 1000 5.190000 5.100000
950 * 1000 4.410000 4.400000
1000 * 1000 3.540000 3.530000

Yepp, that makes more sense; the data supports the theory. There is a jump at about 300*1,000, then it increases smoothly (more or less) up to about 700*1,000 and… oh, I forgot to switch off the shortcuts. Aaaaaand again 😉

Number Pair Normal Multiplication Balanced Multiplication
50 * 1000 1.040000 0.870000
100 * 1000 1.870000 2.010000
150 * 1000 2.670000 2.400000
200 * 1000 4.610000 3.980000
250 * 1000 5.180000 4.510000
300 * 1000 6.330000 3.000000
350 * 1000 6.770000 2.850000
400 * 1000 7.030000 2.890000
450 * 1000 7.490000 2.430000
500 * 1000 7.600000 2.450000
550 * 1000 7.730000 2.980000
600 * 1000 5.680000 3.620000
650 * 1000 6.110000 3.990000
700 * 1000 5.890000 4.350000
750 * 1000 5.630000 5.910000
800 * 1000 5.150000 6.110000
850 * 1000 5.360000 5.250000
900 * 1000 5.180000 5.090000
950 * 1000 4.620000 4.340000
1000 * 1000 4.160000 4.290000

Nearly the same. There are two peaks where the differences are largest, close to the Toom-Cook cut-off point. I’ll put the full table after the fold, but the conclusion is that this kind of balancing makes the most sense for size ratios between about 3/10 and 7/10, and both multiplicands should be larger than the Toom-Cook 3 cut-off.
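Translated into code, the updated guard in mp_balance_mult might look like this; TOOM_MUL_CUTOFF is libtommath’s global for the T-C 3 cut-off, and the window boundaries are just the rough values measured above:

  int len_min = MIN(a->used, b->used);
  int len_max = MAX(a->used, b->used);
  float ratio = (float)len_min / (float)len_max;

  /* balancing pays off only past the TC3 cut-off and for size ratios
     roughly between 3/10 and 7/10 */
  if (len_min < TOOM_MUL_CUTOFF || ratio < 0.3f || ratio > 0.7f) {
    return mp_mul(a, b, c);
  }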

Adventures of a Programmer: Parser Writing Peril XV

To test the last attempt at balancing multiplication in libtommath I needed to generate some large numbers. Really large numbers: numbers tens of millions of decimal digits long. Passing them in, e.g. via stdin, needs patience and has limits regarding the length of the argument buffer. It is more elegant to produce them directly, under the conditions that the bits are more or less uniformly distributed, can be generated fast, and have no unwelcome side effects.

There is a function in libtommath called mp_rand which produces a pseudo-random integer of a given size, but it does not meet the above conditions. It uses a slow method involving elementary functions like add and shift, but that is negligible. It uses rand(), which has side effects. The first is that it is not in the C standard ISO/IEC 9899:2011 but in POSIX (POSIX.1-2001), and the second is that calls to rand() might be implemented in a cryptographically secure way, so its use would drain the entropy pool without good reason when generating large numbers for mere testing.

The method I used was to take a simple PRNG (pseudo-random number generator) and copy the generated small 32-bit integers directly into the digits.

int make_large_number(mp_int * c, int size)
{
  int e;

  if ((e = mp_grow(c, size)) != MP_OKAY) {
    return e;
  }
  c->used = size;
  while (size--) {
    c->dp[size] = (mp_digit)(light_random() & MP_MASK);
  }
  mp_clamp(c);
  return MP_OKAY;
}
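A usage sketch (my example, not from the library): with 28-bit digits, 4,000,000 big digits come to about 33.7 million decimal digits:

  mp_int a;

  mp_init(&a);
  /* 4,000,000 * 28 * log10(2) is roughly 3.37e7 decimal digits */
  make_large_number(&a, 4000000);
  /* ... run the benchmark on a ... */
  mp_clear(&a);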

With light_random() a small generator of the form x_{n+1} = 48271\, x_n \bmod \left(2^{31}-1\right) [[4], based on [3]; see also [2]] (48271 is a primitive root modulo 2^{31}-1).

#include <stdint.h>
static int32_t seed = 1;
int light_random(void)
{
  int32_t result;

  /* Schrage: m = 2^31-1, a = 48271, q = m/a = 44488, r = m mod a = 3399 */
  result = seed;
  result = 48271 * (result % 44488) - 3399 * (result / 44488);
  if (result < 0){
    result += 2147483647;  /* m = 2^31 - 1 */
  }
  seed = result;
  return result;
}
void slight_random(unsigned int grain){
  /*
    The seed of the Lehmer PRNG above must be co-prime to the modulus. The
    modulus 2^31-1 is prime (Mersenne prime M_{31}) so all numbers in
    [1,2^31-2] are co-prime to it, with the sole exception of zero.
   */
  seed = (grain) ? (int32_t)grain : 1;
}
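A quick self-test (my addition, not from the original post): Park and Miller give x_{10001} = 1043618065 for x_1 = 1, so after 10,000 calls starting from seed 1 the generator must return exactly that value:

#include <stdio.h>

int main(void)
{
  int i, r = 0;

  slight_random(0);            /* resets the seed to 1 */
  for (i = 0; i < 10000; i++) {
    r = light_random();
  }
  printf("%d (expected 1043618065)\n", r);
  return 0;
}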

The method used to compute the remainder is called Schrage’s method [1]. Let me give a short description.
Given x_{n+1} = ax_n \mod m, write m = aq + r with q = \lfloor\frac{m}{a}\rfloor and r = m\mod a, such that we can write

{\displaystyle{ \begin{aligned} x_{n+1} &= ax_n \mod m \\ &= ax_n - \bigg\lfloor\frac{ax_n}{m}\bigg\rfloor m \\  &= ax_n - \bigg\lfloor\frac{ax_n}{aq+r}\bigg\rfloor (aq+r) \end{aligned}  } }

I’ll omit the full proof and point to the paper, but will give a short sketch of it.

Expanding the inner fractions

{\displaystyle{ \frac{ax_n}{aq+r} = \frac{x_n}{q} \frac{1}{1+\frac{r}{aq}} } }

and with the additional conditions r < a and r < q the fraction \tfrac{r}{aq} is much smaller than unity.
With the first-order Taylor expansion \frac{1}{1+\epsilon}= 1-\epsilon in hand and replacing \epsilon with \frac{r}{aq} we get

{\displaystyle{ \begin{aligned}  \frac{ax_n}{aq+r} &= \frac{x_n}{q} \frac{1}{1+\frac{r}{aq}}  \\   &= \frac{x_n}{q} \biggl(1 - \frac{r}{aq}\biggr) \\   &= \frac{x_n}{q} - \frac{x_n}{aq}\cdot \frac{r}{q}     \end{aligned}  } }

As both \frac{x_n}{aq} and \frac{x_n}{m} are smaller than unity and with
\frac{r}{q} much smaller than unity we can state that

{\displaystyle{ 0 \le \biggl\lfloor\frac{x_n}{q}\biggr\rfloor  - \biggl\lfloor\frac{x_n}{q}\biggl(1-\frac{r}{aq}\biggr)\biggr\rfloor \le 1   }}

This allows us to conclude that

{\displaystyle{ x_{n+1} = a\left(x_n \bmod q\right) - r \biggl\lfloor\frac{x_n}{q}\biggr\rfloor + \begin{cases} 0 & \quad\text{if}\quad a\left(x_n \bmod q\right) - r \biggl\lfloor\frac{x_n}{q}\biggr\rfloor \ge 0 \\[3ex] m & \quad\text{if}\quad  a\left(x_n \bmod q\right) - r \biggl\lfloor\frac{x_n}{q}\biggr\rfloor < 0 \end{cases}  }}

Put in the values and we have the code from above.

The period is 2^{31}-2, which is enough to fill the maximum number of digits, assuming sizeof(int) = 4. With 28-bit digits that comes to \left(2^{31}-2\right)\cdot 28\cdot\log_{10}2 bits’ worth of decimal digits, good enough for numbers of up to 18,100,795,794 decimal digits.

[1] Schrage, L., A more portable Fortran random number generator, ACM Transactions on Mathematical Software 5.2 (1979): 132-138.
[2] Park, Stephen K., Keith W. Miller, and Paul K. Stockmeyer, Another test for randomness: Response, Communications of the ACM 36.7 (1993): 108-110.
[3] Lehmer, D. H., Mathematical methods in large-scale computing units, Proceedings of a Second Symposium on Large-Scale Digital Calculating Machinery, 1949, 141-146. Harvard University Press, Cambridge, Mass., 1951.
[4] Park, Stephen K., and Keith W. Miller, Random number generators: good ones are hard to find, Communications of the ACM 31.10 (1988): 1192-1201.

Adventures of a Programmer: Parser Writing Peril XIV

As I noted in my last post, the big integer library libtommath lacks a method to balance the multiplicands in size. The method to do it is quite simple and based on the rule:

{ \displaystyle{ (a_1\beta+a_0) \cdot b = (a_1b)\beta + a_0\cdot b }}

where a, b are the multiplicands and \beta is a positive-integer multiplier. As an example take 12345678 \cdot 8765:

{ \displaystyle{ (1234\cdot 10^4 + 5678) \cdot 8765 = (1234\cdot 8765)\cdot 10^4 + 5678\cdot 8765 }}

If we use a binary multiplier instead of the decimal one, we can do the multiplication by \beta with simple shifting, and we should use the big-number digits instead of the decimal ones, too, I think.

{ \displaystyle{ \begin{aligned}  & \text{\textbf{function}}\;\text{\textit{Balance Multiplication}}\;n\cdot m  \\ & \qquad \text{\textbf{Ensure: }}\; \mathop{\mathrm{min}}(n,m) > C_1\\ & \qquad \text{\textbf{Ensure: }}\; \frac{\mathop{\mathrm{min}}(n,m)}{\mathop{\mathrm{max}}(n,m)} < C_2           \\ & \qquad \beta \gets \mathop{\mathrm{length}}\left(\mathop{\mathrm{min}}\left(n,m\right) \right) \\ & \qquad  a_1 \gets \bigg\lfloor \frac{\mathop{\mathrm{max}}(n,m)}{2^\beta} \bigg\rfloor \\ & \qquad  a_0 \gets \mathop{\mathrm{max}}(n,m) - a_1\cdot 2^\beta \\ & \qquad  a_1 \gets a_1\cdot\mathop{\mathrm{min}}\left(n,m\right)   \\ & \qquad  a_1 \gets a_1\cdot 2^\beta \\ & \qquad  a_0 \gets a_0 \cdot \mathop{\mathrm{min}}\left(n,m\right)  \\   & \qquad \text{\textbf{Return}}\; a_1 + a_0 \\ \end{aligned}  }}

Here \beta is the digit length of the smaller multiplicand, so the multiplications by 2^\beta are plain digit shifts. C_1 denotes the cut-off point marking the minimum size of the smaller multiplicand. This could be as small as 1 (one), but I would take that as a special value where the algorithm used in mp_*_d shows better results (and it should be handled in mp_mul directly).
The other cut-off point C_2 is the size ratio of a,b. It should be smaller than 1 (one), of course, but how much? \tfrac{1}{2}? Or even earlier, at \tfrac{2}{3}? Hard to tell without a test, but I think C_1 = 10 and C_2 = \tfrac{2}{3} will do for a start.
A straightforward implementation could be

#define MP_C1 2
#define MP_C2 0.5f
int mp_balance_mult(mp_int *a, mp_int *b, mp_int *c){
  int count, len_a, len_b;
  mp_int a_0, a_1;
  /* get digit size of a and b */
  len_a = a->used;
  len_b = b->used;
  /* check if size of smaller one is larger than C1 */
  if(MIN(len_a,len_b) < MP_C1){
    mp_mul(a,b,c);
    return MP_OKAY;
  }
  /* check if the sizes differ enough (ratio smaller than C2) */
  if(( (float)MIN(len_a,len_b) / (float)MAX(len_a,len_b)) > MP_C2){
    mp_mul(a,b,c);
    return MP_OKAY;
  }
  /* make sure that a is the larger one */
  if(len_a < len_b){
    mp_exch(a,b);
  }
  /* cut the larger one in two parts a_1, a_0 with the lower part a_0 of the
     same length as the smaller input number b */
  mp_init_size(&a_0, b->used);
  a_0.used = b->used;
  mp_init_size(&a_1, a->used - b->used);
  a_1.used = a->used - b->used;
  /* fill smaller part a_0 */
  for (count = 0; count < b->used; count++) {
    a_0.dp[count] = a->dp[count];
  }
  /* fill bigger part a_1 with the counter already at the right place */
  for (; count < a->used; count++) {
    a_1.dp[count - b->used] = a->dp[count];
  }
  /* Special offer: Zeeland mussels, only 1.11 EUR/kg! */
  mp_clamp(&a_0);
  mp_clamp(&a_1);
  /* a_1 = a_1 * b */
  mp_mul(&a_1,b,&a_1);
  /* a_1 = a_1 * 2^(length(a_0)), a shift by b->used digits */
  mp_lshd(&a_1,b->used);
  /* a_0 = a_0 * b */
  mp_mul(&a_0,b,&a_0);
  /* c = a_1 + a_0 */
  mp_add(&a_1,&a_0,c);
  mp_clear_multi(&a_0,&a_1,NULL);
  /* don't mess with the input more than necessary */
  if(len_a < len_b){
    mp_exch(a,b);
  }
  return MP_OKAY;
}

To make it short: it is slower, needing on average twice the time of the native multiplication algorithms, tested with two numbers in the relation \tfrac{a}{b} = \tfrac{1}{2} with a \le 4\,000\,000\cdot 28 \text{ bits}; other relations make it even worse.

So we can call it, with good conscience, an utter failure. Back to the blackboard.

Adventures of a Programmer: Parser Writing Peril XIII

The biggest native integer libtommath lets you set directly seems to be[1] an unsigned long, in the function mp_set_int. The biggest native integer used internally, on the other side, hides behind mp_word, which is the type able to hold twice the size of an mp_digit and can be larger than an unsigned long.

For my calculator I need some ways to work with native numbers without much ado, where ado means a lot of conditionals, preprocessor directives, complicated data structures, and all that mess. One way to avoid it is to use the digits of the large integers directly if the large integer has only one. An example? Ok, an example.

Take the partial harmonic series, for example

{\displaystyle {  \mathrm{H}_n = \sum_{k=1}^n \frac{1}{k}  }}

If you calculate it with the help of the binary-splitting algorithm especially, a lot of the numbers involved are in the range of native integers and fill only one digit of the big numbers. The initialization of big numbers in libtommath sets them to at least 8 digits (the responsible constant is MP_PREC in tommath.h) and consumes costly heap memory to do so.

Fredrik Johansson proposed in a blog post to postpone the reduction of the fraction to the very end. It is not much, but it is something, so let’s follow his advice and do so using my rational library (as usual: without any error checking, for less code clutter).
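The recursion below halves the summation range and combines the two halves as unreduced fractions:

{\displaystyle{ \mathrm{H}_{[a,b)} = \mathrm{H}_{[a,m)} + \mathrm{H}_{[m,b)}, \qquad m = \biggl\lfloor\frac{a+b}{2}\biggr\rfloor, \qquad \frac{p}{q} + \frac{r}{s} = \frac{ps+qr}{qs} }}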

static mp_rat h_temp;
mp_rat *_harmonics(unsigned long a, unsigned long b)
{
  unsigned long m;
  mp_rat ta, tb;
  mp_int p, q, r, s;
  mp_word ps, qr;

  if (b - a == 1) {
    mpq_set_int(&h_temp, (long) 1, (long) a);
    return &h_temp;
  }
  m = (a + b) >> 1;
  mpq_init_multi(&ta, &tb, NULL);
  // This is not necessarily necessary
  mp_init_multi(&p, &q, &r, &s, NULL);

  mpq_exch(_harmonics(a, m), &ta);
  mpq_exch(_harmonics(m, b), &tb);

  mp_exch(&ta.numerator, &p);
  mp_exch(&ta.denominator, &q);
  mp_exch(&tb.numerator, &r);
  mp_exch(&tb.denominator, &s);

  /* single-digit operands fit into a native mp_word multiplication */
  if (p.used == 1 && s.used == 1) {
    ps = p.dp[0] * (mp_word) s.dp[0];
    mp_set_word(&ta.numerator, ps);
  } else {
    mp_mul(&p, &s, &ta.numerator);
  }
  if (q.used == 1 && r.used == 1) {
    qr = q.dp[0] * (mp_word) r.dp[0];
    mp_set_word(&tb.numerator, qr);
  } else {
    mp_mul(&q, &r, &tb.numerator);
  }
  /* p/q + r/s = (ps + qr)/(qs), reduction postponed to the very end */
  mp_add(&ta.numerator, &tb.numerator, &h_temp.numerator);
  mp_mul(&q, &s, &h_temp.denominator);

  mp_clear_multi(&p, &q, &r, &s, NULL);
  mpq_clear_multi(&ta, &tb, NULL);
  return &h_temp;
}
int harmonics(unsigned long n, mp_rat * c)
{
  mpq_init(&h_temp);
  mpq_exch(_harmonics(1, n + 1), c);
  mpq_reduce(c);
  mpq_clear(&h_temp);
  return 1;
}

The library libtommath is not very friendly when used in such a way, but I could not find a better way to implement the binary splitting algorithm. This implementation of the partial harmonic series, for example, is much slower than Fredrik’s implementation with gmpy (but faster than the native Python one, at least 😉 ). It takes about 0.67 seconds for \mathop{\mathrm{H}}_{10\,000} but already 193.61 seconds (yes, over 3 minutes!) for \mathop{\mathrm{H}}_{100\,000}. That is definitely too much.
Funny thing: the normal algorithm is much faster, just 40 seconds for \mathop{\mathrm{H}}_{100\,000}, but slower for smaller values, about 1.09 seconds for \mathop{\mathrm{H}}_{10\,000}, with the cut-off point at about \mathop{\mathrm{H}}_{21\,000} on my machine. And it is asymptotically slower; I measured some points in between to find out. It is really time to implement fast multiplication in full in libtommath.
Some of the problems with the first algorithm might have their root in the nonexistent balancing of the multiplicands. There is a balancing function in the pull queue, but it seems to be a port from Ruby, which makes it impossible to accept because of Ruby’s license (given that the submitter is not the original programmer of the Ruby code, which I haven’t checked).

static mp_rat h_temp;
mp_rat * _harmonics(unsigned long a,unsigned long b){
  unsigned long m;
  mp_rat ta,tb;
  if(b-a==1){
   mpq_set_int(&h_temp,(long)1,(long)a); 
   return &h_temp;
  }
  m = (a+b)>>1;
  mpq_init_multi(&ta,&tb,NULL);

  mpq_exch(_harmonics(a,m),&ta);
  mpq_exch(_harmonics(m,b),&tb);

  mpq_add(&ta,&tb,&h_temp);
  mpq_clear_multi(&ta,&tb,NULL);
  return &h_temp;
}
// use the same caller as above

However, it was just meant as an example for mp_set_word, which I’ll present here:

#if (MP_PREC > 1)
int mp_set_word(mp_int *c,mp_word w){
  mp_zero(c);
  if(w == 0){
    return MP_OKAY;
  }
  do{
    c->dp[c->used++] = (mp_digit)w&MP_MASK;
  }while( (w >>= DIGIT_BIT) > 0 && c->used < MP_PREC);
  if( w != 0 ){
    return MP_VAL;
  }
  return MP_OKAY;
}
#else
#warning variable "MP_PREC" should be at least 2
#endif

The preprocessor mess is necessary even though the constant MP_PREC should be at least 8 (eight) because, as I can tell you from some very bad experience, one never knows.

[1] The reason for the stylistically awkward subjunctive is: I mean it. I really could have overlooked an already implemented function doing exactly what I wanted in the first place, so this is not a case of NIH. This time 😉

On the numerical evaluation of factorials XII

Falling Factorial

The algorithm for the falling factorial (x)_n makes shameless use of \frac{(x)_n}{n!}  = \binom xn, which can be written in terms of factorials as (x)_n  =  \frac{x!}{n! (x-n)!}n! = \frac{x!}{(x-n)!}, i.e. our algorithm for the binomial coefficients without the first loop. For example, (10)_3 = \frac{10!}{7!} = 10\cdot 9\cdot 8 = 720.
Oh, that was simple!

#ifndef LN_113
#   define LN_113 1.25505871293247979696870747618124469168920275806274
#endif
#include <math.h>

int mp_falling_factorial(unsigned long n, unsigned long k, mp_int * c)
{
  unsigned long *prime_list;
  unsigned long pix = 0, prime, K, diff;
  mp_bitset_t *bst;

  int e;

  if (n < k) {
    mp_set(c, 0);
    return MP_OKAY;
  }
  if (k == 0) {
    mp_set(c, 1);
    return MP_OKAY;
  }
  if (k == 1) {
    if ((e = mp_set_int(c, n)) != MP_OKAY) {
      return e;
    }
    return MP_OKAY;
  }
  if (k == n) {
    if ((e = mp_factorial(n, c)) != MP_OKAY) {
      return e;
    }
    return MP_OKAY;
  }

  bst = malloc(sizeof(mp_bitset_t));
  if (bst == NULL) {
    return MP_MEM;
  }
  mp_bitset_alloc(bst, n + 1);
  mp_eratosthenes(bst);
  /* One could also count the number of primes in the already filled sieve */
  pix = (unsigned long) (LN_113 * n / log(n))+2;

  prime_list = malloc(sizeof(unsigned long) * (pix) * 2);
  if (prime_list == NULL) {
    return MP_MEM;
  }
  prime = 2;
  K = 0;
  do {
    diff = mp_prime_divisors(n, prime) - mp_prime_divisors(n - k, prime);
    if (diff != 0) {
      prime_list[K] = prime;
      prime_list[K + 1] = diff;
      K += 2;
    }
    prime = mp_bitset_nextset(bst, prime + 1);
  } while (prime <= n - k);
  do {
    prime_list[K] = prime;
    prime_list[K + 1] = mp_prime_divisors(n, prime);
    prime = mp_bitset_nextset(bst, prime + 1);
    K += 2;
  } while (prime <= n);
  prime_list = realloc(prime_list, sizeof(unsigned long) * K);
  if (prime_list == NULL) {
    return MP_MEM;
  }
  if ((e = mp_compute_factored_factorial(prime_list, K - 1, c, 0)) != MP_OKAY) {
    return e;
  }
  free(bst);
  free(prime_list);

  return MP_OKAY;
}
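The diff in the first loop relies on Legendre’s formula, which is presumably what mp_prime_divisors(n, prime) computes: the exponent of a prime p in n! is

{\displaystyle{ \nu_p\left(n!\right) = \sum_{i=1}^{\lfloor \log_p n \rfloor} \bigg\lfloor \frac{n}{p^i} \bigg\rfloor }}

so a prime enters prime_list only if its exponent in n! exceeds the one in (n-k)!.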

But if you think that was simple, wait for the next post 😉

On the numerical evaluation of factorials XI

Superfactorial

The superfactorial is, according to Sloane and Plouffe

{\displaystyle{ \mathop{\mathrm{sf}}\left(n\right)= \prod_{k=1}^n k! }}

Another definition is according to Clifford A. Pickover’s “Keys to Infinity” (Wiley, 1995, ISBN-10: 0471118575, ISBN-13: 978-0471118572; I saw it at Amazon offered by different sellers between $2.57 used and $6.89 new. I just ordered one and will let you know if it is worth the money). The notation with the arrows is Donald Knuth’s up-arrow notation.

{\displaystyle{ n\!\mathop{\mathrm{\$}} = (n!)\uparrow\uparrow(n!) }}

This post is about the first variation. The superfactorial is also equivalent to the following product of powers, since the factor k appears in each of k!, (k+1)!, \dots, n!, that is, n-k+1 times:

{\displaystyle{ \mathop{\mathrm{sf}}\left(n\right)=\prod_{k=1}^n k^{n-k+1}}}

It is possible to multiply such a product in a fast way with the help of nested squaring, as Borwein and Schönhage have shown. It was also the basis for the fast algorithm for factorials described in this series of postings.

The original product runs over the natural numbers up to and including n. It can be made faster by restricting it to the primes; this works, roughly, by making the exponents bigger, which gives a chance for more squarings and lowers the number of intermediate linear multiplications.

The question is: What is faster, factorizing every single power of the product or factorizing every single factorial?

Factorizing every power of the product reduces to factorizing every base, which reduces to all bases without the primes. The time complexity of Eratosthenes’ sieve is \mathop{\mathrm{O}}(n \log\log n), which can be called, without much loss of accuracy, linear. With this sieve (there are faster ones, but we need the full sequence, which makes it difficult to implement the following algorithm with other sieves) we get in one wash the prime factors of every number. The prime factors, not the exponents of these prime factors (i.e. the single piece of information we get is whether the exponent is bigger than zero).

An example, factorizing the numbers up to 10, should make it clearer.

{\displaystyle{ \begin{matrix}  7 &   &   &   &   &   & x &   &   &   \\ 5 &   &   &   & x &   &   &   &   & x  \\ 3 &   & x &   &   & x &   &   & x &   \\ 2 & x &   & x &   & x &   & x &   & x \\ / & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \end{matrix}  }}

Once we have this result we are bound to trial division to find the exponents[1]. That is costly, but how much does it cost in the end?
We have to do at most \lfloor\log_2 n  \rfloor trial divisions (the biggest possible exponent belongs to the prime 2) with at most \mathop{\pi}\left(\lfloor\sqrt{n} \rfloor\right) primes (e.g. 2\cdot 3\cdot 5 = 30 but 5^2 = 25 ). The same has to be done for the exponents[1] of the powers of the superfactorial without the prime bases, which are all in all n-\pi(n).
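For completeness, a minimal sketch of the plain trial-division variant (the root-finding shortcut from footnote [1] is left out; prime_exponent is a hypothetical helper, not a libtommath function):

static unsigned long prime_exponent(unsigned long n, unsigned long p)
{
  /* exponent of the prime p in the factorization of n */
  unsigned long e = 0;
  while (n % p == 0) {
    n /= p;
    e++;
  }
  return e;
}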

Assuming Riemann… no, we don’t need to do that 😉

Because all of the trial divisions are in fact equivalent to taking the nth root, which can be done in \mathop{\mathrm{O}}(\mathop{\mathrm{M}}(m)\log m), it is quite fast. In theory.

Calculating the individual factorials on the other side would need \mathop{\mathrm{O}}(\mathop{\mathrm{M}}(m\log m)) time, but that includes the final multiplication, too.
Now, how much does this cost?
The number of digits of \mathop{\mathrm{sf}}(m) is of the order \sum_{k=1}^m k\log k, which is the logarithm of the hyperfactorial \mathop{\mathrm{hf}}(m) = \prod_{k=1}^m k^k and can be approximated with \log \left( \mathop{\mathrm{hf}}\left( m\right)\right) \sim 0.5\, m^2 \log m. Put that into our time complexity and we may solemnly conclude: this is the largest part of the work and the rest pales in contrast.

It also means that we can just choose the simplest way to do it, and that’s what we do. We have implemented some methods before to do simple arithmetic with lists of prime factors, to find the prime factors of a factorial, and to compute the final result.
Let’s put it all together:

int superfactorial(unsigned long n, mp_int * c)
{
  long *t, *s;
  unsigned long i, length_t, length_s;
  int e;
  if (n < 2) {
    mp_set(c, 1);
    return MP_OKAY;
  }
  if (n == 2) {
    mp_set(c, 2);
    return MP_OKAY;
  }
  if ((e = mp_factor_factorial(2, 0, &s, &length_s)) != MP_OKAY) {
    return e;
  }
  /* accumulate the prime factorizations of 3! ... n! */
  for (i = 3; i <= n; i++) {
    if ((e = mp_factor_factorial(i, 1, &t, &length_t)) != MP_OKAY) {
      return e;
    }
    if ((e =
	 mp_add_factored_factorials(s, length_s, t, length_t, &s,
				    &length_s)) != MP_OKAY) {
      return e;
    }
    free(t);
    t = NULL;
    length_t = 0;
  }
  if ((e =
       mp_compute_signed_factored_factorials(s, length_s, c,
					     NULL)) != MP_OKAY) {
    return e;
  }
  free(s);
  return MP_OKAY;
}

The relevant functions are in my fork of libtommath.

A first test against the naïve implementation, admittedly quite unfair, gave a difference in time of 1:300 for \mathop{\mathrm{sf}}(1\,000). But more elaborate algorithms do not give much back for such small numbers; maybe binary splitting could gain something by making the numbers to multiply more balanced in size.
With \mathop{\mathrm{sf}}(1\,000) \sim 3.2457\mathrm{\mathbf{e}}1\,177\,245 the cut-off to FFT multiplication gets reached.

[1] The trial division to find the exponents is actually integer root finding, as you might have known already, which can be done faster than plain trial division, much faster. With \mathop{\mathrm{M}}(m) the time to multiply two m-digit long integers, the time complexity for the nth root is \mathop{\mathrm{O}}(\mathop{\mathrm{M}}(m)\log m) with the AGM.
[1] The trial divisions to find the exponents is actually integer root finding, as you might have known already, which can be done faster than just trial division, much faster. With \mathop{\mathrm{M}}(m) the time to multiply two m-digit long integers the time complexity for the nnth-root is \mathop{\mathrm{O}}(\mathop{\mathrm{M}}(m)\log m) with the AGM.