# Some Twin Primes

With $n_{32}$ denoting the sum of all primes smaller than $2^{32}$, the number $n_{32}-1 = 425649736193687429$ is a member of the twin prime pair $\left(425649736193687429, 425649736193687431\right)$. The numbers $n_{7} + 1$ and $n_{34} + 1$ are also members of twin prime pairs: $\left( 1721, 1723 \right)$ and $\left( 6400753258957522037, 6400753258957522039 \right)$, respectively.
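These claims are quick to re-check. Here is a small Python sketch (my own helper; the deterministic Miller-Rabin base set used below is known to be exact far beyond 64-bit numbers):

```python
def is_prime(n):
    """Deterministic Miller-Rabin; this base set is exact for all 64-bit n (and beyond)."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# re-check the three pairs from above
pairs = [(1721, 1723),
         (425649736193687429, 425649736193687431),
         (6400753258957522037, 6400753258957522039)]
all_twins = all(is_prime(p) and is_prime(q) and q - p == 2 for p, q in pairs)
```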

It is unknown whether that information is of any use, but as the saying goes: “You’ll never know!”

# Closed Form for n-th digit Generalization in Benford’s Law

The sum as described in the Wikipedia article has a closed form. It took me some time, but after some twiddling with the sum itself I decided to exponentiate the whole thing to get rid of the logarithm, which left me with a simple product I was able to handle.

With $d$ the digit, $b$ the base, and $k$ the digit position:

${\displaystyle{ \prod_{n=b^{k-2}}^{b^{k-1}-1}\left(1+\frac{1}{d+b n}\right) }}$

This can be expressed as a fraction of two rising factorials (Pochhammer symbol).

${\displaystyle{ \frac{\Bigl(\frac{b^k+d b+b}{b^2}\Bigr)_{b^{k-2}\left( b - 1\right)}}{\Bigl(\frac{b^k+d b}{b^2}\Bigr)_{b^{k-2}\left( b - 1\right) }} }}$

The rising factorial $(x)_n$ can be expressed as a fraction of two Gamma functions

${\displaystyle{ (x)_n = \frac{\Gamma(x + n)}{\Gamma(x)} }}$

With which we get

${\displaystyle{\frac{\Gamma\bigl(b^{k-2}+\frac{d}{b}\bigr) \Gamma\bigl(b^{k-1}+\frac{d}{b}+\frac{1}{b}\bigr)}{\Gamma\bigl(b^{k-2}+\frac{d}{b}+\frac{1}{b}\bigr) \Gamma\bigl(b^{k-1}+\frac{d}{b}\bigr)} }}$
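As a numerical sanity check, the Gamma quotient can be compared against the direct product. A Python sketch using math.lgamma (the function names are mine):

```python
import math

def benford_product(d, b, k):
    """Direct product over n = b^(k-2) .. b^(k-1)-1 of (1 + 1/(d + b*n))."""
    p = 1.0
    for n in range(b**(k - 2), b**(k - 1)):
        p *= 1.0 + 1.0 / (d + b * n)
    return p

def benford_closed(d, b, k):
    """The same value via the quotient of four Gamma functions above."""
    lo, hi = b**(k - 2), b**(k - 1)
    return math.exp(  math.lgamma(lo + d / b) + math.lgamma(hi + (d + 1) / b)
                    - math.lgamma(lo + (d + 1) / b) - math.lgamma(hi + d / b))
```

Taking $\log_b$ of either value gives the probability that the $k$-th digit equals $d$; summed over all digits $d$ it gives 1, as it should.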

By setting $d = b-1$ we can already see that the result gets smaller the larger $b$ gets, with the obvious limit

${\displaystyle{ \lim_{b\to\infty} \frac{\Gamma\bigl(b^{k-2}+\frac{d}{b}\bigr) \Gamma\bigl(b^{k-1}+\frac{d}{b}+\frac{1}{b}\bigr)}{\Gamma\bigl(b^{k-2}+\frac{d}{b}+\frac{1}{b}\bigr) \Gamma\bigl(b^{k-1}+\frac{d}{b}\bigr)} = 0 \quad\text{with }d = b-1 \land d\in\mathbb{N} }}$

The arguments of the Gamma functions get quite large, which makes the results hard to handle. But we need the logarithm of the product anyway, which means we can use the logarithm of the Gamma function directly and save one step. (We are on the real line, so I used the equality $\log\Gamma(x) = \log\left(\Gamma\left(x\right)\right)$ to avoid an even denser fence of parentheses than it already is.)

${\displaystyle{ \Biggl(\log\Gamma\biggl(b^{k-2}+\frac{d}{b}\biggr) + \log\Gamma\biggl(b^{k-1}+\frac{d}{b}+\frac{1}{b}\biggr)\Biggr) - \Biggl(\log\Gamma\biggl(b^{k-2}+\frac{d}{b}+\frac{1}{b}\biggr) + \log\Gamma\biggl(b^{k-1}+\frac{d}{b}\biggr)\Biggr) }}$

We can simplify a bit with $\frac{d}{b} + \frac{1}{b} = 1$ because $d = b-1$. Furthermore, $b^{k-2} + \frac{d}{b} = \frac{b^{k-1} + d}{b}$, which can save us a multiplication here and there. So far we have

${\displaystyle{ \Biggl(\log\Gamma\biggl(\frac{b^{k-1}+d}{b}\biggr) + \log\Gamma\biggl(b^{k-1}+1\biggr)\Biggr) - \Biggl(\log\Gamma\biggl(b^{k-2}+1\biggr) + \log\Gamma\biggl(b^{k-1}+\frac{d}{b}\biggr)\Biggr) }}$

or

${\displaystyle{ \Biggl(\log\Gamma\biggl(\frac{b^{k-1}+d}{b}\biggr) + \log\Gamma\biggl(b^{k-1}+1\biggr)\Biggr) - \Biggl(\log\Gamma\biggl(\frac{b^{k-1}+d + 1}{b}\biggr) + \log\Gamma\biggl(b^{k-1}+\frac{d}{b}\biggr)\Biggr) }}$

In the special case I used it for, $b$ was a power of two, such that none of the exponentiations had to be calculated; the numbers could be built in O(1).

Calling the whole thing $f(d,b)$, we can compute the probability that the $n$-th digit is $d = b-1$ as $\frac{f(d,b)}{\log(b)}$. For general $0 \le d < b$, use the unsimplified version above for $f(d,b)$.
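For the simplified $d = b-1$ case, a Python sketch of $f$ and the probability (names are mine, math.lgamma does the work; the digit position is the $k$ from the formulas above):

```python
import math

def f_simplified(b, k):
    """log of the product for d = b-1, via the simplified log-Gamma form above."""
    d  = b - 1
    lo = b**(k - 2)
    hi = b**(k - 1)
    return (  math.lgamma((hi + d) / b) + math.lgamma(hi + 1)
            - math.lgamma(lo + 1) - math.lgamma(hi + d / b))

def prob_digit_is_largest(b, k):
    """Probability that the k-th digit of a Benford-distributed number is b-1."""
    return f_simplified(b, k) / math.log(b)
```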

Beware: the numbers get really large; you will need a slightly higher amount of precision for the loggamma function if you calculate with a large base. I had $2^{26}$ as the base and had trouble computing the values for places higher than the 20th in PARI/GP (it said *** lngamma: division by zero). It worked in calc with a precision of 300 decimal digits, but it is sloooow. Who wrote that lame gamma function? Who? Me? Myself? Really? Ouch! 😉

# On the numerical evaluation of factorials X

The last post in this sequence was about a more general way to work with the prime factorization of the factorials, and about the Narayana numbers:

${\displaystyle{ N(n,k) = \frac{1}{n}{ \binom nk}{\binom n{k-1}} }}$

Which expands to

${\displaystyle{ {{n!^2}\over{\left(k-1\right)!\,k!\,n\,\left(n-k\right)!\,\left(n-k +1\right)!}} }}$

Using $(n+1)! = (n+1)\,n!$ and vice versa $(n-1)! = \frac{n!}{n}$ we can shove more things together and get

${\displaystyle{ \frac{k n!^2}{k!^2 n \left(n-k+1\right) \left(n-k\right)!^2} }}$

Get the non-factorials out of the way

${\displaystyle{ \frac{k}{n \left(n-k+1\right)}\; \frac{n!^2}{k!^2 \left(n-k\right)!^2} }}$

With $P_n$ the prime factorization of $n!$, and with the arithmetic of such factorizations resembling the relation between the arithmetic of numbers and that of their logarithms (e.g. $x\cdot y = \exp\left(\log\left(x\right)+\log\left(y\right) \right)$), we get (dropping the fraction with the non-factorials)

${\displaystyle{ \begin{aligned} \phantom{\Leftrightarrow\;} &2P_n - \left( 2P_k + 2P_{n-k} \right) \\ \Leftrightarrow\; &2P_n - 2P_k - 2P_{n-k} \\ \Leftrightarrow\; &2\left(P_n - P_k - P_{n-k} \right) \end{aligned} }}$

We have to be careful with the term $n \left(n-k+1\right)$, for it can overflow, so we should divide it out of the prime factorization bit by bit instead of multiplying $n \left(n-k+1\right)$ out and dividing by it all at once.

The actual implementation is left as an exercise for the reader.
Or just wait until I do it myself in the next days 😉
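Until then, here is a sketch of the idea in Python rather than calc: build the exponent vector $2(P_n - P_k - P_{n-k})$ with Legendre's formula and apply the non-factorial fraction at the end (helper names are mine):

```python
from math import comb

def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    e = 0
    while n:
        n //= p
        e += n
    return e

def narayana(n, k):
    """N(n,k) from prime exponents: 2*(P_n - P_k - P_{n-k}),
       followed by the non-factorial fraction k/(n*(n-k+1))."""
    # primes up to n by trial division; fine for numbers this small
    primes = [p for p in range(2, n + 1)
              if all(p % q for q in range(2, int(p**0.5) + 1))]
    val = 1
    for p in primes:
        e = 2 * (legendre(n, p) - legendre(k, p) - legendre(n - k, p))
        val *= p**e
    # val is now binomial(n,k)^2; the division is exact
    return val * k // (n * (n - k + 1))
```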

# On the Principal Branch of the Log-Gamma Function [updated]

[UPDATE at Mo 29. Jul 17:09:30 CEST 2013] fixed a small bug.
We recently did a fast glimpse—albeit with working code—at the implementation of the gamma function with Spouge’s formula. That code implements the logarithm of the gamma function $\log (\Gamma z)$ with the principal branches of the complex logarithm. Another function, named $\log\Gamma z$, uses different principal branches that are more useful. You will find that it is this function that is implemented in most programs that offer arbitrary precision.
Going from $\log (\Gamma z)$ to $\log\Gamma z$ is not easy.
The two are equal in the strip

$\displaystyle{ \log\Gamma z = \log(\Gamma z) \quad \text{with} \quad 0 < \Re z \le 2 \wedge \lvert \Im z\rvert \le \frac{7}{2} }$

For negative real arguments, the imaginary part is simply a multiple of $\pi$:

$\displaystyle{ \log\Gamma x = \log (\Gamma x) + 2 i \pi \biggl\lceil\frac{x}{2}-1\biggr\rceil \quad \text{with} \quad x<0 \wedge -x\not\in\mathbb{N}}$
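The negative-axis formula is easy to try out. A Python sketch (my helper; math.lgamma returns $\log\lvert\Gamma(x)\rvert$ on the real line):

```python
import cmath, math

def loggamma_negreal(x):
    """logGamma(x) for x < 0, x not an integer, via the ceiling formula above.
       log(Gamma(x)) is the principal log: imaginary part pi iff Gamma(x) < 0."""
    assert x < 0 and x != math.floor(x)
    # the sign of Gamma alternates between consecutive poles
    gamma_negative = (math.floor(x) % 2 != 0)
    im = (math.pi if gamma_negative else 0.0) + 2.0 * math.pi * math.ceil(x / 2.0 - 1.0)
    return complex(math.lgamma(x), im)
```

For $x=-\frac{1}{2}$ this gives $1.2655\ldots - i\pi$, and exponentiating recovers $\Gamma(-\frac{1}{2}) = -2\sqrt{\pi}$.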

The interesting part comes with the rest of the complex plane:

$\displaystyle{\log\Gamma z = \log (\Gamma z) +2 i \pi k(z)}$

The function $k(z)$ returns an integer. To compute this integer, the following simple integral has to be solved:

$\displaystyle{k(z) = \int_0^z \theta \bigl( -\Re(\Gamma t) \bigr) \bigl\lvert \Im\bigl(\Gamma(t)\psi(t) \bigr) \bigr\rvert \delta\bigl( \Im(\Gamma t) \bigr) \mathrm{d}t}$

with $\theta (x)$ the unit step function

$\displaystyle{\theta (x) = \begin{cases} 0& x< 0\\ 1& x\ge 0 \end{cases} }$

with $\psi (z)$ the first logarithmic derivative of the Gamma function

$\displaystyle{ \psi (z) = \frac{\mathrm{d}}{\mathrm{d}z}\log\Gamma (z) = \frac{\Gamma '(z)}{\Gamma(z)} }$

and finally $\delta (x)$ the Dirac delta, a kind of derivative of the Heaviside step function, with the following limit representation

$\displaystyle{\delta(x) = \lim_{\epsilon\to 0}\frac{\sin\frac{x}{\epsilon}}{\pi x} }$

All that information should suffice to handle the integral. Not that it wouldn’t be good training, but others have solved the problem already, namely D. E. G. Hare in “Computing the Principal Branch of log-Gamma”, published in the Journal of Algorithms 25(2):221–236, 1997.

Here it is, attached to a simple log-Gamma function built on the Lanczos coefficients:

define kroneckerdelta(i,j){
    if(isnull(j)){
        if(i==0) return 1;
        else     return 0;
    }
    if(i!=j) return 0;
    else     return 1;
}
define __CZ__lngamma_lanczos(z){
    local a k g zghalf lanczos;
    mat lanczos[15] = {
         9.9999999999999709182e-1,
         5.7156235665862923516e1,
        -5.9597960355475491248e1,
         1.4136097974741747173e1,
        -4.9191381609762019978e-1,
         3.3994649984811888638e-5,
         4.6523628927048576010e-5,
        -9.8374475304879566105e-5,
         1.5808870322491249322e-4,
        -2.1026444172410489480e-4,
         2.1743961811521265523e-4,
        -1.6431810653676390482e-4,
         8.4418223983852751308e-5,
        -2.6190838401581411237e-5,
         3.6899182659531626821e-6
    };
    g = 607/128;
    z = z-1;
    a = 0;
    /* sum over all 14 variable coefficients of the 15-term list */
    for(k = 14; k >= 1; k--){
        a += lanczos[k]/(z+k);
    }
    a += lanczos[0];
    zghalf = z + g + .5;
    return ( ln(sqrt(2*pi())) + ln(a) - zghalf ) + (z+.5)*ln( zghalf );
}
define lngamma(z){
    local ret eps flag corr;
    flag = 0;
    if(isint(z)){
        if(z <= 0)
            return newerror("lngamma(z): z is a non-positive integer");
        else{
            /* may hold up accuracy a bit longer, but YMMV */
            if(z < 20)
                return ln((z-1)!);
            else
                return __CZ__lngamma_lanczos(z);
        }
    }
    else{
        if(re(z) < 0){
            if(im(z) && im(z) < 0){
                z = conj(z);
                flag = 1;
            }

            ret = ln( pi() / sin(pi() * z) ) - __CZ__lngamma_lanczos(1-z);

            if(!im(z)){
                if(abs(z) <= 1/2)
                    ret = re(ret) - pi()*1i;
                else if( isodd(floor(abs(re(z)))) ){
                    ret = re(ret) + ( ceil(abs(z)) * pi() * 1i );
                }
                else if( iseven(floor(abs(re(z)))) ){
                    /* < n+1/2 */
                    if(iseven(floor(abs(z)))){
                        ret = re(ret) + ( ceil(abs(z) - 1/2 - 1) * pi() * 1i );
                    }
                    else{
                        ret = re(ret) + ( ceil(abs(z) - 1/2) * pi() * 1i );
                    }
                }
            }
            else{
                corr = ceil( re(z)/2 - 3/4 - kroneckerdelta(im(z))/4 );
                ret  = ret + 2*(corr * pi())*1i;
            }
            if(flag == 1)
                ret = conj(ret);
            return ret;
        }
        return __CZ__lngamma_lanczos(z);
    }
}


My problem is that this trick does not work with the Spouge coefficients. At least not for me. I didn’t find out whether it is a bug in my implementation (most probably) or a more fundamental problem. The paper mentioned above talks about the Stirling series. That series uses Bernoulli numbers, which are quite expensive to compute, and the series is not convergent, which makes it quite tricky to find the right number of terms. D. E. G. Hare showed in section 4.1 a trick to evaluate the error in a simple way for $z\in\mathbb{C}\setminus\mathbb{R}$.

/* Compute error for the Stirling series for "z" with "k" terms */
define R(z,k){
    local a b ab stirlingterm;
    /* the k-th series term is B_{2k}/(2k(2k-1)z^(2k-1)) */
    stirlingterm = bernoulli(2*k)/( 2*k*(2*k-1)*z^(2*k-1) );
    a = abs( stirlingterm );
    b = cos(.5*arg(z))^(2*k);
    ab = a/b;
    /* a/b might round to zero */
    if( abs(ab) < epsilon() ){
        return digits(1/epsilon());
    }
    return digits(1/ab);
}


# On the numerical evaluation of factorials V

It’s been a while since I started the series and, freely admitted, I forgot to add the last part: the actual implementation of Peter B. Borwein’s algorithm[1] for computing factorials.
Some changes from the past: the language changed from JavaScript to the script language implemented in calc. It is easier to read and can be used as runnable pseudo-code. And it is a bit faster than my JavaScript implementation of a big-integer library.
A short summary of the algorithm goes:

1. Factorize the factorial
2. Exponentiate the primes according to the factorization
3. Compute the product of the above powers

To factorize a factorial, no factorization of the fully computed factorial is needed. Obviously. The number-theoretical background is given at Wikipedia, at Wolfram MathWorld, and elsewhere, too.
There are several methods to do that, but simple trial division is good enough; these are small numbers: the largest possible exponent is that of the smallest prime, which is 2, and by Legendre's formula the exponent $x$ of 2 in the factorization of $n!$ cannot exceed $n$. Proof omitted; ask Adrien-Marie Legendre if you need one.
The time complexity of the computation is O(log n), or, in this case, exactly $\lfloor \log_p n \rfloor$ integer divisions with the prime $p$ the base of the logarithm; consequently the primes larger than $n/2$ all have exponent 1 and only the smaller half can have exponents $x>1$. We might treat these separately.

/* A simple list to keep things...uhm...simple? */
global __CZ__primelist = list();

/* Helper for factorial/primorial: fill list with primes in range a,b */
define __CZ__fill_prime_list(a,b)
{
    local k;
    k = a;
    /*
       nextprime(k) means the next prime even if k is prime. This is
       different from the behaviour of other programs, e.g. PARI/GP.
    */
    if(isprime(k)) k--;
    while(1){
        k = nextprime(k);
        if(k > b) break;
        append(__CZ__primelist, k);
    }
}

/* Helper for factorial: how often prime p divides the factorial of n */
define __CZ__prime_divisors(n,p)
{
    local q, m;
    q = n;
    m = 0;
    if (p > n)   return 0;
    if (p > n/2) return 1;
    while (q >= p) {
        q  = q//p; /* double forward slash is integer division */
        m += q;
    }
    return m;
}


To make it more complete and easier to follow, we get the primes from a generated list; you might use your own implementation of a sieve or similar methods to get the primes.
Now that we have that ready, we can go on, power up, and exponentiate the primes according to the results from above.

/* Helper for factorial: exponentiate p^k */
define __CZ__exponentiate(prime,exponent) {
    local bit result temp;

    result = 1;
    bit = digits(exponent,2);
    /* find the highest set bit */
    while((exponent & (1 << bit)) == 0) bit--;
    for( ; bit >= 0; bit--){
        result = result^2;
        /* square if bit is zero, multiply additionally otherwise */
        if((exponent & (1 << bit)) != 0){
            result *= prime;
        }
    }
    return result;
}


The method here is the so-called binary exponentiation (square-and-multiply), with a bit (ha!) of optimization. A simpler and more readable piece of code is probably

define exponentiate(base,exponent)
{
    local result e b;

    result = 1;
    b      = base;
    e      = exponent;

    while(e != 0){
        if(isodd(e)){
            result = result * b;
        }
        e = e>>1;
        if(e != 0){
            b = b^2;
        }
    }

    return result;
}


Binary exponentiation needs fewer multiplications and, even better, more squarings, which can be done faster than general multiplications.

Apropos faster: we already have the number of times 2 divides $n!$, and because shifting is cheap on modern computers it is advisable to use that fact: drop $2^x$ from the product and just shift by $x$ at the end. The second half of the primes have exponent 1 (one), and it is a better idea to treat them separately as a fraction of two primorials $\frac{P_{(n/2)\to n}}{P_{1\to (n/2)}}$ and compute the product with the help of binary splitting.
Now, let’s get it all together.

    local prime result prime_raised shift cut;

    result = 1;
    prime  = 2;
    /* checks and balances omitted */
    shift = __CZ__prime_divisors(n,prime);

    prime = 3;
    cut   = n//2;
    do {
        prime_raised = __CZ__exponentiate(prime, __CZ__prime_divisors(n,prime));
        result *= prime_raised;
        prime = nextprime(prime);
    } while(prime <= cut);
    result *= primorial( cut, n );
    result <<= shift;
    return result;


What? Oh, sorry, the primorial, yes.

define primorial( a, b )
{
    local C1;
    /*
       Checks and balances omitted, but keep in mind that the number of primes
       you get from functions like nextprime() might be limited. Here the limit
       is 2^32, so the last prime you can get with nextprime() is 4294967291.
    */
    __CZ__fill_prime_list(a,b);
    C1 = size(__CZ__primelist) - 1;
    return __CZ__primorial__lowlevel( 0, C1, 1 );
}


The actual product gets, as promised, calculated with a simple binary splitting algorithm:

/*
   Helper for primorial: do the product with binary splitting
   TODO: do it without the intermediate list
*/
define __CZ__primorial__lowlevel( a, b, p )
{
    local c;
    if( b == a ) return p;
    if( b-a > 1 ){
        c = (b + a) >> 1;
        return   __CZ__primorial__lowlevel( a,   c, __CZ__primelist[a] )
               * __CZ__primorial__lowlevel( c+1, b, __CZ__primelist[b] );
    }
    return __CZ__primelist[a] * __CZ__primelist[b];
}


As we can see, it is the very same algorithm as the binary splitting algorithm for the factorial itself, with the single difference that a,b are not multiplied themselves but act as indices into the ordered(!) list of primes.
This method would lower the time complexity of the complete algorithm if we exponentiated the primes in-place and ran the above algorithm over the whole list instead of the upper half only.
We should grab the chance and make some things more general; for example, the product of the list of primes can easily become the product of any ordered list. Please be careful to check whether it is possible to persuade your language to pass arguments by pointer and not by copying the whole thingamabob every f’ing time.

define product_list( a, b, p, list )
{
    local c;
    if( b == a ) return p;
    if( b-a > 1 ){
        c = (b + a) >> 1;
        return   product_list( a,   c, list[a], list )
               * product_list( c+1, b, list[b], list );
    }
    return list[a] * list[b];
}


It is also possible now to skip the global variable, which is in most cases a bad idea to have; only in rare cases is it a very bad idea.

define factorial2(n){
    local prime result shift cut prime_list k;

    result     = 1;
    prime      = 2;
    prime_list = list();

    shift = __CZ__prime_divisors(n,prime);

    prime = 3;
    cut   = n//2;
    /* first half, with exponentiation */
    do {
        append(prime_list, __CZ__exponentiate(prime, __CZ__prime_divisors(n,prime)));
        prime = nextprime(prime);
    } while(prime <= cut);
    /* second half: primes with exponent 1 (one) only */
    do {
        append(prime_list, prime);
        prime = nextprime(prime);
    } while(prime <= n);

    k = size(prime_list) - 1;

    result   = product_list( 0, k, 1, prime_list );
    result <<= shift;

    return result;
}


One final note: the respective sizes of the factors in the multiplication are not very equal. That means the use of the common methods for fast multiplication (e.g. FFT), which depend on multiplicands of similar size, can be tricky. Simple padding is, uhm, simple, but raises the n in M(n).
Well, a theme for a later posting, I guess 😉
UPDATE just an hour later.
Ok, I’ll do it now and here.
I admit: I withheld the essential trick that makes it faster than everything known before: nested squaring.
If you imagine the whole list of primes with their respective exponents and lay it flat, exponents down, primes up, you’ll get the following:

 2 3 5 7 …
 0 1 1 1 …
 1 0 1 0 …
 … … … …

What we do now is read it row-wise: multiply the result by the respective prime if the bit is set, do nothing otherwise, and when you come to the end of a row, square the result; rinse and repeat.
Now we have only fast squaring and multiplications by very small primes, which can be done in O(n) if the prime is smaller than one limb, which has 32 bits in most cases, if not 64.

define factorial3(n){
    local prime result shift prime_list expo_list k k1 k2;

    result     = 1;
    prime      = 2;
    prime_list = list();
    expo_list  = list();
    /* Checks and balances omitted */
    shift = __CZ__prime_divisors(n,prime);
    prime = 3;
    do {
        append(prime_list, prime);
        append(expo_list,  __CZ__prime_divisors(n,prime));
        prime = nextprime(prime);
    } while(prime <= n);

    /* size of the largest exponent in bits */
    k1 = highbit(expo_list[0]);
    k2 = size(prime_list) - 1;

    for( ; k1 >= 0; k1--){
        /*
           Square the result every row.
           One might skip the first time because 1^2 = 1.
        */
        result *= result;
        for(k = 0; k <= k2; k++) {
            /*
               check every exponent for a set bit and
               multiply the result with the respective prime
            */
            if((expo_list[k] & (1 << k1)) != 0) {
                result *= prime_list[k];
            }
        }
    }
    result <<= shift;
    return result;
}


That was it! Really! Promised!
Well …
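For the curious, the whole chain condenses to a few lines of Python (a sketch, not the calc code; math.factorial serves as referee):

```python
import math

def factorial_borwein(n):
    """n! via prime factorization: Legendre's formula for the exponents,
       nested squaring over the exponent bits, and a final shift for 2^x."""
    if n < 2:
        return 1
    # exponent of the prime p in n! (Legendre's formula)
    def expo(p):
        e, m = 0, n
        while m:
            m //= p
            e += m
        return e
    # odd primes up to n via a small sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    primes = [p for p in range(3, n + 1) if sieve[p]]
    shift = expo(2)                       # powers of two become one shift at the end
    exps  = [expo(p) for p in primes]
    result = 1
    for bit in range(max(exps, default=0).bit_length() - 1, -1, -1):
        result *= result                  # square once per bit row
        for p, e in zip(primes, exps):
            if (e >> bit) & 1:
                result *= p               # multiply in the small primes of this row
    return result << shift
```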

[1] Peter B. Borwein, “On the Complexity of Calculating Factorials”, Journal of Algorithms 6(3), 1985

# The Gamma Function with Spouge’s Approximation

It was promised, by me, here, personally, to implement Spouge’s approximation of
Euler’s Gamma function in the script language offered by David I. Bell’s
arbitrary precision calculator calc.
I did.
For a short overview of the algorithm used, see Spouge’s approximation over at Wikipedia. WordPress’ code colouring has no template for calc scripting, so I just use C++; it’s similar enough.

Spouge’s approximation is:
$\displaystyle{ \Gamma(z+1) = (z+a)^{z+1/2} e^{-(z+a)} \left[ c_0 + \sum_{k=1}^{a-1} \frac{c_k}{z+k} + \varepsilon_a(z) \right] }$
Let’s see what we have here:

• A leading term $(z+a)^{z+1/2} e^{-(z+a)}$, precomputable
• The first coefficient $c_0 = \sqrt{2 \pi}$ is a constant, precomputable
• All further coefficients $c_k=\frac{(-1)^{k-1}}{(k-1)!}(-k+a)^{k-1/2}e^{-k+a}$ are constants, too, precomputable
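Those three pieces can be sketched in a few lines of Python before we switch to calc (plain floats only; the choice $a=12$ is mine and good for roughly ten digits):

```python
import math

def spouge_gamma(z, a=12):
    """Spouge's approximation of Gamma(z) for Re(z) > 0, in plain floats."""
    z -= 1.0                          # the formula above computes Gamma(z+1)
    s = math.sqrt(2.0 * math.pi)      # c_0
    factrl = 1.0                      # carries (-1)^(k-1) * (k-1)!
    for k in range(1, a):
        c_k = (a - k) ** (k - 0.5) * math.exp(a - k) / factrl
        s += c_k / (z + k)
        factrl *= -k
    return (z + a) ** (z + 0.5) * math.exp(-(z + a)) * s
```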

To keep those precious coefficients that we calculated by the sweat of our brows, we put them in a global list for easy recycling.

/*
Cache for the variable independent coefficients in the sum.
*/
global Ck = list();


We calculate the natural logarithm of the function, $\log\left(\Gamma\left(z\right)\right)$—which is different from $\log\Gamma\left(z\right)$!—for the right half of the complex plane, that is, $\Re\left(z\right) > 0$.

define lngammarp(z)
{
    local epsilon accuracy a factrl fact1 sum coeffk n ret;


It seems that in some versions of calc the placement of the left curly bracket is not as free as it should be, but must be where I put it. And as you can see, the variables must be declared before use in the way seen above, i.e. without commas.

Next we calculate the needed accuracy in decimal digits plus the obligatory Angstzuschlag[1]. The function that can be used to read and set the current precision is epsilon(). It prints the absolute value of the smallest representable number if it is called without an argument. Another useful function is digits(), which returns the number of digits of its argument; both together, digits( 1 / epsilon() ), give the number of decimal digits, so
epsilon( 10^(- digits( 1 / epsilon() )+1 ) ) does not change anything. The coefficients are computed with a certain precision, which we want to keep. The precision, I mean.

    epsilon  = epsilon();
    accuracy = digits(int(1/epsilon)) + 3;
    if(size(Ck) == 0){
        append(Ck, accuracy);
    }


The calculation of $a$, as described elsewhere in detail, needs only low precision, so it might be a good idea to reduce the precision to a small number of digits, calculate $a$, and return to the old, higher precision afterwards.

    epsilon(1e-18);
    a = ceil(1.252850440912568095810522965 * accuracy);


Next, switch to some higher precision. But how much is “some”? Well, it is a known problem, the calculation of the precision that is needed, in relation to the argument z, to compute enough correct digits. The main culprit here is the alternating sum. There is not much that can be done for a sum of arbitrary length. There are some ways for sums of small fixed length; see, for example, how the people of the math library at Boost (John Maddock, Paul A. Bristow, Hubert Holin and Xiaogang Zhang, according to the footer at that page) implemented the Gamma functions by way of the original Lanczos approximation in a more tricky way.

But in this simple implementation we just take 50% more than the wanted accuracy for
$\lvert z\rvert < 10^{100}$. With this implementation, that means that at least digits(int(1/epsilon)) digits are correct, starting from the beginning, that is, from the highest digit, not the first decimal.

Please keep in mind that calc was not meant to be a highly versatile and still fast arbitrary precision floating point calculator, but rather a tool for generating and manipulating primes; at least that’s how I understood it.
There are other tools that fit that description better (e.g. PARI/GP, but be aware that lngamma() in PARI/GP has a different branch cut, so our log(gamma(z)) is not necessarily equal to PARI/GP’s lngamma(z) on the complete complex plane. Yes, I just repeated myself), but the gamma function is such a generic tool, it just needs to be in every toolbox. As are the Bessel functions, but I’ll leave that for another time and post.

Where have I been? Uh, yes, setting the right amount of precision. And when we have done that we can start to compute the constants:

    epsilon(epsilon * 10^(-(digits(1/epsilon)//2))); /* integer division */
    fact1  = (ln(z + a) * (z + 0.5)) - (z + a) - ln(z); /* leading term */
    factrl = 1.0;          /* the factorial */
    sum    = sqrt(2*pi()); /* the first coefficient c_0 */


Calc has integer division. Normal floating point division is done with a single slash 3/2 == 1.5 and integer division with two slashes 3//2 == 1.

Before we are able to fill the list with our coefficients, we have to clean up from the work done before, for it could have been done at some lower precision.

    /*
       TODO: delete coefficients if precision is lower than at the last call.
       CLOSED: memory savings not significant, 640k is enough for everybody!
    */
    if(size(Ck) != 0 && Ck[0] < accuracy){
        while(size(Ck) != 1){
            remove(Ck);
        }
        Ck[0] = accuracy; /* assignment, not comparison */
    }


The list Ck is a linked list; remove() removes the last entry in the list. The list does not grow very large and is filled with just integers, so no headaches here, or cursing of the authors.
The computation of the rest of the coefficients is rather simple and nearly straightforward.

    for (n = 1; n < a; n++){
        z++;
        if(size(Ck) == n){
            coeffk = ( ( (a - n)^(n - 0.5) ) * exp(a - n) ) / factrl;
            append(Ck, coeffk);
        } else {
            coeffk = Ck[n];
        }
        sum    += coeffk / z;
        factrl *= -n; /* the "alternating" in "alternating sum" */
    }


That was all! Now put it all together, reset the precision, and return the result.

    ret = ln(sum) + fact1; /* ln() is the natural logarithm base e, log() is base 10! */
    epsilon(epsilon);
    return ret;
}


For the left half of the complex plane, some needed checks and balances, and the obligatory etc., we build a wrapper. Two wrappers, to be exact. The first one is $\log\left(\Gamma\left(z\right)\right)$ for $\{z\mid z\in\mathbf{C}\wedge-z\not\in\mathbf{N}\}$.

define lngamma(z)
{
    local ret eps;
    /* z \in \mathbf{Z} */
    if(isint(z)){
        /* The gamma function has poles at zero and the negative integers */
        if(z <= 0)
            return 1/0;
        else{
            /* may hold up accuracy a bit longer, but YMMV */
            if(z < 20)
                return ln( (z-1)! );
            else
                return lngammarp(z);
        }
    }
    else{
        if(re(z) > 0){
            return lngammarp(z);
        }
        else{


The reflection formula is known to be $\Gamma(1-z) \Gamma(z) = \frac{\pi}{\sin(\pi z)}$.

            eps = epsilon();
            epsilon(eps * 1e-3);
            ret = ln(pi()) - ln( sin(pi() * z) ) - lngammarp(1-z);
            epsilon(eps);
            return ret;
        }
    }
}


A very simple wrapper for the normal Gamma-function:

define gamma(z)
{
    if(isint(z)){
        if(z <= 0)
            return 1/0;
        else{
            /* may hold up accuracy a bit longer, but YMMV */
            if(z < 20)
                return (z-1)! * 1.0;
            else
                return exp(lngammarp(z));
        }
    }
    else{
        if(re(z) > 0){
            return exp(lngammarp(z));
        }
        else{
            return pi() / ( sin(pi() * z) * exp(lngammarp(1-z)) );
        }
    }
}


which can be shortened to

define gamma(z)
{
    return exp(lngamma(z));
}


and we are done!

Next: the logarithmic derivatives of the Gamma function, generalized as the polygamma function. I will implement it by way of the Hurwitz zeta function, the two-variable zeta function, with the exception of the digamma function, which will get a special treatment. No, I will not use the already computed Spouge coefficients for the digamma function; I found a different, faster formula.

[1] “Angstzuschlag” is the colloquial German word for “corrosion allowance”.

# On Doing It Yourself

### When the output of Mathematica® is correct but not of much use

When I do some calculations on the command line, it is in most cases some quick computation involving numbers with different bases, in a precision that is not very high but higher than what the normal calculator offers. Starting up PARI/GP every time is distracting, and PARI/GP does not offer an easy way to enter numbers in different bases; dc and bc are slightly more complicated to use—although they are quite excellent in shell scripts!—and all others are either insufficient in power, usage and/or price, so it is calc that I use regularly.

It has some downsides, too. One of these disadvantages is the lack of functions that are included in almost every other arbitrary precision calculator because calc had been written for a special need originally. See the webpage of calc I linked to above for further information.

An advantage of calc, on the other hand, is that the syntax is quite C-like and allows for easy implementation of the functions and algorithms you need. At least that’s what I hope for 😉

The most common implementation of an approximation of the gamma function, when arbitrary precision is needed, is the variation of James Stirling’s approximation found by John L. Spouge[1]. It is some time ago that I implemented it, and I had to look it up. The first page that came up after asking Google for some help was the entry at LiteratePrograms for an implementation in Mathematica®. No, I really cannot remember what I asked Google to get that result. I found out later that Rosetta Code has a rudimentary but working example implementation in C.

Now, where am I going with such a long-winded introduction, you might ask, and I will tell you: on the LiteratePrograms page I found a short explanation of the algorithm which includes the following gem:

Conservatively omitting the factor $a^{-1/2}$ and rearranging gives us the formula…

They talk about the error and the number of coefficients needed:

(1.0) $\displaystyle{ \epsilon \le a^{-1/2} (2 \pi)^{-(a+1/2)}}$

for which they give the approximation of the parameter $a$ as:

(1.1) $\displaystyle{ a = -\frac{\log \epsilon}{\log 2 \pi}}$

If you enter the inequality into the text field over at wolframalpha.com and tell it to solve for a, you’ll get an exact but unusable result. Changing the inequality to an equality, you’ll get another, simpler but still unusable result involving the Lambert W function. If you want to play with it, here is a JavaScript implementation of the approximation on the positive real line (click on it to unfold it):

Math.lambertW = function(x){
    /*
       On the Lambert W Function (1996)
       by R. M. Corless, G. H. Gonnet, D. E. G. Hare, D. J. Jeffrey, D. E. Knuth
    */
    var a  = Math.log(x);           // L1
    var b  = Math.log(Math.log(x)); // L2
    var c  = a - b + b/a;
    var l1 = (b*(-2 + b))/(2*a*a);
    var l2 = (b*(6 - 9*b + 2*b*b))/(6*a*a*a);
    var l3 = (b*(-12 + 36*b - 22*b*b + 3*b*b*b))/(12*a*a*a*a);
    /* the last term's denominator is 60*L1^5, not 60*L1^6 */
    var l4 = (b*(60 - 300*b + 350*b*b - 125*b*b*b + 12*b*b*b*b))/(60*a*a*a*a*a);

    return c + l1 + l2 + l3 + l4;
};


So I decided to do it by hand. Why? Well, if you are seriously questioning it, read no further, for what I do is not for you.
But I do not use anything not taught in high school, and I even give the needed rules for the logarithms I use here, so you are heartily invited to give it a glimpse. You might even find an error 😉

At first make the inequality an equality, because we are interested in the largest possible error only. Expand all of the exponents to make it more readable. For me at least, but YMMV, as the saying goes.

(2.1) $\displaystyle{ \epsilon = \frac{1}{\sqrt{a}\,(2\pi)^{a+\frac{1}{2}}}}$

The term $\sqrt{a}$ can be omitted here, that is indeed correct. I’ll give a short sketch of a proof at the end.

(2.2) $\displaystyle{ \epsilon = \frac{1}{(2\pi)^{a+\frac{1}{2}}}}$

To get rid of the exponent we have to take the logarithm of both sides. Any logarithm will do.

(2.3) $\displaystyle{ \log\epsilon = \log\left(\frac{1}{(2\pi)^{a+\frac{1}{2}}}\right)}$

Expand it, such that we get back the exponents we dropped so prematurely above.
Rules: $\log (a/b) = \log a - \log b$ and $\log (a^b) = \log(a)\, b$

(2.4) $\displaystyle{ \log\epsilon = \log{1}-\log(2\pi) \left(a+\frac{1}{2}\right)}$

The logarithm of 1 (one) is 0 (zero) and can be omitted for brevity. Now divide both sides by the constant $-\log(2\pi)$.

(2.5) $\displaystyle{ -\frac{\log\epsilon}{\log(2\pi)} = a + \frac{1}{2}}$

Subtract $\frac{1}{2}$ and we are done so far.

(2.6) $\displaystyle{-\frac{\log\epsilon}{\log(2\pi)} - \frac{1}{2} = a}$

Because of the conditions $a \in \mathbb{N}$ together with $a > 2$ we can omit the $\frac{1}{2}$, too.
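A quick numerical check of that conclusion in Python (names and the sample values of $\epsilon$ are mine): the $a$ computed from (2.6) still satisfies the original bound (1.0), $\sqrt{a}$ included:

```python
import math

def spouge_a(eps):
    """a from (2.6): ceil(-log(eps)/log(2*pi) - 1/2), with sqrt(a) dropped."""
    return math.ceil(-math.log(eps) / math.log(2.0 * math.pi) - 0.5)

def spouge_bound(a):
    """The original error bound (1.0): a^(-1/2) * (2*pi)^(-(a+1/2))."""
    return (2.0 * math.pi) ** -(a + 0.5) / math.sqrt(a)
```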

What? Uh, the sketch of the proof for dropping the poor $\sqrt{a}$, yes.
We start again from equation (2.1). To make it more readable we move the square root to the left to get

(3.1) $\displaystyle{ \epsilon\sqrt{a} = \frac{1}{(2\pi)^{a+\frac{1}{2}}}}$

Take the logarithm of both sides and expand.
Rule: $\log(\sqrt[n]{a}) = \frac{\log a}{n}$, which is just a special case of $\log (a^b) = \log(a)\, b$

(3.2) $\displaystyle{ \log\epsilon + \frac{\log a}{2} = -\log (2\pi) (a+\frac{1}{2}) }$

Uh, no, move $\sqrt{a}$ back to the right, sorry.

(3.3) $\displaystyle{ \log\epsilon = -\log (2\pi) \left(a+\frac{1}{2}\right) - \frac{\log a}{2} }$

Divide by $-\log (2\pi)$ as in equation (2.5) and expand

(3.4) $\displaystyle{ -\frac{\log\epsilon}{\log (2\pi)} = a + \frac{1}{2} + \frac{\log a}{2\log(2\pi)} }$

Subtract $\frac{1}{2}$ to get

(3.5) $\displaystyle{ -\frac{\log\epsilon}{\log (2\pi)} -\frac{1}{2} = a + \frac{\log a}{2\log(2\pi)}}$

Now we can easily see by way of the inequality

(3.6) $\displaystyle{ a \le a + \frac{\log a}{2\log(2\pi)}\quad\text{with}\quad a\in\mathbb{N}\, \wedge\, a>2}$

that dropping the term $\sqrt{a}$ only increases the computed $a$, so it had been dropped conservatively but rightfully $\blacksquare$

If you waited the whole time for the calc script: you will find it in the repository if the authors of calc accept it; here, in another posting, otherwise.

[1] Spouge, John L. (1994). “Computation of the Gamma, Digamma, and Trigamma Functions”. SIAM Journal on Numerical Analysis 31(3):931–944.