# The Gamma Function with Spouge’s Approximation

It was promised, by me, here, personally, to implement Spouge’s approximation of
Euler’s Gamma function in the script language offered by David I. Bell’s
arbitrary precision calculator calc.
I did.
For a short overview of the algorithm used, see the article on Spouge’s approximation over at Wikipedia. WordPress’ code colouring has no template for calc scripting, so I just use C++; it’s similar enough.

Spouge’s approximation is:
$\displaystyle{ \Gamma(z+1) = (z+a)^{z+1/2} e^{-(z+a)} \left[ c_0 + \sum_{k=1}^{a-1} \frac{c_k}{z+k} + \varepsilon_a(z) \right] }$
Let’s see what we have here:

• A leading term $(z+a)^{z+1/2} e^{-(z+a)}$, precomputable
• The first coefficient $c_0 = \sqrt{2 \pi}$ is a constant, precomputable
• All further coefficients $c_k=\frac{(-1)^{k-1}}{(k-1)!}(-k+a)^{k-1/2}e^{-k+a}$ are constants, too, precomputable
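
To see the pieces work together before we feed them to calc, here is a minimal Python sketch of the whole approximation; plain double precision floats and a fixed a = 12 are my assumptions here, whereas the calc version below derives a from the wanted precision.

```python
import math

def spouge_gamma(z, a=12):
    """Gamma(z+1) by Spouge's approximation with a fixed parameter a."""
    # Leading term (z+a)^(z+1/2) * e^-(z+a)
    lead = (z + a) ** (z + 0.5) * math.exp(-(z + a))
    # c_0 = sqrt(2*pi)
    s = math.sqrt(2 * math.pi)
    factrl = 1.0  # holds (-1)^(k-1) * (k-1)!
    for k in range(1, a):
        c_k = (a - k) ** (k - 0.5) * math.exp(a - k) / factrl
        s += c_k / (z + k)
        factrl *= -k
    return lead * s

print(spouge_gamma(4.0))  # close to Gamma(5) = 24
```

With a = 12 this is good for roughly ten significant digits in double precision; more digits need a larger a and real multiprecision arithmetic, which is exactly what calc provides.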

To keep those precious coefficients that we calculated by the sweat of our brows, we put them in a global list for easy recycling.

/*
Cache for the variable independent coefficients in the sum.
*/
global Ck = list();


We calculate the natural logarithm of the function, $\log\left(\Gamma\left(z\right)\right)$ —which is different from the analytically continued $\log\Gamma\left(z\right)$ !—for the right half of the complex plane, that is $\mathfrak{R}\left(z\right) > 0$.

define lngammarp(z)
{
local epsilon accuracy a  factrl fact1 sum coeffk n ret;


It seems that in some versions of calc the placement of the left curly bracket is not as free as it should be: it must be where I put it. And, as you can see, the variables must be declared before use in the way shown above, i.e. without commas.

Next we calculate the needed accuracy in decimal digits, plus the obligatory angstzuschlag[1]. The function that reads and sets the current precision is epsilon(); called without an argument it returns the absolute value of the smallest representable number. Another useful function is digits(), which returns the number of digits of its argument. Both together, digits( 1 / epsilon() ) returns the number of decimal digits, so
epsilon( 10^(- digits( 1 / epsilon() ) + 1 ) ) does not change anything. The coefficients are computed with a certain precision which we want to keep. The precision, I mean.

  epsilon  = epsilon();
  accuracy = digits(int(1/epsilon)) + 3;
  if(size(Ck) == 0){
    append(Ck, accuracy);
  }


The calculation of $a$, as described elsewhere in detail, needs only low precision, so it might be a good idea to reduce 1/epsilon to a small number, calculate $a$ and return to the old, higher precision afterwards.

  epsilon(1e-18);
  a = ceil(1.252850440912568095810522965 * accuracy);
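
The long constant is not arbitrary: if the error is to be smaller than 10^-accuracy, Spouge’s bound gives a ≈ accuracy · log(10)/log(2π), and that quotient is exactly the 1.25285… above. A quick check in Python (the variable names are mine):

```python
import math

# Decimal digits -> Spouge parameter a: the conversion factor is
# log(10)/log(2*pi) = 1.252850440912568...
c = math.log(10) / math.log(2 * math.pi)
accuracy = 33
a = math.ceil(c * accuracy)
print(a)  # 42
```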


Next, switch to some higher precision. But how much is some? Well, it is a known problem: calculating the precision that is needed, in relation to the argument z, to get enough correct digits. The main culprit here is the alternating sum. There is not much that can be done for a sum of arbitrary length. There are some ways for sums of small fixed length; see for example how the people of the math library at Boost (John Maddock, Paul A. Bristow, Hubert Holin and Xiaogang Zhang, according to the footer at that page) implemented the Gamma functions by way of the original Lanczos approximation in a more tricky way.

But in this simple implementation we just take 50% more than the wanted accuracy, for
$abs(z) < 1e100$. With this implementation that means that at least digits(int(1/epsilon)) digits are correct, starting from the beginning, that is from the highest digit, not from the first decimal.

Please keep in mind that calc was not meant to be a highly versatile and still fast arbitrary precision floating point calculator, but rather a tool for generating and manipulating primes; at least that’s how I understood it.
There are other tools that fit that description better (e.g. PARI/GP, but be aware that lngamma() in PARI/GP has a different branch cut, so our log(gamma(z)) is not necessarily equal to PARI/GP’s lngamma(z) on the complete complex plane. Yes, I just repeated myself), but the gamma function is such a generic tool, it just needs to be in every toolbox. As are the Bessel functions, but I’ll leave that for another time and post.

Where have I been? Uh, yes, setting the right amount of precision. And when we have done that we can start to compute the constants:

  epsilon(epsilon*10^(-(digits(1/epsilon)//2))); /* integer division */
  fact1  = (ln(z + a) * (z + 0.5)) - (z + a) - ln(z); /* Leading term */
  factrl = 1.0;           /* The factorial */
  sum    = sqrt(2*pi());  /* The first coefficient c_0 */


Calc has integer division. Normal floating point division is done with a single slash, 3/2 == 1.5, and integer division with two slashes, 3//2 == 1.

Before we are able to fill the list with our coefficients we have to clean up from the work done before, for it could have been done for some lower precision.

  /*
    TODO: delete coefficients if precision is lower than the last call.
    CLOSED: memory savings not significant, 640k is enough for
            everybody!
  */
  if(size(Ck) != 0 && Ck[0] < accuracy){
    while(size(Ck) != 1){
      remove(Ck);
    }
    Ck[0] = accuracy;
  }


The list Ck is a linked list; remove() removes the last entry in the list. The list does not grow very large and is filled with just integers, so no headaches here, or cursing of the authors.
The computation of the rest of the coefficients is rather simple and nearly straightforward.

  for(n = 1; n < a; n++){
    z++;
    if(size(Ck) == n){
      coeffk = ( ( (a - n)^(n - 0.5) ) * exp(a - n) ) / factrl;
      append(Ck, coeffk);
    } else {
      coeffk = Ck[n];
    }
    sum    += coeffk / z;
    factrl *= -n; /* the "alternating" in "alternating sum" */
  }


That was all! Now put it all together, reset the precision, and return the result.

  ret = ln(sum) + fact1; /* ln() is the natural logarithm base e, log() is base 10! */
  epsilon(epsilon);
  return ret;
}


For the left half of the complex plane, some needed checks and balances, and the obligatory etc., we build a wrapper. Two wrappers, to be exact. The first one is $\log\left(\Gamma\left(z\right)\right)$ for $\{z\vert z\in\mathbf{C}\wedge-z\not\in\mathbf{N}\}$.

define lngamma(z)
{
  local ret eps;
  /* z \in \mathbf{Z} */
  if(isint(z)){
    /* The gamma function has poles at zero and the negative integers */
    if(z <= 0)
      return 1/0;
    else{
      /* may hold up accuracy a bit longer, but YMMV */
      if(z < 20)
        return ln( (z-1)! );
      else
        return lngammarp(z);
    }
  }
  else{
    if(re(z) > 0){
      return lngammarp(z);
    }
    else{


The reflection formula is known to be $\Gamma(1-z) \Gamma(z) = \frac{\pi}{\sin(\pi z)}$.

      eps = epsilon();
      epsilon(eps*1e-3);
      ret = ln(pi()) - ln( sin(pi() * z) ) - lngammarp(1-z);
      epsilon(eps);
      return ret;
    }
  }
}
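
The reflection formula used in the last branch is easy to sanity-check outside calc; here is a small Python sketch using the standard library’s math.gamma (my choice here, calc itself plays no role):

```python
import math

# Reflection formula: Gamma(1-z) * Gamma(z) = pi / sin(pi*z),
# valid away from the integers, where both sides blow up
for z in (0.3, -0.7, -2.5):
    lhs = math.gamma(1 - z) * math.gamma(z)
    rhs = math.pi / math.sin(math.pi * z)
    assert abs(lhs - rhs) < 1e-8 * abs(rhs)
print("reflection formula checks out")
```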


A very simple wrapper for the normal Gamma-function:

define gamma(z)
{
  if(isint(z)){
    if(z <= 0)
      return 1/0;
    else{
      /* may hold up accuracy a bit longer, but YMMV */
      if(z < 20)
        return (z-1)! * 1.0;
      else
        return exp(lngammarp(z));
    }
  }
  else{
    if(re(z) > 0){
      return exp(lngammarp(z));
    }
    else{
      return pi()/( sin( pi() * z ) * exp(lngammarp(1-z)) );
    }
  }
}


which can be shortened to

define gamma(z)
{
  return exp(lngamma(z));
}


and we are done!

Next: The logarithmic derivatives of the Gamma-function generalized as the polygamma-function. I will implement it by way of the Hurwitz-zeta function, the two-variable zeta function, with the exception of the digamma function which will get a special treatment. No, I will not use the already computed Spouge coefficients for the digamma function, I found a different formula faster.

[1] “Angstzuschlag” is the colloquial German word for an extra safety margin, such as a “corrosion allowance”.

# Hungry Bumblebees Are Not So Humble

Hungry bumblebee, trying to break open a rhododendron bud.

I found this little critter flying around our purple rhododendron, visiting the nectar providing blossoms, and tried to take a picture of it. It took me some time to persuade my cheap little diggi-snapper to switch to the macro function. After the first nice picture of the bumblebee visiting an open flower and doing what bumblebees do inside an open flower (let me assure you: it looks quite a bit like…uhm…) the insect jumped to the adjacent blossom which was still closed. We botanists call that thing a bud. So it sat there, right on the bud, seemingly doing nothing, taking a break or something. If it had not been for the nice profile I would have waited until it visited the next open flower (I think I’m still bound in my thoughts to the very finite number of pictures available on celluloid instead of the over thousand that fit on the 8GB card tucked in the camera). It was only later, at the large monitor, that I saw the curious behaviour.
With the help of the large monitor it was also possible to identify the species: Bombus terrestris which has a long proboscis of about 8-9 mm (don’t know how much that is in old money) which could be long enough.
I made a handful more pictures which you can see (in full resolution now) at my Flickr account. Please follow the link in the Flickr-plugin.

PS: And at some day in a future far away I will be able to spell “Rhododendron” right the very first time.

# On Doing It Yourself

### When the output of Mathematica® is correct but of not much use

When I do some calculations on the command line, it is in most cases some quick computation involving numbers with different bases, in a precision that is not very high but higher than what the normal calculator offers. Starting up PARI/GP every time is distracting, and PARI/GP does not offer an easy way to enter numbers in different bases; dc and bc are slightly more complicated to use—although they are quite excellent in shell scripts!—and all others are either insufficient in power, usage and/or price, so it is calc that I use regularly.

It has some downsides, too. One of these disadvantages is the lack of functions that are included in almost every other arbitrary precision calculator because calc had been written for a special need originally. See the webpage of calc I linked to above for further information.

An advantage of calc, on the other hand, is that the syntax is quite C-like and allows for easy implementation of the functions and algorithms you need. At least that’s what I hope for 😉

The most common implementation of an approximation of the gamma function when arbitrary precision is needed is the variation of James Stirling’s approximation found by John L. Spouge[1]. It has been some time since I implemented it, and I had to look it up. The first page that came up after asking Google for some help was the entry at Literate Programs with an implementation in Mathematica®. No, I really cannot remember what I asked Google to get that result. I found out later that Rosetta Code has a rudimentary but working example implementation in C.

Now, where am I going with such a long-winded introduction, you might ask, and I will tell you: on the Literate Programs page I found a short explanation of the algorithm which includes the following gem:

Conservatively omitting the factor $a^{-1/2}$ and rearranging gives us the formula…

They talk about the error and the number of coefficients needed:

(1.0) $\displaystyle{ \epsilon \le a^{-1/2} (2 \pi)^{-(a+1/2)}}$

for which they give the approximation of the parameter $a$ as:

(1.1) $\displaystyle{ a = -\frac{\log \epsilon}{\log 2 \pi}}$

If you enter the inequality into the text field over at wolframalpha.com and tell it to solve for a you’ll get an exact but unusable result. Changing the inequality to an equality you’ll get another, simpler but still unusable result involving the Lambert-W function. If you want to play with it, here is a JavaScript implementation of the approximation on the positive real line (click on it to unfold it):

Math.lambertW = function(x){
  /*
    On the Lambert W Function (1996)
    by R. M. Corless, G. H. Gonnet, D. E. G. Hare, D. J. Jeffrey, D. E. Knuth
  */
  var a  = Math.log(x);           // L1
  var b  = Math.log(Math.log(x)); // L2
  var c  = a - b + b/a;
  var l1 = (b*(-2 + b))/(2*a*a);
  var l2 = (b*(6 - 9*b + 2*b*b))/(6*a*a*a);
  var l3 = (b*(-12 + 36*b - 22*b*b + 3*b*b*b))/(12*a*a*a*a);
  var l4 = (b*(60 - 300*b + 350*b*b - 125*b*b*b + 12*b*b*b*b))/(60*a*a*a*a*a);

  return c + l1 + l2 + l3 + l4;
};
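
Transcribed to Python, the same expansion can be checked against the defining equation W·e^W = x; this is an asymptotic series, so I only trust it for large positive x:

```python
import math

def lambert_w(x):
    """Corless et al. (1996) asymptotic expansion of W(x), large x only."""
    a = math.log(x)            # L1
    b = math.log(math.log(x))  # L2
    c = a - b + b / a
    l1 = (b * (-2 + b)) / (2 * a ** 2)
    l2 = (b * (6 - 9 * b + 2 * b ** 2)) / (6 * a ** 3)
    l3 = (b * (-12 + 36 * b - 22 * b ** 2 + 3 * b ** 3)) / (12 * a ** 4)
    l4 = (b * (60 - 300 * b + 350 * b ** 2 - 125 * b ** 3 + 12 * b ** 4)) / (60 * a ** 5)
    return c + l1 + l2 + l3 + l4

x = 1e10
w = lambert_w(x)
print(abs(w * math.exp(w) - x) / x < 1e-3)  # True
```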


So I decided to do it by hand. Why? Well, if you are seriously questioning it read no further, for what I do is not for you.
But I do not use anything not taught in highschool and even give the needed rules for the logarithms I use here, so you are heartily invited to give it a glimpse. You might even find an error 😉

At first, make the inequality an equality, because we are interested in the largest possible error only. Move the negative exponents into the denominator to make it more readable. For me at least, but YMMV, as the saying goes.

(2.1) $\displaystyle{ \epsilon = \frac{1}{\sqrt{a}\,(2\pi)^{a+\frac{1}{2}}}}$

The term $\sqrt{a}$ can be omitted here, that is indeed correct. I’ll give a short sketch of a proof at the end.

(2.2) $\displaystyle{ \epsilon = \frac{1}{(2\pi)^{a+\frac{1}{2}}}}$

To get rid of the exponent we have to take the logarithm of both sides. Any logarithm will do.

(2.3) $\displaystyle{ \log\epsilon = \log\left(\frac{1}{(2\pi)^{a+\frac{1}{2}}}\right)}$

Expand it, such that we get the exponents back we dropped so prematurely above.
Rules: $\log (a/b) = \log a - \log b$ and $\log (a^b) = \log(a)\, b$

(2.4) $\displaystyle{ \log\epsilon = \log{1}-\log(2\pi) \left(a+\frac{1}{2}\right)}$

The logarithm of 1 (one) is 0 (zero) and can be omitted for brevity. Now divide both sides by the constant $-\log(2\pi)$ to move it to the left of the equation.

(2.5) $\displaystyle{ -\frac{\log\epsilon}{\log(2\pi)} = a + \frac{1}{2}}$

Subtract $\frac{1}{2}$ and we are done so far.

(2.6) $\displaystyle{-\frac{\log\epsilon}{\log(2\pi)} - \frac{1}{2} = a}$

Because of the conditions $a \in \mathbb{N}$ together with $a > 2$ we can omit the $\frac{1}{2}$, too.
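
Numbers make the result friendlier. Here is a Python sketch of (2.6) in action, with the $-\frac{1}{2}$ dropped and rounding up instead; the assert puts the omitted $\sqrt{a}$ back into the bound (1.0) and confirms that nothing breaks:

```python
import math

eps = 1e-30  # wanted error
# Equation (2.6), with the -1/2 omitted and rounded up to an integer
a = math.ceil(-math.log(eps) / math.log(2 * math.pi))
print(a)  # 38
# The full bound (1.0), sqrt(a) included, must still undercut eps
bound = a ** -0.5 * (2 * math.pi) ** (-(a + 0.5))
assert bound <= eps
```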

What? Uh, the sketch for the proof for dropping the poor $\sqrt{a}$, yes.
We start again from equation (2.1). To make it more readable we move the square root to the left to get

(3.1) $\displaystyle{ \epsilon\sqrt{a} = \frac{1}{2\pi^{a+\frac{1}{2}}}}$

Take the logarithm of both sides and expand
Rule: $\log(\sqrt[n]{a}) = \frac{\log a}{n}$ which is just a special case of $\log (a^b) = \log(a)\, b$

(3.2) $\displaystyle{ \log\epsilon + \frac{\log a}{2} = -\log (2\pi) (a+\frac{1}{2}) }$

Uh, no, move $\sqrt{a}$ back to the right, sorry.

(3.3) $\displaystyle{ \log\epsilon = -\log (2\pi) \left(a+\frac{1}{2}\right) - \frac{\log a}{2} }$

Divide by $-\log (2\pi)$ as in equation (2.5) and expand

(3.4) $\displaystyle{ -\frac{\log\epsilon}{\log (2\pi)} = \frac{\log a}{2\log(2\pi)} + a+\frac{1}{2} }$

Subtract $\frac{1}{2}$ and rearrange the right side to get

(3.5) $\displaystyle{ -\frac{\log\epsilon}{\log (2\pi)} -\frac{1}{2} = a + \frac{\log a}{2\log(2\pi)}}$

Now we can easily see by way of the inequality

(3.6) $\displaystyle{ a + \frac{\log a}{2\log(2\pi)} \ge a}\quad\text{with}\quad a\in\mathbb{N}\, \wedge\, a>2$

that the term $\sqrt{a}$ had been dropped conservatively but rightfully $\blacksquare$

If you waited the whole time for the calc-script: you will find it in the repository if the authors of calc accept it; here, in another posting, otherwise.

[1] Spouge, John L. (1994). “Computation of the Gamma, Digamma, and Trigamma Functions”. SIAM Journal on Numerical Analysis 31 (3): 931–944.

# Arachnoday

Barn funnel weaver (Tegenaria domestica), male

I do not know if the identification is correct; the picture is of low quality and the spider disappeared before I was able to fetch more details (you need the face and the underside for a more certain identification).

# From Day to Day

The surprising results of

mkdir `date '+%Y-%m-%d'` && cd `date '+%Y-%m-%d'`


at around midnight should have been expected.
How embarrassing!
It is—of course!—a slightly better idea to do something along the lines of:

TMPDATE=`date '+%Y-%m-%d'`; mkdir $TMPDATE && cd $TMPDATE


Or get silly and do it with a full blown script:

##########################################################################
# Create an empty directory named after the current date in the          #
# format: the four digit year, a minus-sign, the numerical month started #
# at one (1), a minus sign, and the numerical day of the month started   #
# with one (1)                                                           #
# Returns 1 for failure and 0 for success                                #
# Place in ~/.bashrc (or some other startup file) to make it work        #
# If you have chosen ~/.bashrc re-read it with                          #
#  . ~/.bashrc                                                           #
# to get a sense of achievement                                          #
# Please note, that there is a period in front of ~/.bashrc!             #
##########################################################################

mkdatedir(){
# Full path to the date(1) program
DATEP="/bin/date"
# Full path to the mkdir(1) program
MKDIRP="/bin/mkdir"
# Full path to the ls(1) program
LSP="/bin/ls"
# Arguments to ls(1)
LSA=" -d"
# Format of the date
DATEF=' +%Y-%m-%d'

#########################
# Actual program logic: #
#########################

# Get the date as a string and keep it in the temporary variable named
# TMPDATE
TMPDATE=$($DATEP $DATEF)
# Uncomment for debugging purposes
# echo "TMPDATE=$TMPDATE"

# Try to make the directory
$MKDIRP $TMPDATE > /dev/null 2>&1
# Save the return value of mkdir(1) in the variable MKDIRR
MKDIRR=$?
# Check for errors
if [ $MKDIRR -eq 1 ]
then
  echo "Something went wrong while trying to create the directory $TMPDATE"
  # Show the culprit if an entry of that name already exists
  TMPLS=$($LSP $LSA $TMPDATE 2>/dev/null)
  if [ "$TMPLS" ]
  then
    echo "$TMPLS already exists"
  fi
  return 1
fi

# Jump right into the directory just successfully built
cd $TMPDATE
# TODO: write a check to check if "cd" failed
# Uncomment for debugging purposes
# echo $($LSP $LSA ../$TMPDATE)

# echo "Success!"
return 0
}


Yes, you’re right: if I say “silly” I mean silly!
So, if you ever find this code, or something similar in weirdness and/or silliness: send it to Alex Papadimoulis’ site if you haven’t done so already.

# Now, that is bold!

Sleeping! In the middle of my herb bed!

That’s just…
I don’t even…

And if you think that it is weird, this cat sleeping in its litter box, you might take a closer look:

Holstein cat, still nappin’

Can you see it? It might be better visible in a shot taken from the other side:

And still nappin’, one won’t believe it

See what it laid its head on? Yes, that’s a stone. Quartz, to be a bit more exact, rolled round by the waters of the river Rhine, buried under some soil when the Rhine meandered away, dug out again, and placed carefully in my herb bed to keep the cats—especially this cat—from using it as their litter box. And now? Now this brute rests its head on it like on a well aired cushion filled with the finest eiderdowns! If it weren’t for the proof in these pictures, one would have a hard (ha ha!) time believing it!