With representing the sum of all primes smaller than , the number is a member of the twin prime . The numbers and are also twin primes: and respectively.

It is unknown whether that information is of any use, but as the saying goes: “You’ll never know!”.



The sum as described in the Wikipedia article has a closed form. It took me some time, but after some twiddling with the sum itself I decided to exponentiate the whole thing to get rid of the logarithm, which left me with a simple product I was able to handle.

With the digit and the base:

This can be expressed as a fraction of two rising factorials (Pochhammer symbol).

The rising factorial can be expressed as a fraction of two Gamma functions

With which we get

By setting we can already see that the result will get smaller the larger gets with the obvious limit

The arguments of the Gamma functions get quite large, which makes the results hard to handle, but we need the logarithm of the product anyway; that means we can use the logarithm of the Gamma function and save one step (we are on the real line, so I used the equality to avoid an even denser fence of parentheses than it already is)

We can simplify a bit with because . Furthermore which can save us a multiplication here and there. So far we got

or

In the special case I used it for, was a power of two, such that none of the exponentiations had to be calculated; the numbers could be *built* in O(1).

Calling the whole thing we can compute the probability that the n^{th} digit is with . For general use the unsimplified version above for .

Beware: the numbers get really large; you will need a somewhat higher precision for the loggamma function if you calculate with a large base. I had 2^{26} as the base and had trouble computing the values for places higher than the 20^{th} in PARI/GP (it said `*** lngamma: division by zero`). It worked in calc with a precision of 300 decimal digits, but it is sloooow. Who wrote that lame gamma function? Who? Me? Myself? Really? Ouch! 😉

The last post in this sequence was about some more general way to work with the prime factorization of the factorials and about the Narayana numbers:

Which expands to

Using and *vice versa* we can shove more things together and get

Get the non-factorials out of the way

With the prime factorization of and with their arithmetic resembling the relation of the arithmetic of numbers and the arithmetic of their logarithms (e.g.: ) we get (dropping the fraction with the non-factorials)

We have to be careful with the term, for it can overflow, so we should divide the prime factorization by and by instead of multiplying everything out and dividing by that all together.

The actual implementation is left as an exercise for the reader.

Or just wait until I do it myself in the next few days 😉

[UPDATE at Mo 29. Jul 17:09:30 CEST 2013] fixed a small bug.

We recently took a quick glimpse (albeit with working code) at the implementation of the gamma function with Spouge’s formula. This code implements the logarithm of the gamma function with the principal branches of the complex logarithm. Another function, named , uses different principal branches that are more useful. You will find that it is this function that is implemented in most programs that offer arbitrary precision.

To go from to is not easy:

It is equivalent in the strip

The imaginary part for negative values is simply a multiple of

The interesting part comes with the rest of the complex plane:

The function returns an integer. To compute the size of the aforementioned integer, the following simple integral has to be solved:

with the unit step function

with the first logarithmic derivative of the Gamma function

and finally , a kind of derivative of the Heaviside step function, which has the following limit

All that information should suffice to handle the integral. Not that it wouldn’t be good training, but others have solved the problem already, namely D. E. G. Hare in “Computing the Principal Branch of log-Gamma”, published in the Journal of Algorithms 25(2):221–236, 1997.

Attached to a simple Gamma function with the Lanczos coefficients

```
define kroneckerdelta(i,j){
    if(isnull(j)){
        if(i==0) return 1;
        else return 0;
    }
    if(i!=j) return 0;
    else return 1;
}

define __CZ__lngamma_lanczos(z){
    local a k g zghalf lanczos;
    mat lanczos[15] = {
         9.9999999999999709182e-1,
         5.7156235665862923516e1,
        -5.9597960355475491248e1,
         1.4136097974741747173e1,
        -4.9191381609762019978e-1,
         3.3994649984811888638e-5,
         4.6523628927048576010e-5,
        -9.8374475304879566105e-5,
         1.5808870322491249322e-4,
        -2.1026444172410489480e-4,
         2.1743961811521265523e-4,
        -1.6431810653676390482e-4,
         8.4418223983852751308e-5,
        -2.6190838401581411237e-5,
         3.6899182659531626821e-6
    };
    g = 607/128;
    z = z - 1;
    a = 0;
    /* sum over all 14 remaining coefficients (a loop to 12 would
       silently drop the last two terms) */
    for(k = 14; k >= 1; k--){
        a += lanczos[k]/(z+k);
    }
    a += lanczos[0];
    zghalf = z + g + .5;
    return ( ln(sqrt(2*pi())) + ln(a) - zghalf ) + (z+.5)*ln( zghalf );
}

define lngamma(z){
    local ret eps flag corr;
    flag = 0;
    if(isint(z)){
        if(z <= 0)
            return newerror("gamma(z): z is a negative integer");
        else{
            /* may hold up accuracy a bit longer, but YMMV */
            if(z < 20) return ln((z-1)!);
            else return __CZ__lngamma_lanczos(z);
        }
    }
    else{
        if(re(z) < 0){
            if(im(z) && im(z) < 0){
                z = conj(z);
                flag = 1;
            }
            ret = ln( pi() / sin(pi()*z) ) - __CZ__lngamma_lanczos(1-z);
            if(!im(z)){
                if(abs(z) <= 1/2)
                    ret = re(ret) - pi()*1i;
                else if( isodd(floor(abs(re(z)))) ){
                    ret = re(ret) + ( ceil(abs(z)) * pi() * 1i );
                }
                else if( iseven(floor(abs(re(z)))) ){ /* < n+1/2 */
                    if(iseven(floor(abs(z)))){
                        ret = re(ret) + ( ceil(abs(z) - 1/2 - 1) * pi() * 1i );
                    }
                    else{
                        ret = re(ret) + ( ceil(abs(z) - 1/2) * pi() * 1i );
                    }
                }
            }
            else{
                corr = ceil( re(z)/2 - 3/4 - kroneckerdelta(im(z))/4 );
                ret = ret + 2*(corr*pi())*1i;
            }
            if(flag == 1) ret = conj(ret);
            return ret;
        }
        return __CZ__lngamma_lanczos(z);
    }
}
```

My problem is that this trick does not work with the Spouge coefficients, at least not for me. I didn’t find out whether it is a bug in my implementation (most probably) or a more fundamental problem. The paper mentioned above talks about the Stirling series. It uses Bernoulli numbers, which are quite expensive to compute, and the series is not convergent, which makes it quite tricky to find the right number of terms. D. E. G. Hare showed in section 4.1 a simple way to evaluate the error for .

```
/* Compute error for Stirling series for "z" with "k" terms */
define R(z,k){
    local a b ab stirlingterm;
    stirlingterm = bernoulli(2*k) / ( 2*k*(2*k-1)*z^(2*k) );
    a  = abs( stirlingterm );
    b  = cos(.5*arg(z))^(2*k);  /* the exponent belongs outside the cosine */
    ab = a/b;
    /* a/b might round to zero */
    if( abs(ab) < epsilon() ){
        return digits(1/epsilon());
    }
    return digits(1/(a/b));
}
```

It’s been a while since I started this series and, freely admitted, I forgot to add the last part: the actual implementation of Peter B. Borwein’s algorithm[1] for computing factorials.

Some changes from the past: the language changed from JavaScript to the script language implemented in Calc. It is easier to read and can be used as *runnable pseudo-code*. And it is a bit faster than my JavaScript implementation of a big-integer library.

A short summary of the algorithm goes:

- Factorize the factorial
- Exponentiate the primes according to the factorization
- Compute the product of the above powers
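For illustration, the three steps can be sketched in compact (and deliberately unoptimized) Python; the sieve and all function names here are my own, not part of the calc code that follows:

```python
def primes_up_to(n):
    """All primes <= n by a simple Eratosthenes sieve."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def prime_exponent(n, p):
    """Exponent of p in n! by Legendre's formula: n//p + n//p^2 + ..."""
    e, q = 0, n // p
    while q:
        e += q
        q //= p
    return e

def factorial_by_factorization(n):
    result = 1
    for p in primes_up_to(n):                # step 1: factorize n!
        result *= p ** prime_exponent(n, p)  # steps 2 and 3
    return result
```

The refinements below (binary exponentiation, binary splitting, shifting out the power of two) are what make this scheme actually fast.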

To factorize a factorial, no factorization of the fully computed factorial is needed, obviously. The number-theoretical background is given at Wikipedia, at Wolfram’s, and elsewhere, too.

There are several methods to do that, but simple trial division is good enough; these are small numbers: the largest possible exponent is the exponent of the smallest prime, which is 2, and since cannot exceed if is a prime factor, cannot exceed . Proof omitted; ask Adrien-Marie Legendre if you need one.

The time complexity of the computation is O(log n) or, in this case, *exactly* integer divisions with the prime as the base of the logarithm; therefore only half of the primes have exponents . We might treat these separately.

```
/* A simple list to keep things...uhm...simple? */
global __CZ__primelist = list();

/* Helper for factorial/primorial: fill list with primes in range a,b */
define __CZ__fill_prime_list(a,b){
    local k;
    k = a;
    /* nextprime(k) means next prime even if k is prime. This is
       different from the behaviour of other programs, e.g. PARI/GP */
    if(isprime(k)) k--;
    while(1){
        k = nextprime(k);
        if(k > b) break;
        append(__CZ__primelist, k);
    }
}

/* Helper for factorial: how often prime p divides the factorial of n */
define __CZ__prime_divisors(n,p){
    local q, m;
    q = n;
    m = 0;
    if (p > n)   return 0;
    if (p > n/2) return 1;
    while (q >= p) {
        q = q // p;  /* double forward slash is integer division */
        m += q;
    }
    return m;
}
```

To make it more complete and easier to follow, we get the primes from a generated list; you might use your own implementation of a sieve or a similar method to get the primes.

Now that we have that ready we can go on, power up, and exponentiate the primes according to the results from above.

```
/* Helper for factorial: exponentiate p^k */
define __CZ__exponentiate(prime,exponent){
    local bit result temp;
    result = 1;
    bit = digits(exponent,2);
    /* strip leading zeros */
    while((exponent & (1 << bit)) == 0) bit--;
    for( ; bit >= 0; bit--){
        result = result^2;
        /* square if bit is zero, multiply otherwise */
        if((exponent & (1 << bit)) != 0){
            result *= prime;
        }
    }
    return result;
}
```

The method here is the so-called binary exponentiation, with a bit (ha!) of optimization. A simpler and more readable piece of code is probably

```
define exponentiate(base,exponent){
    local result e b;
    result = 1;
    b = base;
    e = exponent;
    while(e != 0){
        if(isodd(e)){
            result = result * b;
        }
        e = e >> 1;
        if(e != 0){
            b = b^2;
        }
    }
    return result;
}
```

This method needs fewer multiplications and, even better, more squarings, which can be done faster than multiplications.
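The same right-to-left scheme in Python (a sketch; `binary_pow` is a made-up name):

```python
# Right-to-left binary exponentiation, mirroring the readable calc
# version above: multiply on odd bits, square while bits remain.
def binary_pow(base, exponent):
    result, b, e = 1, base, exponent
    while e != 0:
        if e & 1:           # low bit set: fold the pending base in
            result *= b
        e >>= 1
        if e != 0:          # square only while bits remain
            b *= b
    return result
```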

Apropos faster: we already have the number of times 2 divides , and because shifting is cheap on modern computers it is advisable to use that fact: drop from the product and just shift by at the end. The second half of the primes have exponent 1 (one), and it is a better idea to treat them separately as a fraction of two primorials and compute the product with the help of binary splitting.

Now, let’s get it all together.

```
local prime result prime_raised shift cut;
result = 1;
prime = 2;
/* checks and balances omitted */
shift = __CZ__prime_divisors(n, prime);
prime = 3;
cut = n // 2;
do {
    prime_raised = __CZ__exponentiate(prime, __CZ__prime_divisors(n, prime));
    result *= prime_raised;
    prime = nextprime(prime);
} while(prime <= cut);
result *= primorial(cut, n);
result <<= shift;
return result;
```

What? Oh, sorry, the primorial, yes.

```
define primorial(a, b){
    local C1;
    /* Checks and balances omitted, but keep in mind that the number of
       primes you get from functions like nextprime() might be limited.
       Here the limit is 2^32, so the last prime you can get with
       nextprime() is 4294967291. */
    __CZ__fill_prime_list(a, b);
    C1 = size(__CZ__primelist) - 1;
    return __CZ__primorial__lowlevel(0, C1, 1);
}
```

The actual product gets, as promised, calculated with a simple binary splitting algorithm:

```
/* Helper for primorial: do the product with binary splitting
   TODO: do it without the intermediate list */
define __CZ__primorial__lowlevel(a, b, p){
    local c;
    if(b == a) return p;
    if(b - a > 1){
        c = (b + a) >> 1;
        return   __CZ__primorial__lowlevel(a,     c, __CZ__primelist[a])
               * __CZ__primorial__lowlevel(c + 1, b, __CZ__primelist[b]);
    }
    return __CZ__primelist[a] * __CZ__primelist[b];
}
```

As we can see, it is the very same algorithm as the binary splitting algorithm for the factorial itself, with the single difference that `a,b` are not multiplied themselves but act as indices into the ordered(!) list of primes.
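The same splitting in Python, with an explicit list argument instead of the global prime list (names are mine):

```python
# Binary-splitting product over lst[a..b] (inclusive): split the index
# range in half and multiply the halves, keeping the factors balanced.
def product_range(lst, a, b):
    if a == b:
        return lst[a]
    if b - a == 1:
        return lst[a] * lst[b]
    c = (a + b) >> 1
    return product_range(lst, a, c) * product_range(lst, c + 1, b)
```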

This method would lower the time complexity of the complete algorithm if we exponentiated the primes in place and ran the above algorithm over the whole list instead of the upper half only.

We should grab the chance and make some things more general; for example, the product of the list of primes can easily become the product of any ordered list. Please be careful to check whether it is possible to persuade your language to pass arguments by reference instead of copying the whole thingamabob every f’ing time.

```
define product_list(a, b, p, list){
    local c;
    if(b == a) return p;
    if(b - a > 1){
        c = (b + a) >> 1;
        return   product_list(a,     c, list[a], list)
               * product_list(c + 1, b, list[b], list);
    }
    return list[a] * list[b];
}
```

It is now also possible to skip the global variable, which is in most cases a bad idea to have; only in rare cases is it a *very* bad idea.

```
define factorial2(n){
    local prime result shift cut prime_list k;
    result = 1;
    prime = 2;
    prime_list = list();
    shift = __CZ__prime_divisors(n, prime);
    prime = 3;
    cut = n // 2;
    /* first half, with exponentiation */
    do {
        append(prime_list,
               __CZ__exponentiate(prime, __CZ__prime_divisors(n, prime)));
        prime = nextprime(prime);
    } while(prime <= cut);
    /* second half, primes with exponent one only */
    do {
        append(prime_list, prime);
        prime = nextprime(prime);
    } while(prime <= n);
    k = size(prime_list) - 1;
    result = product_list(0, k, 1, prime_list);
    result <<= shift;
    return result;
}
```

One final note: the respective sizes of the factors in the multiplication are quite unequal. That means that the use of the common methods for fast multiplication (e.g. FFT), which depend on the multiplicands being of roughly equal size, can be tricky. Simple padding is, uhm, simple, but raises the n in M(n).

Well, a theme for a later posting, I guess 😉

**UPDATE** just an hour later.

Ok, I’ll do it now and here.

I admit: I withheld the essential trick that makes it faster than everything known before: nested squaring.

If you imagine the whole list of primes with their respective exponents and lay it flat, exponents down, primes up, you’ll get the following:

| 2 | 3 | 5 | 7 | … |
|---|---|---|---|---|
| 0 | 1 | 1 | 1 | … |
| 1 | 0 | 1 | 0 | … |
| … | … | … | … | … |

What we do now is read it row-wise: multiply the result by the respective prime if the bit is set, do nothing otherwise, and when you come to the end of the row, square the result; rinse and repeat.

Now we have only fast squaring and multiplication by very small primes, which can be done in O(n) if the prime is smaller than one LIMB, which is 32 bits on most machines, if not 64.
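A self-contained Python sketch of the whole nested-squaring scheme (all names are my own, and the trial-division prime search is only for illustration):

```python
# Nested squaring, read row-wise over the bits of the exponents: square
# the running result once per bit row, multiplying in each prime whose
# exponent has that bit set. Powers of two become one final shift.
def factorial_nested_squaring(n):
    def expo(p):
        # exponent of p in n! by Legendre's formula
        e, q = 0, n // p
        while q:
            e += q
            q //= p
        return e

    # odd primes <= n by trial division (fine for a sketch)
    primes = [p for p in range(3, n + 1)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    expos = [expo(p) for p in primes]
    shift = expo(2)
    result = 1
    top = max(expos).bit_length() - 1 if expos else -1
    for bit in range(top, -1, -1):
        result *= result                 # square once per row
        for p, e in zip(primes, expos):
            if (e >> bit) & 1:           # bit set: multiply the prime in
                result *= p
    return result << shift
```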

```
define factorial3(n){
    local prime result shift prime_list expo_list k k1 k2;
    result = 1;
    prime = 2;
    prime_list = list();
    expo_list = list();
    /* Checks and balances omitted */
    shift = __CZ__prime_divisors(n, prime);
    prime = 3;
    do {
        append(prime_list, prime);
        append(expo_list, __CZ__prime_divisors(n, prime));
        prime = nextprime(prime);
    } while(prime <= n);
    /* size of the largest exponent in bits */
    k1 = highbit(expo_list[0]);
    k2 = size(prime_list) - 1;
    for( ; k1 >= 0; k1--){
        /* Square the result every row. One might skip the first
           time because 1^2 = 1. */
        result *= result;
        for(k = 0; k <= k2; k++){
            /* check every exponent for a set bit and multiply the
               result with the respective prime */
            if((expo_list[k] & (1 << k1)) != 0){
                result *= prime_list[k];
            }
        }
    }
    result <<= shift;
    return result;
}
```

That was it! Really! Promised!

Well …

[1] Peter B. Borwein, *On the Complexity of Calculating Factorials*, Journal of Algorithms, 3, 1985.

It was promised, by me, here, personally, to implement Spouge’s approximation of Euler’s Gamma function in the script language offered by David I. Bell’s arbitrary precision calculator calc.

I did.

For a short overview of the algorithm used, see Spouge’s approximation over at Wikipedia. WordPress’ code colouring has no template for calc scripting, so I just use C++; it’s similar enough.

Spouge’s approximation is:

Let’s see what we have here:

- A leading term , precomputable
- The first coefficient is a constant, precomputable
- All further coefficients are constants, too, precomputable
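As a plain-Python rendering of what gets computed (the choice a = 13 and the function name are my own; this works in double precision only, unlike the calc version):

```python
import math

# Spouge's approximation for Gamma(x), x > 0: with z = x - 1,
#   Gamma(z+1) ~ (z+a)^(z+1/2) * exp(-(z+a)) * (c_0 + sum_k c_k/(z+k))
# where c_0 = sqrt(2*pi) and
#   c_k = (-1)^(k-1)/(k-1)! * (a-k)^(k-1/2) * exp(a-k).
def spouge_gamma(x, a=13):
    z = x - 1.0
    s = math.sqrt(2 * math.pi)      # c_0
    factrl = 1.0                    # running (-1)^(k-1) * (k-1)!
    for k in range(1, a):
        c_k = (a - k) ** (k - 0.5) * math.exp(a - k) / factrl
        s += c_k / (z + k)
        factrl *= -k                # the "alternating" in "alternating sum"
    return (z + a) ** (z + 0.5) * math.exp(-(z + a)) * s
```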

To keep those precious coefficients that we calculated by the sweat of our brows, we put them in a global list for easy recycling.

```
/* Cache for the variable independent coefficients in the sum. */
global Ck = list();
```

We calculate the natural logarithm of the function —which is different from ! —for the right half of the complex plane, that is .

```
define lngammarp(z) {
    local epsilon accuracy a factrl fact1 sum coeffk n ret;
```

It seems that in some versions of Calc the placement of the left curly bracket is not as free as it should be, but must be where I put it; and as you can see, the variables must be declared before use in the way seen above, i.e. without commas.

Next we calculate the needed accuracy in decimal digits, plus the obligatory Angstzuschlag[1]. The function that can be used to read and set the current precision is `epsilon()`. Called without an argument, it prints the absolute value of the smallest representable number. Another useful function is `digits()`, which returns the number of digits of its argument; together, `digits( 1 / epsilon() )` returns the number of decimal digits, so `epsilon( 10^(- digits( 1 / epsilon() )+1 ) )` does not change anything. The coefficients are computed with a certain precision which we want to keep. The precision, I mean.

```
epsilon = epsilon();
accuracy = digits(int(1/epsilon)) + 3;
if(size(Ck) == 0){
    append(Ck, accuracy);
}
```

The calculation of , as described elsewhere in detail, needs only low precision, so it might be a good idea to reduce `1/epsilon` to a small number, calculate , and return to the old, higher precision afterwards.

```
epsilon(1e-18);
a = ceil(1.252850440912568095810522965 * accuracy);
```

Next, switch to some higher precision. But how much is some? Well, it is a known problem: calculating the precision that is needed, in relation to the argument z, to compute enough correct digits. The main culprit here is the alternating sum. There is not much that can be done for a sum of arbitrary length. There are some ways for sums of small fixed length; see for example how the people of the math library at Boost (John Maddock, Paul A. Bristow, Hubert Holin and Xiaogang Zhang, according to the footer at that page) implemented the Gamma functions by way of the original Lanczos approximation in a more tricky way.

But in this simple implementation we just take 50% more than the wanted accuracy for . With this implementation that means that at least `digits(int(1/epsilon))` digits are correct, starting from the beginning, that is, from the highest digit, not the first decimal.

Please keep in mind that calc was not meant to be a highly versatile and still fast arbitrary precision floating point calculator but rather a tool for generating and manipulating primes, at least that’s how I understood it.

There are other tools that fit that description better (e.g.: PARI/GP, but be aware that `lngamma()` in PARI/GP has a different branch-cut, so our `log(gamma(z))` is not necessarily equal to PARI/GP’s `lngamma(z)` on the complete complex plane. Yes, I just repeated myself), but the gamma function is such a generic tool, it just needs to be in every tool-box. As are the Bessel functions, but I’ll leave that for another time and post.

Where have I been? Uh, yes, setting the right amount of precision. And when we have done that we can start to compute the constants:

```
epsilon(epsilon*10^(-(digits(1/epsilon)//2)));      /* integer division */
fact1 = (ln(z + a) * (z + 0.5)) - (z + a) - ln(z);  /* Leading term */
factrl = 1.0;                                       /* The factorial */
sum = sqrt(2*pi());                                 /* The first coefficient c_0 */
```

`Calc` has integer division. Normal floating point division is done with a single slash, `3/2 == 1.5`, and integer division with two slashes, `3//2 == 1`.

Before we are able to fill the list with our coefficients we have to clean up from work done before, for it could have been done at some lower precision.

```
/* TODO: delete coefficients if precision is lower than the last call.
   CLOSED: memory savings not significant, 640k is enough for everybody! */
if(size(Ck) != 0 && Ck[0] < accuracy){
    while(size(Ck) != 1){
        remove(Ck);
    }
    Ck[0] = accuracy;  /* assignment, not the no-op comparison "==" */
}
```

The list `Ck` is a linked list; `remove()` removes the last entry in the list. The list does not grow very large and is filled with just integers, so no headaches here or cursing of the authors.

The computation of the rest of the coefficients is rather simple and nearly straightforward.

```
for (n = 1; n < a; n++){
    z++;
    if(size(Ck) == n){
        coeffk = ( ( (a - n)^(n - 0.5) ) * exp(a - n) ) / factrl;
        append(Ck, coeffk);
    }
    else {
        coeffk = Ck[n];
    }
    sum += coeffk / z;
    factrl *= -n;  /* the "alternating" in "alternating sum" */
}
```

That was all! Now put it all together, reset the precision, and return the result.

```
    ret = ln(sum) + fact1;  /* ln() is the natural logarithm base e,
                               log() is base 10! */
    epsilon(epsilon);
    return ret;
}
```

For the left half of the complex plane, some needed checks and balances, and the obligatory etc., we build a wrapper. Two wrappers, to be exact. The first one is for .

```
define lngamma(z) {
    local ret eps;
    /* z \in \mathbf{Z} */
    if(isint(z)){
        /* The gamma function has poles at zero and the negative integers */
        if(z <= 0)
            return 1/0;
        else{
            /* may hold up accuracy a bit longer, but YMMV */
            if(z < 20) return ln( (z-1)! );  /* gamma(z) = (z-1)! */
            else return lngammarp(z);
        }
    }
    else{
        if(re(z) > 0){
            return lngammarp(z);
        }
        else{
```

The reflection formula is known to be .
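A quick numeric check of the reflection formula in Python, using double-precision `math.gamma` (the helper name is hypothetical):

```python
import math

# Reflection formula: Gamma(z) = pi / ( sin(pi*z) * Gamma(1-z) ),
# valid when z is not an integer (sin(pi*z) vanishes at the poles).
def gamma_left(z):
    return math.pi / (math.sin(math.pi * z) * math.gamma(1 - z))
```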

```
            eps = epsilon();
            epsilon(eps*1e-3);
            ret = ln(pi()) - ln( sin(pi()*z) ) - lngammarp(1-z);
            epsilon(eps);
            return ret;
        }
    }
}
```

A very simple wrapper for the normal Gamma-function:

```
define gamma(z) {
    if(isint(z)){
        if(z <= 0)
            return 1/0;
        else{
            /* may hold up accuracy a bit longer, but YMMV */
            if(z < 20) return (z-1)! * 1.0;  /* gamma(z) = (z-1)! */
            else return exp(lngammarp(z));
        }
    }
    else{
        if(re(z) > 0){
            return exp(lngammarp(z));
        }
        else{
            return pi() / ( sin(pi()*z) * exp(lngammarp(1-z)) );
        }
    }
}
```

which can be shortened to

```
define gamma(z) {
    return exp(lngamma(z));
}
```

and we are done!

Next: The logarithmic derivatives of the Gamma-function generalized as the polygamma-function. I will implement it by way of the Hurwitz-zeta function, the two-variable zeta function, with the exception of the digamma function which will get a special treatment. No, I will not use the already computed Spouge coefficients for the digamma function, I found a different formula faster.

[1] “Angstzuschlag” is the colloquial German word for “corrosion allowance”.

When I do some calculations on the command line, it is in most cases some quick computation involving numbers with different bases at a precision that is not very high but higher than what the normal calculator offers. Starting up PARI/GP every time is distracting, and PARI/GP does not offer an easy way to enter numbers in different bases; dc and bc are slightly more complicated to use—although they are quite excellent in shell scripts!—and all others are either insufficient in power, usage and/or price, so it is calc that I use regularly.

It has some downsides, too. One of these disadvantages is the lack of functions that are included in almost every other arbitrary precision calculator, because `calc` had been written for a special need originally. See the webpage of `calc` I linked to above for further information.

An advantage of `calc` is that the syntax is quite C-like and allows for easy implementation of the functions and algorithms you need. At least that’s what I hope for 😉

The most common implementation of an approximation of the gamma function, when arbitrary precision is needed, is the variation of James Stirling’s approximation found by John L. Spouge[1]. It has been some time since I implemented it, and I had to look it up. The first page that came up after asking Google for some help was the Literate Programs entry for an implementation in Mathematica®. No, I really cannot remember what I asked Google to get that result. I found out later that Rosetta Code has a rudimentary but working example implementation in C.

My Google-Foo is *really* lacking…

Now, where am I going with such a long-winded introduction, you might ask, and I will tell you: on the Literate Programs page I found a short explanation of the algorithm which includes the following gem:

Conservatively omitting the factor and rearranging gives us the formula…

They talk about the error and the number of coefficients needed:

(1.0)

for which they give the approximation of the parameter as:

(1.1)

If you enter the inequality into the text field over at wolframalpha.com and tell it to solve for a, you’ll get an exact but unusable result. Changing the inequality to an equality, you’ll get another, simpler, but still unusable result involving the Lambert W function. If you want to play with it, here is a JavaScript implementation of the approximation on the positive real line:

```
Math.lambertW = function(x){
    /* "On the Lambert W Function" (1996) by R. M. Corless, G. H. Gonnet,
       D. E. G. Hare, D. J. Jeffrey, D. E. Knuth,
       Advances in Computational Mathematics */
    var a  = Math.log(x);           // L1
    var b  = Math.log(Math.log(x)); // L2
    var c  = a - b + b/a;
    var l1 = (b*(-2 + b))/(2*a*a);
    var l2 = (b*(6 - 9*b + 2*b*b))/(6*a*a*a);
    var l3 = (b*(-12 + 36*b - 22*b*b + 3*b*b*b))/(12*a*a*a*a);
    /* last denominator is 60*L1^5, following the pattern of the series */
    var l4 = (b*(60 - 300*b + 350*b*b - 125*b*b*b + 12*b*b*b*b))/(60*a*a*a*a*a);
    return c + l1 + l2 + l3 + l4;
};
```
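The same series in Python, handy for a quick check (the function name is mine; the series follows Corless et al. and is only useful for x well above e):

```python
import math

# Asymptotic series for the Lambert W function, W(x) for large x:
# with L1 = ln x and L2 = ln ln x (so this assumes x > e).
def lambert_w(x):
    a = math.log(x)            # L1
    b = math.log(a)            # L2
    w = a - b + b / a
    w += b * (-2 + b) / (2 * a ** 2)
    w += b * (6 - 9 * b + 2 * b ** 2) / (6 * a ** 3)
    w += b * (-12 + 36 * b - 22 * b ** 2 + 3 * b ** 3) / (12 * a ** 4)
    w += b * (60 - 300 * b + 350 * b ** 2 - 125 * b ** 3 + 12 * b ** 4) / (60 * a ** 5)
    return w
```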

So I decided to do it by hand. Why? Well, if you are seriously questioning it read no further, for what I do is not for you.

But I do not use anything not taught in high school, and I even give the needed rules for the logarithms used here, so you are heartily invited to give it a glimpse. You might even find an error 😉

At first make the inequality an equality because we are interested in the largest possible error only. Expand all of the exponents to make it more readable. For me at least but YMMV, as the saying goes.

(2.1)

The term can be omitted here, that is indeed correct. I’ll give a short sketch of a proof at the end.

(2.2)

To get rid of the exponent we have to take the logarithm of both sides. Any logarithm will do.

(2.3)

Expand it, such that we get the exponents back we dropped so prematurely above.

Rules: and

(2.4)

The logarithm of 1 (one) is 0 (zero) and can be omitted for brevity. Now move the constant to the left of the equation.

(2.5)

Subtract and we are done so far.

(2.6)

Because of the conditions together with we can omit the , too.

What? Uh, the sketch for the proof for dropping the poor , yes.

We start again from equation (2.1). To make it more readable we move the square root to the left to get

(3.1)

Take the logarithm of both sides and expand

Rule: which is just a special case of

(3.2)

Uh, no, move back to the right, sorry.

(3.3)

Divide by as in equation (2.5) and expand

(3.4)

Subtract and rearrange the right side to get

(3.5)

Now we can easily see by way of the inequality

(3.6)

that the term had been dropped conservatively but rightfully.

If you waited the whole time for the `calc` script: you will find it in the repository if the authors of `calc` accept it; here, in another posting, otherwise.

[1] Spouge, John L. (1994). *Computation of the Gamma, Digamma, and Trigamma Functions*. SIAM Journal on Numerical Analysis 31 (3): 931–944.