Whatsapp with the kids today?


Cleaned up the work site.

It is quite nice and clean sand; I can use it for the remodelling of our garden pond.


There is absolutely no need to drill into it; it is a waste of time and money (I paid per meter drilled, including the necessary amount of well casing). Also: the lowest part of the well casing is a filter. In most cases it is just the same pipe as the well casing with fine slits (3mm is enough here but your mileage may vary) but there are more expensive solutions, too. Any length of this filter inside the watertight layer is of little use and reduces the area where the groundwater can flow in.

Fifth step: case the well, that is: put a pipe into the hole to keep the hole from collapsing. The drill string should be hollow to allow for it and needs a drill bit that you can screw out from above.

The pipe is something you can buy at, e.g., Amazon — at least that’s what I thought, but despite a lot of hits for “Brunnenrohr” at amazon.de I got nothing at amazon.com for “well casing” except the caps for the top.

Sixth step: clean up the whole mess before you go on.

Next: securing the well with the help of a bag of concrete.


Second step: mark it.

Third step: empty out everything that’s not a hole.

We should have a nice little hole at this point. As this hole is deep enough to reach groundwater level and beyond we can make a lot more out of it, for example: a well.

A nice little how-to, a bit more useful than this one, will follow in my next post.


The method for the radix conversion of floating point numbers described more theoretically here and more practically here has one large drawback: albeit always correct, it is abysmally slow for large magnitudes.

Is there a faster way to do it? Yes, there is!

But you need to have six Bigfloat functions implemented: one to convert small integers, one for addition, one for multiplication, one for exponentiation with an integer exponent (which is nothing but a shortcut for repeated multiplication), division, and finally normalization.

Converting small integers is simple; I implemented it directly in the constructor of my JavaScript implementation of a Bigfloat:

```javascript
function Bigfloat(n) {
    // keep an extra sign and save some typing
    // Addendum: after a lot of code writing the author can conclude that the
    // comment above about the amount of typing is complete and utter...
    this.sign = MP_ZPOS;
    this.mantissa = new Bigint(0);
    this.exponent = 1;
    this.precision = MPF_PRECISION;
    if (arguments.length == 1 && n.isInt() && Math.abs(n) < MP_INT_MAX) {
        this.mantissa = Math.abs(n).toBigint();
        this.mantissa.lShiftInplace(MPF_PRECISION - Math.abs(n).highBit() - 1);
        this.mantissa.sign = (n < 0) ? MP_NEG : MP_ZPOS;
        this.sign = this.mantissa.sign;
        this.exponent = -this.precision + Math.abs(n).highBit() + 1;
    }
}
```

No rounding necessary (but I set the minimum precision to 4 limbs, that is: 104 bits), which makes normalizing simple and straightforward.

Addition is probably the most complicated basic function for a floating point number. The principle is quite simple: align the bits of the mantissas, add them, and normalize the result. The first question you have to answer is how to align the bits: shift the larger one to the left (make it larger) until it fits the smaller one, or shift the smaller one to the right (make it smaller) until it fits the larger one?

The most reasonable one seems to be the second: make the little one smaller until it fits or vanishes completely. It seems reasonable because it does not make much sense to add one millionth to one if the precision can only hold hundredths: `1.00 + 0.000001 = 1.00`.
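The alignment step can be sketched with a simplified model: a number is a pair (mantissa, exponent) with value mantissa·2^exponent. This uses native BigInt, not the Bigint/Bigfloat classes from the post, and shows the "keep all bits" variant that shifts the larger-exponent operand left:

```javascript
// Simplified binary float model: value = mantissa * 2^exponent.
// To add, shift the mantissa belonging to the larger exponent to the
// left by the exponent difference, which aligns both mantissas to the
// smaller exponent; then the mantissas can simply be added.
function alignedAdd(m1, e1, m2, e2) {
    // make (m1, e1) the operand with the larger exponent
    if (e1 < e2) { [m1, e1, m2, e2] = [m2, e2, m1, e1]; }
    const diff = BigInt(e1 - e2);
    // after the shift both numbers share the exponent e2
    return { mantissa: (m1 << diff) + m2, exponent: e2 };
}
```

For example, 3·2^2 + 1·2^0 yields 13·2^0, i.e. 13, with no bits lost; the price is the mantissa growth described in the text.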

But you will lose the addend completely.

The other way around keeps all information, but the amount of information can get very, very large. It is not much with our example above, but the difference can grow up to the full size of the possible exponent. With my implementation, for example, that would be a difference of 2^{53}-1 bits, or roundabout 5,643,501,183,366,341 decimal digits, which should make clear that such a method would be in urgent need of some limitations. But where to put them? If you have an answer to that, feel free to post it or better: publish it and please send me a copy.

It is a good compromise to use the first method and add some guard digits. How many? Well, it depends, sadly. But you already need to do almost every computation on whole limbs so just add one limb (26 bits for me, 32 or even 64 bits for native implementations) and call it a day.

You can do it generally or inside every elementary function, but if you want to do it inside the function, take care to initialize every variable of the function after raising the precision, and even make copies of the function’s arguments. Doing that is in most cases cheaper than having to do, e.g., another iteration of a Newton-like algorithm.

Multiplication, on the other hand, is probably the simplest function: just multiply the mantissas, add the exponents, and normalize the result.
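In the same simplified (mantissa, exponent) model as before, multiplication is a one-liner; the normalization (trimming the product back to the working precision) is omitted here, it would follow the scheme of the normalize() function shown further down:

```javascript
// Multiplication in the simplified model value = mantissa * 2^exponent:
// multiply the mantissas, add the exponents. Normalization omitted.
function simpleMul(a, b) {
    return {
        mantissa: a.mantissa * b.mantissa,
        exponent: a.exponent + b.exponent
    };
}
```

For example, (3·2^2)·(5·2^-1) gives 15·2^1, i.e. 30.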

Exponentiation with an integer exponent is also simple: just port the exponentiation from the Bigint implementation to Bigfloats.

Division gets done by computing the multiplicative inverse, also known as the reciprocal, and multiplying it by the numerator.

Computing the reciprocal gets done with some rounds of a Newton-like iteration. The problem here is the same as for all Newton-like algorithms: finding a good initial value.

If the magnitude of the number is inside something we already have a function implemented for, like JavaScript’s native number representation, it is simple: just use it.

That sounds simpler than it is as I had to find out the hard way.

If the Bigfloat is too large, converting it to a native number will result in zero which itself will result in a division by zero. The first workaround was to replace the zero with a small value but that value might be way off and the Newton rounds might not converge in the given number of rounds.

At the second thought I remembered one of the approximation rules from school (do they still teach them?), which is 1/(m·b^{e}) = (1/m)·b^{-e}. The Bigfloat type works with a mantissa and an exponent, too, so I am able to apply this rule here.

This needs two additional functions: one to convert a Bigfloat to a native number and one for the other direction. I implemented it by operating at the gut level directly, but a simple loop (divide by ten and catch the digit that falls out) would do it, too.
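The Newton iteration for the reciprocal, seeded with the mantissa/exponent rule just mentioned, can be demonstrated with native doubles standing in for Bigfloats (a sketch of the idea, not the author's Bigfloat code):

```javascript
// Newton iteration for the reciprocal of d: x_{k+1} = x_k * (2 - d * x_k).
// The initial value applies the school rule to the split d = m * 2^e:
// 1/d = (1/m) * 2^(-e), where m stays in a range the native division
// handles comfortably even when d itself would overflow or underflow.
function reciprocal(d, rounds = 5) {
    // frexp-style split: m = d / 2^e with |m| in (0.5, 1]
    const e = Math.ceil(Math.log2(Math.abs(d)));
    const m = d / Math.pow(2, e);
    let x = (1 / m) * Math.pow(2, -e); // initial guess from the split
    for (let k = 0; k < rounds; k++) {
        x = x * (2 - d * x); // quadratic convergence near the root
    }
    return x;
}
```

Because the seed is already close to the true reciprocal, a handful of rounds suffices; a seed that is "way off", as described above, is exactly what this construction avoids.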

That leaves the moralizing function… wait… what? OK, I’ll let it stand.

Uhm… so the *normalizing* function is where all the magic happens.

A bit of highly commented code might help, I think:

```javascript
Bigfloat.prototype.normalize = function() {
    var cb, diff;
    var err;
    var c;
    // If the current precision differs from the precision of the
    // number to be normalized set it to the current precision
    // but to the minimum precision at least
    if (this.precision != MPF_PRECISION &&
            this.precision >= MPF_PRECISION_MIN) {
        this.precision = MPF_PRECISION;
    } else if (this.precision < MPF_PRECISION_MIN) {
        this.precision = MPF_PRECISION_MIN;
    }
    // no need to normalize zero because zero is the
    // special value 0e1 as recommended by Prof. D. Knuth.
    if (this.isZero()) {
        return MP_OKAY;
    }
    // size of the mantissa in bits
    // highBit() returns the zero based position of the
    // highest set bit, hence the addition of one
    cb = this.mantissa.highBit() + 1;
    // we have more bits than we need
    if (cb > this.precision) {
        // compute the difference
        diff = cb - this.precision;
        // add the magnitude to the exponent
        this.exponent += diff;
        // a JavaScript specific check for overflow
        if (!this.exponent.isInt()) {
            this.setInf();
            this.sign = MP_ZPOS;
            return MPFE_OVERFLOW;
        }
        // Rounding needs information that gets lost in the next step
        // so get it before. We need to know if the bit diff-1 is set
        c = this.mantissa.dp[Math.floor(diff / MP_DIGIT_BIT)] &
            (1 << (diff % MP_DIGIT_BIT));
        // Shift right now, that is: divide by 2^diff, truncate the result
        this.mantissa.rShiftInplace(diff);
        // if the bit is set, round up
        if (c != 0) {
            this.mantissa.incr();
            return MP_OKAY;
        } else {
            return MP_OKAY;
        }
    }
    // We have less bits than needed
    else if (cb < this.precision) {
        // Make a proper zero.
        // This avoids results like 0e-123 which some
        // people prefer, YMMV
        if (this.mantissa.isZero() == MP_YES) {
            this.exponent = 1;
            return MP_OKAY;
        } else {
            // do the very same like above just in the
            // opposite direction
            diff = this.precision - cb;
            this.exponent -= diff;
            if (!this.exponent.isInt()) {
                this.setInf();
                this.sign = MP_NEG;
                return MPFE_UNDERFLOW;
            }
            this.mantissa.lShiftInplace(diff);
            return MP_OKAY;
        }
    }
    // if(cb == precision) nothing to do
    return MP_OKAY;
};
```

Another function which is not really necessary here but will ease our work is a function to compare two Bigfloats. The principle is simple, too, but there is a nasty little caveat: the two Bigfloats must be of the same precision or you get some really curious errors, so check for it first and normalize before comparing.

The order of the comparisons depends on the cost of the individual checks. Comparing the signs is quite cheap and comparing the mantissas is the most expensive. With these costs in mind I compare the signs first, together with a check whether one operand is zero; if both signs are the same I compare the exponents, and if those are the same, the mantissas.
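This cheapest-check-first ordering can be sketched over the simplified {sign, mantissa, exponent} model (sign in {-1, 1}, mantissa stored positive, both operands assumed normalized to the same precision as the text warns — a sketch, not the post's Bigfloat code):

```javascript
// Compare two normalized numbers {sign, mantissa, exponent}.
// Checks are ordered by cost: signs first, then exponents,
// then the (expensive) mantissa comparison. Returns -1, 0 or 1.
function cmp(a, b) {
    if (a.sign !== b.sign) {
        return a.sign < b.sign ? -1 : 1;
    }
    // same sign: for normalized mantissas a larger exponent
    // means a larger magnitude
    if (a.exponent !== b.exponent) {
        const r = a.exponent < b.exponent ? -1 : 1;
        return a.sign < 0 ? -r : r; // flip for negative numbers
    }
    if (a.mantissa === b.mantissa) {
        return 0;
    }
    const r = a.mantissa < b.mantissa ? -1 : 1;
    return a.sign < 0 ? -r : r;
}
```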

Now to the actual radix conversion. To make things simple I restricted the input to base ten, but the only place where this restriction counts is the string parser; the computation is the same for all bases. A string parser for some more bases is shown in the former algorithm linked at the beginning of this post.

I think highly commented code might be useful here, too. The parser itself is slightly different from the last one, but not by much.

```javascript
String.prototype.toBigfloatFast = function(base) {
    var ten, a, b, len, k, e, c, str, digit, decimal, expo, asign,
        exposign, ret, fdigs, oldeps, table, bigbase;
    // as said above: raise precision at the very beginning of the function
    oldeps = epsilon();
    // TODO: may not be enough for very large exponents,
    // so: compute necessary precision more precisely
    epsilon(oldeps + 10);
    // gathers the digits
    a = new Bigfloat(0);
    len = this.length;
    decimal = -1;
    // flag gets set if an exponent exists
    expo = undefined;
    // sign of significand, also used as a flag
    asign = 0;
    // sign of exponent, also used as a flag
    exposign = 0;
    // number of digits in fraction part
    fdigs = 0;
    // map character to value. ASCII only
    table = [
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        -1, -1, -1, 123, -1, 124, 125, -1, 0, 1, // 123 = "+", 124 = "-", 125 = "."
         2,  3,  4,  5,  6,  7,  8,  9, -1, -1,
        -1, -1, -1, -1, -1, 10, 11, 12, 13, 14,
        15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
        -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
        126, -1, -1, -1, -1, -1, -1, 10, 11, 12, // 126 = uppercase "p"
        13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
        23, 24, 126, -1, -1, -1, -1, -1, -1, -1, // 126 = lowercase "p"
        -1, -1, -1, -1, -1, -1, -1, -1
    ];
    str = this;
    // TODO: checks & balances
    if (arguments.length == 0) {
        base = 10;
    }
    // max. base is 24 because of the character "p" used
    // for the exponent mark
    if (base < 2 || base > 23) {
        throw new RangeError("Base outside of range in String.toBigfloatFast");
    }
    // base is restricted to ten for this example
    base = 10;
    bigbase = base.toBigfloat();
    // Not needed because we use a table to map
    // the values directly
    // str = str.toLowerCase();
    for (k = 0; k < len; k++) {
        // strip unicode
        // TODO: check if it is a Unicode character
        c = str.charCodeAt(k) & 0xff;
        c = table[c];
        if (c < 0) {
            throw new RangeError("Unknown character in String.toBigfloat");
        }
        switch (c) {
        case 123: // plus sign ("+")
        case 124: // minus sign ("-")
            if (typeof expo === "undefined") {
                if (asign != 0) {
                    throw new RangeError(
                        "Second decimal sign found in String.toBigfloat");
                } else {
                    asign = (c == 124) ? -1 : 1;
                }
            } else {
                if (exposign != 0) {
                    throw new RangeError(
                        "Second exponent sign found in String.toBigfloat");
                } else {
                    exposign = (c == 124) ? -1 : 1;
                }
            }
            break;
        case 0: case 1: case 2: case 3: case 4:
        case 5: case 6: case 7: case 8: case 9:
            if (typeof expo === "undefined") {
                // TODO: cache it?
                digit = new Bigfloat(c);
                a = a.mul(bigbase).add(digit);
                if (decimal != -1) {
                    fdigs++;
                }
            } else {
                // only exponents in base ten allowed
                expo = expo * 10 + c;
            }
            break;
        case 125: // decimal mark ('.')
            if (decimal != -1) {
                throw new RangeError(
                    "Second decimal mark found in String.toBigfloat");
            }
            decimal = k;
            break;
        case 14: // exponent mark ('e')
            if (typeof expo !== "undefined") {
                throw new RangeError(
                    "Second exponent mark found in String.toBigfloat");
            }
            expo = 0;
            break;
        default:
            throw new RangeError(
                "Unknown character in String.toBigfloat " + str.charAt(k));
        }
    }
    ret = a;
    // Ignore the exponent if the mantissa is zero.
    // This avoids results like 0e-123 which some people
    // might prefer, so YMMV
    if (ret.isZero()) {
        return ret;
    }
    // We have some fractional digits and need
    // to adjust the magnitude by shifting right
    // the number of fractional digits
    // TODO: for base 2 (two) use shift instead
    if (fdigs != 0) {
        b = bigbase.pow(fdigs);
        ret = ret.div(b);
    }
    // Set the sign
    if (asign == -1) {
        ret = ret.neg();
    }
    // We have an exponent, add it
    if (typeof expo !== "undefined") {
        if (exposign == -1) {
            expo = -expo;
        }
        // shift by the amount set by the exponent
        // the pow() function knows what to do with negative
        // exponents but yours might not in which case you
        // have to check for the sign of the exponent and
        // act accordingly
        b = bigbase.pow(expo);
        ret = ret.mul(b);
    }
    // As said above: restore the precision first, then normalize
    epsilon(oldeps);
    ret.normalize();
    return ret;
};
```

This function is, despite its name, slower than the old algorithm for small input, but the exact definition of *small* is one of the YMMV cases. I have set the cutoffs to a string length of 100 decimal digits and an exponent larger than plus-minus 100.


The file has a lot of comments already, so there is not much to say here, but nevertheless…

The lexer is relatively straightforward. No complicated start conditions (they can get nested very deeply if you do not watch them very carefully) or anything else.

With one exception, already explained in the comments: the numbers you want to parse need to be unsigned. The sign gets added later by a unary function, so a signed number is not parsed as `(-123e-32)` (nodes are enclosed in parentheses) but as `(-)(123e-32)`.

The construction of the number literal is overly complicated, too, but I might want to add other bases later (binary and hexadecimal at least), which can then simply be added without an error-prone rewrite.

The JISON lexer is slightly different from Flex, but it offers the option `%options flex`, which matches the longest match instead of JISON’s first-rule match. But, as a comment in the source of angular-dragdrop suggests: “The safest thing to do is have more important rules before less important rules, which is why . is last”. There are more differences, described in JISON’s “documentation”.

The strings are very simple. No difference between single and double quotes and only the basic escape characters. The high complexity of the construction has the same reason as for the numbers: I want to be able to *easily* make additions (e.g.: Unicode escapes) later.

The whole language is a stripped down version of ECMAScript 5.1 with some differences:

- Variable declaration has the keyword `let` instead of `var`
- Function declaration has the keyword `define` instead of `function`
- It has the ability to include files (highly restricted in the JavaScript version, of course)
- The scope of variables is blockwise (everything between “{” and “}” has its own scope), not functionwise
- A bit of syntactic sugar, e.g.: a matrix array type: `let identity_3 = [1,0,0;0,1,0;0,0,1]`
- It may get types (e.g.: int, float, matrix…) if I can be… what’s the word?
- It may get storage classifiers (e.g.: static, extern, global, local) if I can be… still can’t remember the word, but I had it on the tip of my tongue.
- I am still unsure about objects (the things with the dots, y’know?)
- Everything is stricter now: semicolons at the end of statements are mandatory, as are brackets around every block.
- Function definitions in the lowest scope only and on their own. No function definitions inside functions, for example. This has some disadvantages, but not many for a highly numerically oriented language.
- There is an additional operator `//` (together with `//=`) for explicit integer division
- The power symbol is the double asterisk `**` instead of `^`, which is used for boolean operations (XOR)
- The hash character `#` gets used as the length operator (cardinality). E.g.: `print(#123);` will print either `1` (number of numbers, the default), `3` (decimal digits) or `3` (number of units) depending on the configuration; `a = [1, 2, 3]; print(#a);` will print `3`; `a = [1, 2, 3; 4, 5, 6]; print(#a);` will print `3` or `2,3` depending on configuration. For the last case the form of the matrix is relevant: `a = [1, 2, 3; 4, 5; 6, 7, 8]; print(#a);` will print `3,[3, 2, 3]` (nested array). The exact behaviour of this operator might change; I am still unsure.

The C version will most probably have a `goto` added (too hard to implement one in JavaScript, although not impossible).

Next in this series: printing the AST in a way that allows the result to run as a JavaScript program. (Only real and complex numbers to make it not too complicated for a start)

[1] Honestly: I doubt it as much as you do


Anonymous CS-student in *Sighs of the Pusillanimous*, vol. CLXXXI


The MacLaurin series for the arcsin is a very slowly converging one, especially close to the branch points at -1 and 1.

The MacLaurin series for arccos is the same as for arcsin because of arccos(x) = π/2 - arcsin(x), so it has the very same problem.

The series for the arctan on the other side is simpler to compute.

Another good reason: I have it already implemented.

The arcsin is related to the atan by arcsin(x) = atan(x / sqrt(1 - x^2)), which seems to be of no help at all because the fraction does not get very small and the series for atan is also quite slow near -1 and 1. We can use another relation for this range.
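The arcsin/atan relation — arcsin(x) = atan(x / sqrt(1 - x²)) — can be checked numerically with the native math functions (a sanity check of the identity, not the Bigfloat code):

```javascript
// arcsin(x) expressed through the arctan, valid for |x| < 1:
// arcsin(x) = atan(x / sqrt(1 - x^2))
function asinViaAtan(x) {
    return Math.atan(x / Math.sqrt(1 - x * x));
}
```

Note how the argument of atan blows up as x approaches the branch point at 1, which is exactly the "fraction does not get very small" problem described above.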

Seems not of much use but we can expand the root

The closer x comes to the branchpoint the larger the fraction, but with

the actual value gets close to zero when x goes close to one and so

which is correct and as intended. The point where is at and at this point .

My cutoff points will be for and for .

No code yet, just twiddling with math, sorry

For the real part of the result is always and the imaginary part . This comes from one of the definitions

I will use this definition for complex arguments.


With the digit and the base:

This can be expressed as a fraction of two rising factorials (Pochhammer symbol).

The rising factorial can be expressed as a fraction of two Gamma functions

With which we get

By setting we can already see that the result will get smaller the larger gets with the obvious limit

The arguments of the Gamma functions get quite large which makes the results hard to handle but we already need the logarithm of the product, that means we can use the logarithm of the Gamma function and save one step (we are on the real-line, so I used the equality to avoid an even denser parentheses fence than it already is)

We can simplify a bit with because . Furthermore which can save us a multiplication here and there. So far we got

or

In the special case I used it for, the base was a power of two such that none of the exponentiations had to be calculated; the numbers could be *built* in O(1).

Calling the whole thing we can compute the probability that the n^{th} digit is with . For general use the unsimplified version above for .

Beware: the numbers get really large; you will need a slightly higher amount of precision for the loggamma function if you calculate with a large base. I had 2^{26} as the base and had trouble computing the values for places higher than the 20^{th} in PARI/GP (it said `*** lngamma: division by zero`). It worked in calc with a precision of 300 decimal digits, but it is sloooow. Who wrote that lame gamma function? Who? Me? Myself? Really? Ouch!


The restrictions of JavaScript’s `typeof` operator are manifold. It is so restricted that its only use is in testing for `undefined`. The main culprit causing these restrictions is the loose typing together with the automatic type coercion: `"2" + 2` results in `"22"`, `"2" == 2` is `true`, `t = 2; typeof t` returns `"number"` and `t = new Number(2); typeof t` returns `"object"`, to name just a few. The worst case is probably:

```javascript
// somewhere on top of the code
var a = 2;
// some thousand lines and/or several scripts later
var b = new Number(2);
if (a === b) {
    // CPR: Cardiopulmonary resuscitation
    console.log("Go on with CPR!");
} else {
    console.log("He's dead, Jim!");
}
```

There are several approaches to be found on the net, but not one of them does all I want. And I want *all*, of course.

The case of `t = 2; typeof t` returning `"number"` while `t = new Number(2); typeof t` returns `"object"` can be solved with the `Object.prototype.toString` method applied to the object in question.

It will return a well-defined string, specified in ECMAScript 5.1 in section 15.2.4.2. OK, as “well defined” as a committee can go, but it is sufficient.

```javascript
function xtypeof(obj) {
    var tmp = Object.prototype.toString.call(obj);
    return tmp.slice(8, -1).toLowerCase();
}
```

If you fear a different implementation with a different character encoding (ECMAScript’s basis is 16-bit Unicode), you can go the long way with a regular expression.
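The "long way" could look like the following sketch: instead of slicing fixed positions out of `"[object Foo]"`, pull the class name out with a pattern (the function name is mine, not from the post):

```javascript
// Extract the [[Class]] name from Object.prototype.toString's result
// with a regular expression instead of fixed slice positions.
function xtypeofRegex(obj) {
    var tmp = Object.prototype.toString.call(obj);
    var m = tmp.match(/^\[object\s+(.*?)\]$/);
    return (m ? m[1] : "object").toLowerCase();
}
```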

So the complete JavaScript object zoo is entirely covered by this method! Well, not entirely…one small village…sorry, couldn’t resist.

But all non-native objects are still “object”s, which is not what I wanted for my own prototypes.

The `instanceof` operator works for these, of course, but you have to know them in advance or you get in trouble with the nasty `undefined`; otherwise you could just use a list. That is not a problem: you *can* use a list if you use strings and use `eval()` to make them variables. Catch the error that `obj` is undefined with `try/catch` and call it a day.

Another method is the use of `Object.prototype.constructor`, which returns the constructor, and a regex to get the name. That is more complicated but avoids the use of `eval()` and is not bound to a finite list; it has other caveats, though. See this post at Stack Overflow for more information.

In Firefox version 35.0:

```javascript
function Bla() {
    this.foo = 123;
}
Bla.prototype.baz = function() {
    this.foo = 999;
};
var t = new Bla();

function Zoo() {
    this.critter = 'cheeta';
}
Bla.prototype = new Zoo();
var k = new Bla();

// console.log('xtypeof(k) = ' + xtypeof(k));                  // bla
console.log('k instanceof Bla = ' + (k instanceof Bla));       // true
console.log('k instanceof Zoo = ' + (k instanceof Zoo));       // true
console.log('call k = ' + Object.prototype.toString.call(k));  // [object Object]
console.log('name k = ' + k.constructor.name);                 // Zoo
```

Interesting is the fact that `k` is an instance of both `Bla` and `Zoo`, and `constructor.name` returns `Zoo`. That means we have several choices but not a single fits-all one, at least not a short one.

I have no cases of inheritance indistinguishable by `instanceof` in my code and only a couple of constructors, half a dozen at most if I counted them correctly. So it is either `Object.prototype.toString.call` together with a list (and `eval()` if I need it more general for testing) or the method using `constructor`. The `constructor` is a bit problematic, as it seems, so it is the first method with `Object.prototype.toString.call` and a list.

Here is the resulting code, where I used `typeof` as a short, fast first test. Two problems with `typeof` here: the type of `undefined` is `"undefined"` and the type of `null` is `"object"`. The latter gets caught by `Object.prototype.toString.call` (`obj instanceof Null` results in a ReferenceError in Firefox 35.0); the first one is no problem at all.
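The two corner cases just described, made explicit (a quick demonstration, nothing more):

```javascript
// typeof reports undefined faithfully, but claims null is an object;
// Object.prototype.toString tells null apart correctly.
function typeofQuirks() {
    return {
        u: typeof undefined,                          // "undefined"
        n: typeof null,                               // "object"
        nullTag: Object.prototype.toString.call(null) // "[object Null]"
    };
}
```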

```javascript
function xtypeof(obj) {
    'use strict';
    // try it the traditional way
    var tmp = typeof obj;
    if (tmp !== 'object') {
        return tmp;
    } else {
        // try the toString prototype
        tmp = Object.prototype.toString.call(obj);
        // It is one of the built-ins
        if (tmp !== '[object Object]') {
            return tmp.slice(8, -1).toLowerCase();
        } else {
            // Put your own objects here
            // The key is the lowercased name of the object, the
            // value is the correctly typed name of the object.
            // The value must be a String, hence in quotes.
            var list = {
                bigint: 'Bigint',
                bigfloat: 'Bigfloat',
                bigrational: 'Bigrational',
                complex: 'Complex',
                bla: 'Bla'
            };
            for (var p in list) {
                try {
                    // Yes, kids, eval() is eeeeevil!
                    // Undefined entries will cause a ReferenceError.
                    tmp = eval(list[p]);
                } catch (e) {
                    // Nothing to catch here because the evidence
                    // of non-existence is sufficient for our needs,
                    // so let's...
                    continue;
                }
                if (obj instanceof tmp) {
                    return p;
                }
            }
            return 'object';
        }
    }
}
```

Despite all care taken, this is a slow function, so use it sparingly. If you know the range of objects possible in the input it is most probably more useful and definitely faster to check for them directly. E.g.:

```javascript
Complex.prototype.add = function(x) {
    if (!(x instanceof Complex)) {
        // Assuming there exists a toComplex() method for
        // all possible input. Otherwise a TypeError gets thrown
        x = x.toComplex();
    }
};
```

Or the other way around

```javascript
Bigint.prototype.add = function(x) {
    // Bigfloat must be defined already
    // Use a try/catch construct otherwise
    if (x instanceof Bigfloat) {
        return this.toBigfloat().add(x);
    }
};
```
