Large Numbers Get Treated in Strange Ways

Hi!

Running KM v11.0.2 (on macOS Ventura 13.2) I seem to have stumbled across a weird bug in how calculations treat large numbers with more than fifteen digits.

I first stumbled across this behaviour using the Decimal Format token, but as (overly thoroughly) showcased in the macro/image below, it seems to occur with all forms of calculations, though not always in the same way. Or at least I am not able to grasp the pattern in the behaviour showcased below.

Calculation Bug? Macro (v11.0.2)

Calculation Bug?.kmmacros (11 KB)

Macro Image Showcasing Weird Results

This is of course kind of a rare one to encounter, but I thought I should report it.

A couple more examples that, at least to me, make this one seem even stranger:

Calculation Bug- a couple more examples.kmmacros (4.1 KB)

Macro Image, also showcasing that this behaviour is not only an occurrence in the variable preview, but also ends up as the actual result of the calculation.

Interesting. I am not the architect of KM, but I'll bet what's happening is this: a "numeric variable" is stored simply as a string. But there are certain situations where (inside the KM Engine, or the KM Editor) a number is actively converted, for short-term purposes, to a binary/real number. And when that conversion happens, the application doing the conversion (Engine or Editor) loses precision.

But I believe that in either case the number itself doesn't change. You may think it's changing because it looks different when displayed. But there are ways to check whether the value has really changed or not.

In your second screenshot, you are setting a variable as a result of a calculation. That's where the conversion happens, inside the code that evaluates the number. But consider this:

[Image: macro with a Display Text action showing the variable and an If Then Else action that compares it and plays a sound]

In this case the Display Text action does NOT display 123456789123456789, but the If action does indeed play the sound. That would seem impossible, but it's because any time an arithmetic operation occurs, there must be a rounding error at some point.

The last two green boxes in your first screenshot are explained by the fact that there is no rounding causing an error. You are staying within the precision limit of the internal numeric formats that KM uses.

I didn't look at all your examples, but I think this explains all of them. Both the Editor and the Engine have to round things whenever they convert your number to a numeric format.
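You can poke at the same effect outside of Keyboard Maestro. This is not KM's actual internals, just the same IEEE 754 doubles demonstrated in Python:

```python
# An 18-digit integer needs ~57 bits, but a double's significand has only 53.
original = 123456789123456789

as_double = float(original)       # conversion to binary64 rounds the value
print(int(as_double))             # 123456789123456784 -- not what went in

# Both sides of a comparison go through the same rounding, so an If that
# compares the text token against the number can still match:
print(float("123456789123456789") == as_double)   # True
```

Both operands being rounded identically is why the comparison can succeed even while the displayed value looks wrong.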


Read the last line of this post.

IEEE 754 double-precision binary floating-point format: binary64

Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having:

The sign bit determines the sign of the number (including when this number is zero, which is signed).

The exponent field is an 11-bit unsigned integer from 0 to 2047, in biased form: an exponent value of 1023 represents the actual zero. Exponents range from −1022 to +1023 because exponents of −1023 (all 0s) and +1024 (all 1s) are reserved for special numbers.

The 53-bit significand precision gives from 15 to 17 significant decimal digits precision (2^−53 ≈ 1.11 × 10^−16).
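For the curious, here is a small Python sketch of my own (not from the quoted article) that unpacks those three fields from a live double:

```python
import struct

def binary64_fields(x: float):
    # Reinterpret the double's 8 bytes as one raw 64-bit unsigned integer.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                          # 1 sign bit
    exponent = ((bits >> 52) & 0x7FF) - 1023   # 11-bit field, bias removed
    fraction = bits & ((1 << 52) - 1)          # 52 explicit significand bits
    return sign, exponent, fraction

print(binary64_fields(1.0))    # (0, 0, 0): +1.0 x 2^0
print(binary64_fields(-2.5))   # (1, 1, 1125899906842624): -1.25 x 2^1
```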


It definitely looks like you are on to something here @Airy! But it is one thing that large numbers lose precision, as numbers often do when they get displayed in standard form, as a multiple of a power of 10 for human readability. That computers do a variant of this makes sense, but I still feel like there is a bug when I, as a human, get exposed to these numbers.

And it is one thing when a number loses precision, as when 9999999999999999 (~1*10^16) gets rounded to 10000000000000000 (1*10^16), but quite another when 123456789000000000000 (~1.23*10^20) gets turned into 9223372036854775807 (~9.22*10^18), as happens in the orange example from my second post. As a human I also find it odd that 999999999999990000 gets displayed as 999999999999990016, while 999999999999000000 gets displayed as 999999999999000064.
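For what it's worth, those particular endings fall straight out of the gap between adjacent doubles at that magnitude. A quick Python illustration (not KM's code, Python 3.9+ for math.ulp):

```python
import math

for n in (999999999999990000, 999999999999000000):
    d = float(n)                  # nearest representable double
    print(n, "->", int(d), "gap:", math.ulp(d))

# 999999999999990000 -> 999999999999990016 gap: 128.0
# 999999999999000000 -> 999999999999000064 gap: 128.0
```

At around 10^18 the representable doubles are spaced 128 apart, so each input snaps to its nearest multiple-of-128 neighbour, which is where the 16 and 64 endings come from.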

Then there is of course also the question of whether the numbers get altered, or are just displayed as altered, which you raise with your If Then, @Airy. But still, all of this seems strange to me.

It'd be very interesting to hear your thoughts about these observations, @peternlewis

For the full picture, chapter and verse:

IEEE754 Floating Point – Bartosz Ciechanowski

and

Float Exposed


As noted, Keyboard Maestro uses double to store values internally, and double has about 15 digits of precision; it also uses 64-bit integers for integer calculations (about 19 digits total).

So if you try to do anything that expects more than 15 digits of precision, or, in cases where integers are involved, more than 19 digits total, you are going to be disappointed and/or surprised by the results.

So in the case of 12345678900000000000, that exceeds the size of a 64-bit integer, and since you are doing integer maths here, the result is an overflow.

Anything that exceeds the precision or digit limits will potentially result in incorrect numbers, and this is not treated as an error (it basically never is in computing).
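A sketch of the two usual "silently wrong" outcomes, in Python (whose native ints never overflow, so both have to be simulated; which one an application picks is its own implementation detail):

```python
INT64_MAX = 2**63 - 1          # 9223372036854775807
n = 123456789000000000000      # 21 digits: does not fit in 64 bits
print(n > INT64_MAX)           # True

clamped = min(n, INT64_MAX)                            # saturate at the maximum
low64 = n % 2**64                                      # keep only the low 64 bits
wrapped = low64 - 2**64 if low64 >= 2**63 else low64   # two's-complement wrap

print(clamped)   # 9223372036854775807
print(wrapped)   # -5670419515966861312
```

Note that the 9223372036854775807 seen in the earlier examples is exactly the 64-bit maximum, i.e. it matches the clamped outcome.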

Try it in Numbers, for example: make a cell =12345678912345678912.


Thank you, all three of you, for identifying the cause here, and for providing insight! All of this is very new to me, and the provided links about binary floating point have been an interesting read, although it is a lot to take in, and much of it passes way over my head's current elevation. I'd never have thought that the "bug", in reality, was more of a glitch in the matrix!

Until this experience I'd lulled myself into the idea that the world of computers is a precise, tidy and sturdy landscape. Only for it now to crumble completely. That any decimal number that is not a simple multiple of a power of two can be stored with any precision at all now feels like a pure miracle/coincidence. But I guess my despair is only a product of still being in a kind of Kierkegaardian second stage, haha, where I have not yet re-realized the beauty and the order of it all!

Luckily, 15 decimal digits of precision is pretty solid! And I have not yet met a real-world task that gets limited by this inherent "imprecision" (I only stumbled upon the outer bounds of the matrix here by hammering the 9 key to create lots of digits, to check whether a number-sorting macro of mine added the correct number of leading zeroes).

Lastly, me being a bit of a curious fellow: does anyone know why %Calculate%123456789000000000000% turns into, or displays as, 123456788999999995904, while %Dec1%123456789000000000000% (or %Dec100% for that matter) ends up as 9223372036854775807? (%Dec100% of course displayed with a bunch of leading zeroes.)
Is it mostly that with numbers this size everything is up in the air, precision-wise? Or is there some interesting logic to be found behind this?

There are standard ways of representing arbitrarily big integers (some programming languages support a special bigInt type), but flexible – indefinitely stretchable – representations of numeric values are computationally a little more expensive and less efficient to manage than fixed 64-bit chunks, so they are sensibly not the default.
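Python happens to be one of the languages whose native int is such a bigint, which makes the trade-off easy to see in a minimal sketch:

```python
# Arbitrary precision: exact, but a value may span many machine words.
exact = 123456789000000000000 ** 3
print(exact)                 # every digit correct

# Fixed-size binary64: one 8-byte word, but only ~15-17 digits survive.
print(int(float(exact)))     # right magnitude, wrong trailing digits
```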

Given that estimates of the count of grains of sand on this planet are only in the range of 7.5 x 10^18, it's probably fair to say that 64-bit representations are more than enough for most of the numbers you will need to deal with in the average macro.

(If not, consider reaching for things like Mathematica, Matlab, or Haskell)


Absolutely, I am more than happy with all the bits we have! This is just me having had my curiosity raised by meeting this aspect of how computers compute with numbers for the very first time. bigInt and arbitrary-precision arithmetic are of course also all new to me. Interesting to read a little here now about how large numbers can be tackled!

Not that I'll likely ever need it, but thanks also for pointing me in the direction of programs that might handle even larger numbers in different ways, and with more precision. To me it was definitely eye-opening to try Peter's suggestion of running these same large numbers in Numbers.app.

I think my curiosity/reaction here is not really about wanting or needing the precision. It is mostly a matter of me, a creature that can type or scribble out a sixteen-digit number in mere seconds, having difficulty accepting that a computer struggles with the same number. I'm not being critical about it, only realising how little I understand, and how easy it is to type out digits that within a second have reached a number too large to really comprehend.

Of course, most real use cases that need lots of digits stored as integers can take advantage of a computer's perfect ability to keep numbers stored as characters/text, which in any case is probably the better analogue to my human ability to scribble/type out lots of digits / "large numbers".

And not dissimilar to choosing the right resolution (in a given context) for digital photographs.

The first remains as a double and so has 15 digits of precision.

The second is dealt with as an integer (the Dec tokens and the like (Hex, Oct, Bin) only work with integers). As such it is in places limited to a 64-bit integer, roughly 19 digits in total. Your number exceeds that, and so it "wraps" and you get arbitrarily wrong numbers.
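Putting the two code paths side by side in Python, as a sketch of the same arithmetic rather than KM's actual source (whether the integer path clamps or wraps is KM's implementation detail, but 9223372036854775807 is exactly the largest signed 64-bit value):

```python
n = 123456789000000000000

# %Calculate% path: the value stays a double, so it is rounded to the
# nearest representable binary64 value.
print(int(float(n)))         # 123456788999999995904 (off by 4096)

# %Dec% path: the value is forced into a signed 64-bit integer first.
INT64_MAX = 2**63 - 1
print(min(n, INT64_MAX))     # 9223372036854775807, pinned at the maximum
```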
