Floating point accuracy and calculating Pi

Discussion in 'Teaching Programming with B4X' started by emexes, Jul 17, 2019.

  1. emexes

    emexes Well-Known Member Licensed User

    This is from an offline conversation, but might be useful (and/or of interest) more generally. The BASIC code is PowerBasic dialect, but it's a short edit from that to B4X.

    You're not alone, except that mostly it is people doing financial accounting who discover it, when using single-precision floating point to add small numbers to large totals.

    The problem is that 0.1 (ie 1/10th) cannot be exactly represented in binary, just the same as 1/3rd cannot be exactly represented in decimal.
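
    You can see it straight away in B4X (a quick untested sketch - the Sub name is just for illustration): add 0.1 ten times and the total lands just short of 1.
    Code:
    Sub TenthsDemo
        Dim total As Double = 0
        Dim i As Int
        For i = 1 To 10
            total = total + 0.1    'each 0.1 carries a tiny binary rounding error
        Next
        If total = 1 Then
            Log("total is exactly 1")
        Else
            Log("total is not exactly 1, it is " & total)    'this is the branch that runs
        End If
    End Sub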

    Single-precision floats give 24 binary digits of precision, which translates to approximately 7 decimal digits of precision. Double-precision gives 53 binary digits of precision = approximately 15 decimal digits.
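
    Another quick untested sketch, from the other direction: 2^24 = 16777216 is exactly where a Float runs out of binary digits, so adding 1 there gets lost, while a Double doesn't even blink.
    Code:
    Sub PrecisionDemo
        Dim f As Float = 16777216        '2^24, the edge of Float's 24 binary digits
        Dim f2 As Float = f + 1
        If f2 = f Then
            Log("Float dropped the +1 at 16777216 (roughly 7 decimal digits)")    'logs
        End If

        Dim d As Double = 16777216
        Dim d2 As Double = d + 1
        If d2 > d Then
            Log("Double kept the +1 (roughly 15 decimal digits)")    'also logs
        End If
    End Sub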

    Accounting applications can usually get away with using double-precision numbers: 15 decimal digits, minus say 2 guard digits for rounding, leaves 13 usable digits = 11 digits for dollars and 2 digits for cents = good for values up to $100 billion, accurate to the last cent.
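
    The accounting version of the problem, as another untested B4X sketch: post a million one-cent transactions against a ten-million-dollar balance, and the Float total never moves.
    Code:
    Sub LedgerDemo
        Dim singleTotal As Float = 10000000     'ten million dollars
        Dim doubleTotal As Double = 10000000
        Dim i As Int
        For i = 1 To 1000000
            singleTotal = singleTotal + 0.01    'one cent at a time
            doubleTotal = doubleTotal + 0.01
        Next
        Log(singleTotal)    'still 10000000 - every cent fell below Float's ~7 digits
        Log(doubleTotal)    'about 10010000 - right to within a tiny rounding error
    End Sub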

    Some languages have fixed-point or decimal floating-point (aka BCD = binary-coded decimal) numbers, specifically for handling financial values. I remember TurboPascal had them, and more recently Microsoft BASICs have a CURRENCY type, which is a 64-bit integer divided by 10000 to give 4 decimal places (2 for cents, and 2 for just-in-case :) )
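
    B4X doesn't have a CURRENCY type built in, but the same trick is easy enough to do by hand with a Long - a rough untested sketch, with made-up helper names:
    Code:
    Sub CurrencyDemo
        Dim price As Long = ToFixed(19.99)    'held internally as 199900
        Dim total As Long = 0
        Dim i As Int
        For i = 1 To 10
            total = total + price             'plain integer addition - no drift, ever
        Next
        Log(FixedToString(total))             '199.90 exactly
    End Sub

    Sub ToFixed(dollars As Double) As Long
        Return Round(dollars * 10000)         'one rounding, on the way in
    End Sub

    Sub FixedToString(amount As Long) As String    'non-negative amounts, to keep it short
        Dim cents As Long = Round(amount / 100)    'drop the two guard digits
        Dim whole As Long = Floor(cents / 100)
        Dim frac As String = "0" & (cents Mod 100)
        Return whole & "." & frac.SubString(frac.Length - 2)
    End Sub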

    Big-number applications, like your 100-digit pi calculation, are not going to work with double-precision (~15 decimal digits) or even quadruple-precision (~33 decimal digits) numbers, regardless of whether they are binary or decimal. There are math packages around for doing big-number calculations, usually effectively limited only by the available memory. I haven't seen any native ones for B4X, but presumably a Java one could be used. Obviously there is a speed hit, though: multiplying two 10,000-digit numbers means 100 million digit-times-digit multiplications, although any decent library would do it in chunks of digits, eg 32-bit operations mean you can add 9 decimal digits at a time.
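
    The "chunks of digits" idea comes out something like this in B4X - an untested sketch of just the addition, with each Int element holding 9 decimal digits (least significant chunk first) and made-up names:
    Code:
    Sub AddBig(a() As Int, b() As Int) As Int()
        Dim result(a.Length) As Int                'assumes a and b are the same length
        Dim carry As Int = 0
        Dim i As Int
        For i = 0 To a.Length - 1
            Dim sum As Long = a(i) + b(i) + carry  'just under 2e9 at worst - a Long leaves headroom
            result(i) = sum Mod 1000000000
            carry = Floor(sum / 1000000000)        'carry into the next 9-digit chunk
        Next
        Return result                              'a final carry off the end is dropped in this sketch
    End Sub
    Eg adding Array As Int(999999999, 123) and Array As Int(1, 0) carries into the second chunk and gives (0, 124), ie 123,999,999,999 + 1 = 124,000,000,000.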

    Way back when I was a teenager, my school had an Apple II. I ran a program from a magazine to calculate pi, and over a weekend I got two full pages of digits (~8000), of which at least the first 100 were correct when compared to the encyclopaedia.

    I just did a search now and found this readable page (many pages aren't :-/ ) which I translated into BASIC and... what took 60 hours on an Apple II took just under 4 seconds now on my seven-year-old laptop = 54000 times faster = roughly a 1.45x speedup every year, which seems a pretty good match with Moore's Law.

    https://crypto.stanford.edu/pbc/notes/pi/code.html
    Code:
    'int r[2800 + 1];
    'int i, k;
    'int b, d;
    'int c = 0;

    DIM R(0 TO 2800) AS LONG
    DIM I AS LONG, K AS LONG
    DIM B AS LONG, D AS LONG
    DIM C AS LONG    '= 0 by default

    'for (i = 0; i < 2800; i++) {
    '    r[i] = 2000;
    '}

    I = 0
    DO WHILE I < 2800
        R(I) = 2000
        I = I + 1
    LOOP

    'for (k = 2800; k > 0; k -= 14) {
    '    d = 0;
    '
    '    i = k;

    K = 2800
    DO WHILE K > 0
        D = 0

        I = K

    '    for (;;) {
    '        d += r[i] * 10000;
    '        b = 2 * i - 1;

        DO
            D = D + R(I) * 10000
            B = 2 * I - 1

    '        r[i] = d % b;
    '        d /= b;
    '        i--;

            R(I) = D MOD B
            D = D \ B
            I = I - 1

    '        if (i == 0) break;
            IF I = 0 THEN
                EXIT DO
            END IF

    '        d *= i;
            D = D * I

    '    }
        LOOP

    '    printf("%.4d", c + d / 10000);
        PRINT RIGHT$("0000" + LTRIM$(STR$(C + D \ 10000)), 4);

    '    c = d % 10000;
        C = D MOD 10000

    '}
        K = K - 14
    LOOP
    Code:
    31415926535897932384626433832795028841971693993751058209749445923078164062862089
    98628034825342117067982148086513282306647093844609550582231725359408128481117450
    28410270193852110555964462294895493038196442881097566593344612847564823378678316
    52712019091456485669234603486104543266482133936072602491412737245870066063155881
    74881520920962829254091715364367892590360011330530548820466521384146951941511609
    43305727036575959195309218611738193261179310511854807446237996274956735188575272
    48912279381830119491298336733624406566430860213949463952247371907021798609437027
    70539217176293176752384674818467669405132000568127145263560827785771342757789609
    17363717872146844090122495343014654958537105079227968925892354201995611212902196
    08640344181598136297747713099605187072113499999983729780499510597317328160963185
    If you used floating-point numbers with the above code, I'd expect you'd run into the same problem. That code calculates pi in 4-digit chunks, and there are intermediate results that need 8 digits of precision, which a single-precision floating point number can't provide. Double-precision might fix the issue for your original code.

    For the above code, LONGs (32-bit integers) give 9 decimal digits of precision, cf the 8 required, so... no worries :)
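
    And for what it's worth, the "short edit from that to B4X" comes out roughly like this (untested sketch, assuming a B4J or B4A project; B4X Ints are 32-bit like the LONGs above, and the integer divisions become Floor of an ordinary division):
    Code:
    Sub CalcPi
        Dim r(2801) As Int                 'indices 0 to 2800, all start at 0
        Dim sb As StringBuilder
        sb.Initialize
        Dim c As Int = 0
        Dim i As Int
        For i = 0 To 2799
            r(i) = 2000
        Next
        Dim k As Int = 2800
        Do While k > 0
            Dim d As Int = 0
            i = k
            Do While True
                d = d + r(i) * 10000
                Dim b As Int = 2 * i - 1
                r(i) = d Mod b
                d = Floor(d / b)           'integer division
                i = i - 1
                If i = 0 Then Exit
                d = d * i
            Loop
            Dim chunk As Int = c + Floor(d / 10000)
            Dim s As String = "000" & chunk
            sb.Append(s.SubString(s.Length - 4))    'zero-padded 4-digit group
            c = d Mod 10000
            k = k - 14
        Loop
        Log(sb.ToString)                   '800 digits, starting 3141 5926 5358 ...
    End Sub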
     