
That decimal literals represent binary floating point numbers is defined by the language, not the hardware.

I consider it a basic point of language-design literacy that exact decimal literals, absent some other indicator, should not denote an inexact type that isn't designed to represent decimal data.

Infuriatingly, very few languages get that right. But Perl 6 isn't unique in doing so; Scheme has been doing it right for decades.



(Actually, Scheme doesn't quite get this right: decimal literals are inexact by default, and only rationals expressed as divisions of integers are exact. Still, specifying an exact decimal is less cumbersome than in most popular languages -- it's a literal prefix, not a function call.)

e.g.:

  0.2
is inexact, while

  #e0.2
is exact.
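For readers without a Scheme handy, the same exact/inexact split can be sketched in Python, with `fractions.Fraction` standing in for Scheme's exact rationals (an analogy, not Scheme semantics):

```python
from fractions import Fraction

# The literal 0.2 is "inexact": it denotes the nearest binary double,
# which is slightly above 1/5. Converting it to a Fraction exposes the
# actual value stored.
print(Fraction(0.2))     # a dyadic rational near, but not equal to, 1/5

# Fraction('0.2') plays the role of Scheme's #e0.2: the exact value 1/5.
print(Fraction('0.2'))   # 1/5
```

The inexact literal and the exact one compare unequal, which is precisely the distinction the `#e` prefix encodes.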


The use of an annotative '#e' to get an exact number contrasts with Perl 6's choice for the default interpretation of numeric literals: Perl 6 reserves scientific notation for floats, so only an 'e' exponent produces an inexact value:

    200000000000000000000  # exact (integer)  2 * 10 ** 20 
    0.00000000000000000002 # exact (rational) 2 * 10 ** -20
    0.2e-19                # inexact (float)  2 * 10 ** -20
    2e20                   # inexact (float)  2 * 10 ** 20



