Hacker News
Yet another alternative to floating-point numbers (wordsandbuttons.online)
33 points by okaleniuk on Oct 31, 2020 | 24 comments


Why isn't it more widely used to "simply" use 64-bit integers behind the scenes and then choose some (possibly user-defined?) precision? If you for example treat a 64-bit unsigned integer as a measure of length/distance in nanometers, then you can measure distances up to 18.446.744 kilometers. If you choose micrometers as your smallest unit, then you get to ~17 light hours, roughly twice the diameter of Neptune's orbit (thanks, Wolfram Alpha). Wouldn't a value representation that predictable be better? You are guaranteed precision to however many decimal places you see fit for your specific task, and you don't have to worry about all sorts of pitfalls regarding the internal representation. Algorithms to display these values in human-readable form become trivial as well.
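A minimal sketch of that idea in Python, using plain ints as the 64-bit store (the names are made up for illustration):

```python
# Lengths stored as integer nanometers: addition is exact,
# and display is plain string manipulation.
NM_PER_M = 10**9

def meters_to_nm(m):
    """Round a real-valued length in meters to integer nanometers."""
    return round(m * NM_PER_M)

def nm_to_str(nm):
    """Human-readable form: split off whole meters, pad the fraction."""
    return f"{nm // NM_PER_M}.{nm % NM_PER_M:09d} m"

a = meters_to_nm(1.25)
b = meters_to_nm(0.1)        # 0.1 is inexact as a float, exact as int nm
print(nm_to_str(a + b))      # 1.350000000 m
```

Note that 1.25 + 0.1 in binary floating point would already carry a tiny representation error; here the sum is exact once the inputs are in nanometers.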


The problem comes when you need to mix precisions: now all your multiplications and divisions have to keep track of the decimal point through the computation. This isn't terribly uncommon; it usually happens when your computation needs a much wider range than your input and output. Exponentials and logarithms are bad about this.

So what you do is keep track of the precision manually in your call stack, until you get the bright idea to pass it as data along with your integer. And you don't need 64 bits of precision, because that's a lot; you can live with 52 and use the extra bits to hold your exponent. And lo and behold, you've rediscovered floating point.
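That "pass the exponent along with the integer" step can be sketched in a few lines of Python; the (value, exponent) pair and function names are hypothetical, and I'm using decimal rather than binary scaling for readability:

```python
# A number carried as (value, exp), meaning value * 10**exp.
def mul(a, b):
    """Multiply the integer parts, add the exponents."""
    (va, ea), (vb, eb) = a, b
    return (va * vb, ea + eb)

def normalize(x, digits=15):
    """Drop trailing digits so the value fits a fixed width, bumping the
    exponent -- the lossy step that floating point also performs."""
    v, e = x
    while abs(v) >= 10**digits:
        v //= 10
        e += 1
    return (v, e)

x = (123456789, -9)              # 0.123456789
y = (500, -3)                    # 0.5
print(normalize(mul(x, y)))      # (61728394500, -12)
```

Once you add normalization after every operation and fix the widths, this is floating point in all but hardware support.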


> If you for example treat a 64-bit unsigned integer as a measure of length/distance in nanometers then you can measure distances up to 18.446.744 kilometers.

If your calculations require you to be able to square any such value, the largest you can go without running out of range in your intermediate calculations is cut to ... about 4 meters. God help you if you need a higher power than 2.

Same deal for taking the product of several numbers for an area or volume.

Working in fixed point has its benefits and it has its costs. Not so unlike manual memory management, except that automatic memory management doesn't come with the amazing hardware acceleration that floating point does, which you give up when you go manual.


It's very common in embedded systems, especially where you need safety and reliability, which are very strongly intertwined with predictability of computation errors.

If you look at Ada, a language made for safety-critical systems programming, it even has a very nice abstraction for these:

https://www.adaic.org/resources/add_content/standards/05rm/h...


Also very common in embedded systems that simply do not have any floating-point capabilities at all. For my CS dissertation I had to implement part of an OpenGL pipeline on a VLIW DSP designed for video coding, and all it had was 32-bit integer ALUs. Dealing with dynamic range was pretty much half of the total effort of mapping the reference code, which used floating point, to fixed point with mixed precision.


For the "user defined precision" part it would be convenient to use a few bits to store where the point is located in such a fixed-point number, and that's essentially reinventing floating point numbers ;)


People use fixed point all the time. I think it's just easier to start using floating-point numbers without thinking about it, and it works like 80% of the time.


Fixed point is very common, at least in some areas.


GCC has some support for fixed-point arithmetic as an extension to the C language. Unfortunately, only a few target architectures are supported, not including AMD64. [0] (The linked page doesn't say which architectures are supported, though.)

As carlmr mentioned, the Ada language has built-in support for fixed-point arithmetic types.

[0] https://gcc.gnu.org/onlinedocs/gcc/Fixed-Point.html


You don't need special compiler support for fixed-point math, just left and right shifts to "move the point around".
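For instance, a Q16.16 format (16 integer bits, 16 fractional bits) needs nothing beyond integer multiply and shift. A sketch in Python, with made-up helper names; in C the same thing would be a couple of macros over int32_t/int64_t:

```python
FRAC_BITS = 16                      # Q16.16: value is stored as x * 2**16

def to_fix(x):
    """Encode a real number as a Q16.16 integer."""
    return round(x * (1 << FRAC_BITS))

def fix_mul(a, b):
    """The raw product carries 32 fractional bits; shift 16 back out."""
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Decode for display only."""
    return a / (1 << FRAC_BITS)

a = to_fix(3.5)
b = to_fix(2.0)
print(to_float(fix_mul(a, b)))      # 7.0
```

Addition and subtraction need no shift at all, since both operands share the same scale.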


I worked on interval arithmetic and interval-arithmetic-like alternatives to floating point (I was the first independent implementor of Gustafson's "The End of Error").

They get hairy really fast, and that's why, when it came time to re-rethink floating point, we ultimately settled again on something with a fixed bit width and inexact results.


What do you mean, "they get hairy really fast"?


For starters, what happens to your numerical algorithm when it contains less-than comparisons? Intervals don't form an order.


It's not a total (heh) disaster, since many intervals are comparable with many other intervals. Floating point implicitly "solves" this problem by truncating the implicit interval onto a member of a fixed discrete set. It does so quite cleverly, but ultimately it still silently loses information; sometimes (admittedly rarely) you might want the correct result "I was unable to compute an answer due to lack of input precision" that happens when you try and work out whether [0,2] is less than [1,3].


Forget ordering. The useful equals relation becomes ternary.
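A sketch of what that ternary comparison looks like in practice (intervals as (lo, hi) pairs, names made up); the [0,2]-versus-[1,3] case from above falls into the "can't tell" branch:

```python
def interval_lt(a, b):
    """Less-than on intervals: True, False, or None ("can't tell")."""
    (alo, ahi), (blo, bhi) = a, b
    if ahi < blo:
        return True       # every point of a is below every point of b
    if bhi < alo:
        return False      # every point of a is above every point of b
    return None           # overlap: the inputs lack the precision to decide

print(interval_lt((0, 2), (3, 4)))   # True
print(interval_lt((0, 2), (1, 3)))   # None
```

Every branch in a numerical algorithm then has to decide what to do with that third outcome, which is a big part of the hairiness.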


The Taubin smoothing patent expired in 2014 btw. https://patents.google.com/patent/US5506947A/en

And the better Siemens method/patent DE19929752B4 also.



TLDR: The author has rediscovered interval arithmetic[0] with rational numbers.

[0] https://en.wikipedia.org/wiki/Interval_arithmetic


More than a decade ago I was involved in High-Frequency Trading (HFT), and we did this on a regular basis.


It has a taste of 1D homogeneous coordinates.


If one of the roots is close to 0, then I suppose you can offset the entire equation by setting x'=x+1 and retry.


You really can't: that may well cause some other root to blow up. Rootfinding may be one of the simplest nontrivial problems in numerical analysis, and yet it still goes wrong plenty often.


Why not use something like this? Am I missing something?

bellard.org/libbf


Arbitrary-precision numbers still fall short of real numbers. Most math functions, like sqrt or sin, can return an irrational number even for rational inputs, so you always need to track the error. Even libbf itself does the same for its internal calculations (keyword: Ziv's rounding test).



