
This transcript is very interesting, and in particular I didn't know about Unums (Universal Numbers), which look amazing.

It would be interesting to see more benchmarks (speed / memory) comparing floats with unums.



I've been aware of the unum stuff for a while, but I've never delved deeply into it. Some of my hot takes on it:

* Gustafson's unum project seems to be invariably pitched in a cult-like this-is-the-one-true-way [1] manner which makes it hard to evaluate dispassionately.

* There seem to be several versions of unums, the latest of which is not-even-a-unum anymore but instead a regular floating-point number with a different distribution of values, called 'posits.' That Gustafson seems to have changed the vision so many times suggests to me that early criticisms were in fact correct.

* The gist of conversations I've had with numerical experts is that interval arithmetic generally doesn't work [as a replacement for where we use floating-point today]--intervals tend to blow up to infinity, especially because it's difficult to account for correlated error (which is actually the point of this article); a toy example of the blow-up is sketched below.
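For concreteness, here is a toy C sketch of my own (not from the article) of how naive interval arithmetic loses correlated error: subtraction has to assume its operands are independent, so x - x comes out with nonzero width, and iterating a simple bounded map then balloons the bounds even though the true values stay in [0, 1].

    /* Toy interval arithmetic (illustrative sketch only). */
    #include <stdio.h>

    typedef struct { double lo, hi; } interval;

    static interval isub(interval a, interval b) {
        /* Worst-case bounds: correct for independent operands,
           pessimistic when a and b are the same (correlated) quantity. */
        return (interval){ a.lo - b.hi, a.hi - b.lo };
    }

    static interval imul(interval a, interval b) {
        double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
        interval r = { p[0], p[0] };
        for (int i = 1; i < 4; i++) {
            if (p[i] < r.lo) r.lo = p[i];
            if (p[i] > r.hi) r.hi = p[i];
        }
        return r;
    }

    int main(void) {
        interval x = { 0.4999, 0.5001 };   /* x known to +/- 1e-4 */
        interval d = isub(x, x);
        printf("x - x = [%g, %g], not [0, 0]\n", d.lo, d.hi);

        /* Iterate x <- 3.75 * x * (1 - x): the true values stay in [0, 1],
           but the interval width grows every step because each operation
           treats its inputs as uncorrelated. */
        interval one = { 1.0, 1.0 }, r375 = { 3.75, 3.75 };
        for (int i = 0; i < 10; i++) {
            x = imul(r375, imul(x, isub(one, x)));
            printf("step %2d  width = %g\n", i + 1, x.hi - x.lo);
        }
        return 0;
    }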

A lot of Gustafson's pitch seems to be "you shouldn't need a numerical analyst to write math," which is naturally going to rile up a numerical analyst (like Kahan). But the numerical analyst's retort that you're still liable to get into trouble if you're not savvy enough to know the pitfalls is equally true of Gustafson's proposal; there's no magic bullet that makes problems go away.

From my perspective, the real problem is that we lack the tooling to let programmers discover issues in their numerics code, no matter how it's implemented. Gustafson and Kahan are talking past each other on this problem, with Kahan rightly pointing to all the machinery that IEEE 754 added to enable exactly that kind of diagnosis, Gustafson rightly pointing out that those features are unused and largely unusable, and Kahan (probably rightly) pointing out that unums' promise of a magic bullet for numerics comes with issues of its own.
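As a hypothetical illustration (not from the thread) of the IEEE 754 machinery Kahan points to, C99 exposes the sticky exception flags and directed rounding modes through <fenv.h>. Whether this actually behaves as written depends on the compiler honoring FENV_ACCESS (e.g. gcc may need -frounding-math), which is itself a fair summary of the "largely unusable" complaint.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void) {
        /* 1. Sticky flags: find out after the fact that something overflowed
              or was rounded, without checking every intermediate result. */
        feclearexcept(FE_ALL_EXCEPT);
        volatile double big = 1e308;
        volatile double r = big * 10.0;          /* overflows to +inf */
        if (fetestexcept(FE_OVERFLOW)) printf("overflow was raised, r = %g\n", r);
        if (fetestexcept(FE_INEXACT))  printf("inexact was raised\n");

        /* 2. Directed rounding: bracket a single result from below and above. */
        volatile double a = 1.0, b = 3.0;
        fesetround(FE_DOWNWARD);
        double lo = a / b;
        fesetround(FE_UPWARD);
        double hi = a / b;
        fesetround(FE_TONEAREST);
        printf("1/3 lies in [%.17g, %.17g]\n", lo, hi);
        return 0;
    }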

[1] This is possibly meant just as a joke, but it's the kind of unfunny joke that instead makes me wonder about the character of the person who presents it, like how asshole-as-a-character internet personalities turn out to frequently also be assholes in person.


In 1980 I started working with a straightforward algorithm that had just been computerized: its implementation had been agreed upon as a few pages of 32-bit double-precision floating-point Fortran code.

Up until then, aggregate data had been manually compiled and published in kilos of handbooks over the decades.

This was the first acceptable computer approach, since it was the exact same pre-1980 algorithm, and it was expected to play a part in producing correct 20-digit decimal billable amounts from computer data that had been meticulously rounded to 4 decimal places; that rounding is what took up most of the Fortran code.

Well, I needed to do the same calculations on an 8-bit Radio Shack Pocket Computer. And there was only 512 bytes of user space for my TRS-80 Basic code.

The exact algorithm would fit, but the standard multi-step rounding procedure would not. The floating-point output was often not good to 4 decimal places.

Massaged it iteratively until the algorithm was no longer fully recognizable. Still no good.

Changed from floats to integers. This also saved more memory for workspace.

I was no mathematician, and getting integers to do the whole thing, leaving only a final move of the decimal point, was not easy.

Ended up with a very dissimilar representation of the algorithm, using a number representation specifically geared to the problem at hand, nothing universal like Gustafson's.
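For readers who haven't done this kind of thing, here is a minimal sketch of the general technique in C (scaled "fixed-point" integers with four implied decimal places). It is only illustrative: the names and scale factor are mine, not the original TRS-80 BASIC, and it handles non-negative values only.

    #include <stdint.h>
    #include <stdio.h>

    #define SCALE 10000            /* four implied decimal places */

    /* Multiply two scaled values, rounding to nearest in the last kept digit. */
    static int64_t fx_mul(int64_t a, int64_t b) {
        return (a * b + SCALE / 2) / SCALE;
    }

    /* The decimal point is moved exactly once, at print time. */
    static void fx_print(const char *label, int64_t v) {
        printf("%s%lld.%04lld\n", label,
               (long long)(v / SCALE), (long long)(v % SCALE));
    }

    int main(void) {
        int64_t rate   = 12345;     /* 1.2345    */
        int64_t amount = 10000075;  /* 1000.0075 */
        fx_print("billable = ", fx_mul(rate, amount));   /* 1234.5093 */
        return 0;
    }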

When I read his material, I was intrigued that one of his objectives was likewise to obtain more numerical accuracy from lower-bit computers.




