I am not sure that I understand what you're saying. Given a charitable reading of the presentation, the author seems to be saying that his standard explicitly specifies ranges of values, as opposed to IEEE floats. It did not seem like he was saying that he could explicitly represent an infinite number of exact, distinct values using a finite number of bits. Low-level data structures are not my area of expertise, so did I misunderstand what he was saying, or what you are saying?
Under that reading, he's saying nothing: just represent the interval [-inf, inf] as "0" and call it a day.
So assuming that he's saying anything at all, he's at least being imprecise, and the actual claim should be something like "represent any dyadic interval using a finite number of bits".
He's a bit showy about the format. Wish he would just put out a technical paper.
Anyway, I guess his motivation might be "you can represent any real number (with finite bits, and therefore finite precision)". In the book, he presents an interesting case: little 4-bit versions of the Unum that can represent:
Putting together a pair of them, the book outlines simple interval arithmetic (where pairs of numbers can represent any interval between numbers on the line above, and single numbers can represent some of the closed intervals as above). The reason these are kind of neat is that using the standard Unum algorithms (without any fudging), you can get "correct" (albeit terribly imprecise) results for many real-number computations. Questions like "is there a number satisfying a numerical predicate in some range?", or the value of a trigonometric or exponential expression, will come out "correct" (though you might get an answer like (-inf, inf)).

If things work out as well as he claims (and demonstrates for some cases), then you can basically do the math to figure out how precise you want to be and choose an appropriate specialization of the format - or take advantage of the format's flexibility and do computations starting at a low precision, increasing it until you are satisfied. In particular, it's kind of cool that you can do computations with little 8-bit intervals and possibly circumvent more expensive computations (e.g. if you test whether a property holds anywhere in the Unum range and it doesn't, then assuming you (and Gustafson) have done the math right, you can avoid more expensive checks at increased precision).
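The "rule things out cheaply first" idea can be sketched with plain interval arithmetic (a toy model, not the actual Unum encoding; the function f(x) = x*x - 2 and the helper names are made up for illustration):

```python
from fractions import Fraction as F

def eval_interval(lo, hi):
    """Interval extension of f(x) = x*x - 2 on [lo, hi], assuming
    0 <= lo <= hi (x*x is monotone there, so endpoints suffice)."""
    return (lo * lo - 2, hi * hi - 2)

def root_possible(lo, hi):
    """Can f have a zero anywhere in [lo, hi]?
    Interval test: 0 must lie inside [f(lo), f(hi)]."""
    f_lo, f_hi = eval_interval(lo, hi)
    return f_lo <= 0 <= f_hi

# Coarse check first: if the cheap test already rules the region out,
# no finer (more expensive) computation is needed there.
print(root_possible(F(2), F(3)))  # False: x*x - 2 > 0 everywhere on [2, 3]
print(root_possible(F(1), F(2)))  # True: sqrt(2) lies in [1, 2]
```

The point is only the shape of the argument: a "no" at coarse precision is already conclusive, while a "yes" tells you where to spend the refined effort.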
Anyway, point is, the presentations are kind of flashy and misleading - and you're right, you can't represent any real number (just finitely representable dyadic intervals)... but the format itself _does_ seem promising...
If I add (1/2, 1) to (1, 2), then I should get (3/2,3), which is not representable with any of the above values.
So what is the result and in what sense is it correct?
As I understand it, it's just "floating" interval arithmetic, so as in interval arithmetic you should understand a representation [a,b] as "the number is contained in the interval [a,b]". In other words, interval arithmetic with carefully assigned floating-point limits, such that they don't violate the interval-arithmetic guarantees and stay as tight as possible.
Not sure how it would go; I think it depends on whether you maintain the precision or increase it after the operation. I guess 2-bit: (1/2,1)+(1,2)=(1,inf) (?), and 4-bit (2 exponent and 2 mantissa?): (1/2,1)+(1,2)=(3/2,3). Maybe there's a built-in check to compare how short your interval can get and stop at a reasonable precision (in this case there's no point going past 4 bits).
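That precision dependence can be illustrated with a toy outward-rounding addition over two made-up endpoint lattices (ordinary interval arithmetic, not Gustafson's actual encoding; the lattices are assumptions for the sake of the example):

```python
import math

def outward_add(a, b, lattice):
    """Add two closed intervals (lo, hi), rounding each endpoint outward
    to the nearest value in `lattice` so the true sum is never lost."""
    lo = a[0] + b[0]
    hi = a[1] + b[1]
    lo_r = max(v for v in lattice if v <= lo)  # round lower bound down
    hi_r = min(v for v in lattice if v >= hi)  # round upper bound up
    return (lo_r, hi_r)

coarse = [-math.inf, -2, -1, 0, 1, 2, math.inf]   # integer endpoints only
fine = coarse + [-3, -1.5, -0.5, 0.5, 1.5, 3]     # add halves and +/-3

print(outward_add((0.5, 1), (1, 2), coarse))  # (1, inf): too coarse
print(outward_add((0.5, 1), (1, 2), fine))    # (1.5, 3): exact at this precision
```

The exact sum (3/2, 3) survives either way; only the slack around it depends on which endpoints are representable.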
Honestly I find it quite elegant, and it is at least trying to solve a big issue with bandwidth limitations. I do wish the exposition were clearer and more straightforward.
Ah, that makes sense. The 4-bit representations do not suffice by themselves, but imply the support of some or all of the smaller representations as well, and I guess you always need a representation for [-inf,inf].
First of all, a correction: the 4-bit unums can represent any of
-inf, (-inf, -2), -2, (-2, -1), (-1, 0), 0, (0, 1), (1, 2), 2, (2, inf), inf, and both quiet and signaling NaN. They do not represent ±1/2 or use ±1/2 as an endpoint.
If you add, say, 1 to (1,2), you get (2, inf). The open interval means it does not CONTAIN infinity, but ends at a finite value too large to represent. If the largest positive real you can represent is 2, then (2, inf) is a mathematically correct answer.
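A minimal sketch of that open-endpoint bookkeeping, using the exact-value list above (the tuple representation is an assumption for illustration, not the real bit-level format):

```python
import math

# The exact values representable by the 4-bit unum, per the list above;
# everything between two neighbours is an open interval.
EXACT = [-math.inf, -2.0, -1.0, 0.0, 1.0, 2.0, math.inf]

def add(a, b):
    """Add intervals given as (lo, hi, lo_open, hi_open). An endpoint
    that falls between representable values rounds outward and the
    resulting bound becomes open."""
    lo, hi = a[0] + b[0], a[1] + b[1]
    lo_open, hi_open = a[2] or b[2], a[3] or b[3]
    if lo not in EXACT:
        lo, lo_open = max(v for v in EXACT if v < lo), True
    if hi not in EXACT:
        hi, hi_open = min(v for v in EXACT if v > hi), True
    return (lo, hi, lo_open, hi_open)

one = (1.0, 1.0, False, False)    # exact 1
one_two = (1.0, 2.0, True, True)  # the open interval (1, 2)
print(add(one, one_two))          # (2.0, inf, True, True), i.e. (2, inf)
```

The exact sum is (2, 3), but 3 is not representable here, so the upper bound rounds out to an open inf: exactly the "ends at a finite value too large to represent" reading above.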
D'oh, missed the NaNs and wrote out a list without referring to the book... then checked the length of the list to make sure it had 8 things. Silly me!