Floating Point Expectations

Aryeh M. Weiss aryeh at eddie.mit.edu
Tue May 29 23:21:19 AEST 1990


In article <995 at s8.Morgan.COM> amull at Morgan.COM (Andrew P. Mullhaupt) writes:
>In article <1990May24.132423.3080 at eddie.mit.edu>, aryeh at eddie.mit.edu (Aryeh M. Weiss) writes:
>> 
>> Only numbers that can be expressed exactly in base 2 will produce
>> "good" results.  0.1 and 0.01 are prime examples of numbers that
>> cannot.  (A common textbook example is accumulating 0.01 100 times and
>> not getting 1.0.) 
>
>Not so fast there! There are still computers around with BCD (binary
>coded decimal) floating point, both in hardware and software. There

... Well, it IS a textbook example.  I've seen it.

>are even machines which do not have an ordinary radix, such as the
>Wang 'logarithmic' floating point representation. What you really
>intend to say here is that that floating point numbers which are
>rational fractions with denominators given by a power of the radix
>may escape rounding. 

I have seen BCD FP representations in software and I have also seen
BCD fixed point representations in hardware.  I was not aware of the
Wang format.  I should have qualified my remarks to IEEE-like (base 2)
FP formats.  Certainly there have been as many FP formats as manufacturers.
It is always good practice to avoid assumptions about the underlying
hardware entirely.  The other point is that computations that are
algebraically identical cannot be assumed to be numerically identical.
Algebraic expressions sometimes must be rearranged and evaluated in a
careful order to preserve numerical precision.  A typical problem is
taking the difference of two large but nearly equal numbers, which loses
most of the significant digits in the small result.  This is what often
screws up *unsophisticated* linear algebra algorithms.

Anyway this is certainly getting out of bandwidth.

-- 
More information about the Comp.unix.wizards mailing list