Questions about NCEG

Jim Kingdon kingdon at pogo.ai.mit.edu
Wed May 30 15:24:49 AEST 1990


In comp.std.c, O'Keefe writes:

    (1) If you stick to the letter of the IEEE 754 and IEEE 854
    standards, conversion of numeric literals from decimal to binary
    (or possibly, in the case of 854, to internal decimal) is a *run*
    *time* operation,

Looking at 754-1985 I don't see how that interpretation makes sense.
Section 4.2 says "An implementation shall also [in addition to round
to nearest] provide three user-selectable directed rounding modes".
"User", as defined in this standard, can be the compiler, rather than
the program you are writing.  This is made explicit in section 4.3,
"the user, which may be a high-level language compiler".  So when
section 5.6 says "the procedures used for binary <-> decimal
conversion should give the same results regardless of whether the
conversion is performed during language translation (interpretation,
compilation, or assembly) or during program execution (run-time and
interactive input/output)" it doesn't say what rounding mode the
compiler has to use.  754 just says that the compiler needs to be able
to choose a rounding mode.  Presumably the compiler will either just
pick one (and if you're lucky document which one it is), or it will
provide some sort of directive to control it ("#pragma rounding_mode",
"__rounding(round_toward_infinity)", etc).

This is based on 754; I have no idea whether 854 is similar.
