64 bits, COBOL, and 18 digits

Dick Dunn rcd at ico.isc.com
Tue Feb 13 07:11:32 AEST 1990


ejp at bohra.cpg.oz (Esmond Pitt) writes:
> markh at attctc.Dallas.TX.US (Mark Harrison) writes:
> >...storing numeric values with 18-digit precision, ala COBOL and
> >the IBM mainframe.  This can be accomplished in 64 bits, and is probably
> >the reason "they" chose 18 digits as their maximum precision.
> 
> According to a fellow who had been on the original IBM project in the
> fifties, the 18 digits came about because of using BCD (4-bit decimal)
> representation, in two 36-bit words.

(Hmmm...what about the sign?)
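For what it's worth, both bits of arithmetic are easy to check.  Here's a
quick C sketch (mine, purely illustrative -- nothing from COBOL or the IBM
project):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* Largest 18-digit decimal number: 10^18 - 1. */
        int64_t max18 = 999999999999999999LL;

        /* INT64_MAX is 2^63 - 1, comfortably above 10^18 - 1, so 18
           decimal digits plus a sign fit in one 64-bit binary word. */
        printf("10^18 - 1 = %" PRId64 "\n", max18);
        printf("INT64_MAX = %" PRId64 "\n", INT64_MAX);

        /* Packed BCD spends 4 bits per digit: 18 digits * 4 bits = 72
           bits, i.e. exactly two 36-bit words with no nibble left over --
           hence the question about where the sign goes. */
        printf("bits for 18 BCD digits = %d\n", 18 * 4);
        return 0;
    }

It prints 999999999999999999 against an INT64_MAX of 9223372036854775807.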

This is quite a digression, but...
The question of "why 18 digits?" came up at the History of Programming
Languages conference some years back.  I can't remember whether it was
Grace Hopper or Jean Sammet who answered, but the real answer was
specifically NON-machine-oriented:  It was large enough to deal with
anticipated needs (?the national debt?) and it didn't favor any particular
hardware.  That is, it was larger than the "convenient" sizes of the
hardware of those days.
-- 
Dick Dunn     rcd at ico.isc.com    uucp: {ncar,nbires}!ico!rcd     (303)449-2870
   ...Mr. Natural says, "Use the right tool for the job."


