Integer division

Matthew P. Wiener weemba at brahms.BERKELEY.EDU
Sun Feb 2 14:29:52 AEST 1986


WARNING!  The following article contains heavy flamage and sarcasm
and no smiley faces.  And it is not guaranteed to be interesting,
accurate or informative.  Any worse and it would have been rot13ed.
Read at your own risk.

Last chance to hit the 'n' key!

In article <4917 at alice.UUCP> td at alice.UUCP (Tom Duff) writes:
>Pardon my flamage, but what sort of nonsense is this:
>[in reference to divide instructions that give -(a/b)=(-a)/b]
>>I have NEVER seen an instance where the first one is preferable.  Not
>>only is it not preferable, it is just incorrect.
>Wrong!

Which sentence is wrong?  It is an undeniable fact that *I* have never
seen such an instance, yet you lump even that under your "sort of nonsense".

>        That's the definition.  It can't be incorrect.  It might be different
>from what a number theorist wants, but by no stretch of the imagination can
>it be called incorrect.  A mathematician should be able to handle this
>elementary concept.

Of course it can be incorrect!  'correct' has two meanings: first, 'logically
correct'--in particular, definitions must be tautologically correct--which is
the usual sense within pure mathematics; and second, 'the proper thing to do',
as in "If I define f(a):=xyz, that is incorrect, because what users want
is f(a):=xyzz".  Which sense seems appropriate here?

To quote from E W Dijkstra, "How Do We Tell Truths that Might Hurt",
"... an exceptionally good mastery of one's native tongue is the most
vital asset of a competent programmer".  I must thank Tom Duff for
illustrating that assertion so vividly.

>>Why such a routine
>>has been allowed to be 50% inaccurate in every existing language all
>>these years is beyond me.
>Well, it's that way because that's the way it's defined in the ANSI Fortran
>standard, and Fortran is probably a Good Thing for a computer to support --

And of course it is important that C and other languages copy Fortran's
mistakes.  That way we won't have to strain our brains that much.  I mean,
why bother implementing what users want?

Or am I confused, and Fortran did everything perfectly right off the bat,
and every language since then has only confused people (like myself)?

By the way, just out of curiosity mind you, this question popped into my
head out of nowhere, but is anyone out there still using card readers?

>certainly more important than niggling know-nothing number-theoretic nonsense.

Oh wow, a Spiro Agnew fan!  Or is William Safire your ghost-poster?

>Why does Fortran do it that way?
>Probably because the IBM 701 did it that way.

Let's all take a deep bow for backwards compatibility!  <Clap Clap>

>                                               Why did the IBM 701
>do it that way?  Well, at the time people thought that a divide
>instruction that satisfied certain identities was more important
>than mod function behavior.

Is that opinion or fact?  I've sent the question off to the man who
wrote the original Fortran compiler.

>                             Certainly in most of the applications
>for which Fortran was designed (i.e. engineering numerical calculations)
>the behavior of the mod function is of minimal interest.

Of course it is of minimal interest in most applications!  So what is wrong
with getting it right--excuse me, in case my earlier digression wasn't
clear--what is wrong with implementing the behavior that the applications
which *do* arise actually want?

>In any case, why should you be worried that some operation you want to do
>isn't primitive.  Most programming languages don't provide arithmetic
>on multivariate polynomials with arbitrary precision rational coefficients
>either (which I want more often than I want a number-theoretic mod function.)

And if they did, and all did it incorrectly--can you guess which meaning
I'm using?--you'd be annoyed too.

>In any case, it's fairly easy to write:
>	a=b%c
>	if(a<0) a+=c
>I can't believe that you couldn't discover this code sequence yourself.
>(Note that it works whether the range of b%c is [0,c) or (-c,c) -- the
>C language definition allows either.)

Your beliefs are accurate.  What I can't believe is that I should have to
do something so stupid myself each time.  So close, and yet so far.
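
Since it seems I must write it myself each time, here it is once and for
all--a sketch only, assuming an ANSI-style compiler, and "imod" is my own
name for it, not anything any library promises:

	#include <stdio.h>

	/* Floored ("number-theoretic") mod: the result takes the sign of
	 * the divisor, so imod(a,b) lies in [0,b) whenever b>0.  Unlike
	 * the two-liner above, this also behaves when b<0. */
	int imod(int a, int b)
	{
		int r = a % b;
		if (r != 0 && ((r < 0) != (b < 0)))
			r += b;
		return r;
	}

	int main(void)
	{
		printf("%d %d\n", -7 % 3, imod(-7, 3));	/* often -1, always 2 */
		return 0;
	}

All that ritual for something the divide instruction could have handed me
directly.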
 
>>[Whether CS people should even be *allowed* to make such mathematical
>>decisions is another question.  In C on UNIX, for example, one has
>>log(-x) == log(x), a rather dangerous identity, not based on anything
>>comprehensible.  Thus, the implementation of general exponentiation,
>>a**b = pow(a,b) = exp( b*log(a) ) will silently return the wrong value
>>if a is negative.  (Try taking cube roots this way!)]
>This sort of nonsense makes me wonder whether the writer should be
>allowed to make *any* sort of decision at all.  No plausible definition
>of the log function will let you use it to take cube roots of arbitrary
>reals in this manner.

I agree, both that the "sort of nonsense" being advocated induces wonder, and
that no single definition of log will yield arbitrary *odd* roots this way.
[Taking cube roots plausibly requires defining log(-a):=log(a)+3*pi*i.]  The example
comes from a numerical analysis class I was teaching, where the students
solved y'=y^third.  I forgot that a lot of the students would not know that
a**b cannot be used with a<0, and those who programmed in C got silently
burned because of that "rather dangerous identity".
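
For the curious, the workaround I ended up showing the class looks roughly
like this--a sketch, with "cuberoot" being my own name for it, relying only
on pow() with a nonnegative base:

	#include <math.h>
	#include <stdio.h>

	/* Cube root that survives a negative argument: peel the sign off
	 * by hand rather than trusting exp(b*log(a)) with a < 0. */
	double cuberoot(double x)
	{
		if (x < 0.0)
			return -pow(-x, 1.0/3.0);
		return pow(x, 1.0/3.0);
	}

	int main(void)
	{
		printf("%g\n", cuberoot(-8.0));	/* about -2, not silent garbage */
		return 0;
	}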

And this is an example of my complaint.  If one is doing a *mathematical*
problem on the computer, one should not have to keep second guessing what
the language is doing with the *mathematics*!  We all can argue about the
little things in languages that bug us--does ';' terminate or separate,
for example--but certain little things, like what a%b is when a<0, seem
to be decided without any regard for their mathematical reasonableness.
And then try to find a description in the manual of what was actually
implemented!  [The WORST offenders are random number generators.  I have
ended up writing my own because the one given is proprietary etc.  UGH!]

In the cube root of negative numbers example, there is no implementation
that returns the correct--back to the logical sense--answer, so the proper
thing to do is crash the program, and not return something for the sake of
returning something.

The CRAY-1's floating-point multiply shaves many nanoseconds by a method
that only gets 47 of the 48 mantissa bits right.  That is clearly incorrect.
In this case, I can only admire Seymour's boldness and imagination to take
this step, and the lost bit seems worth it.

Is there a similar reason to have (-a)/b == -(a/b) ?
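
To make the question concrete--and assuming a compiler that truncates toward
zero, which C permits but does not require--here is what is at stake:

	#include <stdio.h>

	int main(void)
	{
		/* Truncating division: (-a)/b == -(a/b), and the remainder
		 * follows the dividend.  A floored definition would instead
		 * give -7/2 == -4 and -7%2 == 1. */
		printf("-7/2 = %d, -7%%2 = %d\n", -7/2, -7%2);	/* -3 and -1 here */
		return 0;
	}

The lost CRAY bit buys speed; I have yet to hear what the lost sign buys.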

>On a higher level of discourse, this writer (Matthew P Whiner) seems
      ^^^^^^                                            ^^^^^^
Emerson once said "A foolish consistency is the hobgoblin of little minds".
So sue me for having a little mind.

>to think that mathematicians enjoy some sort of moral and intellectual
>superiority to engineers and computer scientists.

Well, maybe a little.  It depends on the engineer/computer scientist.

>                                                   Usually, this
>attitude is a symptom of envy, since mathematicians are so hard to
>employ, can't get decent salaries when they do find work, and have
>a much harder time raising grant money.  The smart ones embrace
>computer science rather than denigrating it.  The dull ones just
>say ``Computer Science? Pfui: that's not mathematics,'' thus demonstrating
>their lack of understanding of the nature of mathematics and of
>computer science.

What a bunch of bullshit.

>In summary:
>	It is better to remain silent and be thought a fool than
>to speak up and remove all doubt.

I agree!

By the way, Tom Duff, have YOU ever seen an example where a%b<0 is preferred?

ucbvax!brahms!weemba	Matthew P Wiener/UCB Math Dept/Berkeley CA 94720


