Cachable functions (was: Re: const, volatile, etc [was Re: #defines with parameters])

Art Boyne boyne at hplvli.HP.COM
Thu Jan 5 01:45:50 AEST 1989


pcg at aber-cs.UUCP (Piercarlo Grandi) writes:

> Let me state for the nth time that I am not against *optimizers*, I don't
> advocate sloppy code generators. I am against *aggressive* optimizers. I
> don't think they are worth the effort, the cost, the reliability problems,
> the results. I have said that an *optimizer* is something that does a
> competent job of code generation, by exploiting *language* and *machine*
> dependent opportunities (I even ventured to suggest some classification of
> good/bad optimizations).

> Aggressive optimization to me is what attempts to exploit aspects of the
> *program* or *algorithm*. This I do not like because it requires the
> optimizer to "understand" in some sense the program, and I reckon that a
> programmer should do that, and an optimizer can only "understand" static
> aspects of a program and the algorithm embodied in it, and experience
> suggests that KISS applies also to compilers, i.e. that the more intelligent
> an optimizer is the buggier it is likely to be and the harder the bugs.

While I agree that the more "aggressive" the optimizer, to use your term, the
more buggy it *tends* to be (but not necessarily), I have to disagree with
your conclusion that "aggressive" optimization is not worth the effort.

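To pin down the distinction first, here is a sketch (the function and the
names are mine, purely illustrative, not taken from any particular compiler).
A conventional optimizer can turn the array indexing below into pointer
arithmetic without knowing anything about the program.  Hoisting the strlen()
call out of the loop, on the other hand, is exactly the kind of "aggressive"
transformation Piercarlo describes: it is only correct if the compiler can
prove that the loop body never changes the length of the string.

    #include <string.h>

    /* strlen() is re-evaluated on every iteration.  Strength-reducing
       the s[i] references is a machine-level optimization; hoisting
       the strlen() call requires "understanding" that the stores in
       the body never write a '\0' into s - a fact about the program,
       not about the language or the machine.  */
    void upcase(char *s)
    {
        size_t i;

        for (i = 0; i < strlen(s); i++)
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] += 'A' - 'a';
    }

    /* What the aggressive optimizer would, in effect, produce: */
    void upcase_hoisted(char *s)
    {
        size_t n = strlen(s);   /* computed once: safe here, since no
                                   store can shorten the string */
        size_t i;

        for (i = 0; i < n; i++)
            if (s[i] >= 'a' && s[i] <= 'z')
                s[i] += 'A' - 'a';
    }

A programmer can make the same transformation in the source, of course -
which is Piercarlo's point; the question is whether the compiler can be
trusted to find it.
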
After working for the last 5 years with a brain-dead (not just damaged) C
compiler (*) that has been proven to generate 2-3 *times* more code than I would
in assembler, and is bug-ridden besides, I would *love* to have *any* reasonable
optimizing compiler.  The code I write goes into microprocessor-based instruments,
translated: ROM.  The last three instruments introduced from our lab have 256K,
384K, and 512K of ROM, respectively.  An aggressive optimizing compiler could
probably have cut that in half, or better.  Let's face it, folks: ROM isn't cheap,
and twice the code size means, in general, half the performance of the instrument.
While the difference of 2x in a program on your PC or mainframe may be annoying,
a difference of 2x in the throughput of a test system can amount to millions of $$
to a high-volume manufacturer (and the loss of a sale to us!).  Working around any
bugs in the optimizer would have been insignificant compared to the pains of rewriting
in assembler and/or doing algorithmic handstands in order to get performance up to snuff.
(As a reference point, the product I have been associated with now has 100K of C, 100K
of assembler, and 35K of machine-generated parser tables).
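
As a concrete (and purely illustrative - the fragment and the savings are
mine, not measured from our compiler) example of where the bytes go:

    struct point { int x, y, z; };

    void clear_point(struct point a[], int i)
    {
        /* A brain-dead code generator computes the address
           a + i*sizeof(struct point) from scratch for each of the
           three stores, emitting the same multiply-and-add sequence
           three times over.  Common subexpression elimination -
           hardly aggressive - computes it once, holds it in a
           register, and shrinks this fragment to roughly a third.
           Multiply that by every structure reference in 100K of C
           and the ROM savings are real.  */
        a[i].x = 0;
        a[i].y = 0;
        a[i].z = 0;
    }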

(*) In case you're wondering why we used it - it was the only one supported by
    the emulation hardware we are using.  The newer hardware, which we don't have,
    has a much more reliable compiler that still generates 1.5 to 2 times more code.

These are my opinions and do not necessarily reflect the views of my employer.

Art Boyne, boyne at hplvdz.HP.COM


