Self-modifying code

Robert Colwell colwell at mfci.UUCP
Thu Jul 28 11:22:25 AEST 1988


In article <61459 at sun.uucp> guy at gorodish.Sun.COM (Guy Harris) writes:
>> >Yes, but does this need "architectural" support, at least in the sense of
>> >"instruction set architecture"?  If a compiler for a "conventional" machine
>> >can do that level of validation, subscript-range checking features in the
>> >instruction set would be seldom, if ever, used.
>> >
>> >If "architecture" includes the compiler, then I agree that "architectural"
>> >support for this would be nice.
>> 
>> But the whole point of capability machines (to name a single example)
>> is that one cannot cover the space of all interesting
>> exception-possibilities using only a compiler, no matter how smart.
>> For one thing, the program could be coercing data types back and
>> forth such that checking an operation on a type can only be done at
>> the time the operation is applied.
>
>You missed the point.  As I said in an earlier posting, you can spend some
>increased speed by putting in code to do the appropriate checks on a
>"conventional" machine.  If the compiler can eliminate most of those checks,
>so much the better; you spend less of the increased speed.

Actually, I didn't miss the point; I ignored it because I wanted to
deal with something I consider more important.  Yes, you can put in
more checks.  And if you aren't going to let me get anything better
than that for a programming environment, then I'll take it as an
improvement over the C/Unixes that currently exist.
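
For concreteness -- and this is just my own toy C, made-up names and
all, not anybody's compiler output -- the sort of check we're both
talking about is roughly this:

    #include <stdio.h>
    #include <stdlib.h>

    #define N 100

    static int a[N];

    /* The test a checking compiler would emit in front of every
       subscript.  If it can prove 0 <= i < N at compile time, the
       test disappears. */
    static int fetch(int i)
    {
        if (i < 0 || i >= N) {
            fprintf(stderr, "subscript %d out of range\n", i);
            abort();
        }
        return a[i];
    }

    int main(void)
    {
        int i, sum = 0;

        /* The bounds are manifest here, so a smart compiler can
           drop the test from every iteration. */
        for (i = 0; i < N; i++)
            sum += fetch(i);

        printf("sum = %d\n", sum);
        return 0;
    }

The argument is then over how many of those tests the compiler can
prove away, and how much the survivors cost you.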

But I have a lot of doubts about this approach.  To me it's like
observing that an earth dam has 300 holes, and proposing to fix them
one-by-one, each hole needing some different fix from the one before.
The whole point of capabilities (at least as the 432 implemented
them) was to provide a seamless programming environment, with exactly
the same abstraction (that of an "object") presented to the
programmer, no matter in which direction she looked (left or right
through the code, or down to the hardware).  You can't get this by
throwing a large number of bandaids at the problem, even if you say
they don't cost much in performance.
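
The coercion business I mentioned above is exactly the kind of hole I
mean.  In made-up C (my names, my example), no amount of compile-time
cleverness can tell you whether the use below is legal; the only
place a check can go is the point where the operation is applied:

    #include <stdio.h>

    /* Which member is live depends on data the compiler never sees. */
    union cell {
        long i;
        char *p;
    };

    static void use(union cell c, int is_ptr)
    {
        if (is_ptr)
            printf("%s\n", c.p);   /* valid only if a pointer was stored */
        else
            printf("%ld\n", c.i);
    }

    int main(void)
    {
        union cell c;
        int flag;

        if (scanf("%d", &flag) != 1)
            return 1;

        if (flag) {
            c.p = "a pointer this time";
            use(c, 1);
        } else {
            c.i = 42L;
            use(c, 0);
        }
        return 0;
    }

Hand use() the wrong flag and c.p is garbage, and no compiler can
tell you so.  A machine that tags the datum itself catches the bad
use at the moment the operation executes.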

>I don't see that, for example, doing these checks in microcode, as opposed to
>regular code, is *ipso facto* the only way to do things.  In some ways, that's
>what I've been asking: what reason is there to believe that capability
>machines and the like are the *only* way, or even just the *best* way, to
>achieve the kind of programming environment I suspect most, if not all, of us
>would like to see?

Actually, I didn't say it was the only way.  All I've been trying to
argue is that there are lessons worth taking from that kind of
research and that type of programming environment.  Sometimes I feel
like all we do is debate the race cars, when it's the production cars
that represent all the money.

[discussion about specific checks elided; this message is too long]

>I agree that you may not be able to catch all examples of incorrect code.
>However, if you can avoid doing any checking in those cases where you *can*
>catch them at compile time, you may be better off.  The further questions that
>capability machines pose are:

You won't catch me disagreeing with this, even for capability
machines.  In fact, that was a major point of our article in the 13th
Computer Arch Symp (I think it was 13 -- the one in Tokyo).

>	1) How much "safer" are they than "conventional" machines plus
>	   compilers that generate run-time checks?
>
>and
>
>	2) If this extra safety is expensive, is it worth the cost?

Absolutely.  I tried to quantify the cost of 432-style object
orientation in my thesis.  In a nutshell, I concluded that that style
of machine would be from 1 to 4 times slower than a conventional
unprotected architecture built from equivalent implementation
technology ("1 times slower" meaning the same speed).  That's an
order of magnitude faster than the 432 really was, but that doesn't
count (see the next issue of ACM TOCS for why).

You could use their transparent multiprocessing to buy that factor
of 4 back, but above 6 or 7 processors you'd saturate the bus (it's a
memory-to-memory architecture).  Things get hard to sort out after
that.
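
To spell out the back-of-the-envelope behind that (my own toy model,
not a number from the thesis): take the worst-case factor of 4 and
assume roughly linear scaling until the bus gives out.

    #include <stdio.h>

    #define SLOWDOWN  4.0  /* worst-case cost of 432-style protection  */
    #define BUS_LIMIT 6    /* processors before the shared bus saturates */

    int main(void)
    {
        int n;

        /* Throughput relative to one unprotected uniprocessor,
           assuming linear scaling up to the bus limit. */
        for (n = 1; n <= 8; n++) {
            int usable = (n < BUS_LIMIT) ? n : BUS_LIMIT;
            printf("%d processors -> %.2fx a conventional machine\n",
                   n, (double)usable / SLOWDOWN);
        }
        return 0;
    }

Four processors buys back the factor of 4; past the bus limit the
gains stop.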

Is that performance hit worth it?  Who knows?  I'd say it's probably
worth it a lot more often than most people currently think.  An awful
lot of code got written and debugged on machines that were a lot slower
than the could-have-been 432.  You have to allow for a natural
tendency (which I think largely fuels this whole industry) to always
want more and to refuse to ever go backwards in anything: speed,
memory, disk, software complexity...

>(Remember, the sort of embedded systems to which you've referred in the past do
>have real-time constraints, so there is presumably some minimum performance
>required; as the jobs they do get more sophisticated, the minimum performance
>required increases.)

Yeah, well, the Space Shuttle doesn't exactly have blazing hardware.
But it's running some of the most sophisticated software I know of.
And telephone switches aren't mini-Crays by any means.

Bob Colwell            mfci!colwell at uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090


