Self-modifying code

Guy Harris guy at gorodish.Sun.COM
Wed Jul 27 04:12:16 AEST 1988


> >Yes, but does this need "architectural" support, at least in the sense of
> >"instruction set architecture"?  If a compiler for a "conventional" machine
> >can do that level of validation, subscript-range checking features in the
> >instruction set would be seldom, if ever, used.
> >
> >If "architecture" includes the compiler, then I agree that "architectural"
> >support for this would be nice.
> 
> But the whole point of capability machines (to name a single example)
> is that one cannot cover the space of all interesting
> exception-possibilities using only a compiler, no matter how smart.
> For one thing, the program could be coercing data types back and
> forth such that checking an operation on a type can only be done at
> the time the operation is applied.

You missed the point.  As I said in an earlier posting, on a "conventional"
machine you can spend some of its increased speed by putting in code to do the
appropriate checks.  If the compiler can eliminate most of those checks, so
much the better; you spend less of that speed.
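
To make the trade concrete, here is a sketch of the kind of check such a
compiler might emit in-line.  (The names and the bound are mine, purely for
illustration; a real compiler would emit this as generated code, not source.)

	#include <stdlib.h>

	#define NELEM 100

	int table[NELEM];

	/* What a fetch of "table[i]" might amount to when the compiler
	 * emits an in-line subscript-range check.
	 */
	int
	fetch(int i)
	{
		if (i < 0 || i >= NELEM)	/* the generated check */
			abort();		/* or raise a range error */
		return table[i];		/* the actual access */
	}

If the compiler can prove that "i" is within [0, NELEM) at the point of use,
it can omit the test entirely; that is the elimination referred to above.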

I don't see that, for example, doing these checks in microcode, as opposed to
regular code, is *ipso facto* the only way to do things.  In some ways, that's
what I've been asking: what reason is there to believe that capability
machines and the like are the *only* way, or even just the *best* way, to
achieve the kind of programming environment I suspect most, if not all, of us
would like to see?

> But a more fundamental matter is how one manages the development of a
> large software project with dozens of programmers contributing to a
> single large end product.  The modules are all separately compiled,
> so there is no question of the compiler helping out much.
> 
> Given access to all the symbol tables, you could imagine the linker
> doing some reasonable checks of consistency (number and types of
> args, for instance), but even that fails when pointers are being
> passed (pointers to functions, even).

I'm not so sure of that.  I don't see how (assuming a "reasonable" language and
"reasonable" linkers) doing the check at link time is intrinsically any
different from doing it at compile time, if (as per the assumptions listed) the
linker has all the information about types that was available to the compiler.
I also don't see that pointers are that bad a problem, assuming pointers are
typed (again, assuming a "reasonable" language); in ANSI C, for instance,
"pointer to 'int'-valued function taking an 'int' argument" is a different type
from "pointer to 'int'-valued function taking a 'float' argument".

> You can catch a lot of the common cases with good programming style,
> as you note above.  But you can't catch them all, and the question
> that capability machines pose is "how close can we come to an
> airtight programming environment, and how much would it cost"?
> (simplistic paraphrase, I know; maybe it'll draw some capability
> people out of their woodwork abstraction!)

I agree that you may not be able to catch all examples of incorrect code.
However, if you can avoid doing any checking in those cases where you *can*
catch errors at compile time, you may be better off; a sketch of that
distinction follows below.  The further questions that capability machines
pose are:

	1) How much "safer" are they than "conventional" machines plus
	   compilers that generate run-time checks?

and

	2) If this extra safety is expensive, is it worth the cost?

(Remember, the sort of embedded systems to which you've referred in the past do
have real-time constraints, so there is presumably some minimum performance
required; as the jobs they do get more sophisticated, the minimum performance
required increases.)
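
As a sketch of the distinction behind question 1 (the example and the names
are my own), here is one check a compiler can eliminate outright and one that
must survive to run time, however it is implemented:

	#include <stdlib.h>

	#define NELEM 10

	int a[NELEM];

	extern int unknown_index(void);	/* defined in some other module */

	long
	sum(void)
	{
		long total = 0;
		int i, j;

		/* "i" is provably within [0, NELEM) throughout this
		 * loop, so the generated range check can be omitted.
		 */
		for (i = 0; i < NELEM; i++)
			total += a[i];

		/* "j" isn't known until run time, so some check must
		 * remain -- whether in generated code, as here, or in
		 * the hardware of a capability machine.
		 */
		j = unknown_index();
		if (j < 0 || j >= NELEM)
			abort();	/* range error */
		total += a[j];

		return total;
	}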


