Self-modifying code

Robert Colwell colwell at mfci.UUCP
Fri Jul 22 23:09:01 AEST 1988


In article <60782 at sun.uucp> guy at gorodish.Sun.COM (Guy Harris) writes:
>> Second, it seems like only yesterday when we (the royal we) CPU
>> architects were so concerned with trying to narrow the semantic gap
>> between what a programmer was trying to express and what the
>> machine/language was coercing her into.  Languages like Ada and
>> machine architectures like capability machines were intended to
>> address this perceived need.
>
>A naive (and not rhetorical) question: what evidence is there to indicate the
>degree to which "narrowing the semantic gap" with capability machines and the
>like would improve the productivity of programmers or the reliability of
>programs, and to which other techniques (e.g., making a very fast conventional
>machine, possibly but not necessarily using RISC technology, and using that
>speed to improve the programming environment with better software) achieve the
>same goal?

As far as I know, there is no evidence that you would necessarily
find compelling, but then I could say the same thing about almost
everything else in this field, too.  There are, on the other hand,
some good reasons to believe that we can do better than imperative
languages running on essentially unprotected architectures.

I know I can't do this topic justice in this forum, but here's my
quick take on the subject.  Think about the different computer
languages you have used, and what was good or bad about each.  Backus
argued (in his famous paper on functional languages) that one of the
reasons that Lisp is a good language is that you don't have to
mentally execute a program (as you do with imperative languages) in
order to convince yourself that it will do what is wanted.  The
language allows you to more closely express what is desired, so you
test correctness by inspection.  And yes, there are cases where
something is more obvious or easier to express in C than in Lisp.
But both
examples serve to illustrate the point -- you, as a programmer, have
some virtual entity you want to realize (data structure or
algorithm), and the closer you can get to realizing exactly what you
have in mind, the more likely the code is to be correct,
maintainable, and understandable (and the smaller the semantic gap
between what you want and what you can express).
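(A concrete illustration of my own, not Backus's -- the function
names and numbers are invented.)  Compare two ways of writing
factorial in C:

#include <stdio.h>

/* Reads like the definition itself: 0! = 1, n! = n * (n-1)!.
   You can check it against the spec by inspection. */
unsigned long fact_by_definition(unsigned n)
{
    return (n == 0) ? 1UL : n * fact_by_definition(n - 1);
}

/* Same answer, but to convince yourself it's right you have to
   mentally execute the loop and keep track of "result" and "i". */
unsigned long fact_by_loop(unsigned n)
{
    unsigned long result = 1;
    unsigned i;

    for (i = 2; i <= n; i++)
        result *= i;
    return result;
}

int main(void)
{
    printf("%lu %lu\n", fact_by_definition(5), fact_by_loop(5));
    return 0;
}

The first version is close to what I had in mind; the second makes me
simulate the machine in my head before I believe it.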

That's partly an analogy.  The capability people argue that the same
thing extends into all areas of computer systems.  Recall the classic
arguments about turning off runtime bounds checking to reclaim that
lost performance -- why should a programmer, in the best of all
possible worlds, have to worry about things like that?  It doesn't
help any of the major productivity metrics.
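To be concrete (this is a made-up fragment, not anyone's real code),
the check in question is something a C programmer has to write, and
pay for, by hand:

#include <assert.h>
#include <stdio.h>

#define TABLE_SIZE 16
static int table[TABLE_SIZE];

/* The bounds check people want to "turn off": written by hand and
   paid for on every access, unless it's compiled out with -DNDEBUG. */
int checked_read(int i)
{
    assert(i >= 0 && i < TABLE_SIZE);
    return table[i];
}

/* The "fast" version: an out-of-range i silently reads (or, for a
   store, writes) whatever happens to live next to the table. */
int unchecked_read(int i)
{
    return table[i];
}

int main(void)
{
    printf("%d %d\n", checked_read(3), unchecked_read(3));
    return 0;
}

The capability argument is that the second version shouldn't even
exist: the hardware checks every reference, and the programmer never
has to make the tradeoff at all.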

Say what you want about Ada on the 432 (and I've said plenty
already), as a programming environment (forget the speed problems
temporarily) it was really nice.  Your program had access to only the
data structures you specified, and only in the ways you specified,
and if your code screwed up, the machine could tell you where and in
what way.  To me, the hardest bugs to find are those where you fix
one bug (in someone else's code, of course) and inadvertently break
it in some obscure way such that something completely unrelated is
getting trashed.  That this is even possible means (in my opinion)
that the programming environment is lacking something crucial,
notwithstanding that about 95% of all programmers on this planet work
in just such an environment.  The environment is failing to capture
what the programmer wanted, and that's a combined failure of the
machine architecture, the language, and probably the OS too.  The
semantic gap argument says that the more the desired and achieved
environments differ, the more bugs you get, and the more obscure they
will be.
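Here's a made-up example of the kind of thing I mean (the names and
sizes are invented):

#include <stdio.h>
#include <string.h>

/* Two logically unrelated pieces of data that just happen to be
   neighbors in memory. */
struct record {
    char name[8];
    int  account_balance;    /* nothing to do with names */
};

int main(void)
{
    struct record r;
    const char *caller_name = "J. Random User";   /* longer than expected */

    r.account_balance = 1000;

    /* The "fix": stop truncating the name and copy the whole thing.
       The new, obscure bug: there's no room for it, the copy runs
       off the end of name, and -- this being undefined behavior that
       no stock hardware catches -- it silently overwrites whatever
       sits next to it, here account_balance. */
    strcpy(r.name, caller_name);

    printf("balance is now %d\n", r.account_balance);   /* probably garbage */
    return 0;
}

On something like the 432 the bad store itself faults, and you're
told which object was violated; on the machines most of us use, you
find out weeks later when the balance report looks funny.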

I know you're tempted at this point to say that even if one grants
this as a shortcoming of current architectures/environments, there
have been no successes so far at addressing it.  That's another topic
of conversation, I think; all I'm trying to do here is offer some of
the justification for why a lot of research has gone into other ways
to compute (that don't have sheer instrs-executed-per-unit-time as
their primary figure of merit).

All the strong-type-checking stuff built into some languages has the
same motivation as above.
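Even in C you can fake a little of it (another invented fragment,
just to show the flavor) by giving otherwise-interchangeable
quantities distinct types, so a mix-up is rejected before the program
ever runs:

#include <stdio.h>

struct meters  { int value; };
struct seconds { int value; };

/* Invented numbers: a climb rate of 5 meters per second. */
static struct meters climb(struct seconds t)
{
    struct meters m;

    m.value = 5 * t.value;
    return m;
}

int main(void)
{
    struct seconds t = { 30 };
    struct meters  gained = climb(t);   /* fine */

    /* climb(gained); */                /* wrong type: caught at compile time */

    printf("gained %d meters\n", gained.value);
    return 0;
}

The point of pushing that into the language (or better, the
architecture) is that the mistake never reaches a running system.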

For me, the bottom line is that, as usual, there aren't any easy
answers (because if there were somebody would've patented them by
now), but we shouldn't lose track of the corresponding questions just
on that basis.  The problem is getting worse, after all -- more, not
fewer, lives depend on the correctness of software than ever before,
and that trend will continue (fly-by-computer airplanes,
dynamically unstable planes, military systems, ...).

I assume that the reason people like you and me are concentrating on
performance is that that's what sells.  I don't think that needs
defending, but I also see few reasons to believe that sheer
performance alone will provide the cures to the problems I tried to
outline above.

Bob Colwell            mfci!colwell at uunet.uucp
Multiflow Computer
175 N. Main St.
Branford, CT 06405     203-488-6090


