Self-modifying code

Guy Harris guy at gorodish.Sun.COM
Fri Jul 22 04:42:01 AEST 1988


> >A naive (and not rhetorical) question: what evidence is there to indicate
> >the degree to which "narrowing the semantic gap" with capability machines
> >and the like would improve the productivity of programmers or the
> >reliability of programs, and to which other techniques [fast machines, good
> >software] achieve the same goal?
> 
> Some protection and tracing features are MUCH slower in software.
> These features are also useful in writing wonderful debuggers. You
> need a bus snooper or ICE with an 8088 to achieve debugger features
> that can be done in software on the 68020 or 80386. And per-task
> memory protection is a big win in any environment, but impossible in
> software. 

OK, maybe I didn't make myself clear.  I'm not referring to features you find
on many random processors or systems out there, such as memory management units
or "trap on transfer of control/trap on reference to particular location"
breakpoints.  I'm referring to the sort of architectural features you find in
machines that, say, require references to objects to go through a "descriptor"
for the object, to do bounds checking and the like.

One instantiation of the question would be "are you better off doing that, or
having a fast machine plus a compiler that can, e.g., generate bounds-checking
code but avoid doing it in cases where it's 'known' that you don't need it?"
For instance, don't bother doing it in

	double	a[SIZE], b[SIZE], c[SIZE];
	int	i;

	for (i = 0; i < SIZE; i++)
		a[i] = b[i] + c[i];

(or the FORTRAN, or Ada, or COBOL, or... equivalent), since you know "i" will
always be within the bounds of all the arrays.

Not "do you think you'd be better off" or "do you have an analysis that makes
it 'intuitively obvious' that you'd be better off", but "has anybody compared
actual implementations of the two approaches, and concluded that, with
everything taken into account, you're better off with one or the other?"  I.e.,
taking into account the fact that microcode and hardware are like software in
that they can be buggy (either designed incorrectly or implemented
incorrectly), and taking into account the time it takes to design
that system, and....
