Self-modifying code

David Keppel pardo at june.cs.washington.edu
Sat Jul 16 06:12:07 AEST 1988


In article <1087 at ficc.UUCP> peter at ficc.UUCP (Peter da Silva) writes:
>[ Why are two caches (I, D) better than one (I+D)? ]

For a given amount of real estate, a non-snoopy I-cache will hold
more data bits and be faster than a snoopy I-cache.

+ For a given amount of real estate, the I-cache hit rate will be
  better.  This makes the average instruction fetch time lower, which
  is a win whenever instruction fetch rate is at all a bottleneck.

+ For a given cache size, the real estate can be smaller.  This means:
  + The cache may be cheaper.
  + The cache may be small enough to put closer to the instruction
    register, making it effectively faster.

+ The logic is simpler and more regular.
  + Faster design times.
  + Fewer bugs.
  + Easier to incorporate into VLSI designs (==faster).
  + Less logic to drive => faster, less power.

+ The cache doesn't need to be connected to as many things.
  + More freedom to place the cache => faster, cheaper.
  + Less external logic to drive => smaller, faster, cheaper.

+ Aliasing and data distribution are less of a problem (or no problem
  at all).  This lets you build (with much less pain)
  + Hierarchical cache organizations (faster).
  + Virtually-addressed caches (faster).

I-caches are particularly useful on machines that make heavy use
of memory.  This includes:
+ Many high-performance processors.
+ Shared-memory multiprocessors.

The primary problem (that I'm aware of) with non-snoopy I-caches
is that you must manually flush the I-cache every time an
instruction is changed.  (Strictly speaking, every time you change an
I-space address that has been touched since the last cold start.)

Does that answer your question ?-)

	;-D on  ( Faster )  Pardo
