shared libraries (was tracing system calls)

Charles Hedrick hedrick at athos.rutgers.edu
Sat Sep 10 12:46:22 AEST 1988


james at bigtex.UUCP (James Van Artsdalen) asks about the overhead of the
position-independent code used in supporting Sun's shared library
scheme.  I don't think that's likely to be an issue.  There are
several ways of doing position-independent code.  One is to use PC
relative addressing where you would have used absolute before.  That
is, suppose you've got
  load r1,foo
Normally you'd expect the loader to relocate foo to an absolute
address.  If you've got PC-relative addressing, you can instead have
it take the difference between foo's location and the location of the
instruction itself (a difference that is always the same, no matter
where the sharable library happens to be) and use a PC-relative mode.
If I read
the 68000 instruction book correctly, PC-relative addressing is just
as fast as absolute.  I think the Intel chips tend to use PC-relative
a lot more, and so I'd think there would be no overhead there either.
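
To make the arithmetic concrete, here is a toy C sketch (all the
offsets and mapping addresses are invented).  The displacement that
would be encoded into the instruction comes out the same no matter
where the library is mapped, which is why no load-time relocation is
needed:

  /* Toy illustration: a PC-relative displacement does not depend on
     where the library is mapped.  All numbers are invented. */
  #include <stdio.h>

  int main(void)
  {
      unsigned long insn_off = 0x100;    /* offset of the load within the library */
      unsigned long foo_off  = 0x2000;   /* offset of foo within the library */

      unsigned long base1 = 0x10000;     /* one possible mapping address */
      unsigned long base2 = 0x400000;    /* another possible mapping address */

      /* displacement = address of foo minus address of the instruction */
      long disp1 = (long)((base1 + foo_off) - (base1 + insn_off));
      long disp2 = (long)((base2 + foo_off) - (base2 + insn_off));

      printf("disp at base1 = %ld, disp at base2 = %ld\n", disp1, disp2);
      return 0;
  }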

If the machine can't do that, then you can make everything indexed by
a register (the old IBM/360 scheme).  For Intel chips you could
arrange to load segment registers with the address of the code.  This
involves slight overhead, since when you call a routine in the shared
library, you have to load a register with the address of the library,
but presumably that just adds an instruction or so to those calls.
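
In C terms, the base-register scheme looks roughly like the sketch
below (the names are invented).  The library routine reaches its data
only as base plus offset, and loading the base pointer before the call
is the extra instruction or so:

  /* Rough C analogue of base-register addressing; names are invented. */
  #include <stdio.h>

  /* The library's data, laid out at fixed offsets from its start. */
  struct libdata {
      int foo;
      int bar;
  };

  /* The routine never wires in an absolute address; everything is
     reached relative to the base pointer it is handed, which plays
     the role of the base (or segment) register. */
  static int lib_routine(struct libdata *base)
  {
      return base->foo + base->bar;
  }

  int main(void)
  {
      struct libdata d = { 3, 4 };      /* wherever this happens to live */
      printf("%d\n", lib_routine(&d));  /* loading &d is the extra instruction */
      return 0;
  }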

Finally, if all else fails, you can use a run-time loader to resolve
symbols.  Sun has such a thing.  By various clever techniques they
avoid having to do very much run-time relocation, but when they must,
they can do fixups that are based on the address where you have
mapped in the sharable library.  The words that need to be fixed up
are put into a contiguous area, so that the fixups leave the majority
of the code pure.
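
This is not Sun's loader, just a toy C sketch of the idea: the only
words written at load time are the slots of one table, the loader
fills them in from the mapped address, and the code stays pure because
it only ever goes through the table:

  /* Toy sketch of run-time fixups confined to one contiguous table.
     This is not Sun's loader; all names and offsets are invented. */
  #include <stdio.h>

  /* Pretend this array is the library image, mapped wherever it lands;
     "foo" lives at offset 3 within it. */
  static int library_image[16];

  /* The contiguous fixup area.  Only these words get written at load
     time; the rest of the "library" stays pure and can be shared. */
  static int *fixup_table[2];

  /* The run-time loader: given where the library ended up, patch the table. */
  static void fix_up(int *mapped_base)
  {
      fixup_table[0] = mapped_base + 3;   /* address of foo */
      fixup_table[1] = mapped_base + 7;   /* address of bar */
  }

  /* "Library" code refers to its data only through the table, never
     through an absolute address wired into the instructions. */
  static int get_foo(void) { return *fixup_table[0]; }

  int main(void)
  {
      fix_up(library_image);      /* done once, when the library is mapped */
      library_image[3] = 99;      /* pretend this is foo's value */
      printf("foo = %d\n", get_foo());
      return 0;
  }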

I'd be willing to bet that on any of the common architectures a
combination of these techniques can reduce the overhead to the point
where it isn't noticeable.


