looking for >32-bit address space [and how will C handle it]

David Collier-Brown dave at lethe.UUCP
Thu Apr 6 12:39:35 AEST 1989


In article <12289 at reed.UUCP> mdr at reed.UUCP (Mike Rutenberg) writes:
| Are there any micros or chipsets out there that support an address space
| larger than 32 bits?

From article <16568 at winchester.mips.COM>, by mash at mips.COM (John Mashey):
| The real issue is address-space extension versus software & implementation
| technology.
| 	1) Segments are the obvious way to do the extension, but they
| 	have their drawbacks for general use, compared with flat-address-space
| 	models.  In particular, everybody's idea of segmentation seems
| 	different, and so portable code seems nontrivial.
  Well, I'm not sure **everyone's** is different, but there are
certainly lots of (possibly silly) variations.  Reminds you of the
early Christian Church, perchance?
  Portability is another matter: the model I'll discuss is portable
to anything that can run C++ or Ada[tm].

| 	2) Flat 64-bit addressing has been, and will be for a lonnng time,
| 	too costly for most micros.
  I suspect for many mainframe manufacturers, too.

| One interesting issue, for some ways out, is what the 64-bit model ought
| to be be: maybe some of the mini-super and supercomputer folks can give us
| some hints here:
| 	What's the C programming model for machines with 64-bit pointers?
| 		how do you say 8-, 16-, 32, and 64-bit ints?
| 		(char and short are fine.  Now, are 64s long-longs,
| 		or just longs?  are 32s longs?  which one is int?
| 		how much code breaks under these various cases?
| 			user code
| 			operating system code
| 			networking code
| 		Is there any chance of standardization?

  The C model seems to work well with a simple progression.  Much
user-mode code works on a 36-bit machine (with 72-bit longs) but
dies due to embedded ordering/size-relationship assumptions on
machines with 16-bit ints and 64-bit longs.  Source: experience with
honeybuns (the 36-bit case) and rumor (the 16:64 case).
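
  By way of illustration (my own sketch, not from John's posting),
the kind of embedded assumption I mean looks roughly like this; it
is harmless wherever int and long are the same width, and silently
wrong where int is 16 bits and long is 64:

	#include <stdio.h>

	int main(void)
	{
	    long offset = 70000L;   /* too big for a 16-bit int */
	    int  i = offset;        /* silently truncated where int is 16 bits */

	    printf("sizeof(int) = %lu, sizeof(long) = %lu\n",
	           (unsigned long)sizeof(int), (unsigned long)sizeof(long));
	    printf("offset = %ld, after the int round trip = %ld\n",
	           offset, (long)i);
	    return 0;
	}

On the 36-bit machine the round trip merely has room to spare; in
the 16:64 case the value comes back wrong, and no compiler
diagnostic is guaranteed.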

| 
| The other interesting input that people might be able to give is what they
| really do on the earlier-mentioned machines that have segments. [...]
| Anyway, it would be nice to get real experience and data.

  Well, Multics had to add two-level names to the programmer's view
of the world, but little else.  They used a different notation for
two-level names in code as opposed to data, but the concepts seemed
very similar to the "." notation of Concurrent Pascal, C and
Ada:
	io_module.element_size	is a datum within some enclosing
				construct, possibly as large as
				a segment.  (C, Ada, Conc. Pascal)
	io_module_$element_size	is the PL/1 equivalent: some sort of
				natural maximum size that is specific
				to the io module...

	io_module.read(stuff)	is a procedure call in language x
	io_module_$read(stuff)	is one in Multics PL/1

  In general, a segment in this view is just a top-level named
thing, code or data, which is visible to the programmer and has
substructure.  You use it for grouping related things or things
"used with" others.

  The problems with Multics segments were:
	They had size limitations which the programmers could
see [boo!].  A segment was also a file [yay!], which led to the
birth of the multi-segment file [duh].  (This was predicted, but
people didn't believe they would suffer from the limitations as
soon as they did.)
	There were lots of them.  The system did a good job dealing
with them, but people soon learned to "bind" executables together
into segments-as-packages for simple efficiency. (Packages were part
of the design of the OS: I'm talking about mere applications
programmers like me learning to use "binder").

  The advantage was a natural means of referencing multi-level
things, including "classes" and "packages", ideas which have now
come to the fore.  

--dave (This is a **bit** different from Intel segments) c-b
	


