Files > 4GB

A. Lester Buck buck at siswat.UUCP
Mon Nov 12 12:52:40 AEST 1990


In article <1990Nov9.170337.9484 at onion.pdx.com>, jeff at onion.pdx.com (Jeff Beadles) writes:
> In <1008 at intelisc.isc.intel.com> cfj at isc.intel.com (Charlie Johnson) writes:
> 
> >I'm curious if the companies who support Unix on large systems made the
> >necessary file system changes to allow individual files which are larger 
> >than 4 gigabytes ??  You'd have to at least stretch the file size in the
> >inode beyond 32 bits and possibly mess around in the super block.  Any
> >comments ??
> 
> Well, that would take one big disk :-)  Unix files can not span physical disk
> partitions, at least on more common version of Unix. (Has anyone changed this?)
> This pretty well limits the file size more than the kernel internals.
> 
> Then again, the largest file that I've seen in "real-life" is a 247mb kernel
> core dump :-)
> 
> 	-Jeff
> -- 
> Jeff Beadles		jeff at onion.pdx.com

Anyone who has ever written a disk driver knows that the code to support
patching multiple volumes together is very easy.  The hard part is
administering it and making it available through utilities, etc.
AIX is doing that now, and OSF/1 with the Logical Volume Manager is
coming; this is not a tough feature to add.

The HARD part is the file size limitation.  If you write your own
filesystem, you can make the file sizes whatever you want, and can
bring all the utilities and user code along with you.  But if you
want to be able to use the existing Unix utilities and user code
that "knows" that stat() returns a long for the file size, you are
stuck.  The only transparent method that supports old code is to have
the compiler treat long as a 64-bit entity.  Cray has done this, I
believe.  That is a serious efficiency hit on most other machines,
whose architectures do not directly support 64-bit arithmetic.



-- 
A. Lester Buck    buck at siswat.lonestar.org  ...!uhnix1!lobster!siswat!buck



More information about the Comp.unix.large mailing list