Files > 4MB

Thad P Floryan thad at cup.portal.com
Sat Nov 17 21:51:55 AEST 1990


src at scuzzy.in-berlin.de (Heiko Blume) in
<1990Nov16.005428.12747 at scuzzy.in-berlin.de> writes:

	>In article <1990Nov11.225759.866 at ceres.physics.uiowa.edu> 
	>vvawh at convx1.lerc.nasa.gov (Tony Hackenberg) writes:
	>>     I think Amdahl's UTS2.1 will also have this ability.

	what a coincidence: i read today that UTS 2.1 allows files as big
	as 6 TB (that's TeraBytes), *without* losing out on the compatibility issue.

	i wonder how they accomplish that. there must be some problems left
	like an old application doing a stat() on a 1TB file etc.
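
He's onto something with that stat() worry: on a stock 32-bit UNIX the size
field in struct stat is a signed long, so anything past 2GB can't even be
reported honestly, never mind 1TB.  A rough sketch of the call that would
bite an old binary (plain C, nothing UTS-specific; the failure mode is my
guess, not anything from Amdahl's documentation):

#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    struct stat sb;

    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &sb) == -1) {
        perror("stat");
        return 1;
    }
    /* st_size is a signed 32-bit long on most 1990-vintage UNIXes,
     * so the true size of a huge file simply won't fit here -- the
     * kernel has to truncate the value or refuse the call outright. */
    printf("%s: %ld bytes\n", argv[1], (long) sb.st_size);
    return 0;
}

So either every old binary gets lied to about the size, or stat() starts
failing on files it used to handle; "compatibility" has to give somewhere.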

Probably they just store the same data 27 times, much like the recent postings
reminding us that "Amdahl's UTS2.1 will ..."   :-)

As long as we're getting silly (it's been a long week), how about this idea
for compression: compress your 1TB file down to 8 bytes simply by noting that
the file contains only 0's and 1's, so throw away all the zeroes (we all know
zero means nothing), then, since only 1's are left, count THEM up and store
the count as a 64-bit double integer.  :-)

Seriously, the issue of "large" files is of concern to me since I'm porting
my company's major product to UNIX.  Many of our present clients' files
typically exceed 500MB, and I'm curious how such files ARE handled on
"typical" UNIX systems in terms of backup, recovery, performance, and any
other germane topics.

My personal belief is that supporting such large files presents a maintenance
and performance nightmare; 'twould be better to have a "file of files" which
could point to many (smaller) physical files and treat them as "one" large
logical file; this technique could be extended ad infinitum for logical files
up to the limits of on-line mass storage.
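
To make that concrete, here's a rough sketch of what I have in mind (all the
names and the 64MB segment size are invented purely for illustration, not
taken from any real product):

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define SEGSIZE  (64L * 1024L * 1024L)   /* 64MB per physical segment */

struct logfile {
    int    nseg;          /* number of physical segment files      */
    char **segnames;      /* path of each segment file, in order   */
};

/* Read up to 'len' bytes at logical offset 'off'; returns the number
 * of bytes read, or -1 on error.  The large logical offset is mapped
 * onto an ordinary (smaller) physical file plus an offset within it. */
long logfile_read(struct logfile *lf, long off, char *buf, long len)
{
    int  seg    = off / SEGSIZE;      /* which physical segment     */
    long segoff = off % SEGSIZE;      /* offset within that segment */
    int  fd;
    long n;

    if (seg >= lf->nseg)
        return -1;
    if (len > SEGSIZE - segoff)       /* don't read past a segment  */
        len = SEGSIZE - segoff;

    if ((fd = open(lf->segnames[seg], O_RDONLY)) == -1)
        return -1;
    lseek(fd, segoff, SEEK_SET);
    n = read(fd, buf, len);
    close(fd);
    return n;
}

Backups then fall out naturally: you dump and restore 64MB segments one at
a time instead of wrestling a 500MB monolith across a tape drive, and only
the segments that actually changed need to go to tape.  (A real
implementation would of course want a wider offset type than a plain long,
plus an index file naming the segments.)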

Comments?

Thad Floryan [ thad at cup.portal.com (OR) ..!sun!portal!cup.portal.com!thad ]


