"bad ulimit" on all non-root "at" & "batch" jobs under 386/ix

Conor P. Cahill cpcahil at virtech.uucp
Sat Feb 3 11:31:12 AEST 1990


In article <28606 at bigtex.cactus.org> james at bigtex.cactus.org (James Van Artsdalen) writes:
>If the AT&T programmer had thought correctly, they would have realized
>that the correct thing to do would be to (1) count blocks actually
>allocated so that a program would really be allowed to write it's full
>ulimit. 

To implement this change they would have to do one of the following:

	1. Change the file system inode so that it has a slot to keep
	   the "real" size of the file (the number of data blocks
	   actually in use), and add the kernel code needed to keep
	   track of that count and update the disk inode.

	   This is a no-no, since it would break compatibility with
	   earlier versions of the file system.

	2. Change the kernel code so that when a file is opened, the
	   kernel reads through the inode and its single, double, and
	   triple indirect blocks to calculate the "real" size of the
	   file at open time.  This information would be kept in the
	   in-core inode.

	   This could be done, but it adds a considerable amount of
	   overhead to the system for little gain.

	3. Change the kernel code so that it reads through the inode
	   and its single, double, and triple indirect blocks to
	   calculate the "real" size whenever a write is attempted.
	   You could even modify this so that it is done only when the
	   "fake" size of the file is larger than the ulimit.

	   This would probably impose considerably more overhead on the
	   system than the previous method.  (A user-space sketch of
	   the "real" vs. "fake" size distinction follows this list.)
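
For what it's worth, the two sizes can be compared from user space
without any kernel changes, at least on file systems whose stat(2)
fills in st_blocks (a count of 512-byte blocks actually allocated).
This is only a sketch, not 386/ix code, and SVR3 file systems don't
necessarily report st_blocks at all:

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
	struct stat sb;

	if (argc != 2 || stat(argv[1], &sb) == -1) {
		perror("stat");
		return 1;
	}

	/* st_size is the "fake" size that the ulimit check uses. */
	printf("nominal size: %ld bytes\n", (long) sb.st_size);

	/* st_blocks, where available, gives the "real" size: the
	 * number of 512-byte blocks actually allocated. */
	printf("allocated:    %ld bytes\n", (long) sb.st_blocks * 512L);

	if ((long) sb.st_blocks * 512L < (long) sb.st_size)
		printf("the file has holes\n");
	return 0;
}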


Plus, if you implement this, you have the problem of what to do if a file
is at its limit and the user tries to write a byte into one of the holey
blocks.  Obviously this write should fail, but I think that error would be
much harder to explain than failing at the end of the file.
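
To make that case concrete, here is a small sketch (the file name and
offsets are made up) of why such a write is awkward: the second
write(2) below does not move the end of the file at all, yet the file
system has to allocate a brand new data block for it.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* "holey.tmp" and the offsets are arbitrary. */
	int fd = open("holey.tmp", O_RDWR | O_CREAT | O_TRUNC, 0644);

	if (fd == -1) {
		perror("open");
		return 1;
	}

	/* Write one byte a megabyte in: everything before it is a
	 * hole and occupies no data blocks. */
	lseek(fd, 1024L * 1024L, SEEK_SET);
	write(fd, "x", 1);

	/* Write one byte into the middle of the hole.  The file size
	 * (the thing the current ulimit check looks at) is unchanged,
	 * but the file system must allocate a new block, which is the
	 * write a block-counting ulimit would have to refuse. */
	lseek(fd, 512L * 1024L, SEEK_SET);
	if (write(fd, "y", 1) == -1)
		perror("write into hole");

	close(fd);
	return 0;
}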

> It's clear that the file offset model doesn't really address
>the problem: it's a cheap way out. 

Yes, it is a cheap way out, and for most cases it satisfies exactly
what it was designed for.

I think there is always a case for setting some arbitrary limit
(albeit one suitably large that it is not run into very often, if at
all) on most system resources, including memory and disk space.
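
For reference, the limit in question is the one ulimit(2) reports,
expressed in 512-byte blocks; a write(2) that would push a file past
it fails (with EFBIG on the systems I have used).  A minimal sketch
of querying it:

#include <stdio.h>
#include <ulimit.h>

int main(void)
{
	/* UL_GETFSIZE asks for the per-process file size limit,
	 * which ulimit(2) reports in 512-byte blocks. */
	long blocks = ulimit(UL_GETFSIZE);

	if (blocks < 0) {
		perror("ulimit");
		return 1;
	}
	printf("file size limit: %ld blocks (%ld bytes)\n",
	    blocks, blocks * 512L);
	return 0;
}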

What the defaults should be is always up for argument.  Considering
the amount of disk space available on today's machines, I think the
ulimit should be set to something like 10,000 blocks (roughly 5 MB,
since ulimit counts 512-byte blocks) by default.  Yes, I know that is
my opinion and I know it is up for argument, but I think that a
ulimit of 10,000 would be much less likely to be run into.

Of course, in another 3 years, when everybody has 2 or 4 GB of disk
space on their PCs and small workstations, the default ulimit should
be increased again.
-- 
+-----------------------------------------------------------------------+
| Conor P. Cahill     uunet!virtech!cpcahil      	703-430-9247	|
| Virtual Technologies Inc.,    P. O. Box 876,   Sterling, VA 22170     |
+-----------------------------------------------------------------------+


