ulimit (was: getty/login for callback)

Richard A. O'Keefe ok at quintus.UUCP
Fri Apr 21 19:14:35 AEST 1989


In article <1325 at nusdhub.UUCP> rwhite at nusdhub.UUCP (Robert C. White Jr.) writes:
>in article <1021 at quintus.UUCP>, ok at quintus.UUCP (Richard A. O'Keefe) says:
>> BUT you want to do [ulimit] on a per-file basis.

>The problem with this is that the code in the debug version would be
>substantially different than the eventual release material (changing
>limits if any and whatnot).

Why?  How can I call my program debugged if I haven't tested it on a
couple of real data sets as well as the tests I happened to think of?

>I believe that the general limit is a
>better idea because it allows for *uniform* and *scaled* runaway
>growth patterns to be more easily perceived.

My point is that the problem with a general limit is that it doesn't
allow some common failure modes to be detected *at*all*.  Returning to
my 1M -vs- 4k file example, if there is a mistake in my program such that each
record for the 4k file is being written 20 times, I'm going to use a
fair bit of disc space I didn't intend to.  "Uniform and scaled runaway"
is not the only kind, you know.  Note that my proposal has ulimit as
a special case, so having a per-file limit inherited from a per-process
value but settable down by fcntl() would reduce to the present scheme if
you didn't choose to use that fcntl().
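To make that concrete, here is a minimal C sketch of the idea.  Only
ulimit() and UL_SETFSIZE are real System V interfaces; the F_SETSIZELIM
fcntl() command and its value are hypothetical, standing in for whatever
command such an extension would actually define.

	/*
	 * Sketch only: F_SETSIZELIM is a made-up fcntl() command for the
	 * proposed per-file limit; ulimit()/UL_SETFSIZE are the real
	 * System V per-process file-size limit (in 512-byte blocks).
	 */
	#include <stdio.h>
	#include <fcntl.h>
	#include <ulimit.h>

	#define F_SETSIZELIM	100	/* hypothetical: cap this fd's size */

	int
	main()
	{
		int big, small;

		/* Present scheme: one process-wide ceiling for every file. */
		ulimit(UL_SETFSIZE, 2048L);		/* 2048 blocks = 1M   */

		big   = creat("bulk.out", 0644);	/* may really grow to 1M */
		small = creat("summary.out", 0644);	/* should never pass 4k  */

		/*
		 * Proposed addition: lower the limit for this one file.
		 * A bug that writes each summary record 20 times now hits
		 * the 4k cap and fails, instead of hiding under the 1M
		 * process-wide limit.
		 */
		if (fcntl(small, F_SETSIZELIM, 8L) == -1)	/* 8 blocks = 4k */
			perror("fcntl");

		/* ... write to big and small as usual ... */
		return 0;
	}

If you never issue the fcntl(), each file simply keeps the inherited
per-process value, which is what "reduces to the present scheme" means.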

>The small limit for initial development; large limits for dynamic testing;
>and installation limits for final product -- type development paths
>do tend to produce strong code and procedure.

Test what you can as soon as you can on real data, that's my motto.


