Limiting file mapping memory

Mike Caplinger miranda!mc at moc.jpl.nasa.gov
Sun Dec 9 09:07:00 AEST 1990


In the good old days of, say, BSD 4.2, the space used by block I/O buffers
was a fixed, fairly small fraction of the available physical memory.  One
could read a huge file and not worry about processes getting paged out,
because file memory and process memory were two different things.

In the brave new world of SunOS 4.0 and beyond, the block I/O buffers are
gone, replaced by a memory-mapping scheme in which file pages in memory
are treated the same as any other pages (except that they page against
their associated files rather than against swap space), and so processes
and files compete with each other for physical memory.  This means that
after reading a big file, one may find that processes have been paged out
(and one is left with a bunch of pages from a file that may never be
looked at again).
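
By way of illustration, here is a minimal sketch (the file name is
hypothetical) of the effect: mapping a large file and touching one byte
per page faults the whole file into physical memory through the same
machinery as ordinary process pages, so the file's pages compete with
everything else for RAM.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int fd;
        struct stat st;
        char *base;
        off_t off;
        long pagesize = getpagesize();
        volatile char sink;

        fd = open("/tmp/bigfile", O_RDONLY);    /* hypothetical large file */
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror("open/fstat");
            exit(1);
        }
        base = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) {
            perror("mmap");
            exit(1);
        }
        /* Touch one byte per page; each touch faults a file page in. */
        for (off = 0; off < st.st_size; off += pagesize)
            sink = base[off];
        munmap(base, (size_t)st.st_size);
        close(fd);
        return 0;
    }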

My question is this: is there a way to limit the number of pages the
system will use to perform file I/O, on a global, per-process, or
per-file basis?  I haven't had much luck using setrlimit on the
resident-set size, because the system still exceeds the RSS limit when
other processes are inactive.  The behavior I am trying to avoid is "run
a program that reads a big file, then wait 30 seconds for Emacs and your
shells to page back in."

No guesses except well-informed ones, please. Thanks.

	Mike Caplinger, ASU/Caltech Mars Observer Camera Project
	mc at moc.jpl.nasa.gov


