NBUF and pstat

Kent Sandvik ksand at Apple.COM
Sat Jan 19 06:42:46 AEST 1991


In article <18804.2796de90 at windy.dsir.govt.nz> sramtrc at albert.dsir.govt.nz writes:
>In article <2676 at dftsrv.gsfc.nasa.gov>, jim at jagubox.gsfc.nasa.gov (Jim Jagielski) writes:
>> Anyway, this all leads to an interesting question... certainly, as far as
>> disk buffers are concerned, there is a point of diminishing returns where
>> increasing the amount of buffers adds very little or even DECREASES performance
>> (possibly). Does anyone have any good system tuning information for A/UX...
>> 25% memory for NBUF seems about right, but with large systems (32 megs) that
>> still leaves a good chunk of free memory... Of course, that isn't bad since
>> that means that swapping won't occur :)
>
>As I understand it the bigger the disk cache, the better the performance
>because the less the actual disk has to be accessed. Accessing RAM is faster
>than accessing iron so the more there is in RAM the better. And the more
>the NBUFS, the more the RAM available for caching. If you are doing program
>development this is really useful because the compiler, the include files,
>and all the tmp files stay in RAM and that's a lot of disk accesses that
>are saved. There are still some disk accesses that are not "necessary"
>because the ufs filesystem writes enough stuff to disk immediately to be
>able to maintain filesystem consistency in case of a crash.

This is true until you get to the point where the buffers resident in memory
make it hard to find free space, so the system starts paging, and ultimately
swapping.

And swapping should be avoided, because swapping large binaries in and out
takes a long time.

This is all a big engineering science, and there are no simple rules. The
'10% of free memory' rule is only an approximation. It all depends on the
typical workload of the system: whether it runs a lot of small binaries or
a couple of big ones, how other resources are allocated, and the speed of
the disk holding the swap/page partition.

The best way is to do an empirical test: configure kernels with various
buffer sizes, run the same mix of applications as in real life, and then,
for instance, start processes that read from and write to disk while
timing benchmarks are running.

>I'm not sure what happens in the event of a crash with a large cache. I
>think the larger the cache the more data you lose. But definitely you do
>lose data in any crash. How much depends on how long since the last sync.
>I do kernel programming so I'm used to dealing with crashes so I'm in the
>habit of doing syncs before running dodgy software. Especially more so now
>that MacOS programs can crash the kernel. I include a sync in my makefiles
>in case I forget.

sync;sync;sync;  -). The system syncs automagically; I don't know the
exact timing, but a typical sync interval for a UNIX system is about
20 seconds. On some systems this interval is tunable as well (I know
A/UX does not have this; I sent in an RFC for it a long time ago).
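
The periodic sync is traditionally done by a tiny update daemon that
does nothing but loop over sync(). A minimal sketch of the idea (the
30-second interval is the traditional default on many systems; A/UX's
actual value may differ):

/* Sketch of the traditional update daemon: flush the buffer cache
 * to disk at a fixed interval, forever. */
#include <unistd.h>

int main(void)
{
    for (;;) {
        sync();     /* schedule all dirty buffers for writing */
        sleep(30);  /* traditional default; the real interval varies by system */
    }
    return 0;       /* not reached */
}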

I put together a HyperCard stack about A/UX tuning some time ago
(for the A/UX sales support people in Australia). If there's interest
I could revitalize some of the information and republish it.
UNIX system tuning is a black art, and with the advent of systems
such as SysV.4, with dynamically allocated resource tables, we should
finally be able to get rid of that nuisance.

regards,
Kent Sandvik



-- 
Kent Sandvik, Apple Computer Inc, Developer Technical Support
NET:ksand at apple.com, AppleLink: KSAND  DISCLAIMER: Private mumbo-jumbo
Zippy++ says: "Operator overloading is pretty useful for April 1st jokes"


