Paging-space problems

rhoover at arnor.uucp
Thu Nov 15 09:38:20 AEST 1990


In article <MARC.90Nov14153807 at marc.watson.ibm.com>, marc at arnor.uucp writes:
|> malloc fails when the request causes the heap to exceed the 
|> ulimit for data.  It has nothing to do with paging space.
|> 
|> In AIX V3 the default data limit is quite large, which is why
|> it appears to behave differently.
|> 
|> Marc Auslander

Well, this is not true under SunOS.  For example, consider the following program (called big.c):

#include <stdio.h>
#include <stdlib.h>	/* for the malloc() prototype */

int main(void)
{
    while (malloc(1024*1024*4) != NULL)
	fprintf(stderr, "Another 4 meg\n");
    fprintf(stderr, "That's all folks\n");
    return 0;
}

cirrus% limit
cputime         unlimited
filesize        unlimited
datasize        524280 kbytes
stacksize       8192 kbytes
coredumpsize    unlimited
memoryuse       unlimited
cirrus% /etc/pstat -s
15312k allocated + 4816k reserved = 20128k used, 29772k available
cirrus% big
Another 4 meg
Another 4 meg
Another 4 meg
Another 4 meg
Another 4 meg
Another 4 meg
Another 4 meg
That's all folks
cirrus% 
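
For what it's worth, big got seven 4 meg chunks, about 28 Mb, which is roughly the 29772k of paging space pstat shows available and nowhere near the 512 Mb datasize limit, so the failure tracks paging space rather than the data ulimit.  If you want to see that datasize limit from inside a program instead of from csh, getrlimit() reports it; a minimal sketch (not part of the test above):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_DATA is the same "datasize" limit that csh's limit command shows. */
    if (getrlimit(RLIMIT_DATA, &rl) == 0)
        printf("datasize limit: %lu kbytes\n",
               (unsigned long) (rl.rlim_cur / 1024));
    return 0;
}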

Every Unix system that I have ever used has returned 0 (NULL) when malloc can no longer allocate usable memory.  When I malloc storage, I check for NULL, and if my application has files to be written out, etc., I free some storage and clean up.
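
By "clean up" I mean the usual defensive pattern, something along these lines (the spare cache here is a made-up stand-in for whatever the real application can give back):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical cache the application could release under memory pressure. */
static void *spare_cache = NULL;

static void release_cache(void)
{
    free(spare_cache);          /* free(NULL) is a no-op */
    spare_cache = NULL;
}

/* malloc wrapper in the spirit described above: on NULL, give back what
 * we can and try once more; if that also fails, save work and exit. */
static void *careful_malloc(size_t n)
{
    void *p = malloc(n);

    if (p == NULL) {
        release_cache();
        p = malloc(n);
    }
    if (p == NULL) {
        fprintf(stderr, "out of memory, writing files and exiting\n");
        /* a real application would flush its open files here */
        exit(1);
    }
    return p;
}

int main(void)
{
    char *buf;

    spare_cache = malloc(64 * 1024);
    buf = careful_malloc(1024);
    buf[0] = '\0';
    printf("got a buffer at %p\n", (void *) buf);
    free(buf);
    return 0;
}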

I see this malloc issue as one of compatibility.  Programs should not have to be rewritten in order to run on IBM machines.  If the /6000 version of malloc is faster, then a new call (vmalloc()?) should be provided for fast memory allocation under the new semantics.

Would you have been a happy camper if Berkeley had replaced fork() with the vfork() semantics and had provided psfork() for compatibility?

roger
rhoover at ibm.com


