tar & ulimit are pissing me off.

Conor P. Cahill cpcahil at virtech.uucp
Wed Mar 14 00:40:12 AEST 1990


In article <183 at hacker.UUCP> steve at hacker.UUCP (Stephen M. Youndt) writes:
>The title says it all.  I've been trying to get the gcc archive on and off
>for the past 3 months, without any luck.  No matter what I do, either tar
>or ulimit seems to bite me.  There is a tunable parameter ULIMIT as well as
>what seems to be an undocumented command 'ulimit'.  Using 'ulimit 10000'

ulimit is not undocumented.  It is a "built-in" command in the shell,
documented in sh(1).
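
For example, from the Bourne shell (the values are illustrative; the limit
is counted in 512-byte blocks, and raising it above its current value
generally requires super-user):

	$ ulimit		# print the current file size limit
	2048
	$ ulimit 10000		# raise it for this shell and its children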

>allows me to create files of up to 5 Meg (approx), while changing the
>ULIMIT parameter doesn't seem to do anything at all.  The problem is that

You can change the ULIMIT parameter until you are blue in the face; it won't
take effect unless you rebuild the kernel and reboot.  If you did do this,
then the problem is your /etc/default/login file, which has a ULIMIT
parameter as well.
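
The entry in /etc/default/login typically looks something like this (the
value shown is illustrative):

	ULIMIT=2048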

>the 'ulimit' command is not inherited by uucp (even when I put the command in
>an /etc/rc2.d/S* file and reboot the system).  So, the problem remains that

Setting a ulimit affects only the current process and its children.  So when
you placed a ulimit call in an /etc/rc2.d/S* file, it took effect only while
that file was being processed (and for any children of that process).
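
You can see this from the shell (values illustrative):

	$ (ulimit 1000; ulimit)	# set in a subshell, printed in the subshell
	1000
	$ ulimit		# the parent shell's limit is unchanged
	2048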

The problem with uucp is that even if you had changed the S75cron file (which
starts cron, which starts uucp via a call to uudemon.hr in uucp's crontab),
it wouldn't have any effect on uucp sessions initiated from anywhere but
cron (i.e. uucp logins, users forcing a uucp via a "Uutry -r system", etc.).

>I can't receive files of over 2 Meg via uucp.  You might suggest at this point
>that I get the archive broken down into more manageable chunks.  Great idea!
>I tried this, and I received the archive fine.

I would suggest this anyway, because if you are transferring a 20 meg file
and you have a problem at byte 19999999, the whole file must be retransmitted.
By splitting the file into 1MB portions, only the one piece containing the
bad byte would have to be retransmitted.
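
If your split(1) cannot split on byte counts (many can't), a Bourne shell
loop around dd will do the job.  Here is a sketch (the file name, piece
size, and suffix scheme are illustrative; the two-digit suffixes keep the
pieces in glob order):

	i=0
	while :
	do
		suf=`expr $i + 100 | cut -c2-3`	# zero-padded: 00, 01, ...
		dd if=gcc-1.36.tar.Z of=gcc-1.36.tar.Z.$suf \
			bs=1024k count=1 skip=$i 2>/dev/null
		[ -s gcc-1.36.tar.Z.$suf ] || { rm gcc-1.36.tar.Z.$suf; break; }
		i=`expr $i + 1`
	done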

>Problem #2 is that even though I can 
>
> ulimit 20000
> cat gcc-1.36.tar.Z.*[0,1,2,3,4,5,6,7,8,9,10] > gcc-1.36.tar.Z
> uncompress gcc-1.36.tar
> tar xf gcc-1.36.tar

Your problem is probably due to the naming convention you chose.  If the
files are named as you indicate, the shell expands the pattern so that the
pieces go into gcc-1.36.tar.Z in the following order:

	gcc-1.36.tar.Z.0
	gcc-1.36.tar.Z.1
	gcc-1.36.tar.Z.10
	gcc-1.36.tar.Z.2 

BINGO - file.10 got placed before file.2.  However, since uncompress worked
correctly on the file, I would tend to doubt this.  Why don't you try to
unpack the data using the following pipeline:

	cat [gcc files] | uncompress | tar -xovf -

This way you won't run into any ulimit problems.
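
If the suffix ordering does turn out to be the problem, renaming the
single-digit pieces with a leading zero makes the glob order match the
numeric order (a sketch, assuming pieces numbered 0 through 10):

	for i in 0 1 2 3 4 5 6 7 8 9
	do
		mv gcc-1.36.tar.Z.$i gcc-1.36.tar.Z.0$i
	done
	cat gcc-1.36.tar.Z.?? | uncompress | tar -xovf -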

>Is there is way to permanently set ULIMIT? 

Yes.  

	1. remove the ULIMIT line from /etc/default/login
	2. change the ULIMIT configuration parameter
	3. rebuild the kernel (steps 2-4 are sketched below)
	4. reboot
	5. make sure there are no ulimit calls in /etc/profile and /etc/rc*
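
On a kernel configured with the id* tools, steps 2-4 might look something
like this (tool locations vary from vendor to vendor, so treat this as a
sketch rather than gospel):

	/etc/conf/bin/idtune ULIMIT 10000	# new value, in 512-byte blocks
	/etc/conf/bin/idbuild			# rebuild the kernel
	# then reboot to run the new kernel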

This works if you want your ulimit to be <= 12288 (6MB, since the limit is
counted in 512-byte blocks).  If you want it to be larger, you must modify
the /etc/conf/cf.d/mtune file and change the ULIMIT line to something like
the following (the three numbers are the parameter's default, minimum, and
maximum values):

	ULIMIT	3072	2048	whatever_max_you_want


-- 
Conor P. Cahill            (703)430-9247        Virtual Technologies, Inc.,
uunet!virtech!cpcahil                           46030 Manekin Plaza, Suite 160
                                                Sterling, VA 22170 


