Slow [ NFS ] file update

dupuy at cs.columbia.edu
Thu Jul 26 14:50:35 AEST 1990


dz at cornu.ucsb.edu (Daniel James Zerkle) writes:
> To summarize the operation:
> a. 1 calls program on 2
> b. 2 writes first 256 bytes of a file (open-write)
> c. 1 reads those 256 bytes (open-read-close)
> d. 2 writes rest of file (write-close-exit)
> e. 1 reads rest of file

eplunix!das at harvard.harvard.edu (David Steffens) replies:
| Our paradigm is similar, but not identical:
| a. 1 starts a program on 2 via rsh and pauses
| b. 2 opens a file on 3 for writing and signals 1 that it is ready
| c. 1 waits for 2 to signal ready then opens the same file on 3 for reading
| d. 2 then continuously writes variable length hunks of data
|    into the file on 3 and tells 1 how much was written each time
| e. 1 loops reading and processing each hunk of data written by 2

> Right now, I have 1 do a periodic check to see if NFS has gotten it
> straight how big the file is.  In other words, 1 may wait about a minute
> while the file size gets straightened out.  This is totally unacceptable.

| Some things which seem to improve the situation:
| 1. After writing a hunk of data on 2, it helps to have 2 do an fsync(2).
| 2. Before reading the data on 1, it helps to close the file, reopen it
|    and then seek to the end of the data already read.

| Apparently, 1 asks 3 for the
| attributes of the file when the file is first opened and then caches the
| results.  Since 1 now thinks that it knows everything there is to know
| about the file, it doesn't bother to interrogate 3 for the current file
| attributes before each read, and thus doesn't see that the file has
| changed size.  As a consequence of this, 1 won't ask 3 for any piece of
| the file beyond the size it knows.  The close/open/seek seems to force 1
| to check the attributes of the file against reality on 3 and update its
| cache.

As David suspected, there be caches here.  What you are seeing is the
NFS attribute cache, which normally acts to speed up programs like ls and
make (i.e. standard unix utilities, which use pipes or stream sockets to
communicate with each other, rather than NFS-mounted files).  In SunOS 4.0
you can minimize the effects of the cache by specifying "actimeo=1" as an
option to the mount on the client.  This invalidates the cached attributes
after 1 second, rather than the default 60 seconds.  In SunOS 4.1, you can
disable the cache entirely using the "noac" mount option (actually, you can
do this in 4.0 as well, but it tickles a bug that will corrupt your file,
if not your filesystem).

However, might I suggest that, since you wish to move data from a process
on machine X to a process on machine Y, you use sockets to move the data
instead of NFS-mounted files.  TCP is an excellent protocol; the
performance you get should easily match that of UDP-based NFS, and it does
substantially better over long-haul networks, in the presence of noise,
and in the face of NFS implementation glitches (TCP hasn't had any
significant ones since before NFS was born).

Using sockets can be as easy as using pipes to rsh, or in the case of your
programs:

	machine1$ rsh machine2 program2 | program1

If you need to have the data logged into a file, you could always put a
"tee" in the pipeline somewhere, e.g.

	machine1$ rsh machine2 program2 | tee logfile | program1

Hope these suggestions help.

inet: dupuy at cs.columbia.edu
uucp: ...!rutgers!cs.columbia.edu!dupuy



More information about the Comp.sys.sun mailing list