UNIX IPC Datagram Reliability under 4.2BSD

Brian Thomson thomson at uthub.UUCP
Sat Feb 11 08:35:19 AEST 1984


Chris Torek writes:
	"... UNIX IPC datagrams, in AF_UNIX, on 4.2, *are*
	reliable.  This is just a side-effect of the current implementation,
	but they might have noted this in the manual, ..."
	
Just in case anyone is misled by this, let me reiterate that
UNIX IPC datagrams, in AF_UNIX, on 4.2, *ARE NOT* reliable.  As I posted
last month, there is no flow control.  If you send a message to a
socket that doesn't have sufficient buffer resources to hold it,
the datagram is silently discarded.
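
For anyone who wants to see this firsthand, a small test program along
the following lines will demonstrate it.  (This is a sketch of mine, not
from the original discussion: the rendezvous path and message count are
arbitrary, and the sender is made non-blocking so the program cannot hang
on a kernel that does implement flow control.)  Under 4.2BSD as described,
every sendto() reports success, yet far fewer datagrams come out the
other end:

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define SOCKPATH "/tmp/dgram_drop"      /* arbitrary rendezvous path */
    #define NSENT    500

    int main()
    {
        struct sockaddr_un sun;
        char buf[32];
        int rcv, snd, i, sent = 0, rcvd = 0;

        memset(&sun, 0, sizeof sun);
        sun.sun_family = AF_UNIX;
        strcpy(sun.sun_path, SOCKPATH);
        unlink(SOCKPATH);

        /* receiver: bound but never read from while we send */
        rcv = socket(AF_UNIX, SOCK_DGRAM, 0);
        bind(rcv, (struct sockaddr *)&sun, sizeof sun);

        snd = socket(AF_UNIX, SOCK_DGRAM, 0);
        fcntl(snd, F_SETFL, O_NONBLOCK);    /* FNDELAY on 4.2BSD */

        for (i = 0; i < NSENT; i++) {
            sprintf(buf, "%d", i);
            /* under 4.2BSD this reports success even when the
             * datagram is being thrown away for want of buffers */
            if (sendto(snd, buf, strlen(buf) + 1, 0,
                       (struct sockaddr *)&sun, sizeof sun) >= 0)
                sent++;
        }

        fcntl(rcv, F_SETFL, O_NONBLOCK);
        while (recv(rcv, buf, sizeof buf, 0) > 0)   /* drain survivors */
            rcvd++;

        printf("sendto succeeded %d times, %d datagrams delivered\n",
               sent, rcvd);
        return 0;
    }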

The discussion has since waxed effusive, with lots of justification for
datagram unreliability in networks.  A few respondents also addressed the original
question, which asked why datagrams in the UNIX domain should be unreliable,
given that both endpoints of the communication are within the same processor.
It appears to me that there are three reasons:

    1) So the AF_UNIX domain can, in the future, be compatibly
       expanded via 'hidden' networking into a true inter-processor
       IPC mechanism.  UNIX datagrams could be directly implemented
       as network datagrams, and all your old programs would still
       work (but between processors now) because they didn't assume
       any more reliability than the network is willing to offer.

    2) Because SOCK_DGRAM means unreliable datagram, and shouldn't
       mean different things in different domains.

    3) The unreliability isn't gratuitous at all.  I can think of
       situations where I would want the packet to be discarded
       if the receiving process isn't keeping up.

Reason #1 is a good one, and has already been widely discussed.  Enough said.

Reason #2 smacks of being a head-in-the-clouds response.  People who
want to send messages without implementing their own higher-level protocols,
and without having to set up SOCK_STREAM connections, might well grumble
that this is a case of ideals at the expense of functionality.  But you
should remember that the Grand Plan for 4.x IPC includes FOUR socket types,
not two.  SOCK_RDM, the "reliable datagram" socket type, is what those
users really want.  And, if they had it, SOCK_DGRAM could remain blissfully
unreliable.  So, rather than complain that AF_UNIX's SOCK_DGRAMs are
wrong, they should complain that SOCK_RDM is unimplemented.  And
that is traceable to the lack of a reliable connectionless protocol in the
Internet domain.
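
Incidentally, the constant is already declared -- 4.2BSD's <sys/socket.h>
defines SOCK_RDM -- so the complaint is easy to demonstrate.  A call like
the following simply fails (the exact errno may vary from system to
system):

    #include <stdio.h>
    #include <sys/socket.h>

    int main()
    {
        /* SOCK_RDM is declared in <sys/socket.h>, but no protocol
         * implements it, so this should fail with something like
         * ESOCKTNOSUPPORT or EPROTONOSUPPORT */
        if (socket(AF_UNIX, SOCK_RDM, 0) < 0)
            perror("socket(AF_UNIX, SOCK_RDM, 0)");
        return 0;
    }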

Reason #3 reflects the notion that SOCK_DGRAM sockets should interact as loosely
as possible.  Returning to the UNIX domain case, the question becomes
"What should the kernel do if the receiving socket's buffers are full?"
It can either throw the data away or wait for the buffers to drain.  All
of SOCK_RDM, SOCK_SEQPACKET and SOCK_STREAM would wait.  But if I want
to insulate my important server process from lazy or malicious clients,
I may prefer the first.  If the process that asked for my service isn't
prepared to accept it, that's his problem and I don't want to be blocked
indefinitely by him.  Note that this isn't quite the same as a SOCK_RDM socket
with the non-blocking option enabled, because I am still willing to wait
for my own socket's transmit buffers to drain (for example).
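
To sketch what I mean (a made-up server with an arbitrary path name, not
anyone's real code): a datagram server can answer each request with a
plain sendto() and never be held up by a client that has stopped reading.
Under the 4.2BSD semantics described above, the undeliverable reply is
discarded and the server goes on to the next request:

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define SERVPATH "/tmp/dgram_server"    /* arbitrary rendezvous path */

    int main()
    {
        struct sockaddr_un me, client;
        socklen_t len;                      /* plain int on 4.2BSD */
        char req[128];
        int s;

        memset(&me, 0, sizeof me);
        me.sun_family = AF_UNIX;
        strcpy(me.sun_path, SERVPATH);
        unlink(SERVPATH);

        s = socket(AF_UNIX, SOCK_DGRAM, 0);
        bind(s, (struct sockaddr *)&me, sizeof me);

        for (;;) {
            len = sizeof client;
            if (recvfrom(s, req, sizeof req, 0,
                         (struct sockaddr *)&client, &len) < 0)
                continue;
            /* the reply cannot block us under 4.2BSD: if the client
             * (who must have bound its own address to be answerable
             * at all) isn't draining its buffers, the datagram is
             * silently discarded and we move on */
            sendto(s, "ok", 3, 0, (struct sockaddr *)&client, len);
        }
    }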

Still, I would claim that in most cases when you want connectionless,
easy-to-use UNIX domain IPC you are really asking for an RDM implementation.
-- 
			Brian Thomson,	    CSRG Univ. of Toronto
			{linus,ihnp4,uw-beaver,floyd,utzoo}!utcsrgv!thomson


