protocols

Henry Spencer henry at utzoo.uucp
Fri Nov 24 07:56:42 AEST 1989


In article <92074 at pyramid.pyramid.com> romain at pyramid.pyramid.com (Romain Kang) writes:
>If someone actually writes something from scratch, it would be best to
>offer a protocol that is a) robust, and b) utilizes maximum bandwidth
>over both full and half duplex links.  'g' protocol loses on both
>counts... ...Likewise, SLIP is not engineered for hostile environments,

I would offer, though, that the last thing the world needs is *another*
new protocol.  One should first make efforts, heroic ones, to use an
existing protocol.  Good protocol design is much harder than it looks.
We should try to stand on other people's shoulders, not their feet.

UUCP g protocol's inability to deal well with full-duplex links is
fundamental and hard to fix; there is also the small problem that g protocol
is not too well documented.  There is much to be said for g-protocol
compatibility to ease conversion, but other than that it hasn't got a lot
going for it.

SLIP is a bad choice because it is about to be replaced by PPP.

PPP isn't at all bad, especially if combined with header compression a la
Van Jacobson.  It ought to get quite good use out of either full- or
near-half-duplex links, with suitable policies in software on each end.
It can and does avoid the traditional control characters, with provision
for negotiating this in case they aren't a problem.  Techniques for
getting high performance out of TCP/IP are fairly well understood, as
are questions of how to deal with poor-quality links.  Manufacturer
support (e.g. Telebit protocol spoofing) is likely.  Documentation is
widespread (well, for everything but PPP, and that's coming).  Large-
scale use is inevitable, and the resulting experience and knowledge will
be valuable.  The one place where PPP does fall down is 7-bit links,
since that was explicitly not a design goal, but some simple encapsulation
technique should be able to solve that.  It is not necessary to design
a whole new protocol to get around such obstacles.
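The escaping mentioned above is ordinary byte-stuffing: reserve a flag
byte and an escape byte, and send any reserved (or, when negotiated,
any control) byte as the escape followed by the byte XORed with a fixed
mask.  The sketch below illustrates the technique; the particular byte
values (0x7E flag, 0x7D escape, 0x20 mask) are the HDLC-style ones and
are assumptions for illustration, not a claim about any final PPP spec.

```python
FLAG = 0x7E   # frame delimiter; must never appear inside a frame
ESC = 0x7D    # escape byte; must not appear unescaped either
MASK = 0x20   # XOR mask applied to an escaped byte

def escape(payload, escape_controls=True):
    """Byte-stuff a payload so the flag, the escape byte, and
    (optionally) the ASCII control characters never hit the wire.
    An escaped byte is sent as ESC followed by (byte XOR MASK)."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC) or (escape_controls and b < 0x20):
            out.append(ESC)
            out.append(b ^ MASK)
        else:
            out.append(b)
    return bytes(out)

def unescape(data):
    """Reverse the stuffing: ESC means 'XOR the next byte with MASK'."""
    out = bytearray()
    it = iter(data)
    for b in it:
        if b == ESC:
            out.append(next(it) ^ MASK)
        else:
            out.append(b)
    return bytes(out)
```

The cost is at worst 2x expansion (every byte escaped), which is why
being able to negotiate the escaping off on clean 8-bit links matters;
the same trick, applied to the high bit as well, is what would carry
PPP over a 7-bit link.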

>and it has been already pointed out that the traditional TCP suite
>(SMTP, FTP, and NNTP) requires unacceptable dead time.

This is a separate issue from whether something like TCP/IP/PPP should
be the low-level protocol, however.  A more streamlined set of higher-
level protocols *would* appear to be in order, unless something clever
can be done.  The latter is not impossible; remember that the requirement
is for throughput, not necessarily response time on any individual
request, and an IP connection can support multiple activities in parallel.
I have not looked at the matter in depth, but given enough piled-up
traffic, it might well be possible to keep the link busy simply by doing
more than one transfer at a time, with startup and shutdown staggered
so that at least one transfer is usually in the "push data through" phase.
(If I had to bet, I'd bet on the streamlined protocols being better, but
paralleling existing ones is worth investigating.)
-- 
That's not a joke, that's      |     Henry Spencer at U of Toronto Zoology
NASA.  -Nick Szabo             | uunet!attcan!utzoo!henry henry at zoo.toronto.edu