mail to xenurus.gould.com

postmaster at urbana.mcd.mot.com postmaster at urbana.mcd.mot.com
Wed Jul 12 00:58:48 AEST 1989


The enclosed mail message was addressed to a system which is no longer 
in service.  We have attempted to forward your mail to the correct 
recipient(s).  If this is not possible, you will receive additional 
mail at the time of failure. 

In the future, please use the system name "urbana.mcd.mot.com" instead. 

Please correct any mailing lists or alias files that may reference
any of the following obsolete system names:

		xenurus.gould.com
		fang.gould.com
		fang.urbana.gould.com
		vger.urbana.gould.com
		ccvaxa.gould.com
		ccvaxa.urbana.gould.com
		burt.urbana.gould.com
		mycroft.urbana.gould.com

If you have any further problems or questions about mail to this site,
please contact postmaster at urbana.mcd.mot.com. 

	Thank you for your cooperation,

	postmaster at urbana.mcd.mot.com
	Motorola Microcomputer Division, Urbana Design Center


---------- text of forwarded message:

Received: from sem.brl.mil by placebo (5.61/1.34)
	id AA04291; Mon, 10 Jul 89 21:32:07 -0500
Received: by SEM.BRL.MIL id al11312; 10 Jul 89 15:26 EDT
Received: from SEM.BRL.MIL by SEM.brl.MIL id aa04939; 10 Jul 89 3:16 EDT
Received: from sem.brl.mil by SEM.BRL.MIL id aa04832; 10 Jul 89 2:45 EDT
Date:       Mon, 10 Jul 89 02:45:20 EST
From: The Moderator (Mike Muuss) <Unix-Wizards-Request at BRL.MIL>
To: UNIX-WIZARDS at BRL.MIL
Reply-To: UNIX-WIZARDS at BRL.MIL
Subject:    UNIX-WIZARDS Digest  V7#125
Message-Id:  <8907100245.aa04832 at SEM.BRL.MIL>

UNIX-WIZARDS Digest          Mon, 10 Jul 1989              V7#125

Today's Topics:
                                  gath
           Re: Algorithm needed: reading/writing a large file
                   ftruncate broken? - Sun-based NFS
                   Socket Extensibility to non TCP/IP
                          uucp delivery order?
         Re: What kinds of things would you want in the GNU OS?
                        Re: SLIP compression...
      Re: Using the uucp daemon (TCP/IP) on System V.3 with TCP/IP
               Re: chown (was: at files and permissions)
                 Re: Convert string time into seconds?
                      Re: at files and permissions
               Re: chown (was: at files and permissions)
                   Re: scsi rll trade off questions?
      Re: Using the uucp daemon (TCP/IP) on System V.3 with TCP/IP

-----------------------------------------------------------------

From: "barbara.tongue" <bgt at cbnewsh.att.com>
Subject: gath
Date: 8 Jul 89 19:30:28 GMT
Keywords: help!
To:       unix-wizards at sem.brl.mil

Folks,

In one of the tools directories on my machine, I discovered
the executable "gath."  Now, from what I've heard, gath is a
tool which "gathers" files, allows shell execution if lines
are prefaced with ~$, and can be used in combination with
troffed files.  Here is my question -

Let's say that I have a dynamic flat-file database, whose fields
can be any combination of 17 variables.  I want to pipe that
into troff and get out a clean table with the headers correctly
inserted.  With definite input, that is no problem; for example,
I've written into my .tbl file what the header names are and
in what file the data is located.  The problem occurs when I
want to switch to using $1 as my file name; the command

	gath file.tbl file.data 

defines $1 as null.  (I'm calling $1 from file.tbl; I assume
that in itself is a problem.)

Does anyone know where the source code can be found?

I have no man page for this executable; can anyone help?

Much, much *much* thanks in advance,
-- 
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%   The Speaking Tongue, AT&T   %%  C Code.  C Code Run.  Run, Code, RUN! %%
%%    (..!att)!feathers!bgt      %%           PLEASE!!!!                   %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

-----------------------------

From: David Quarles <david at jc3b21.uucp>
Subject: ... HELP  HELP  ... TROUBLE PRINTING WITH eroff ...
Date: 8 Jul 89 14:29:39 GMT
Keywords: eroff printing more than one page
To:       unix-wizards at sem.brl.mil

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

I need some help on getting eroff to print the text (all of it) from a
regular text file on UNIX.

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

I have the book "Preparing documents with UNIX" by Brown et al. but just
cannot figure out how to get the text to print continuously from page to
page.  This book covers troff and nroff but not 'eroff'.  I had hoped
there would be something in it that would help.

From talking to a couple of others at this site, I gather that this
'eroff' is apparently third-party software for UNIX.

What happens is that several lines get left off at the bottom of a page,
and then the second page does not pick up the missing lines; the text is
simply skipped.  ALL I WANT TO DO IS TO TAKE A TEXTFILE AND PRINT 
WITH eroff (for our HP Laserjet).  This program eroff does a very nice  
job with margins and font styles.

ANY  IDEAS  OUT  THERE ??   ANY ADVICE WILL BE GREATLY APPRECIATED !!

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
PLEASE  email  since our UNIX system sometimes purges the news before I
get a chance to read it ...
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

=-=-= Email: david at jc3b21.UUCP -=-=-=-=-=-=-=-= Dave =-=-=-=-=-=-=-=-=-=-= EOT

-----------------------------

From: Jeffrey Kegler <jeffrey at algor2.uucp>
Subject: Re: Algorithm needed: reading/writing a large file
Date: 9 Jul 89 06:05:41 GMT
To:       unix-wizards at sem.brl.mil

In article <207 at larry.sal.wisc.edu> jwp at larry.sal.wisc.edu.UUCP (Jeffrey W Percival) writes:

=> Please be careful with your paraphrasing.

I certainly promise to try.

=>  My question was about optimizing the process of rearranging a disk file
=> according to a *given* mapping.

Jeffrey P. (no relation) had implemented a suggestion made in the AT&T
Bell Laboratories Technical Journal, by J. P. Linderman, p. 258.  He
extracted the keys and record locations of an unsorted file (call it
U), sorted them, and then constructed the sorted file (call it S),
only to find the random seeks involved in the last phase horrifyingly
slow.

=> One helpful person suggested reading sequentially and writing randomly,
=> rather than vice-versa,

That would have been my first thought.

=> and I tried that but it didn't help.  I guess
=> the benefit gained from using the input stream buffering was canceled
=> out by the effective loss of the output stream buffering.

Oh well.

As a second try, allocate enough memory for N full length records, and
two arrays to be sorted together of N keys and N record locations.  Go
through the sorted keys and find the keys and locations in U of the
first N records in the *sorted* file.  Sort them by record location in
U, the unsorted file, and read them in, in order by location in U,
writing them in memory in sorted order by key in the array of full
length records.  Then write those records out.  Repeat until all
records are written.  This will involve 1 sequential pass to write
file S, and M/N sequential passes to read file U.
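
In rough C, the batching would look something like this (a sketch only:
RECLEN, BATCH and the names below are illustrative, not from any real
program; fixed-length records and an in-core index already sorted by
key are assumed):

	#include <stdio.h>
	#include <stdlib.h>

	#define RECLEN 128       /* bytes per record (assumed) */
	#define BATCH  1024      /* N: records held in memory per pass */

	/* loc = byte offset of the record in U; slot = its place in the batch */
	struct idx { long loc; long slot; };

	static int byloc(const void *a, const void *b)
	{
	    long d = ((const struct idx *)a)->loc - ((const struct idx *)b)->loc;
	    return d < 0 ? -1 : d > 0;
	}

	/* index[] has nrec entries sorted by key; index[i].loc is the offset
	   in U of the record that belongs at position i of the sorted file S. */
	void rearrange(FILE *U, FILE *S, struct idx *index, long nrec)
	{
	    char *buf = malloc((size_t)BATCH * RECLEN);
	    long base, i, n;

	    if (buf == NULL)
	        return;
	    for (base = 0; base < nrec; base += BATCH) {
	        n = nrec - base < BATCH ? nrec - base : BATCH;

	        /* remember each entry's place in key order, then sort the
	           batch by location in U so the reads are nearly sequential */
	        for (i = 0; i < n; i++)
	            index[base + i].slot = i;
	        qsort(index + base, (size_t)n, sizeof(struct idx), byloc);

	        /* one pass over U for this batch: read in location order,
	           dropping each record into its key-order slot in memory */
	        for (i = 0; i < n; i++) {
	            fseek(U, index[base + i].loc, SEEK_SET);
	            fread(buf + index[base + i].slot * RECLEN, RECLEN, 1, U);
	        }

	        /* append the whole batch to S, already in key order */
	        fwrite(buf, RECLEN, (size_t)n, S);
	    }
	    free(buf);
	}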

A further improvement is to calculate how many sequential reads cost
the same as a random seek.  Call that ratio R.  Whenever performing
the algorithm above would require more than R sequential reads (this
is easily determined from the difference in the record locations),
perform a seek.

My guess at R for UNIX is around 2.5 times the number of records per block.
Obviously, the larger N is, the better this will work.  Note that your
original try is this algorithm in the special case where N is 1.  If we
could run this algorithm in terms of physical disk blocks instead of
logical file locations, it could really hum.

Further optimizations suggest themselves, but enough already.
-- 

Jeffrey Kegler, President, Algorists,
jeffrey at algor2.UU.NET or uunet!algor2!jeffrey
1762 Wainwright DR, Reston VA 22090

-----------------------------

From: der Mouse <mouse at mcgill-vision.uucp>
Subject: ftruncate broken? - Sun-based NFS
Date: 9 Jul 89 05:13:16 GMT
To:       unix-wizards at sem.brl.mil

The ftruncate() call appears to be broken on at least some systems with
NFS implementations based on Sun's.  I've tried this on a Sun-3 with
release 3.5 and on a VAX running mtXinu 4.3+NFS.  I also tried it on a
MicroVAX running real 4.3, and it did not exhibit the broken behavior.
But it's not directly an NFS problem, because it happens even when the
file is on a ufs filesystem.

The problem is that ftruncate() fails if the file modes prohibit
writing, even if the file descriptor used does permit writing.  For
example, try the following program on a handy Sun.  Notice that (unless
you try it as super-user), the ftruncate call fails.  Try it on a 4.3
machine, though, and everything's fine.

(I checked the Sun manpage, and there's not even a note in the BUGS
section warning about this, so presumably someone thinks it should
work the way it does on 4.3.)

Anybody have a simple fix?  (Patch a couple of bytes to noops somewhere
in the OBJ/ files perhaps?)  Will it be fixed in newer releases (4.x)?
I'm about ready to try to work out a fix on the mtXinu system, to which
we have source, but that's not much help on the Suns.

	#include <sys/file.h>
	
	int fd;
	char junk[8192];
	
	main()
	{
	 unlink("test.file");
	 fd = open("test.file",O_RDWR|O_CREAT|O_TRUNC,0666);
	 if (fd < 0)
	  { perror("open/create test.file");
	    exit(1);
	  }
	 if (write(fd,&junk[0],8192) != 8192)
	  { perror("write #1");
	    exit(1);
	  }
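	 /* make the file itself read-only; the already-open
	    descriptor was opened O_RDWR and stays writable */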
	 if (fchmod(fd,0444) < 0)
	  { perror("fchmod");
	    exit(1);
	  }
	 if (write(fd,&junk[0],8192) != 8192)
	  { perror("write #2");
	    exit(1);
	  }
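	 /* this is the call that fails on the Sun-based systems
	    described above, even though fd is open for writing */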
	 if (ftruncate(fd,(unsigned long int)16000) < 0)
	  { perror("ftruncate");
	    exit(1);
	  }
	 if (close(fd) < 0)
	  { perror("close");
	    exit(1);
	  }
	 exit(0);
	}

					der Mouse

			old: mcgill-vision!mouse
			new: mouse at larry.mcrcim.mcgill.edu

-----------------------------

From: Paul Hardiman <paul at bcsfse.uucp>
Subject: Socket Extensibility to non TCP/IP
Date: 7 Jul 89 19:34:20 GMT
To:       unix-wizards at sem.brl.mil


What is the story on using sockets on more than just TCP/IP?
For instance, on one of the OSI protocols (MAP or TOP), or on X.25?
-- 
  Paul Hardiman     ...!uw-beaver!ssc-vax!voodoo!bcsfse!paul
The above views are strictly my own.
============================================================

-----------------------------

From: Jim Rosenberg <jr at amanue.uucp>
Subject: uucp delivery order?
Date: 9 Jul 89 03:27:38 GMT
To:       unix-wizards at sem.brl.mil

I asked this question once before & got a thundering silence -- sorry if I
missed any replies, but I *still need to know*.  How can I guarantee that uucp
will deliver jobs to a remote system in the order in which they were queued?
More specifically, how can I issue a series of uux requests and be sure that
the uuxqt at the remote end will execute them in the same order?  I have HDB
at one end, and by the time the system is in production will have HDB at the
other end too.  The system will be System V.3 in production, if that makes any
difference.

My *very strong impression* is that uucico simply uses the ordering you would
get by doing an ls -f C.* on the spool directory.  This is not at all
guaranteed to give the same order as ls -rt C.*, which is what I'd like.
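
(For concreteness, the order I want is what you would get by sorting the
C. files by modification time, as in this rough sketch.  The array size
and path handling here are only illustrative.)

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <time.h>
	#include <dirent.h>
	#include <sys/types.h>
	#include <sys/stat.h>

	struct job { char name[256]; time_t mtime; };

	static int bytime(const void *a, const void *b)
	{
	    time_t d = ((const struct job *)a)->mtime - ((const struct job *)b)->mtime;
	    return d < 0 ? -1 : d > 0;
	}

	/* print the C.* files in a spool directory oldest first ("ls -rt" order) */
	int main(int argc, char **argv)
	{
	    const char *spool = argc > 1 ? argv[1] : ".";
	    DIR *d;
	    struct dirent *e;
	    struct stat st;
	    static struct job jobs[1024];
	    char path[1024];
	    int n = 0, i;

	    if ((d = opendir(spool)) == NULL) {
	        perror(spool);
	        return 1;
	    }
	    while ((e = readdir(d)) != NULL && n < 1024) {
	        if (strncmp(e->d_name, "C.", 2) != 0)
	            continue;                       /* only the work files */
	        sprintf(path, "%s/%s", spool, e->d_name);
	        if (stat(path, &st) == 0) {
	            strcpy(jobs[n].name, e->d_name);
	            jobs[n].mtime = st.st_mtime;
	            n++;
	        }
	    }
	    closedir(d);
	    /* queue (mtime) order, not raw directory order */
	    qsort(jobs, (size_t)n, sizeof(struct job), bytime);
	    for (i = 0; i < n; i++)
	        printf("%s\n", jobs[i].name);
	    return 0;
	}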

If this suspicion is correct, then I could try to solve my problem by filling
up "holes" in the spool directory before issuing the uux request -- *PROVIDED*
I could completely lock any uuoids in the interim which might remove any files
from the spool directory.  (I really don't care if some other process sneaks
in "intervening" jobs -- that doesn't matter.  The process that will create
the jobs I'm concerned about has its own locking that guarantees only one
instance can run at once.)  For old-style uucp this is a dire pain, since all
sites share the same spool directory.  That would mean locking uucp across
all sites.  But for HDB I could at least lock the site in question, fill up
holes in the spool directory, then unlock the site.  Is this good enough?  Is
there a simpler way?

Somehow this makes me nervous.  Does the cleanup daemon honor a site lock?
Tampering with directory slots like this seems like a real kludge, and there's
no way I can think of to lock the directory in a truly safe way that's
guaranteed to be reliable.

Not to mention the problem that even if I can guarantee that uucico on the
sending end sends jobs in chronological order, I still don't know if uuxqt on
the receiving end will *run* them in the same order!  If uuxqt runs jobs in ls
-f order for the receiver's spool directory, then no amount of clever fakery
on the sending end will help one whit.  If this is how uuxqt works then I fear
there may simply be no way to do this.

Aargh, am I asking for the impossible?  This seems like such a straightforward
thing to want to do, I'd have thought this issue would have been old hat.  I
notice news articles all the time where a reply has a lower article number
than the article to which it's replying; I wonder how much of that is from
uucico delivering jobs "out of order".

Any help appreciated.
-- 
 Jim Rosenberg
     CIS: 71515,124                         decvax!idis! \
     WELL: jer                                   allegra! ---- pitt!amanue!jr
     BIX: jrosenberg                  uunet!cmcl2!cadre! /

-----------------------------

From: Andrew Hume <andrew at alice.uucp>
Subject: Re: What kinds of things would you want in the GNU OS?
Date: 9 Jul 89 06:06:43 GMT
To:       unix-wizards at sem.brl.mil

In article <1050 at etnibsd.UUCP>, vsh at etnibsd.UUCP (Steve Harris) writes:
> In article <1549 at salgado.Solbourne.COM> dworkin at Solbourne.com (Dieter Muller) writes:
> >I'd *really* like a sane tty driver.
> 
> Hear hear!!  At a former job we talked a lot about how we would rewrite
> the tty driver.  One idea was to give the user, via ioctl's, access to
> the uart (or whatever serial-line multiplexer you have).  One ioctl to
and so on.....

this is plainly false advertising. it is plausible to give complete control
of a uart to a user. it is NOT plausible to do so under the guise
of a sane tty driver. normally, you would implement a new device
(say /dev/uart).

-----------------------------

From: "Steven M. Bellovin" <smb at ulysses.homer.nj.att.com>
Subject: Re: SLIP compression...
Date: 9 Jul 89 12:46:42 GMT
To:       unix-wizards at sem.brl.mil

In article <5108 at oregon.uoregon.edu>, jqj at oregon.uoregon.edu (JQ Johnson) writes:
> One possible place to put compression is in the modem itself.

The problem with putting compression in the modem is that you're
still limited by the 9.6Kbps or 19.2Kbps pipe from the CPU to the
modem.  (Assuming an external modem, of course.)

-----------------------------

From: Cliff Spencer <cspencer at spdcc.com>
Subject: Re: Using the uucp daemon (TCP/IP) on System V.3 with TCP/IP
Date: 9 Jul 89 12:38:08 GMT
Keywords: UUCP TCP/IP uucpd uucico sockets BSD4.3 SysV.3
To:       unix-wizards at sem.brl.mil

>>Porting BSD 4.3 UUCP daemon has already been done several times for different 
>>incarnations of TCP/IP implementations for system V Unix's.  Unfortunately
>>none of them are "free" that I know of.
>
>I only need the patches, I have the BSD4.3 uucpd source...

What's the big mystery? Doesn't the daemon just spawn /usr/lib/uucp/uucico? 

							-cliff

-----------------------------

From: Barry Shein <bzs at bu-cs.bu.edu>
Subject: Re: chown (was: at files and permissions)
Date: 9 Jul 89 15:38:15 GMT
To:       unix-wizards at sem.brl.mil


From: gwyn at smoke.BRL.MIL (Doug Gwyn)
>There seem to me to be two valid services that can be performed
>by a disk "quota" system.  One of them is to prevent runaway disk
>consumption such as
>	cat x >> x
>and the other is to keep users from accumulating junk that fills
>the available disk.  The first problem is dealt with adequately
>by a resource limit mechanism a la ulimit, or more reliably by a
>"dynamic" quota monitor attached to the specific session.  The
>second problem can be dealt with administratively, with periodic
>use of "du|sort -rn" to find where the problems are.  Realistic
>long-term storage quotas really have to be negotiated between the
>users and the system administrator anyway.  These methods of
>providing disk quota services do not encounter the scenario that
>you described for the UID-based quota scheme when the file owner
>is allowed to chown his own file.

No, it can't be dealt with with "du|sort -rn" except on very small
systems where you can probably just say "someone's hogging the disk"
loudly and get the same effect, cause everyone's in the same room
anyhow (ok, I exaggerate, but small systems with perhaps a hundred or
two entries in the password file.) Or, of course, where you charge
hard currency for disk space so the system has built-in feedback which
makes such problems relatively rare (on one system like that at
Harvard I was the "disk hog", but my funds solved the problem simply
enough, they bought me my own washing machine, no tears.)

Consider the system Rob Pike was describing in his recent USENIX talk.
One major component was a large, organization-wide file server. This
is the type of system that easily has tens of thousands of accounts
(that's not unusual, I worked with a non-unix system over the last few
years that had over 15,000 login accounts in the password file.)

You can have dozens if not hundreds of people using more than what was
decided was their fair share of disk every day. So you run this script
and send them mail.  So what?  Twenty of them who went over their fair
share won't be back for weeks to see your mail (negligently or
otherwise; they may have thought they had a good reason to do whatever
they did), are way over quota, and the disk is bursting at the seams on
some partitions.  Another ten are ignoring you.

Don't tell me, you start moving some of their stuff off to tape. Oh
what fun, let's have about two dozen people to run this system just to
handle sending and answering disk quota mail, putting things to tape,
dealing with irate users who find they were put to tape and are quite
sure you are mistaken and have inconvenienced them (or believe they
can play the political game to make you never do that to them again),
getting the stuff off tape, and dealing with people who are quite sure
something has gone wrong in the restoral, not to mention a phone call or
two about how it took so damn long and they now have a dozen people
idle, which is costing them about a thousand dollars an hour, while you
deal with the others who are being difficult (i.e. human), etc., etc., etc.

Sh*t Doug, I'd own your whole disk farm just by making you do things
by written, signed memo. You'd spend your weekends proposing budgets
for another dozen secretaries.

Obviously little systems don't need quotas very badly (tho, hey, they
solve both problems you describe with one model, why introduce two
systems where one will do?)

The correct answer is that if you personally shouldn't be constrained
by quotas, then either you should have infinite quotas or access to some
(set[gu]id) program which lets you set your own quotas (so problem #1,
the accidental overrun is still averted, if desirable.)

Disk is a finite, valuable resource. Many organizations must manage
their disk with many users from diverse administrative domains, and
manage it without any realistic chargeback scheme (ie. the disk is
essentially or actually free* as far as any individual user is
concerned.) The simplest, most obvious way to do this is to assign
disk quotas and have the software enforce these quotas automatically
instead of turning some poor sap into your local disk slave heavy.

My suspicion is you've never managed large systems like this or you
wouldn't even dream of suggesting to just send mail to offenders. And
they're not rare (hint: just about every university has at least one,
if not a few dozen, such systems.)

 --------------------

* In fact it's often worse than "free" since the disk is being paid
for out of overhead by everyone so anything you can grab for yourself
is a boon to you, kinda like taxes, you actually can win as long as
you're getting more than your fair share and someone else isn't.
Sorry, but that's life, you don't fix it by removing quotas.
-- 
	-Barry Shein

Software Tool & Die, Purveyors to the Trade
1330 Beacon Street, Brookline, MA 02146, (617) 739-0202
Internet: bzs at skuld.std.com
UUCP:     encore!xylogics!skuld!bzs or uunet!skuld!bzs

-----------------------------

From: Wayne Krone <wk at hpirs.hp.com>
Subject: Re: Convert string time into seconds?
Date: 7 Jul 89 21:27:09 GMT
To:       unix-wizards at sem.brl.mil

> I have a user entered time/date in the format:
> yymmddhhmmss
> I need to convert this into seconds since the epoch and

If you have ANSI C libraries, convert the yymmddhhmmss into a tm struct
and then use mktime() to convert that into seconds since the epoch.
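
For instance (a minimal sketch, assuming ANSI <time.h>; the function
name is mine, and yy is taken as years since 1900):

	#include <stdio.h>
	#include <time.h>

	/* parse "yymmddhhmmss" (local time) and return seconds since the
	   epoch, or (time_t)-1 on a parse or conversion failure */
	time_t stamp_to_seconds(const char *s)
	{
	    struct tm tm;
	    int yy, mo, dd, hh, mi, ss;

	    if (sscanf(s, "%2d%2d%2d%2d%2d%2d", &yy, &mo, &dd, &hh, &mi, &ss) != 6)
	        return (time_t)-1;

	    tm.tm_year = yy;        /* yy taken as years since 1900 */
	    tm.tm_mon  = mo - 1;    /* struct tm months run 0..11 */
	    tm.tm_mday = dd;
	    tm.tm_hour = hh;
	    tm.tm_min  = mi;
	    tm.tm_sec  = ss;
	    tm.tm_isdst = -1;       /* let mktime() work out daylight saving */

	    return mktime(&tm);
	}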

Wayne

-----------------------------

From: "Brandon S. Allbery" <allbery at ncoast.org>
Subject: Re: at files and permissions
Date: 9 Jul 89 15:36:14 GMT
Followup-To: comp.unix.questions
To:       unix-wizards at sem.brl.mil

As quoted from <669 at lzaz.ATT.COM> by hutch at lzaz.ATT.COM (R.HUTCHISON):
+---------------
| About "at" requiring "root" permission, I guess it needs it to write
| into the "atjobs" directory.
+---------------

at needs root permissions so that it can setuid() itself to the owner of
the at job file and execute the job as the user who submitted it.
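
Roughly like this (not the real at source, just the shape of it; the
function name is illustrative):

	#include <sys/types.h>
	#include <sys/stat.h>
	#include <unistd.h>
	#include <stdio.h>

	int run_at_job(const char *jobfile)
	{
	    struct stat st;

	    if (stat(jobfile, &st) < 0) {
	        perror(jobfile);
	        return -1;
	    }
	    /* become the job's owner; only root may switch ids like this */
	    if (setgid(st.st_gid) < 0 || setuid(st.st_uid) < 0) {
	        perror("setuid/setgid");
	        return -1;
	    }
	    execl("/bin/sh", "sh", jobfile, (char *)0);   /* run the job as its owner */
	    perror("execl");
	    return -1;
	}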

++Brandon
-- 
Brandon S. Allbery, moderator of comp.sources.misc	     allbery at ncoast.org
uunet!hal.cwru.edu!ncoast!allbery		    ncoast!allbery at hal.cwru.edu
      Send comp.sources.misc submissions to comp-sources-misc@<backbone>
NCoast Public Access UN*X - (216) 781-6201, 300/1200/2400 baud, login: makeuser

-----------------------------

From: Bill Carpenter <wjc at ho5cad.att.com>
Subject: Re: chown (was: at files and permissions)
Date: 9 Jul 89 11:44:25 GMT
Sender: bill at cbnewsh.att.com
To:       unix-wizards at sem.brl.mil

In article <10501 at smoke.BRL.MIL> gwyn at smoke.BRL.MIL (Doug Gwyn) writes:
> So now the issue becomes:  Is the BSD disk quota system bogus?
> ...
> second problem can be dealt with administratively, with periodic
> use of "du|sort -rn" to find where the problems are.  Realistic
> long-term storage quotas really have to be negotiated between the
> users and the system administrator anyway.  These methods of
> providing disk quota services do not encounter the scenario that
> you described for the UID-based quota scheme when the file owner
> is allowed to chown his own file.

My guess is that the reason quotas are not handled administratively is
that it is too much hassle for some people.
Far be it from me to judge whether automating penalties is justified
on somebody else's system.

However, if I were building a tool to count up how much disk was being
used by various parties, I might just make the owner of a directory
the responsible person for all the blocks in the normal files
immediately under it.  Sure, some people leave directories open to
being filled up by sneaky people who want to evade disk quotas, but at
least my scheme would make the directory owner a co-conspirator.
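
Roughly this sort of thing (a sketch of the accounting rule only, with
an illustrative function name; I'm not claiming any real tool does it
this way):

	#include <stdio.h>
	#include <string.h>
	#include <dirent.h>
	#include <sys/types.h>
	#include <sys/stat.h>

	/* charge the owner of `dir' with the blocks of the plain files
	   directly under it; prints "uid  blocks  directory" */
	void charge_dir(const char *dir)
	{
	    DIR *d;
	    struct dirent *e;
	    struct stat st;
	    char path[1024];
	    long blocks = 0;
	    long owner;

	    if ((d = opendir(dir)) == NULL)
	        return;
	    if (stat(dir, &st) < 0) {
	        closedir(d);
	        return;
	    }
	    owner = (long)st.st_uid;

	    while ((e = readdir(d)) != NULL) {
	        sprintf(path, "%s/%s", dir, e->d_name);
	        /* lstat so a symlink can't charge this owner for someone else's file */
	        if (lstat(path, &st) == 0 && S_ISREG(st.st_mode))
	            blocks += (long)st.st_blocks;
	    }
	    closedir(d);
	    printf("%ld\t%ld\t%s\n", owner, blocks, dir);
	}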

Losing chown to get disk quotas seems about as wise as having an
imposed low ulimit.
--
   Bill Carpenter         att!ho5cad!wjc  or  attmail!bill

-----------------------------

From: Tatu Ylönen <ylo at sauna.hut.fi>
Subject: Re: scsi rll trade off questions?
Date: 9 Jul 89 18:52:30 GMT
Sender: news at santra.uucp
To:       unix-wizards at sem.brl.mil


In article <14978 at ut-emx.UUCP> allred at ut-emx.UUCP (Kevin L. Allred) writes:
   I'm putting together a low end workstation for my personal use at home.
   It will have a 386SX, 4MB memory and monochrome VGA graphics.
   Initially I plan to just run MSDOS, but soon I would like to run UNIX.
   I currently am considering hard drives in the range of 65 to 80 MB.  I
   was only considering an RLL drive with a 1:1 interleave controller until
   I had it pointed out to me that Seagate has recently started marketing a
   low cost SCSI adapter (ST01 and ST02) suitable for use with its
   ST296N 80MB hard disk.  This combination reportedly offers about 750
   KB/sec transfer rate, which is comparable to the 1:1 interleave RLL
   transfer rate, and it is more cost effective.  Apparently the SCSI
   adapter works fine under DOS, but I have already had it related to me
   that it probably won't work with UNIX because of lack of drivers (I
   heard that was a problem common to most SCSI boards even the expensive
   intelligent ones like the WD7000).  Are the various UNIX vendors
   developing drivers, so that I don't need to worry about this, or
   should I stick with the RLL controller and disks?

I have used a Priam 738 SCSI disk (337 MB, 20ms) with the Seagate
ST-01 controller for about one and a half years now.  For the first
half a year I used it under msdos on a slow 16-MHz 386 machine.
Coretest and others reported transfer rates in the range of 750 KB/sec.
(Check that you have the 0WS jumper installed - without it I only got
something like 500 KB/sec).

About a year ago I purchased Microport Unix System V/386, and wrote
a device driver for the controller and the disk.  I posted the driver
here about two weeks ago.  The driver has been in use on my system and
a couple of other systems for over a year.  The driver has proved to be
very reliable (some problems were reported with Seagate ST227N when
using 1KB sectors, but those disappeared by formatting the drive to
use 512 byte sectors).  The driver supports multiple drives and partitions.
My disk is divided into three partitions: 10 MB /tmp, 20 MB /u2 and
307 MB /u.  (BTW, I have never had any problems with large partitions.
Some people have reported problems in the news.)

I cannot give exact transfer rates under unix.  With my original driver
I only got something like 160 KB/sec while reading a large (10-20 MB)
file with dd bs=64k.  That was with an interleave of 9 and 1 KB sectors (sic!).
I have since optimized the data transfer routines by writing them
in assembly language.  This should probably allow interleaves in the
range 1-3.  I have not yet been able to test any other interleaves, as I have
not wanted to reformat the entire disk (it takes quite a while to copy
300 megabytes to floppies and back...)  Note that when formatting the disk,
it can be helpful to explicitly specify that mkfs does no interleaving
on the file system level as that is already handled by the drive.

As a reference, measured the same way, my 40ms 42MB MFM drive gives
40 KB/sec (sic!).  I was not able to improve on that.
The scsi disk was actually so much faster that I copied /bin and /usr/bin
to the scsi disk (/u/bin & /u/usr/bin) and put those in PATH before
/bin and /usr/bin.  The difference is very significant.

The biggest problem with the driver is that during heavy disk activity
the serial lines lose incoming characters.  But then I hear this is a
general problem with Microport...

BTW, my driver cannot be used to boot from the scsi disk.  I use the
42 MB disk that came with the system for booting and swap (luckily I have
10 MB of ram, so the machine hardly ever swaps).


    Tatu Ylonen      ylo at sauna.hut.fi

-----------------------------

From: George Robbins <grr at cbmvax.uucp>
Subject: Re: Using the uucp daemon (TCP/IP) on System V.3 with TCP/IP
Date: 9 Jul 89 17:34:01 GMT
Keywords: UUCP TCP/IP uucpd uucico sockets BSD4.3 SysV.3
To:       unix-wizards at sem.brl.mil

In article <3674 at ursa-major.SPDCC.COM> cspencer at ursa-major.spdcc.COM (Cliff Spencer) writes:
> >>Porting BSD 4.3 UUCP daemon has already been done several times for different 
> >>incarnations of TCP/IP implementations for system V Unix's.  Unfortunately
> >>none of them are "free" that I know of.
> >I only need the patches, I have the BSD4.3 uucpd source...
> 
> What's the big mystery? Doesn't the daemon just spawn /usr/lib/uucp/uucico? 

Well, yes and no.  It plays with sockets, doing a listen and opening a
connection, and then simulates a login and finally runs uucico, passing
the open sockets as stdin/stdout.  If you have a completely functional
socket-emulation package, it shouldn't be a big deal.  Also, your uucp
is expected to know that it shouldn't try to do all those terminal
oriented ioctls on sockets...
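
Roughly this shape (not the actual 4.3BSD uucpd source; the login
dialogue and the uucico options are elided):

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <unistd.h>

	void serve(int listenfd)        /* listenfd: already bound and listening */
	{
	    int s;

	    for (;;) {
	        s = accept(listenfd, (struct sockaddr *)0, (socklen_t *)0);
	        if (s < 0)
	            continue;
	        if (fork() == 0) {
	            /* ... the "login:"/"Password:" exchange would go here ... */
	            dup2(s, 0);         /* the socket becomes stdin  */
	            dup2(s, 1);         /* ... and stdout            */
	            dup2(s, 2);         /* ... and stderr            */
	            close(s);
	            close(listenfd);
	            execl("/usr/lib/uucp/uucico", "uucico", (char *)0);
	            _exit(1);           /* exec failed */
	        }
	        close(s);               /* parent goes back to accept() */
	    }
	}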

-- 
George Robbins - now working for,	uucp: {uunet|pyramid|rutgers}!cbmvax!grr
but no way officially representing	arpa: cbmvax!grr at uunet.uu.net
Commodore, Engineering Department	fone: 215-431-9255 (only by moonlite)

-----------------------------


End of UNIX-WIZARDS Digest
**************************

---------- end of forwarded message


