From dave at horsfall.org Sat Mar 2 09:22:09 2024 From: dave at horsfall.org (Dave Horsfall) Date: Sat, 2 Mar 2024 10:22:09 +1100 (EST) Subject: [COFF] [TUHS] RIP Niklaus Wirth, RIP John Walker (fwd) Message-ID: Might interest the bods here too... -- Dave ---------- Forwarded message ---------- From: Paul Ruizendaal To: "tuhs at tuhs.org" Subject: [TUHS] RIP Niklaus Wirth, RIP John Walker Earlier this year two well-known computer scientists passed away. On New Year’s Day it was Niklaus Wirth, aged 90. A month later it was John Walker, aged 75. Both have some indirect links to Unix. For Wirth, the link is that a few sources claim that Plan 9 and the Go language are in part influenced by the design ideas of Oberon, the language and the OS. Maybe others on this list know more about those influences. For Walker, the link is via the company that he was running as a side-business before he got underway with AutoCAD: https://www.fourmilab.ch/documents/marinchip/ In that business he was selling a 16-bit system for the S-100 bus, based around the TI9900 CPU (which from a programmer’s perspective is quite similar to a PDP-11). For that system he wrote a Unix-like operating system around 1978-1980, called NOS/MT. He had never worked with Unix, but had pored over the BSTJ issues about it. It was fully written in assembler. The design was rather unique, maybe inspired by Heinz Lycklama’s “Satellite Processor” paper in BSTJ 57-6. It has a central microkernel that handles message exchange, process scheduling and memory management. Each system call is a message. However, the system call message is then passed on to a privileged “fat kernel” process that handles it. The idea was to provide multiprocessor and network transparency: the microkernel could decide to run processes on other boards in the same rack or on remote systems over a network. Also, the kernel processes could be remote. Hence its name “Network Operating System / Multitasking” or “NOS/MT”. The system calls are pretty similar to Unix. The file system is implemented very similarly to Unix (with i-nodes etc.), with some notable differences (there are file locking primitives, and formatting a disk is a system call). File handles are not shareable, so special treatment for stdin/out/err is hardcoded. Scheduling and memory management are totally different -- unsurprising, as in both cases it reflects the underlying hardware. Just as NOS/MT was getting into a usable state, John decided to pivot to packaged software, including a precursor of what would become the AutoCAD package. What was there worked and saw some use in the UK and Denmark in the 1980’s -- there are emulators that can still run it, along with its small library of tools and applications. “NOS/MT was left in an arrested state” as John puts it. I guess it will remain one of those many “what if” things in computer history. From lars at nocrew.org Tue Mar 5 06:27:18 2024 From: lars at nocrew.org (Lars Brinkhoff) Date: Mon, 04 Mar 2024 20:27:18 +0000 Subject: [COFF] [TUHS] Re: regex early discussions In-Reply-To: (Clem Cole's message of "Mon, 4 Mar 2024 11:57:15 -0500") References: <13abd764-984a-4c9f-8e3e-b1eb7c624692@gmail.com> Message-ID: <7w7cih7nfd.fsf@junk.nocrew.org> Dropped TUHS; added COFF. > * Numerous editors show up on different systems, including STOPGAP on > the MIT PDP6, eventually SOS, TECO, EMACs, etc., and most have some > concept of a 'line of text' to distinguish from a 'card image.' I'd like to expand on this, since I never heard about STOPGAP or SOS on the MIT PDP-6/10 computers. 
TECO was ported over to the 6 only a few weeks after delivery, and that seems to have been the major editor ever since. Did you think of the SAIL PDP-6? From clemc at ccc.com Tue Mar 5 06:53:52 2024 From: clemc at ccc.com (Clem Cole) Date: Mon, 4 Mar 2024 15:53:52 -0500 Subject: [COFF] [TUHS] Re: regex early discussions In-Reply-To: <7w7cih7nfd.fsf@junk.nocrew.org> References: <13abd764-984a-4c9f-8e3e-b1eb7c624692@gmail.com> <7w7cih7nfd.fsf@junk.nocrew.org> Message-ID: On Mon, Mar 4, 2024 at 3:27 PM Lars Brinkhoff wrote: > > > I'd like to expand on this, since I never heard about STOPGAP or SOS on the MIT > PDP-6/10 computers. Hmm, you are undoubtedly right. STOPGAP and SOS might just have been DECisms. I initially used SOS on the CMU PDP-10s to prep BLISS, Macro-10, and SAIL for a small job I got. It was the most like the editor used on my other job on the Computer Center's TSS system (whose name I forget, which I learned first). I wanted to get stuff done, not learn a new editor, so that was fine. It also worked on VMS 1.0, IIRC, as I had a job moving some BLISS-10 code to BLISS32 on the first Vax. At some point, I was shown TECO and EMACS on the PDP-10s, but I had started to work on PDP-11 UNIX by then, and ed(1) was all that was on V5. At the time, learning something fancier for the PDP-10 seemed like a wrong time investment since I was not getting paid to work on that system, and I was getting paid to hack on UNIX. Truth be known, as a UNIX person, I got pretty adept with ed, so even when the vi mode of ex showed up a few years later, I was actually slow to bother. Anyway, the CMU SOS doc I have says STOPGAP was a DEC/MIT-ism, but I bet that's wrong -- it was probably just DEC. Jargon file says: *SOS n.,obs. /S-O-S/ 1. An infamously {losing} text editor. Once, back in the 1960s, when a text editor was needed for the PDP-6, a hacker crufted together a {quick-and-dirty} `stopgap editor' to be used until a better one was written. Unfortunately, the old one was never really discarded when new ones (in particular, {TECO}) came along. SOS is a descendant (`Son of Stopgap') of that editor, and many PDP-10 users gained the dubious pleasure of its acquaintance. Since then other programs similar in style to SOS have been written, notably the early font editor BILOS /bye'lohs/, the Brother-In-Law Of Stopgap (the alternate expansion `Bastard Issue, Loins of Stopgap' has been proposed). 2. /sos/ n. To decrease; inverse of {AOS}, from the PDP-10 instruction set.* > TECO was ported over to the 6 only a few weeks after delivery, and that > seems to have been the major editor ever since. > Did you think of the SAIL PDP-6? > Maybe. I don't know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at nocrew.org Tue Mar 5 16:49:27 2024 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 05 Mar 2024 06:49:27 +0000 Subject: [COFF] [TUHS] Re: regex early discussions In-Reply-To: (Clem Cole's message of "Mon, 4 Mar 2024 15:53:52 -0500") References: <13abd764-984a-4c9f-8e3e-b1eb7c624692@gmail.com> <7w7cih7nfd.fsf@junk.nocrew.org> Message-ID: <7wsf155g20.fsf@junk.nocrew.org> Clem Cole wrote: > Jargon file says: SOS n.,obs. /S-O-S/ 1. An infamously {losing} text > editor. Once, back in the 1960s, when a text editor was needed for > the PDP-6, a hacker crufted together a {quick-and-dirty} `stopgap > editor' to be used until a better one was written. Thank you. Some additional clues: the jargon file started at SAIL, and shortly after was adopted by MIT and then jointly maintained. 
So it's not clear which one is "the PDP-6" here. As far as I know, Bill Weiher, the creator of STOPGAP and/or SOS?, is associated with SAIL, not MIT. From clemc at ccc.com Wed Mar 6 02:34:09 2024 From: clemc at ccc.com (Clem Cole) Date: Tue, 5 Mar 2024 11:34:09 -0500 Subject: [COFF] [TUHS] Re: regex early discussions In-Reply-To: <7wsf155g20.fsf@junk.nocrew.org> References: <13abd764-984a-4c9f-8e3e-b1eb7c624692@gmail.com> <7w7cih7nfd.fsf@junk.nocrew.org> <7wsf155g20.fsf@junk.nocrew.org> Message-ID: below... On Tue, Mar 5, 2024 at 1:49 AM Lars Brinkhoff wrote: > Thank you. Some additional clues: the jargon file started at SAIL, and shortly > after was adopted by MIT and then jointly maintained. So it's > not clear which one is "the PDP-6" here. As far as I know, Bill Weiher, the > creator of STOPGAP and/or SOS?, is associated with SAIL, not MIT. > You are welcome. I'm sorry to have confused the origin; thank you for the historical correction. I was using the docs I had, and as you pointed out, the Jargon File says PDP-6 but does not specify which site. My notes from the later PDP-10 pointed at DEC+MIT. It does sound like STOPGAP/SOS came to the DEC world from Stanford. So thank you. That said, bringing it back to the original question from Will: my original email was about the history of using reg-ex WRT UNIX. It was less about editors and who did what as much as trying to point out that the idea of a text editor existed long before Ken's version of QED, much less, ed(1). Most importantly, Ken's QED came after the original QED, which came after other text editors. Adding reg-ex to an editor was natural for someone schooled in the ideas behind automata and pattern matching. But many/most of the text editors in use had been created before that work had begun to be studied and formalized, so these other editors had not included reg-ex for their pattern match/search scheme. Ken's great leap was modeling and combining the QED user interface with this new idea in text pattern match/searching, demonstrating that it was a good fit. That would lead to other tools that decided to include the same pattern-matching ideas (grep, sed, awk, Perl, *et al.*). Will had asked -- how did people learn to use reg-ex? The observation I had made and was bringing forward to the list is that if a new user came from a background based on being taught how to create a pattern matcher, and said person had learned a little about the ideas behind automatons, learning to use reg-ex was not a big deal. It was only 'astonishing,' and users might need a separate explanation, if they started from some other place - particularly if they did not have that same background in core CS theory/they had previously learned a different way with a different set of tools, such as the text editor. As I understand it, this is how Will came to learn UNIX, so folks like Will needed and appreciated documentation that came from other places. I think that he was asking which documents people with a background similar to his had chosen to use to learn how to use the UNIX toolkit. Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Wed Mar 6 05:30:53 2024 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 5 Mar 2024 14:30:53 -0500 (EST) Subject: [COFF] [TUHS] Re: regex early discussions Message-ID: <20240305193053.9D98218C077@mercury.lcs.mit.edu> > From: Clem Cole > the idea of a text editor existed long before Ken's version of QED, > much less, ed(1). 
Most importantly, Ken's QED came after the original > QED, which came after other text editors. Yes; some of the history is given here: An incomplete history of the QED Text Editor https://www.bell-labs.com/usr/dmr/www/qed.html Ken would have run into the original on the Berkeley Time-Sharing System; he apparently wrote the CTSS one based on his experience with the one on the BTSS. Oddly enough, CTSS seems to have not had much of an editor before. The Programmer's Guide has an entry for 'Edit' (Section AH.3.01), but 'edit file' seems to basically do a (in later terminology) 'cat >> file'. Section AE seems to indicate that most 'editing' was done by punching new cards on a key-punch! The PDP-1 was apparently similar, except that it used paper tape. Editing paper tapes was difficult enough that Dan Murphy came up with TECO - original name 'Tape Editor and Corrector': https://opost.com/tenex/anhc-31-4-anec.pdf > Will had asked -- how did people learn to use reg-ex? I learned it from reading the 'sh' and 'ed' V6 man pages. The MIT V6 systems had TECO (with a ^R mode even), but I started out with ed, since it was more like editors I had previously used. Noel From will.senn at gmail.com Wed Mar 6 10:59:46 2024 From: will.senn at gmail.com (Will Senn) Date: Tue, 5 Mar 2024 18:59:46 -0600 Subject: [COFF] [TUHS] Re: regex early discussions In-Reply-To: References: <13abd764-984a-4c9f-8e3e-b1eb7c624692@gmail.com> <7w7cih7nfd.fsf@junk.nocrew.org> <7wsf155g20.fsf@junk.nocrew.org> Message-ID: <2b6dc37f-f052-4ea0-9774-b40c6994a512@gmail.com> On 3/5/24 10:34 AM, Clem Cole wrote: > Will had asked -- how did people learn to use reg-ex?  The observation > I had made and was bringing forward to the list is that if new user > came from a background based on being taught about how to create a > pattern match er, and sid person had learned a little about the ideas > behind automatons, learn to use reg-ex was not a big deal.  It was > only 'astonishing,' and users might need a separate explanation if > they started from some other place - particularly if they did not have > that same background in core CS theory/they had previously learned a > different way with a different set of tools, such as the text editor. > > As I understand it, this is how Will came to learn UNIX, so folks like > Will needed and appreciated documentation that came from other places. > I think that he was asking which documents and what people in the > background similar to him had chosen to use to learn how to use the > UNIX toolkit. > > > Clem > Yup. I was curious about exactly that and the answers fit the bill nicely. I knew that Ritchie & co. were mathy cs types, but it didn't occur to me that the rest of the unix folks were, as well. A little reflection and it became somewhat obvious. Sure, there were plenty of exceptions, but then, they had mathy cs types to lean on. Coming at it from the new millennium, it's hard to grok the early days, or recreate the aha moments. I'm just using Unix explorations as motivations for deeper study of CS stuff that interest me, personally. When I picked up the AWK book the other day, regex popped out at me as a deficiency (sure, I use them all the time, but mastery... not even close... a lot of the time, it's like magic...) so, rather than just start coding up a bunch of regexes, I thought I would find out a bit more about their genesis - not the research that led to their discovery or perfection (I have no mathy interest), but rather how they came into common use (the pragmatics, as it were). 
This led to me asking about the gestalt of the 60's / 70's and suchlike. For me, this is a bit like virtual reality, where I can immerse myself in what it might have been like and how it might have unfolded while tooling around like it is 1969 all over again. Thankfully, for a lot of y'all it's lived history and by willingly sharing so freely, you enrich the real-feel of the simulation :). Later, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From grog at lemis.com Fri Mar 8 08:44:47 2024 From: grog at lemis.com (Greg 'groggy' Lehey) Date: Fri, 8 Mar 2024 09:44:47 +1100 Subject: [COFF] (redirected from TUHS) What do you currently use for your primary OS at home? In-Reply-To: <9eb334edeb7568193000f8755704af7799169b17.camel@gmail.com> References: <9eb334edeb7568193000f8755704af7799169b17.camel@gmail.com> Message-ID: On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote: > > I eventually reverted back to Linux because it was clear that the > user community was getting much larger, I was using it > professionally at work and there was just a larger range of > applications available. Lately, I find myself getting tired of the > bloat and how big and messy and complicated it has all gotten. > Thinking of looking for something simpler and was just wondering > what do other old timers use for their primary home computing needs? I'm surprised how few of the responders use BSD. My machines all (currently) run FreeBSD, with the exception of a Microsoft box (distress.lemis.com) that I use remotely for photo processing. I've tried Linux (used to work developing Linux kernel code), but I couldn't really make friends with it. It sounds like our reasons are similar. More details: 1977-1984: CP/M, 86-DOS 1984-1990: MS-DOS 1991-1992: Inactive UNIX 1992-1997: BSD/386, BSD/OS 1997-now: FreeBSD Greg -- Sent from my desktop computer. Finger grog at lemis.com for PGP public key. See complete headers for address and phone numbers. This message is digitally signed. If your Microsoft mail program reports problems, please read http://lemis.com/broken-MUA.php -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: not available URL: From coff at tuhs.org Fri Mar 8 09:43:17 2024 From: coff at tuhs.org (segaloco via COFF) Date: Thu, 07 Mar 2024 23:43:17 +0000 Subject: [COFF] (redirected from TUHS) What do you currently use for your primary OS at home? In-Reply-To: References: <9eb334edeb7568193000f8755704af7799169b17.camel@gmail.com> Message-ID: On Thursday, March 7th, 2024 at 2:44 PM, Greg 'groggy' Lehey wrote: > On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote: > > > I eventually reverted back to Linux because it was clear that the > > user community was getting much larger, I was using it > > professionally at work and there was just a larger range of > > applications available. Lately, I find myself getting tired of the > > bloat and how big and messy and complicated it has all gotten. > > Thinking of looking for something simpler and was just wondering > > what do other old timers use for their primary home computing needs? > > > I'm surprised how few of the responders use BSD. My machines all > (currently) run FreeBSD, with the exception of a Microsoft box > (distress.lemis.com) that I use remotely for photo processing. I've > tried Linux (used to work developing Linux kernel code), but I > couldn't really make friends with it. 
It sounds like our reasons are > similar. > > More details: > > 1977-1984: CP/M, 86-DOS > 1984-1990: MS-DOS > 1991-1992: Inactive UNIX > 1992-1997: BSD/386, BSD/OS > 1997-now: FreeBSD > > Greg > -- > Sent from my desktop computer. > Finger grog at lemis.com for PGP public key. > See complete headers for address and phone numbers. > This message is digitally signed. If your Microsoft mail program > reports problems, please read http://lemis.com/broken-MUA.php Not an old timer but feel like getting in on the fun. My main system these days is a Raspberry Pi 400 running a home-grown (but quite generic) Linux setup. Started as a cross-compiled kernel with a Gentoo stage 3 stuck on top, then started replacing and removing bits of userland. The main Gentoo-ism still around is that I didn't bump from OpenRC down to bare sysvinit, but pretty much everything else has been replaced by upstream packages at this point. Desktop is X11/dwm, haven't quite gotten hip with the Wayland stuff these days. I keep a Windows 10 x86_64 desktop around for video games. Work is then a macOS host but frequently working in a remote Windows desktop, so I use Windows, mac, and Linux pretty evenly in a regular day. Have volleyed between FreeBSD and Linux historically, whichever has better hardware support for the main machine I'm running at the time, with FreeBSD preferred all things equal. I've taken my approach with Linux for a long time, opting out of distros wherever possible and rolling my own system build. I've found it keeps the things I want working while being adaptable to incorporating new bits at will. Firefox is the only major component that I don't build from source, instead opting to grab updated binaries from Arch or Debian whenever I feel like doing an update cycle. Everything else I just nab from whoever makes it and build it up from the source packages. It's nice having intimate control over what goes in /bin vs /usr/bin vs /opt/bin (no /usr/local tree here...) On Thursday, March 7th, 2024 at 2:32 PM, Mike Markowski wrote: > > I also use Raspberry Pi 3's in PiDP 8/I (https://udel.edu/~mm/pidp8i/) and 11/70. I wonder how long till a R-Pi is enough for a work station... > > Mike Markowski I find the Raspberry Pi 400 checks all my boxes for what I tend to work on, although I'm not doing any, say, CAD or media editing, just writing code, some image processing, document scanning, and web browsing. - Matt G. From crossd at gmail.com Fri Mar 8 09:50:38 2024 From: crossd at gmail.com (Dan Cross) Date: Thu, 7 Mar 2024 18:50:38 -0500 Subject: [COFF] (redirected from TUHS) What do you currently use for your primary OS at home? In-Reply-To: References: <9eb334edeb7568193000f8755704af7799169b17.camel@gmail.com> Message-ID: On Thu, Mar 7, 2024 at 5:52 PM Greg 'groggy' Lehey wrote: > On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote: > > I eventually reverted back to Linux because it was clear that the > > user community was getting much larger, I was using it > > professionally at work and there was just a larger range of > > applications available. Lately, I find myself getting tired of the > > bloat and how big and messy and complicated it has all gotten. > > Thinking of looking for something simpler and was just wondering > > what do other old timers use for their primary home computing needs? > > I'm surprised how few of the responders use BSD. My machines all > (currently) run FreeBSD, with the exception of a Microsoft box > (distress.lemis.com) that I use remotely for photo processing. 
I've > tried Linux (used to work developing Linux kernel code), but I > couldn't really make friends with it. It sounds like our reasons are > similar. > > More details: > > 1977-1984: CP/M, 86-DOS > 1984-1990: MS-DOS > 1991-1992: Inactive UNIX > 1992-1997: BSD/386, BSD/OS > 1997-now: FreeBSD I'm a bit surprised by this, as well. I consider myself very fortunate in that the first computer we had at home was a Macintosh (the 1985, 512K model; the so-called "Fat Mac"). I say I was fortunate for this because the machine really gave a very consistent experience compared to the 8-bit micros and the IBM PC that were common at the time; I didn't realize how important that was until much later, but once I did, I considered myself very lucky indeed. The next machine I had was a 486 running DOS. From there, I had a short stint running COHERENT, the MWC clone of (essentially) 7th Edition. Then I ran NetBSD for a few months, and then FreeBSD. I stayed on FreeBSD for a while, until sometime in the 4.9-era when `periodic(8)` got added. At that point, the growing complexity got to me. My friend Scott Schwartz had been telling me about Plan 9, and it was available around that time, so I installed it; that was my primary environment for a few years until I landed on a Macintosh. Nowadays, I sit in front of a Mac Studio as my workstation, and I have a bunch of other machines running a bunch of other stuff around the house: Plan 9 runs much of the home infrastructure (DNS, DHCP, that kind of stuff). There's a rinky dink FreeBSD print server running my ancient laser printer. There's an OpenBSD machine downstairs that runs backup DNS and consoles. I've got machines running FreeBSD, OpenBSD-current, and DragonFly, plus a Linux workstation that I run headless that I use for stuff that requires KVM. There are a couple of Raspberry Pi's and an x86 Linux machine that all speak AX.25 and are all connected to various (amateur) radios, an Alpha running VMS, and emulated VAXen, PDP-11s, mainframes, Multics, Pr1me, CDC, and a few other weird machines running different legacy OSes. I never gravitated towards Linux as a desktop machine, really. It has always felt very fiddly to me. I don't miss FreeBSD on the desktop, really. - Dan C. From crossd at gmail.com Fri Mar 8 10:19:17 2024 From: crossd at gmail.com (Dan Cross) Date: Thu, 7 Mar 2024 19:19:17 -0500 Subject: [COFF] (redirected from TUHS) What do you currently use for your primary OS at home? In-Reply-To: References: <9eb334edeb7568193000f8755704af7799169b17.camel@gmail.com> Message-ID: On Thu, Mar 7, 2024 at 6:50 PM Dan Cross wrote: > On Thu, Mar 7, 2024 at 5:52 PM Greg 'groggy' Lehey wrote: > > On Thursday, 7 March 2024 at 1:47:26 -0500, Jeffry R. Abramson wrote: > > > I eventually reverted back to Linux because it was clear that the > > > user community was getting much larger, I was using it > > > professionally at work and there was just a larger range of > > > applications available. Lately, I find myself getting tired of the > > > bloat and how big and messy and complicated it has all gotten. > > > Thinking of looking for something simpler and was just wondering > > > what do other old timers use for their primary home computing needs? > > > > I'm surprised how few of the responders use BSD. My machines all > > (currently) run FreeBSD, with the exception of a Microsoft box > > (distress.lemis.com) that I use remotely for photo processing. I've > > tried Linux (used to work developing Linux kernel code), but I > > couldn't really make friends with it. 
It sounds like our reasons are > > similar. > > > > More details: > > > > 1977-1984: CP/M, 86-DOS > > 1984-1990: MS-DOS > > 1991-1992: Inactive UNIX > > 1992-1997: BSD/386, BSD/OS > > 1997-now: FreeBSD > > I'm a bit surprised by this, as well. > > I consider myself very fortunate in that the first computer we had at > home was a Macintosh (the 1985, 512K model; the so-called "Fat Mac"). > I say I was fortunate for this because the machine really gave a very > consistent experience compared to the 8-bit micros and the IBM PC that > were common at the time; I didn't realize how important that was until > much later, but once I did, I considered myself very lucky indeed. > > The next machine I had was a 486 running DOS. From there, I had a > short stint running COHERENT, the MWC clone of (essentially) 7th > Edition. Then I ran NetBSD for a few months, and then FreeBSD. I > stayed on FreeBSD for a while, until sometime in the 4.9-era when > `periodic(8)` got added. At that point, the growing complexity got to > me. My friend Scott Schwartz had been telling me about Plan 9, and it > was available around that time, so I installed it; that was my primary > environment for a few years until I landed on a Macintosh. > > Nowadays, I sit in front of a Mac Studio as my workstation, and I have > a bunch of other machines running a bunch of other stuff around the > house: Plan 9 runs much of the home infrastructure (DNS, DHCP, that > kind of stuff). There's a rinky dink FreeBSD print server running my > ancient laser printer. There's an OpenBSD machine downstairs that runs > backup DNS and consoles. I've got machines running FreeBSD, > OpenBSD-current, and DragonFly, plus a Linux workstation that I run > headless that I use for stuff that requires KVM. There are a couple of > Raspberry Pi's and an x86 Linux machine that all speak AX.25 and are > all connected to various (amateur) radios, an Alpha running VMS, and > emulated VAXen, PDP-11s, mainframes, Multics, Pr1me, CDC, and a few > other weird machines running different legacy OSes. > > I never gravitated towards Linux as a desktop machine, really. It has > always felt very fiddly to me. I don't miss FreeBSD on the desktop, > really. Oh, and not to toot my own horn, but I forgot that in there for a year or two was a MIPS DECstation running Ultrix. That was pretty stylin', I gotta say: my friends were jealous. :-D - Dan C. From rudi.j.blom at gmail.com Fri Mar 8 12:51:06 2024 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Fri, 8 Mar 2024 09:51:06 +0700 Subject: [COFF] (redirected from TUHS) What do you currently use for your primary OS at home? Message-ID: Currently I've only got an older laptop at home still running Windows 10 Pro. Mostly I use a company provided HP ProBook 440 G7 with Windows 11 Pro. I installed WSL2 to run Ubuntu 20.04 if only because I wanted to mount UFS ISO images 😊 Still employed I have access to lots of UNIX servers, SCO UNIX 3.2V4.2 on Intel based servers, Tru64 on AlphaServers, HP-UX 11.23/11.31 on Itanium servers. There's an rx-server rx2660 I can call my own but even in a testroom I can hear it. Reluctant to take home. My electricity bill would also explode I think. Cheers, uncle rubl -- The more I learn the better I understand I know nothing. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dave at horsfall.org Fri Mar 8 16:57:16 2024 From: dave at horsfall.org (Dave Horsfall) Date: Fri, 8 Mar 2024 17:57:16 +1100 (EST) Subject: [COFF] NOT DELETED 8 (OS/360) Message-ID: Can anyone remember what this meant on OS/360? Ken Robinson (one of my CompSci lecturers) used to say "Ah, the old 'NOT DELETED 8 trick!'"... -- Dave From e5655f30a07f at ewoof.net Fri Mar 8 22:09:08 2024 From: e5655f30a07f at ewoof.net (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Fri, 8 Mar 2024 12:09:08 +0000 Subject: [COFF] NOT DELETED 8 (OS/360) In-Reply-To: References: Message-ID: <1612f004-2bd1-4cd8-bce9-1667e4d7e38e@home.arpa> On 8 Mar 2024 17:57 +1100, from dave at horsfall.org (Dave Horsfall): > Can anyone remember what this meant on OS/360? Ken Robinson (one of my > CompSci lecturers) used to say "Ah, the old 'NOT DELETED 8 trick!'"... https://www.mail-archive.com/ibm-main at bama.ua.edu/msg107633.html seems to suggest that "NOT DELETED 8" means "not deleted because in use". -- Michael Kjörling 🔗 https://michael.kjorling.se “Remember when, on the Internet, nobody cared that you were a dog?” From sjenkin at canb.auug.org.au Sat Mar 9 00:40:27 2024 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Sat, 9 Mar 2024 01:40:27 +1100 Subject: [COFF] NOT DELETED 8 (OS/360) In-Reply-To: <1612f004-2bd1-4cd8-bce9-1667e4d7e38e@home.arpa> References: <1612f004-2bd1-4cd8-bce9-1667e4d7e38e@home.arpa> Message-ID: <23C77EAC-DED5-4ABA-828E-1C18CF3A7FB2@canb.auug.org.au> I found the following “IEF283I” message, with 6 sub-clauses for code ‘8’, in an Internet Archive copy of a bitsavers doc. Ken Robinson told a few horror stories of OS/360’s evil (my word) error reporting. > On 8 Mar 2024, at 23:09, Michael Kjörling wrote: > > On 8 Mar 2024 17:57 +1100, from dave at horsfall.org (Dave Horsfall): >> Can anyone remember what this meant on OS/360? Ken Robinson (one of my >> CompSci lecturers) used to say "Ah, the old 'NOT DELETED 8 trick!'"... > > https://www.mail-archive.com/ibm-main at bama.ua.edu/msg107633.html seems > to suggest that "NOT DELETED 8" means "not deleted because in use". > > -- > Michael Kjörling 🔗 https://michael.kjorling.se > “Remember when, on the Internet, nobody cared that you were a dog?” ================== ibm :: 360 :: os :: R21.7 Apr73 :: GC28-6631-13 OS 360 R21.7 Messages and Codes Apr73 https://archive.org/details/bitsavers_ibm360osR2S360R21.7MessagesandCodesApr73_53080992/page/n301/mode/1up?q=%22not+deleted%22 292 Messages & Codes (Release 21.7) IEF283I dsn NOT DELETED x VOL SER NOS= ser [z],ser [z],ser [z],ser [z],ser [z] VOL SER NOS= ser [z],ser [z],ser [z]. Explanation: A DD statement specified DELETE as the disposition of data set dsn, but the data set was not deleted from the volumes whose serial numbers, ser, are listed in the message text. If the data set was not deleted from any of its volumes, the volumes listed are all of the volumes on which the data set resides. If the data set was partially deleted, message IEF285I precedes this message in the SYSOUT data set and lists the volumes from which the data set was deleted. • If ser is a 6-digit number, it is the serial number of the volume, which contains labels. • If ser begins with a slash or L, the volume is unlabeled; the number after the slash or L is an internal serial number assigned by the system to an unlabeled volume. If ser begins with L, the number after the L is of the form xxxyy, where xxx is the data set number and yy is the volume sequence number for the data set. 
Five volume serial numbers are listed per line until all the volumes involved are listed. The last volume serial number is followed by a period. The 1-digit code, x, explains why the data set was not deleted. X Explanation 1 The expiration date had not occurred. When the data set was created, the expiration date was specified by the EXPDT or RETPD subparameter in the LABEL parameter of the DD statement. 4 No device was available for mounting during deletion. 5 Too many volumes were specified for deletion. Deletion can be accomplished in several job steps by specifying some of the volume serial numbers in each step. 6 Either no volumes were mounted or the mounted volumes could not be demounted to permit the remaining volumes to be mounted. 8 The SCRATCH routine returned a code, z, following each volume serial number explaining why the data set was not deleted from that volume. The values of z and their meanings are as follows: 1 - The data set was not found on the volume. 2 - The data set is security protected and the correct password was not given. 3 - The expiration date had not occurred. When the data set was created, the expiration date was specified by the EXPDT or RETPD subparameter in the LABEL parameter of the DD statement. 4 - An uncorrectable input/output error occurred in deleting the data set from the volume. 5 - The system was unable to have the volume mounted for deletion. 6 - The system requested that the operator mount the volume, but the operator did not mount it. 9 A job was cancelled and was deleted from any of the following queues ================== -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From paul.winalski at gmail.com Sat Mar 9 01:05:24 2024 From: paul.winalski at gmail.com (Paul Winalski) Date: Fri, 8 Mar 2024 10:05:24 -0500 Subject: [COFF] NOT DELETED 8 (OS/360) In-Reply-To: <23C77EAC-DED5-4ABA-828E-1C18CF3A7FB2@canb.auug.org.au> References: <1612f004-2bd1-4cd8-bce9-1667e4d7e38e@home.arpa> <23C77EAC-DED5-4ABA-828E-1C18CF3A7FB2@canb.auug.org.au> Message-ID: On 3/8/24, steve jenkin wrote: > > Ken Robinson told a few horror stories of OS/360’s evil (my word) error > reporting. OS/VS1 was notorious for... shall we say... terse operator console messages as well. An example: 00E WTR WAITING FOR WORK P00 -Paul W. From paul.winalski at gmail.com Sat Mar 9 01:44:08 2024 From: paul.winalski at gmail.com (Paul Winalski) Date: Fri, 8 Mar 2024 10:44:08 -0500 Subject: [COFF] [TUHS] History of non-Bell C compilers? In-Reply-To: References: Message-ID: On 3/7/24, Tom Lyon wrote: > For no good reason, I've been wondering about the early history of C > compilers that were not derived from Ritchie, Johnson, and Snyder at Bell. > Especially for x86. Anyone have tales? > Were any of those compilers ever used to port UNIX? > [topic of interest to COFF, as well, I think] DEC's Ultrix for VAX and MIPS used off-the-shelf Unix cc. I don't recall what they used for Alpha. The C compiler for VAX/VMS was written by Dave Cutler's team at DECwest in Seattle. The C front end generated intermediate language (IL) for Cutler's VAX Code Generator (VCG), which was designed to be a common back end for DEC's compilers for VAX/VMS. His team also licensed the Freiburghouse PL/I front end (commercial version of a PL/I compiler originally done for Multics) and modified it to generate VCG IL. The VCG was also the back end for DEC's Ada compiler. 
VCG was superseded by the GEM back end, which supported Alpha and Itanium. A port of GEM to x86 was in progress at the time Compaq sold off the Alpha technology (including GEM and its C and Fortran front ends) to Intel. From paul.winalski at gmail.com Sat Mar 9 01:53:55 2024 From: paul.winalski at gmail.com (Paul Winalski) Date: Fri, 8 Mar 2024 10:53:55 -0500 Subject: [COFF] NOT DELETED 8 (OS/360) In-Reply-To: References: <1612f004-2bd1-4cd8-bce9-1667e4d7e38e@home.arpa> <23C77EAC-DED5-4ABA-828E-1C18CF3A7FB2@canb.auug.org.au> Message-ID: The subject of obscure OS/360 error messages recalled to my mind the beginner's guide for students and faculty that the Boston College Computer Center wrote. They spent many pages explaining obtuse OS/360 error messages. My favorite line from that document is: "Despite what you may have been taught in German class, there is no such thing as a 'guten ABEND'." -Paul W From clemc at ccc.com Sun Mar 10 05:52:28 2024 From: clemc at ccc.com (Clem Cole) Date: Sat, 9 Mar 2024 14:52:28 -0500 Subject: [COFF] [ih] Fwd: Some Berkeley Unix history - too many PHDs per packet In-Reply-To: <84A5C4DC-E9E7-46F7-AA6C-AADD64ACD305@icloud.com> References: <606871377.2352922.1709955781555@mail.yahoo.com> <84A5C4DC-E9E7-46F7-AA6C-AADD64ACD305@icloud.com> Message-ID: This is UNIX history, but since the Internet's history and Unix history are so intertwined, I'm going to risk the wrath of the IH moderators to try to explain, as I was one of the folks who was at the table in those times and participated in my small way in both events: the birth of the Internet and the spreading of the UNIX IP. More details can be found in a paper I did a few years ago: https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-aux-etats-unis-innovation-diffusion-et-appropriation--945215.kjsp [If you cannot find it and are interested send me email off list and I'll forward it]. And ... if people want to continue this discussion -- please, please, move it to the more appropriate COFF mailing list: https://www.tuhs.org/cgi-bin/mailman/listinfo/coff - which I have CC'ed in this reply. On Fri, Mar 8, 2024 at 11:32 PM Greg Skinner via Internet-history < internet-history at elists.isoc.org> wrote: > Forwarded for Barbara > > > I will admit your response is confusing me. My post only concerns what > I think I remember as a problem in getting BSD UNIX, in particular the > source code. Nothing about getting something we wanted to use on a > hardware platform from one of the commercial vendors. We needed the BSD > source but got hung up. > Let me see if I can explain better ... Assuming you were running BSD UNIX on a Vax, your team would have needed two things: - an AT&T License for 32/V [Research Version 7 -- port to a Vax/780 at AT&T] and a - a license for BSD 3, later 4, then 4.1, *etc*., from the Regents of the University of CA. The first license gave your team a few core rights from AT&T: 1. the right to run UNIX binaries on a single CPU (which was named in your license) 2. the right to look at and modify the sources, 3. the right to create derivative works from the AT&T IP, and 4. the right to exchange your derivative works with other people who held a similar license from AT&T. 
[AT&T had been forced to allow this access (license) to their IP under the rules of the 1956 consent decree - see paper for more details, but remember, as part of the consent decree allowing it to have a legal monopoly on the phone system, AT&T had to make its IP available to the US Gov -- which I'm guessing is the crux of Barbara's question/observation]. For not-for-profits (University/Research), a small fee was allowed to be charged (order of 1-2 hundred $s) to process the paperwork and copy the mag tape. But their IP came without any warranty, and you had to hold AT&T harmless if you used it. In those days, we referred to this as *the UNIX IP was abandoned on your doorstep.* BTW: This license allowed the research sites to move AT&T derivative work (binaries) within their site freely. Still, if you look at the license carefully, most had a restriction (often/usually ignored at the universities) that the sources were supposed to only be available on the original CPU named in their specific license. Thus, if you were a University licensee, no fees were charged to run the AT&T IP on other CPUs --> however, the licensees were not allowed to use it for "commercial" users at the University [BTW: this clause was often ignored, although a group of us CMU hackers in the late 1970s famously went on strike until the University obtained at least one commercial license]. The agreement was that a single CPU should be officially bound for all commercial use for that institution. I am aware that Case-Western got a similar license soon after CMU did (their folks found out about the CMU strike/license). But I do not know if MIT, Stanford, or UCB officials came clean on that part and paid for a commercial license (depending on the type of license, its cost was on the order of $20K-25K for the first CPU and on the order of $7K-10K for each CPU afterward - each of these 'additional CPUs' could also have the sources - but named in an appendix for each license with AT&T). I believe that some of the larger state schools like Penn State, Rutgers, Purdue, and UW started to follow that practice by the time Unix started to spread around each campus. That said, a different license for UNIX-based IP could be granted by the Regents of the University of CA and managed by its 'Industrial Liaison's Office' at UCB (the 'IOL' - the same folks that brought licenses for tools like SPICE, SPLICE, MOTIS, *et al*). This license gave the holder the right to examine and use the UCB's derivative works on anything as long as you acknowledged that you got that from UCB and held the Regents blameless [we often called this the 'dead-fish license' -- *you could make a chip, make a computer, or even wrap dead-fish in it.* But you had to say you started with something from the Regents, but they were not to be blamed for what you did with it]. The Regents were exercising rights 3 and 4 from AT&T. Thus, a team who wanted to obtain the Berkeley Software Distribution for UNIX (*a.k.a*. BSD) needed to demonstrate that they held the appropriate license from AT&T [send a copy of the signature page from your license to the ILO] before UCB would release the bits. There was also a small processing fee to the IOL on the order of $1K. [The original BSD is unnumbered, although most refer to it today as 1BSD to differentiate it from later BSD releases for UNIX]. Before I go on, in those times, the standard way we operated was that you needed to have a copy of someone else's signature page to share things. 
In what would later become USENIX (truth here - I'm an ex-president of the same), you could only get invited and come to a conference if you were licensed from AT&T. That was not a big deal. We all knew each other. FWIW: at different times in my career, I have had a hanging file in a cabinet with a copy of the number of these pages from different folks, with whom I would share mag tapes (remember this is pre-Internet, and many of the folks using UNIX were not part of the ARPAnet). However, the song has other verses that make this a little confusing. If your team obtained a* commercial use license* from AT&T, they could further obtain a *commercial redistribution license*. This was initially granted for the Research Seventh Edition. It was later rewritten (with the business terms changing each time) for what would eventually be called System III[1], and then the different System V releases. The price of the redistribution license for V7 was $150K, plus a sliding scale per CPU you ran the AT&T IP, depending on the number of CPUs you needed. With this, the single CPU for the source restriction was removed. So ... if you had a redistribution license, you could also get a license from the Regents, and as long as you obeyed their rules, you could sell a copy of UNIX to run on any licensed target. Traditionally, hardware is part of the same purchase when purchased from a firm like DEC, IBM, Masscomp,* etc*. However, separate SW licenses were sold via firms such as Microsoft and Mt. Xinu. The purchaser of *a binary license* from one of those firms did not have the right to do anything but use the AT&T derivative work. If your team had a binary licensee, you could not obtain any of the BSD distributions until the so-called 'NET2" BSD release [and I'm going to ignore the whole AT&T/BSDi/Regents case here as it is not relevant to Barbara's question/comment]. So the question is, how did a DoD contractor, be it BBN, Ford Aerospace, SRI, etc., originally get access to UNIX IP? Universities and traditional research teams could get a research license. Commercial firms like DEC needed a commercial licensee. Folks with DoD contracts were in a hazy area. The original v5 commercial licensee was written for Rand, a DoD contractor. However, as discussed here in the IH mailing list and elsewhere, some places like BBN had access to the core UNIX IP as part of their DoD contracts. I believe Ford Aerospace was working with AT&T together as part of another US Gov project - which is how UNIX got there originally (Ford Aero could use it for that project, but not the folks at Ford Motors, for instance]. The point is, if you access the *IP indirectly* such as that, then your site probably did not have a negotiated license with a signature page to send to someone. @Barbara, I can not say for sure, but if this was either a PDP-11 or a VAX and you wanted one of the eBSDs, I guess/suspect that maybe your team was dealing with an indirect path to AT&T licensing -- your site license might have come from a US Gov contract, not directly. So trying to get a BSD tape directly from the IOL might have been more difficult without a signature page. So, rolling back to the original. You get access to BSD sources, but you had to demonstrate to the IOL folks in UCB's Cory Hall that you were legally allowed access to the AT&T IP in source code form. That demonstration was traditionally fulfilled with a xerographic copy of the signature page for your institution, which the IOL kept on file. 
That said, if you had legal access to the AT&T IP by indirect means, I do not know how the IOL completed that check or what they needed to protect the Regents. Clem 1.] What would be called a System from a marketing standpoint was originally developed as PWB 3.0. This was the system a number of firms, including my own, were discussing with AT&T at the famous meetings at 'Ricky's Hyatt' during the price (re)negotiations after the original V7 redistribution license. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tte at cs.fau.de Sun Mar 10 13:24:02 2024 From: tte at cs.fau.de (Toerless Eckert) Date: Sun, 10 Mar 2024 04:24:02 +0100 Subject: [COFF] [ih] Fwd: Some Berkeley Unix history - too many PHDs per packet In-Reply-To: References: <606871377.2352922.1709955781555@mail.yahoo.com> <84A5C4DC-E9E7-46F7-AA6C-AADD64ACD305@icloud.com> Message-ID: Thanks Clem for those memories and details. I only joined university in 1995, so my first collision with this whole copyright mess was when we had to sign individually for our groups SunOS source code license according to Sun's policies back then - but i don't think this was relating to AT. Of course, both ATT and BSD source code licenses where necessary for SunOS liceses back then. The BSD requirement may have went away when Sun rebased to SVR4. Not sure. I sometimes wonder what would have become of Linux if the whole CSRG/ATT lawsuit would have settled before 1991. For us in University doing OS research, it was quite annoying when we had all invested so much into SysV and BSD unix, but then our students told us from 1991 on to just forget about it and founded or joined companies doing Linux distributions (Suse being the local one from my universities metro area). Of course, in hindsight, this may have been a good thing, but of course, it took a long time for Linux to catch up, and i would not wonder if BSD die hards say that it still has not. Cheers Toerless On Sat, Mar 09, 2024 at 02:52:28PM -0500, Clem Cole via Internet-history wrote: > This is UNIX history, but since the Internet's history and Unix history are > so intertwined, I'm going to risk the wrath of the IH moderators to try to > explain, as I was one of the folks who was at the table in those the times > and participated in my small way in both events: the birth of the Internet > and the spreading of the UNIX IP. > > More details can be found in a paper I did a few years ago: > https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-aux-etats-unis-innovation-diffusion-et-appropriation--945215.kjsp > [If you cannot find it and are interested send me email off list and I'll > forward it]. > > And ... if people want to continue this discussion -- please, please, move > it to the more appropriate COFF mailing list: > https://www.tuhs.org/cgi-bin/mailman/listinfo/coff - which I have CC'ed in > this reply. > > > On Fri, Mar 8, 2024 at 11:32 PM Greg Skinner via Internet-history < > internet-history at elists.isoc.org> wrote: > > > Forwarded for Barbara > > > > > I will admit your response is confusing me. My post only concerns what > > I think I remember as a problem in getting BSD UNIX, in particular the > > source code. Nothing about getting something we wanted to use on a > > hardware platform from one of the commercial vendors. We needed the BSD > > source but got hung up. > > > > Let me see if I can explain better ... 
> > Assuming you were running BSD UNIX on a Vax, your team would have needed > two things: > > - an AT&T License for 32/V [Research Version 7 -- port to a Vax/780 at > AT&T] and a > - a license for BSD 3, later 4, then 4.1, *etc*., from the Regents of > the University of CA. > > The first license gave your team core a few rights from AT&T: > > 1. the right to run UNIX binaries on a single CPU (which was named in > your license) > 2. the right to look at and modify the sources, > 3. the right the create derivative works from the AT&T IP, and > 4. the right to exchange your derivative works with others people that > held a similar license from AT&T. > > [AT&T had been forced to allow this access (license) to their IP under the > rules of the 1956 consent decree - see paper for more details, but > remember, as part of the consent decree allow it to have a legal monopoly > on the phone system, AT&T had to make its IP available to the US Gov -- > which I'm guessing the crux of Barbara's question/observation]. > > For not-for-profits (University/Research), a small fee was allowed to be > charged (order of 1-2 hundred $s) to process the paperwork and copy the mag > tape. But their IP came without any warranty, and you had to hold AT&T > harmless if you used it. In those days, we referred to this as *the UNIX IP > was abandoned on your doorstep.* BTW: This license allowed the research > sites to move AT&T derivative work (binaries) within their site freely. > Still, if you look at the license carefully, most had a restriction > (often/usually ignored at the universities) that the sources were supposed > to only be available on the original CPU named in their specific license. > > Thus, if you were a University license, no fees were charged to run the > AT&T IP on other CPUs --> however, the licensees were not allowed to use it > for "commercial" users at the University [BTW: this clause was often > ignored, although a group of us at CMU hackers in the late 1970s famously > went on strike until the Unversity obtained at least one commercial > license]. The agreement was that a single CPU should be officially bound > for all commercial use for that institution. I am aware that Case-Western > got a similar license soon after CMU did (their folks found out about > the CMU strike/license). But I do not know if MIT, Standford, or UCB > officials came clean on that part and paid for a commercial license > (depending on the type of license, its cost was the order of $20K-25K for > the first CPU and an order of $7K-10K for each CPU afterward - each of > these "additional cpu' could also have the sources - but named in an > appendix for each license with AT&T). I believe that some of the larger > state schools like Penn State, Rutgers, Purdue, and UW started to follow > that practice by the time Unix started to spread around each campus. > > That said, a different license for UNIX-based IP could be granted by the > Regents of the University of CA and managed by its 'Industrial > Laison's Office" at UCB (the 'IOL' - the same folks that brought licenses > for tools like SPICE, SPLICE, MOTIS,* et al*). 
This license gave the holder > the right to examine and use the UCB's derivative works on anything as long > as you acknowledged that you got that from UCB and held the > Regents blameless [we often called this the 'dead-fish license' -- *you > could make a chip, make a computer, or even wrap dead-fish in it.* But you > had to say you started with something from the Regents, but they were not > to be blamed for what you did with it]. > > The Regents were exercising rights 3 and 4 from AT&T. Thus, a team who > wanted to obtain the Berkeley Software Distribution for UNIX (*a.k.a*. BSD) > needed to demonstrate that they held the appropriate license from AT&T > [send a copy of the signature page from your license to the ILO] before UCB > would release the bits. They also had a small processing fee to the IOL in > the order of $1K. [The original BSD is unnumbered, although most refer to > it today as 1BSD to differentiate it from later BSD releases for UNIX]. > > Before I go on, in those times, the standard way we operated was that you > needed to have a copy of someone else's signature page to share things. In > what would later become USENIX (truth here - I'm an ex-president of the > same), you could only get invited and come to a conference if you were > licensed from AT&T. That was not a big deal. We all knew each other. > FWIW: at different times in my career, I have had a hanging file in a > cabinet with a copy of the number of these pages from different folks, with > whom I would share mag tapes (remember this is pre-Internet, and many of > the folks using UNIX were not part of the ARPAnet). > > However, the song has other verses that make this a little confusing. > > If your team obtained a* commercial use license* from AT&T, they could > further obtain a *commercial redistribution license*. This was initially > granted for the Research Seventh Edition. It was later rewritten (with the > business terms changing each time) for what would eventually be called > System III[1], and then the different System V releases. The price of the > redistribution license for V7 was $150K, plus a sliding scale per CPU you > ran the AT&T IP, depending on the number of CPUs you needed. With this, the > single CPU for the source restriction was removed. > > So ... if you had a redistribution license, you could also get a license > from the Regents, and as long as you obeyed their rules, you could sell a > copy of UNIX to run on any licensed target. Traditionally, hardware is > part of the same purchase when purchased from a firm like DEC, IBM, > Masscomp,* etc*. However, separate SW licenses were sold via firms such as > Microsoft and Mt. Xinu. The purchaser of *a binary license* from one of > those firms did not have the right to do anything but use the AT&T > derivative work. If your team had a binary licensee, you could not obtain > any of the BSD distributions until the so-called 'NET2" BSD release [and > I'm going to ignore the whole AT&T/BSDi/Regents case here as it is not > relevant to Barbara's question/comment]. > > So the question is, how did a DoD contractor, be it BBN, Ford Aerospace, > SRI, etc., originally get access to UNIX IP? Universities and traditional > research teams could get a research license. Commercial firms like DEC > needed a commercial licensee. Folks with DoD contracts were in a hazy > area. The original v5 commercial licensee was written for Rand, a DoD > contractor. 
However, as discussed here in the IH mailing list and > elsewhere, some places like BBN had access to the core UNIX IP as part of > their DoD contracts. I believe Ford Aerospace was working with AT&T > together as part of another US Gov project - which is how UNIX got there > originally (Ford Aero could use it for that project, but not the folks at > Ford Motors, for instance]. > > The point is, if you access the *IP indirectly* such as that, then > your site probably did not have a negotiated license with a signature page > to send to someone. > > @Barbara, I can not say for sure, but if this was either a PDP-11 or a VAX > and you wanted one of the eBSDs, I guess/suspect that maybe your team was > dealing with an indirect path to AT&T licensing -- your site license might > have come from a US Gov contract, not directly. So trying to get a BSD tape > directly from the IOL might have been more difficult without a signature > page. > > So, rolling back to the original. You get access to BSD sources, but you > had to demonstrate to the IOL folks in UCB's Cory Hall that you were > legally allowed access to the AT&T IP in source code form. That > demonstration was traditionally fulfilled with a xerographic copy of the > signature page for your institution, which the IOL kept on file. That > said, if you had legal access to the AT&T IP by indirect means, I do not > know how the IOL completed that check or what they needed to protect the > Regents. > > Clem > > > > > 1.] What would be called a System from a marketing standpoint was > originally developed as PWB 3.0. This was the system a number of firms, > including my own, were discussing with AT&T at the famous meetings at > 'Ricky's Hyatt' during the price (re)negotiations after the original V7 > redistribution license. > -- > Internet-history mailing list > Internet-history at elists.isoc.org > https://elists.isoc.org/mailman/listinfo/internet-history -- --- tte at cs.fau.de From sjenkin at canb.auug.org.au Sun Mar 10 18:22:33 2024 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Sun, 10 Mar 2024 19:22:33 +1100 Subject: [COFF] [ih] Fwd: Some Berkeley Unix history - too many PHDs per packet References: <9ADA89F4-0621-40A4-A68F-6EF0A3218461@gmail.com> Message-ID: > On 10 Mar 2024, at 06:52, Clem Cole wrote: > > More details can be found in a paper I did a few years ago: https://technique-societe.cnam.fr/colloque-international-unix-en-france-et-aux-etats-unis-innovation-diffusion-et-appropriation--945215.kjsp [If you cannot find it and are interested send me email off list and I'll forward it]. For those playing along at home… INTERNATIONAL SYMPOSIUM - UNIX IN FRANCE AND IN THE UNITED STATES: INNOVATION, DIFFUSION AND APPROPRIATION 19 October 2017 UNIX: A View from the Field as We Played the Game -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjenkin at canb.auug.org.au Sun Mar 10 20:05:32 2024 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Sun, 10 Mar 2024 21:05:32 +1100 Subject: [COFF] [ih] Fwd: Some Berkeley Unix history - too many PHDs per packet In-Reply-To: References: <606871377.2352922.1709955781555@mail.yahoo.com> <84A5C4DC-E9E7-46F7-AA6C-AADD64ACD305@icloud.com> Message-ID: > On 10 Mar 2024, at 06:52, Clem Cole wrote: > > That said, a different license for UNIX-based IP could be granted by the Regents of the University of CA and managed by its 'Industrial Laison's Office" at UCB (the 'IOL' - the same folks that brought licenses for tools like SPICE, SPLICE, MOTIS, et al). This license gave the holder the right to examine and use the UCB's derivative works on anything as long as you acknowledged that you got that from UCB and held the Regents blameless [we often called this the 'dead-fish license' -- you could make a chip, make a computer, or even wrap dead-fish in it. But you had to say you started with something from the Regents, but they were not to be blamed for what you did with it]. > > > > Before I go on, in those times, the standard way we operated was that you needed to have a copy of someone else's signature page to share things. In what would later become USENIX (truth here - I'm an ex-president of the same), you could only get invited and come to a conference if you were licensed from AT&T. That was not a big deal. We all knew each other. FWIW: at different times in my career, I have had a hanging file in a cabinet with a copy of the number of these pages from different folks, with whom I would share mag tapes (remember this is pre-Internet, and many of the folks using UNIX were not part of the ARPAnet). > > However, the song has other verses that make this a little confusing. > > > So the question is, how did a DoD contractor, be it BBN, Ford Aerospace, SRI, etc., originally get access to UNIX IP? Universities and traditional research teams could get a research license. Commercial firms like DEC needed a commercial licensee. Folks with DoD contracts were in a hazy area. The original v5 commercial licensee was written for Rand, a DoD contractor. However, as discussed here in the IH mailing list and elsewhere, some places like BBN had access to the core UNIX IP as part of their DoD contracts. I believe Ford Aerospace was working with AT&T together as part of another US Gov project - which is how UNIX got there originally (Ford Aero could use it for that project, but not the folks at Ford Motors, for instance]. In the last while I’ve read about DARPA’s IPTO (Information Processing Technology Office) 1962-1986 and how they (generously) funded a very diverse range of projects for extended durations. Alan Kay comments that $1M was small beer to DARPA, who were investing billions in R&D every year. It was a boom time for US computing research - funders with vision, deep pockets and patience :) I can’t find my source now, nor any list of IPTO’s contracts given to UCB ( or given to anyone ). UCB - Berkeley - got many contracts, time-sharing / SDS-940, Ingres, TCP/IP in the Unix kernel and RISC processing. There was an IPTO director - Bob Taylor or Robert Kahn - that wanted a common development platform with IP plus development tools, who gave contracts to UCB’s CSRG to do the work. This story implies DARPA helped arrange Unix licences with the many defence contractors, albeit they only need binaries for BSD. 
If the Internet Society’s ‘brief history’ is to be believed, Defence declared Unix a ‘standard’ (for which work?) in 1980. =================== DARPA’s short bio of IPTO. Doesn’t mention name change in 1986 to Information Processing Technology Office (not ‘Techniques’) Information Processing Techniques Office DARPA’s Information Processing Techniques Office (IPTO) was born in 1962 and for nearly 50 years was responsible for DARPA’s information technology programs. =================== 850K PDF, selected IPTO pages from DARPA report, includes charts of projects and total budget - barely legible DARPA technical accomplishments volume 3 an historical review of selected darpa projects 1991 =================== One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a “flag-day” style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying “I survived the TCP/IP transition”). TCP/IP was adopted as a defense standard three years earlier in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs. =================== In this 1988 oral history interview with Bob Kahn, he talks about giving contracts to Bill Joy / UCB’s CSRG to port Unix to the VAX 11/780 and BBN’s TCP/IP into BSD. Although a DEC package deal for VAX 11/750s for universities was mentioned (5 for $180k), there’s no mention of licensing (easy for Research, not for Defence contractors) page 42 =================== Although ARPA has no definitive timeline or list of accomplishments for the IPTO, it references others’ work. In What Will Be (HarperCollins, 1997), author Michael Dertouzos credits DARPA with “… between a third and a half of all the major innovations in computer science and technology.” =================== PDF of 2003 article from IEEE Annals of the History of Computing J.C.R. Licklider’s Vision for the IPTO Chigusa Ishikawa Kita, Kyoto University The Information Processing Techniques Office of the Advanced Research Projects Agency was founded in 1962 as a step toward realizing a flexible military command and control system. In setting the IPTO’s research agenda for funding, its first director, J.C.R. Licklider, emphasized the development of time-sharing systems. This article looks at how Licklider’s early vision of “a network of thinking centers” helped set the stage for the IPTO’s most famous project: the Arpanet. =================== A partial list of DARPA Information processing projects. Omits the VLSI & RISC work. Norberg is a co-author of the 1996 book, "Transforming Computer Technology. Information Processing for the Pentagon, 1962-1986” DARPA's IPTO had Formidable Reputation Arthur L.
Norberg, May 1997 =================== DARPA in the 1980s – Transformative Technology Development and Transition [ PDF, pg 15 ] Parallel to DARPA’s transformational military programs in the 1970s and 1980s were programs revolutionizing information technology, building on Licklider’s vision of “man-computer symbiosis.” DARPA’s research was foundational to computer science. ARPANET was one element of a much broader, increasingly coherent program based on the technological future that Licklider imagined. He and his IPTO colleagues conceived a multi-pronged development of the technologies underlying the transformation of information processing from clunky, room- filling, inaccessible mainframe machines to a ubiquitous network of interactive and personal computing capabilities. This transformation continues today in DARPA’s pursuit of artificial intelligence, cognitive (brain-like) computing, and robotics. =================== DARPA and the Internet Revolution By Mitch Waldrop [ also author of ’The Dream Machine’ on Licklider’s career ] =================== Another partial list from: DARPA/IPTO and the Computing Revolution DARPA is credited with “between a third and a half of all the major innovations in computer science and technology” – Michael Dertouzos, What Will Be (1997) The information technology revolution of the second half of the 20th century was largely driven by DARPA/IPTO (1962-1986) * Time-sharing * Interactive computing, personal computing * ARPANET * ILLIAC IV * The Internet J.C.R. Licklider (first IPTO Director) had the goal of human-computer symbiosis We now have the opportunity to go back to the future (forward to the past?) =================== -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From whm at msweng.com Mon Mar 11 04:02:44 2024 From: whm at msweng.com (William H. Mitchell) Date: Sun, 10 Mar 2024 11:02:44 -0700 Subject: [COFF] [ih] Fwd: Some Berkeley Unix history - too many PHDs per packet In-Reply-To: References: <606871377.2352922.1709955781555@mail.yahoo.com> <84A5C4DC-E9E7-46F7-AA6C-AADD64ACD305@icloud.com> Message-ID: <452081E6-D3BA-4FA1-9615-90F5DDAF21CA@msweng.com> At the U of Arizona, university lawyers would staple a page on the BSD contract that, among other onerous things, required UCB to indemnify UA for all harms related to usage of the software. A no-go, of course. Dr. Pete Downey discovered that we could run licenses through Kitt Peak National Observatory, with offices on the UA campus. The KPNO lawyer had the mindset of helping researchers do research, and our licensing problems were solved. :) --whm From clemc at ccc.com Mon Mar 11 05:58:46 2024 From: clemc at ccc.com (Clem Cole) Date: Sun, 10 Mar 2024 15:58:46 -0400 Subject: [COFF] [ih] Fwd: Some Berkeley Unix history - too many PHDs per packet In-Reply-To: References: <606871377.2352922.1709955781555@mail.yahoo.com> <84A5C4DC-E9E7-46F7-AA6C-AADD64ACD305@icloud.com> Message-ID: below... [Dropping IH list]. On Sun, Mar 10, 2024 at 6:05 AM steve jenkin wrote: > > > > On 10 Mar 2024, at 06:52, Clem Cole wrote: > > > > That said, a different license for UNIX-based IP could be granted by the > Regents of the University of CA and managed by its 'Industrial Laison's > Office" at UCB (the 'IOL' - the same folks that brought licenses for tools > like SPICE, SPLICE, MOTIS, et al). 
> I'm not sure if you are catching that the Regents' IOL (Industrial Liaison's Office) [part of the UCB EE Department] and DARPA's IPTO (Information Processing Technology Office) -- which was originally part of the US Air Force, then US DOD, *etc*., and went through a number of name changes -- were two very different organizations. The latter group originally led and managed a small part of the US Gov's DOD projects. Its history is best spelled out in Katie Hafner's wonderful book: "*Where Wizards Stay Up Late*" - ISBN 9780684832678. (More in a minute). The former, the IOL, managed the external relationships for the EE Department (and later EECS, when they created the CS division of EE). It was set up initially in the latter part of the 1960s by my thesis advisor, the late Donald O. Pederson (*a.k.a.* dop) -- and, as I said, the folks that brought you SPICE, SPLICE, and the like. It already had a way to license and distribute technology from EE to external organizations [using an idea that would later be called 'open source']. He was famous for saying, *"Unlike our friends across the bay or on the east coast, we give everything away. That way, I get to go in the back door and see what they are doing. If I sell our tools, I use the front door like all salesmen."* The circa 1977 "Berkeley Software Distribution" for UNIX came from the IOL, as did other distributions they had been managing since about 1967 or so. > > In the last while I’ve read about DARPA’s IPTO (Information Processing > Technology Office) 1962-1986 > and how they (generously) funded a very diverse range of projects for > extended durations. > > Alan Kay comments that $1M was small beer to DARPA, who were investing > billions in R&D every year. Be careful. The US government, via DOD (and DOE), was funding billions, while DARPA was a small and mostly forgotten backwater that the USAF had originally set up. As I said, see Katie Hafner's book for more details. $1M was a big deal to DARPA. But compared to funding a new fighter or a new aircraft carrier, DARPA projects were small potatoes. > > > It was a boom time for US computing research - funders with vision, deep > pockets and patience :) > No, the boom was the Cold War and the space race. That was driving core tech. CS research just hitched its wagon to those engines. Things like the ARPANet were funded to solve what the Army called the 'radar problem': how (during a nuclear strike) are you able to keep disparate command centers informed and in sync? > > I can’t find my source now, nor any list of IPTO’s contracts given to UCB > ( or given to anyone ). > > UCB - Berkeley - got many contracts, time-sharing / SDS-940, Ingres, > TCP/IP in the Unix kernel and RISC processing. > Yikes -- having lived it, I fear you may be confusing and mixing some things up - certainly the order, and what begat what. First, UCB was very late to the DARPA world. Note that the first ARPAnet IMP semi-available to UCB was at LBL (up the hill). And while the Regents ran LBL, LANL/Los Alamos, and the like, that was for DOE, mind you, not DOD. Furthermore, by the time of CSRG, CSRG did not have the contract for IP/TCP for UNIX -- BBN did. *CSRG had a contract from DARPA to support the UNIX kernel.* These are the sources of famous issues and questions WRT who created what. The concept of sockets(2) was a CSRG [Bill Joy]-ism -- actually to counter Rashid's ports() idea in Accent. The IP stack (and support) *was supposed* to be from BBN (and it originally was -- you can see at least one early BBN distribution in the TUHS archives).
BTW, Ingres was partially funded by DOD via DARPA and predates CSRG by about 4 or 5 years. Fateman got a contract to move MAXIMA from ITS to UNIX (and create Franz LISP). This was the origin of the original kernel work. Frankly, I don't remember who funded that; but I'm not sure it was DARPA. I think it may have been one of the national labs (DOE) that was using Maxima. FWIW: the Ingres ARPAnet connection was a 'very distant host' interface to one of the 4 ports on the LBL IMP. I'm not sure who funded Patterson during the RISC work. I know my thesis was funded by industrial folks as well as a DOE grant, not a DOD one. I just thought of another interesting factoid. Mind you, the BSD sources were free - which I'm sure caused a number of UNIX vendors to stop there (per dop's genius of going in the back door), since BBN was a commercial enterprise (and as such was looking for revenue streams). The BBN stack actually cost money for commercial firms. In the early 1990s at Stellar, when we decided to use it rather than the UCB code, we had to get a sublicense for it from BBN. > > There was an IPTO director - Bob Taylor or Robert Kahn - that wanted a > common development platform with IP plus development tools, > who gave contracts to UCB’s CSRG to do the work. > Ouch ... that is not quite right. Again - get Katie's book. CSRG is >>much much<< later in the DARPA (or Internet) story. By the time of CSRG, DARPA had moved inside of DOD a few times. It was not nearly the size of the other teams, but it was a real line item. As Alan Kay said, it was not even noticed when the original work started, compared to other DOD projects. But the problem you are running into is that it was a multifold set of problems - which are often hard to untangle. While I'm not sure how well it worked in practice, the "justification" for the ARPAnet was to share expensive resources owned by the USG and supplied to DOD/DOE contractors. DOD and DOE were paying for lots of computing power at lots of places. DOE used almost anything they could get their hands on - particularly in the scientific processing area, but the CS research types had started migrating to the PDP-10 for their specific serious work. However, with the PDP-10, there were N different OSs in use. DARPA knew it cost the >>USG<< less if the users operated with a DEC-supplied SW stack, but their CS researchers seemed to do more projects with a more enhanced OS. BBN had managed to get DEC to pick up its own PDP-10 system and migrate the 'default' OS to be based on theirs (FWIW: DEC was less impressed with ITS and WAITS in those days, for commercial reasons - I'm not going to go down that rathole). The reader might try to remember that, as a general rule, DARPA and the rest of the US government teams were trying not to fund what we might call *"core OS research."* In 1983, DEC "discontinued" development and no longer offered the PDP-10 for sale, in favor of its now widely popular VAX series. DARPA switched to the VAX as the platform it would supply to its contractors (DOD and DOE, as well as other depts, offered different systems). However, with the VAX as a common platform for the DARPA contractors, there was still a need for some system extensions, like ports/sockets, for the different research projects DARPA was funding. The research community had started to switch to UNIX. But DARPA was concerned about AT&T's "abandoning the OS on the doorstep" scheme.
So the question was how to get UNIX supported on the VAX. Since the version of UNIX being used on the VAX by >>much<< but not all of the DOD and DOE community was BSD, DARPA's solution was to let a contract to create a support group -- CSRG was born. > This story implies DARPA helped arrange Unix licences with the many > defence contractors, albeit they only need binaries for BSD. > I did not imply that, nor do I think DARPA did. I think other parts of US GOV did >>sometimes<< have access to the UNIX IP by means other than the traditional license scheme from the AT&T/WE Patent and Licensing group -- i.e. Otis Wilson *et al* (we have evidence of the same). For instance, Ford Aero was doing a joint project with AT&T for NASA [NASA is now an independent agency, but I wonder if that was always true]. Ford Aero is known to have had special access [and there seems to be evidence this was based on PWB 1.0 - which was never formally released outside of the Bell System]. There have been other discussions that when other parts of Ford Motor wanted to use UNIX, the Ford Aero folks were unable to help them. We also know that Rand was an original (1960s) DARPA contractor [back to its origin story as a research office inside the Air Force]. When the folks from Harvard went to Rand and wanted to use UNIX, the first commercial license was created by AT&T. And we know that story. There is evidence that some US government contractors, such as BBN, were in a grey zone. I'll try to get some enlightenment from some of the BBN UNIX folks I know. From discussions, it >>seems<< like the first version of UNIX made its way into BBN and was part of a US Gov contract, probably shared with AT&T. But by the time of CSRG, BBN definitely had traditional commercial source licenses. We also know that Ford became a traditional licensee but started in a place different from others [particularly if the reports of using PWB 1.0 were true -- that distribution was not available from the AT&T/WE patent and license group]. > If the Internet Society's 'brief history' is to be believed, Defence > declared Unix a 'standard' (for which work?) in 1980. > Please be careful here. The IP based on the UNIX ideas was not a USG standard for any department until FIPS-151 was published post IEEE P1003.1 - which was all in the 1980s. That said, there was often an *operational standard* in many US government departments, including DOD, by the early 1980s, based on a preference for a flavor of UNIX by many users, particularly researchers. Furthermore, IP/TCP was the DOD's operational standard by the early 1980s, but by the mid-1980s, DOD's DDN and DOC had picked ISO/OSI and GMAP specifications over the IP family [and we all know how that played out in the end]. A number of us in the industry at the time were scrambling to figure out how to bring an ISO/OSI stack out on our products for the USG and the auto/aerospace customers who were telling us they would not order equipment without one [and, of course, the folks in the EU were pushing X.25 and the rest of ISO to counter IP's takeoff]. Metcalfe's law would cause IP to win out (i.e., economic reasons), but understand there is a difference between an official standard and what was actually occurring. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dave at horsfall.org Mon Mar 11 15:35:02 2024 From: dave at horsfall.org (Dave Horsfall) Date: Mon, 11 Mar 2024 16:35:02 +1100 (EST) Subject: [COFF] NOT DELETED 8 (OS/360) In-Reply-To: <23C77EAC-DED5-4ABA-828E-1C18CF3A7FB2@canb.auug.org.au> References: <1612f004-2bd1-4cd8-bce9-1667e4d7e38e@home.arpa> <23C77EAC-DED5-4ABA-828E-1C18CF3A7FB2@canb.auug.org.au> Message-ID: On Sat, 9 Mar 2024, steve jenkin wrote: > Ken Robinson told a few horror stories of OS/360’s evil (my word) error > reporting. Ken Robinson had lots of stories :-) He was brilliant (and he gave us the "Fast Assembler" with 1-1/2 passes). I did catch him off-guard when I told him that I knew about Ackermann's Function (and implemented it in APL\360) :-) OK, now for Graham McMahon[*] (my third CompSci lecturer -- SNOBOL etc -- I am trying to keep ther memories alive): He wrote a program (presumably in /360 assembly) to solve those "White to move and mate in two" chess problems, and he gave me an object deck (I never did see the source). Well, me being me I used it to solve those puzzles, and send in the entry :-) I think they cottoned on, though, because they stopped printing my submissions :-( Now, think about it: a chess problem solver, in 360 assembler? It must've been using alpha-beta searches i.e. a stack, surely... And with only 4K segments? I'd've loved to see that source code. And I also remember a Barry Wragg? I think that he was a tutor, not a lecturer; I remember him saying that "IBM manuals are written to impress, not inform". [*] Ah yes; I walked into Dr. McMahon's Comp Sci class with my "trannie" blaring Nixon's "I shall resign the Presidency", and he stopped the class :-) -- Dave From coff at tuhs.org Thu Mar 14 23:39:57 2024 From: coff at tuhs.org (Tom Ivar Helbekkmo via COFF) Date: Thu, 14 Mar 2024 14:39:57 +0100 Subject: [COFF] (redirected from TUHS) What do you currently use for your primary OS at home? In-Reply-To: (Greg Lehey's message of "Fri, 8 Mar 2024 09:44:47 +1100") References: <9eb334edeb7568193000f8755704af7799169b17.camel@gmail.com> Message-ID: Greg 'groggy' Lehey writes: > I'm surprised how few of the responders use BSD. I started out with MINIX 1 on a 286-based PC. Ported various software to it, including UUCP, so I had email and could be on mailing lists. Moved to 386bsd when that came out, joining the Internet community around it. Stuck with it as it became NetBSD, and have run NetBSD on my primary systems ever since. I enjoyed playing with kernel code on MINIX and early NetBSD, but modern kernels are much too complicated for me, so I get those urges satisfied on MINIX 3, 2.11BSD, and 6th Edition UNIX, all on proper hardware. :) -tih -- Most people who graduate with CS degrees don't understand the significance of Lisp. Lisp is the most important idea in computer science. --Alan Kay From imp at bsdimp.com Fri Mar 15 00:58:15 2024 From: imp at bsdimp.com (Warner Losh) Date: Thu, 14 Mar 2024 08:58:15 -0600 Subject: [COFF] [TUHS] Re: SunOS 4 in 2024 In-Reply-To: <20240314134945.GC143836@mit.edu> References: <87h6h93e4q.fsf@gmail.com> <87zfv11w1u.fsf@gmail.com> <20240314134945.GC143836@mit.edu> Message-ID: [ moved to coff ] On Thu, Mar 14, 2024 at 7:49 AM Theodore Ts'o wrote: > On Thu, Mar 14, 2024 at 11:44:45AM +1100, Alexis wrote: > > > > i basically agree. 
i won't dwell on this too much further because i > > recognise that i'm going off-topic, list-wise, but: > > > > i think part of the problem is related to different people having > > different preferences around the interfaces they want/need for > > discussions. What's happened is that - for reasons i feel are > > typically due to a lock-in-oriented business model - many discussion > > systems don't provide different interfaces/'views' to the same > > underlying discussions. Which results in one community on platform X, > > another community on platform Y, another community on platform Z > > .... Whereas, for example, the 'Rocksolid Light' BBS/forum software > > provides a Web-based interface to an underlying NNTP-based system, > > such that people can use their NNTP clients to engage in forum > > discussions. i wish this sort of approach was more common. > > This is a bit off-topic, and so if we need to push this to a different > list (I'm not sure COFF is much better?), let's do so --- but this is > a conversation which is super-important to have. If not just for Unix > heritage, but for the heritage of other collective systems-related > projects, whether they be open source or proprietary. > > A few weeks ago, there were people who showed up on the git mailing > list requesting that discussion of the git system move from the > mailing list to using a "forge" web-based system, such as github or > gitlab. Their reason was that there were tons of people who think > e-mail is so 1970's, and that if we wanted to be more welcoming to the > up-and-coming programmers, we should meet them where they were at. The > obvious observation that github was proprietary, and that locking up our > history there might be contra-indicated, was made; and the problem with > gitlab is that it doesn't have a good e-mail gateway, and while we > might be disenfranchising the young'uns by not using some new-fangled > web interface, disenfranchising the existing base of expertise was an > even worse idea. > > The best that we have today is lore.kernel.org, which is used by both > the Linux kernel and the git development communities. It uses > public-inbox to archive the mailing list traffic, and it can be > accessed via a threaded e-mail interface, as well as via NNTP. There > are also tools for subscribing to messages that match a filtering > criterion, as well as tools for extracting patches plus code review > sign-offs into a form that can be easily consumed by git. > Email-based flows are horrible. Absolutely the worst. They are impossible to manage. You can't subscribe to them w/o insane email filtering rules, you can't discover patches or lost patches easily. There's no standard way to do something as simple as saying 'never mind'. There's no easy way to follow much of the discussion or find it after the fact if your email was filtered off (ok, yea, there kinda is, if you know which archives to troll through). As someone who recently started contributing to QEMU, I couldn't get over how primitive the email interaction was. You magically have to know who to CC on the patches. You have to look at the maintainers file, which is often stale, and many of the people you CC never respond. If a patch is dropped or overlooked, it's up to me to nag people to please take a look. There's no good way for me to find stuff adjacent to my area (it can be done, but it takes a lot of work). So you like it because you're used to it.
I'm firmly convinced that the email workflow works only because of the 30 years of tooling, workarounds, extra scripts, extra tools, cult knowledge, and any number of other "living with the poo, so best polish it up" projects. It's horrible. It's like everybody has collective Stockholm syndrome. The people begging for a forge don't care what the forge is. Your philosophical objections to one are blinding you to things like self-hosted gitea, gitlab, and gerrit, which are light years ahead of this insane workflow. I'm no spring chicken (I sent you patches, IIRC, when you and Bruce were having the great serial port bake-off). I've done FreeBSD for the past 30 years and we have none of that nonsense. The tracking isn't as exacting as Linux's, sure, I'll grant. The code review tools we've used over the years are good enough, but everybody that's used them has ideas to make them better. We even accept pull requests from github, but our source of truth is away from github. We've taken an all-of-the-above approach and it makes the project more approachable. In addition, I can land reviewed and tested code in FreeBSD in under an hour (including the review and acceptance testing process). This makes it way more efficient for me to do things in FreeBSD than in QEMU, where the turnaround time is days, where I have to wait for the one true pusher to get around to my pull request, where I have to go through weeks-long processes to get things done (and I've graduated to maintainer status). So the summary might be that email is so 1970s, but the real problem with it is that it requires a huge learning curve. But really, it's not something any sane person would design from scratch today; it has all these rules you have to cope with, many unwritten. You have to hope that the right people didn't screw up their email filters. You have to wait days or weeks for an answer, and the enthusiasm to contribute dies in that time. A quick turnaround time is essential for driving enthusiasm for new committers in the community. It's one of my failings in running FreeBSD's github experiment: it takes me too long to land things, even though we've gone from years to get an answer to days to weeks.... I studied the Linux approach when FreeBSD was looking to improve its git workflow. And none of the current developers think it's a good idea. In fact, I got huge amounts of grief, death threats, etc. for even suggesting it. Everybody thought, to a person, that as bad as our hodge-podge of bugzilla, phabricator, and cruddy git push hooks is, it was light years ahead of Linux's system and allowed us to move more quickly and produced results that were good enough. So, respectfully, I think Linux has succeeded despite its tooling, not because of it. Other factors have made it successful. The heroics that are needed to make it work are possible only because there's a huge supply of effort that can be wasted and inefficiently deployed and still meet the needs of the project. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: