From paul.winalski at gmail.com Tue Aug 1 02:36:47 2023 From: paul.winalski at gmail.com (Paul Winalski) Date: Mon, 31 Jul 2023 12:36:47 -0400 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: <20230730203321.QWoHZ%steffen@sdaoden.eu> References: <5ec59010-d848-8adc-9872-7a4e6fb599eb@tnetconsulting.net> <20230730203321.QWoHZ%steffen@sdaoden.eu> Message-ID: I just read that on average one gram of gold is extracted from one ton of gold ore. Between the circuit runs inside chip packages and the gold coating on contacts, I'd think that discarded circuit boards could match conventional gold ore in terms of yield. There's a lot of wailing and gnashing of teeth concerning the world's supply of rare earth metals, which are needed for, among other things, the permanent magnets in disk drives. Wouldn't discarded hard drives be a good source of these metals vs. virgin ores? --Paul W. From brad at anduin.eldar.org Tue Aug 1 02:52:02 2023 From: brad at anduin.eldar.org (Brad Spencer) Date: Mon, 31 Jul 2023 12:52:02 -0400 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: (message from Paul Winalski on Mon, 31 Jul 2023 12:36:47 -0400) Message-ID: Paul Winalski writes: > I just read that on average one gram of gold is extracted from one ton > of gold ore. Between the circuit runs inside chip packages and the > gold coating on contacts, I'd think that discarded circuit boards > could match conventional gold ore in terms of yield. My understanding is the extraction of the gold from the contacts is, more often than not, more expensive to do than to mine new gold. If I recall the details correctly, there are not a lot of ways to do that with gold because it doesn't react with a lot of other elements so it ends up being hard to reduce. > There's a lot of wailing and gnashing of teeth concerning the world's > supply of rare earth metals, which are needed for, among other things, > the permanent magnets in disk drives. Wouldn't discarded hard drives > be a good source of these metals vs. virgin ores? Shrug... maybe... but with more and more systems going to solid state storage, the need for spinning rust is decreasing each year (and probably each quarter at this point). > --Paul W. -- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From steffen at sdaoden.eu Tue Aug 1 03:28:20 2023 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Mon, 31 Jul 2023 19:28:20 +0200 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: References: <5ec59010-d848-8adc-9872-7a4e6fb599eb@tnetconsulting.net> <20230730203321.QWoHZ%steffen@sdaoden.eu> Message-ID: <20230731172820.5g6oj%steffen@sdaoden.eu> Paul Winalski wrote in : |I just read that on average one gram of gold is extracted from one ton |of gold ore. Between the circuit runs inside chip packages and the |gold coating on contacts, I'd think that discarded circuit boards |could match conventional gold ore in terms of yield. | |There's a lot of wailing and gnashing of teeth concerning the world's |supply of rare earth metals, which are needed for, among other things, |the permanent magnets in disk drives. Wouldn't discarded hard drives |be a good source of these metals vs. virgin ores? The entire world has to go closed-loop economy. Our (the german) chancellor (and despite the absolutely non-understandable support of the frantic west) is talking this ever since. First really europe-wide noted in his speech at "Karlsuniversität Prag"(ue) in August last year. Die Technologien dafür sind heute schon da.
Was wir brauchen, sind gemeinsame Standards für den Einstieg in eine echte europäische Kreislaufwirtschaft ‑ ich nenne es: ein strategisches Update unseres Binnenmarkts. The necessary technologies exist already today. What we need are common standards to enter a real european closed-loop economy -- i call it: a strategic update of our domestic market. 'Problem seems to be that the hundreds of millions of (most often, poor) refugees which have to leave the coasts inland-wise, and all the "contained" countries, and all the exsanguinated countries, will not have the necessary resources to do the necessary investment. Of course -- we can then sell something. --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From coff at tuhs.org Tue Aug 1 04:40:00 2023 From: coff at tuhs.org (segaloco via COFF) Date: Mon, 31 Jul 2023 18:40:00 +0000 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: References: Message-ID: > My understanding is the extraction of the gold from the contacts is more > often than not, more expensive to do then to mine new gold. If I recall > the details correctly, there are not a lot of ways to do that with gold > because it doesn't react with a lot of other elements so it ends up > being hard to reduce. > Brad Spencer Pardon the length, caught the chemist in me interested. Gold is famously difficult to attack with acids, which actually is a benefit moreso than a detriment. One strategy to free native gold from a substrate is to instead attack that substrate. The main acidic mixture which will actually attack gold is "aqua regia" which is typically a 1:1 nitric/hydrochloric mix, and is "regia" in that it can attack "regal" metals like gold and platinum. Nitric on its own is a very effective acid and oxidizer, and can be used to knock out all sorts of other metals, up to and including silver. However, nitric alone won't make a significant impact on the gold without the HCl there too. One of the problems with HNO3 alone (can't recall if this is why gold is unresponsive) is that its strong oxidizing potential can, in some circumstances, actually prevent its acidic reactions by fully oxidizing the exposed surface area of a metal before the acid can dissociate it. This can be observed with copper and anhydrous nitric: the copper will immediately oxidize on the surface and no further reaction occurs. Add water to facilitate the dissolution of the iron nitrate being formed and the reaction goes apoplectic. Still, this doesn't come into play as much simply in that anhydrous nitric is very uncommon, and it's hygroscopic so it'll sponge up enough water from the atmosphere if left to do so and then overcome the otherwise insoluble oxidation. Long story short, you can extract all sorts of metals *from* gold given they present surface area to react with, while leaving much of the gold intact, by successive baths in individual strong acids, taking care to not have HNO3 and HCl in contact with the metal at the same time. This isn't 100%; platinum, for instance, will also survive this process I'm pretty sure, as well as some minerals and other complexes, but its a good place to start. You can then take what's left and dissolve it in aqua regia, yielding a solution containing gold, possibly platinum, but hopefully little if any other metals. 
At that point, either electrolysis or precipitation reactions can be used to further purify, either by depositing the gold or at least eliminating remaining impurities. Similar processes are used for preparing radioactive isotopes for analysis: several stages of precipitation reactions to eliminate unwanted isotopes and then a final precip of the target species onto a planchet for alpha spectrometry or beta emission counting. For the curious, gamma is a different beast entirely, so this doesn't apply to particularly high potency radioisotopes. That said, this all has to take into account the cost of the acids, safe handling vessels for actually performing the separation, disposal (or further refinement) of the secondary metals from the process, etc. My hunch, based on experiences in the environmental market, is that these sorts of costs are more often than not the barrier, rather than any amount of technical difficulty. Mining operations have the game figured out on how to balance production and environmental stuff (note balance doesn't necessarily mean accept and value, industrial ops often budget for compliance violations and smaller fines.) Metal recycling operations likely have a lot more eyes on them, ironically, than extractive measures, and that is a newer industry. So much of it too is informed by market volatility. When gold peeks above a certain threshold, suddenly reclamation outweighs the costs, but then it dips again and you're bleeding money on a formal operation. Mining, sadly, has more history behind it, so will probably continue to be the most supported avenue for pursuing resources until either the chemical and disposal costs involved in reclamation come down or we run so low on resources the tacit, implied violence towards the communities these resources are extracted from escalates into full blown war. Of course, the other option is the steady march towards new horizons in semiconductor research, quantum computing, all of these attempts to get away from the current entrenched norms of IC implementation. One of the possible solutions to these issues, now that I've thought about my chemistry and tech stuff in the same breath, is perhaps designing newer substrates from which gold can be more easily reclaimed. If planned obsolescence is already a thing, those same engineers could at the very least design these frequently disposable devices with high turnover to have a recycling potential higher than what we have currently. In other words, if things are going to be made cheaply and to be discarded every couple of years to keep a revolving customer base, at the very least, engineer processes to easily put those discarded resources right back into the pool, not into landfills. Granted, I could go on for hours about that sort of humanistic engineering... - Matt G. P.S. You really awakened the chemist in me. Not often I get to dredge some of those memories up talking tech. There's a metallurgist living somewhere deep in my mind that enjoyed thinking about this at length. From paul.winalski at gmail.com Tue Aug 1 07:20:07 2023 From: paul.winalski at gmail.com (Paul Winalski) Date: Mon, 31 Jul 2023 17:20:07 -0400 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: References: Message-ID: Brad Spencer is right that the resistance of gold to attack by acids makes using acids to attack the substrate an effective strategy. One other interesting property of gold is that it dissolves in liquid mercury.
One old and primitive--but effective--way to extract gold is to grind the ore finely, mix it with liquid mercury, then allow the substrate to settle out. You then have liquid mercury with gold dissolved in it. You heat the mercury to evaporate it away and there's your gold. The problem with this method is of course the extreme toxicity and volatility of liquid mercury. It's difficult (and expensive) to handle it safely, and in those parts of the world where this method is still in use, it usually isn't handled safely. The mercury-contaminated waste left over after the extraction is also toxic and environmentally damaging to dispose of. -Paul W. From coff at tuhs.org Tue Aug 1 07:59:49 2023 From: coff at tuhs.org (segaloco via COFF) Date: Mon, 31 Jul 2023 21:59:49 +0000 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: References: Message-ID: > iron nitrate being formed and the reaction goes apoplectic. Copper...not iron...thought I proofread better than that. Iron certainly will not spontaneously evolve from copper metal and nitric acid. In any case, I had never considered mercury solubility, Paul; that's quite an interesting way to go about it. I wonder if gallium would do something similar... - Matt G. From sjenkin at canb.auug.org.au Tue Aug 1 09:11:15 2023 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Tue, 1 Aug 2023 09:11:15 +1000 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: References: Message-ID: <3BFF53F0-D82D-4F8A-B79A-15DE44576EFB@canb.auug.org.au> Or cyanide :( Used to reprocess ‘mullock’ heaps left after mercury extraction. > On 1 Aug 2023, at 07:20, Paul Winalski wrote: > > One other interesting property of gold is that it dissolves in liquid > mercury. -- From wobblygong at gmail.com Tue Aug 1 16:30:05 2023 From: wobblygong at gmail.com (Wesley Parish) Date: Tue, 1 Aug 2023 18:30:05 +1200 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: <20230730203321.QWoHZ%steffen@sdaoden.eu> References: <5ec59010-d848-8adc-9872-7a4e6fb599eb@tnetconsulting.net> <20230730203321.QWoHZ%steffen@sdaoden.eu> Message-ID: <76501721-ab55-0c86-090a-dd06d53dc582@gmail.com> I've also done a fair amount of work breaking up and down old PCs and Macintoshes, in the early 2000s. The business owner talked about getting a furnace built to render down the old CRTs, but it hadn't happened by the time I left that company, and I doubt it had happened by the time of the Christchurch earthquakes 2010-2011. I do know we sent the metal cases off to the local metal recyclers. But what happened to the boards, I have no idea. Wesley Parish On 31/07/23 08:33, Steffen Nurpmeso wrote: > Grant Taylor via COFF wrote in > <5ec59010-d848-8adc-9872-7a4e6fb599eb at tnetconsulting.net>: > |On 7/29/23 6:26 PM, segaloco via COFF wrote: > |> Howdy folks, I wanted to get some thoughts and experiences with regards > |> to what sort of EOL handling of mainframe/mini hardware was typical. > | > |My experience disposing of things is from the late '90s and early '00s > |and is for much smaller things. So it may very well differ. > > Around 1990(+ a bit) i worked during holiday for a company which > collected old computers, monitors etc from authorities and, well, > other companies. Myriads of (plastic) keyboards, cables, etc., it > all was thrown into containers (ie rolled down the floor, then > blindly thrown), all mixed up. I (a prowd owner of an i386 DX 40 > by that time iirc) shortly thought of, you know, but to no avail.
> I have no idea, i am pretty sure it all went down to Africa or > India, where young people and other unlucky then had to pave their > way through, as is still mostly the case today, _i think_. Let's > just hope they do not have illnesses because of the (likely) toxic > interour. (Having said that, i myself also worked for a short > time for another company where i was crawling through cable and > such shafts, .. without any mask .. Not to talk about waste > incinator and chemical industry here (Merck and Rhön etc), they > were also filter free, .. and then i was also smoking for over > twenty years. How did i end up here now?? I hope i am still from > one of those generations which can live a hundred years > nonetheless.) > > --steffen > | > |Der Kragenbaer, The moon bear, > |der holt sich munter he cheerfully and one by one > |einen nach dem anderen runter wa.ks himself off > |(By Robert Gernhardt) From mc at hack.org Tue Aug 1 19:49:38 2023 From: mc at hack.org (Michael Cardell Widerkrantz) Date: Tue, 01 Aug 2023 11:49:38 +0200 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: References: Message-ID: <87edknw0r1.fsf@hack.org> Dan Cross , 2023-07-05 17:48 (-0400): > I thought some folks here might find this interesting. Someone else > today reminded me of tilde.town, which is a publicly accessible > machine running Linux. The tildes are a whole movement of public access *nix boxen. Here's a web page collecting a few of them: https://tildeverse.org/ They are a part of a larger Smol Internet movement: tildes, the Gopher revival, Gemini, et cetera. Of course they also have their own IRC network (tilde.chat), their own Internet radio station: https://tilderadio.org/ and phone network: https://tilde.tel/ SDF and Eventphone (I'm permanently on the EPVPN) also have their own phone networks, of course. Eventphone also runs their own DECT, GSM, and 3G networks during events, like the wonderful Chaos Communication Congress (C3) and the CCCamp (coming up soon!). -- MC, https://hack.org/mc/ From mc at hack.org Tue Aug 1 19:52:49 2023 From: mc at hack.org (Michael Cardell Widerkrantz) Date: Tue, 01 Aug 2023 11:52:49 +0200 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: <0e0064dae74c2275ca50ac6453457eef@bl.org> References: <08ed6c3a-f2d3-590c-7de4-e01164271da1@tnetconsulting.net> <0e0064dae74c2275ca50ac6453457eef@bl.org> Message-ID: <87bkfrw0lq.fsf@hack.org> Michael Parson , 2023-07-09 09:55 (-0500): > There's also nyx.net. I've had an account with them since it was > nyx.cs.du.edu and was run on a Sun Sparcstation 10 and a Sparcstation > 2 running SunOS 4.1.x. I'm still on a mailing list, Future Culture, that I subscribed to soon after it was created at Nyx back when it was a PDP-11. > I've been running this domain (bl.org) as a multi-user system for > friends and family for a few decades. Same here for hack.org. Currently 31 users logged in on my shellbox. Began as a dial-up BBS in the 1980s and then sort of grew. -- MC, https://hack.org/mc/ From crossd at gmail.com Wed Aug 2 01:55:37 2023 From: crossd at gmail.com (Dan Cross) Date: Tue, 1 Aug 2023 11:55:37 -0400 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: <87edknw0r1.fsf@hack.org> References: <87edknw0r1.fsf@hack.org> Message-ID: On Tue, Aug 1, 2023 at 5:49 AM Michael Cardell Widerkrantz wrote: > Dan Cross , 2023-07-05 17:48 (-0400): > > I thought some folks here might find this interesting. 
Someone else > > today reminded me of tilde.town, which is a publicly accessible > > machine running Linux. > > The tildes are a whole movement of public access *nix boxen. Here's a > web page collecting a few of them: > > https://tildeverse.org/ > > They are a part of a larger Smol Internet movement: tildes, the Gopher > revival, Gemini, et cetera. Interesting. I don't really get the point of the Gopher revival, to be honest; sure, I get that people want non-graphical, non-ad-laden content, but it sure seems like you could get something like that with the web just using a text-mode browser like `lynx` or even `links` and something like `gomarkdown`. It's like the people who want to use Fidonet as an "alternative" to email. I mean, one can use the same protocols in parallel with the mainstream services. - Dan C. > Of course they also have their own IRC > network (tilde.chat), their own Internet radio station: > > https://tilderadio.org/ > > and phone network: > > https://tilde.tel/ > > SDF and Eventphone (I'm permanently on the EPVPN) also have their own > phone networks, of course. Eventphone also runs their own DECT, GSM, and > 3G networks during events, like the wonderful Chaos Communication > Congress (C3) and the CCCamp (coming up soon!). > > -- > MC, https://hack.org/mc/ From coff at tuhs.org Wed Aug 2 02:27:57 2023 From: coff at tuhs.org (Grant Taylor via COFF) Date: Tue, 1 Aug 2023 11:27:57 -0500 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: References: <87edknw0r1.fsf@hack.org> Message-ID: <1b01cf3b-41d7-cd82-044a-7973b84ca203@tnetconsulting.net> On 8/1/23 10:55 AM, Dan Cross wrote: > Interesting. I don't really get the point of the Gopher revival, to be > honest; Retro? Reminiscence? > sure, I get that people want non-graphical, non-ad-laden content, > but it sure seems like you could get something like that with the > web just using a text-mode browser like `lynx` or even `links` and > something like `gomarkdown`. Presumably people are creating new content to use in the newer Gopher sphere, etc. So if people are creating new content, why can't they create the same content in simple no-add HTML. > It's like the people who want to use Fidonet as an "alternative" > to email. I mean, one can use the same protocols in parallel with > the mainstream services. I was pawing at FidoNet (or other FTNs) as an alternative to SMTP specifically because it was not SMTP. Not that anything's wrong with SMTP to prevent it's use. My interest is in the form of avoiding a single protocol failure. Grant. . . . From steffen at sdaoden.eu Wed Aug 2 07:14:12 2023 From: steffen at sdaoden.eu (Steffen Nurpmeso) Date: Tue, 01 Aug 2023 23:14:12 +0200 Subject: [COFF] Typical Fate of Older Hardware In-Reply-To: <76501721-ab55-0c86-090a-dd06d53dc582@gmail.com> References: <5ec59010-d848-8adc-9872-7a4e6fb599eb@tnetconsulting.net> <20230730203321.QWoHZ%steffen@sdaoden.eu> <76501721-ab55-0c86-090a-dd06d53dc582@gmail.com> Message-ID: <20230801211412.PZXB-%steffen@sdaoden.eu> [resorting] Wesley Parish wrote in <76501721-ab55-0c86-090a-dd06d53dc582 at gmail.com>: |On 31/07/23 08:33, Steffen Nurpmeso wrote: |> Grant Taylor via COFF wrote in |> <5ec59010-d848-8adc-9872-7a4e6fb599eb at tnetconsulting.net>: |>|On 7/29/23 6:26 PM, segaloco via COFF wrote: |>|> Howdy folks, I wanted to get some thoughts and experiences with regards |>|> to what sort of EOL handling of mainframe/mini hardware was typical. 
|>| |>|My experience disposing of things is from the late '90s and early '00s |>|and is for much smaller things. So it may very well differ. |> |> Around 1990(+ a bit) i worked during holiday for a company which |> collected old computers, monitors etc from authorities and, well, |> other companies. Myriads of (plastic) keyboards, cables, etc., it |> all was thrown into containers (ie rolled down the floor, then |> blindly thrown), all mixed up. ... |I've also done a fair amount of work breaking up and down old PCs and |Macintoshes, in the early 2000s. | |The business owner talked about getting a furnace built to render down |the old CRTs, but it hadn't happened by the time I left that company, |and I doubt it had happened by the time of the Chirstchurch earthquakes |2010-2011. I do know we sent the metal cases off to the local metal |recyclers. But what happened to the boards, I have no idea. Yaaaaah, you know, as a native German of age by that time i remember myriads of toxic waste affairs where that shit was shipped to .. sheer "everywhere" (except first and second world, of course). So this was the tip of an iceberg the contours of which were known from many years of newspapers, magazines, good political TV documentations etc, yet the dullness of reality overwhelmed me. And, uh, oh, i acted conforming(ly) (very fast). --steffen | |Der Kragenbaer, The moon bear, |der holt sich munter he cheerfully and one by one |einen nach dem anderen runter wa.ks himself off |(By Robert Gernhardt) From crossd at gmail.com Thu Aug 3 02:07:13 2023 From: crossd at gmail.com (Dan Cross) Date: Wed, 2 Aug 2023 12:07:13 -0400 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: <1b01cf3b-41d7-cd82-044a-7973b84ca203@tnetconsulting.net> References: <87edknw0r1.fsf@hack.org> <1b01cf3b-41d7-cd82-044a-7973b84ca203@tnetconsulting.net> Message-ID: On Tue, Aug 1, 2023 at 12:28 PM Grant Taylor via COFF wrote: > On 8/1/23 10:55 AM, Dan Cross wrote: > > Interesting. I don't really get the point of the Gopher revival, to be > > honest; > > Retro? Reminiscence? I guess? > > sure, I get that people want non-graphical, non-ad-laden content, > > but it sure seems like you could get something like that with the > > web just using a text-mode browser like `lynx` or even `links` and > > something like `gomarkdown`. > > Presumably people are creating new content to use in the newer Gopher > sphere, etc. > > So if people are creating new content, why can't they create the same > content in simple no-add HTML. Exactly. There are even pre-baked things one could put together that would serve much the same purpose. Going back to gopher et al seem like throwing out the baby with the bathwater. A small HTTP server that serves a little subtree of files on some random port and automatically renders markdown or something into trivial HTML is really all one needs. > > It's like the people who want to use Fidonet as an "alternative" > > to email. I mean, one can use the same protocols in parallel with > > the mainstream services. > > I was pawing at FidoNet (or other FTNs) as an alternative to SMTP > specifically because it was not SMTP. Tell that to the Fidonet people. :-) > Not that anything's wrong with SMTP to prevent it's use. My interest is > in the form of avoiding a single protocol failure. I don't see what the protocol has to do with it, but sure. - Dan C. 
From coff at tuhs.org Thu Aug 3 06:58:43 2023 From: coff at tuhs.org (Grant Taylor via COFF) Date: Wed, 2 Aug 2023 15:58:43 -0500 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: References: <87edknw0r1.fsf@hack.org> <1b01cf3b-41d7-cd82-044a-7973b84ca203@tnetconsulting.net> Message-ID: <1f4408f9-a487-ae3b-84e4-e585b35c80bb@tnetconsulting.net> On 8/2/23 11:07 AM, Dan Cross wrote: > I guess? I'm not endorsing it. I have my own preferences that people question. > Exactly. There are even pre-baked things one could put together > that would serve much the same purpose. Going back to gopher et al > seem like throwing out the baby with the bathwater. A small HTTP > server that serves a little subtree of files on some random port > and automatically renders markdown or something into trivial HTML is > really all one needs. I always wanted something that would re-use the same content between multiple services. I can make the same file(s) available via: - FTP(S) - HTTP(S) Why can't I make the same file(s) available via Gopher too? I wondered if it might be possible to do some magic at the file system level where the same source file(s) could be used and add wrappers around it to integrate said source file(s) into rendered files served up via the various protocols. Obviously I've not yet been motivated to do anything with Gopher in this regard. I'd likely include a BBS interface in this menagerie if I could do so. For various $REASONS. > Tell that to the Fidonet people. :-) The last time I looked, much of Fidonet (proper) and other FTNs were still using the Fido protocol (nomenclature?) to communicate between nodes. There were a few offering SMTP gateways. Have more of them migrated to SMTP gateways where Fidonet is now more of a separate SMTP network? > I don't see what the protocol has to do with it, but sure. I should clarify that I view SMTP as used on the Internet today as a very large network of federated email servers speaking a common protocol. As such the network is largely interdependent on various other parts of the network, e.g. DNS. I was hoping that Fidonet (proper) as an FTN was still using Fido protocol (nomenclature) such that it was largely independent from the aforementioned SMTP network. Does the protocol separation make more sense now? Grant. . . . From crossd at gmail.com Thu Aug 3 07:16:46 2023 From: crossd at gmail.com (Dan Cross) Date: Wed, 2 Aug 2023 17:16:46 -0400 Subject: [COFF] [TUHS] Re: the wheel of reincarnation goes sideways In-Reply-To: <1f4408f9-a487-ae3b-84e4-e585b35c80bb@tnetconsulting.net> References: <87edknw0r1.fsf@hack.org> <1b01cf3b-41d7-cd82-044a-7973b84ca203@tnetconsulting.net> <1f4408f9-a487-ae3b-84e4-e585b35c80bb@tnetconsulting.net> Message-ID: On Wed, Aug 2, 2023 at 4:58 PM Grant Taylor via COFF wrote: > On 8/2/23 11:07 AM, Dan Cross wrote: >[snip] > > Exactly. There are even pre-baked things one could put together > > that would serve much the same purpose. Going back to gopher et al > > seem like throwing out the baby with the bathwater. A small HTTP > > server that serves a little subtree of files on some random port > > and automatically renders markdown or something into trivial HTML is > > really all one needs. > > I always wanted something that would re-use the same content between > multiple services. > > I can make the same file(s) available via: > > - FTP(S) > - HTTP(S) > > Why can't I make the same file(s) available via Gopher too? I'm sure you can if that interests you. 
I just don't see much of a point, personally. But if that's what you're into, get on down with it. > I wondered if it might be possible to do some magic at the file system > level where the same source file(s) could be used and add wrappers > around it to integrate said source file(s) into rendered files served up > via the various protocols. > > Obviously I've not yet been motivated to do anything with Gopher in this > regard. > > I'd likely include a BBS interface in this menagerie if I could do so. > For various $REASONS. I don't know why that wouldn't be easily doable in a server for each protocol. I believe that some BBS packages already do this, but I don't really know. > > Tell that to the Fidonet people. :-) > > The last time I looked, much of Fidonet (proper) and other FTNs were > still using the Fido protocol (nomenclature?) to communicate between > nodes. There were a few offering SMTP gateways. > > Have more of them migrated to SMTP gateways where Fidonet is now more of > a separate SMTP network? No. I think most of the actual Fidonet people are either waiting for the Big One and the collapse of the Internet, or arguing about how someone dissed them in 1989. > > I don't see what the protocol has to do with it, but sure. > > I should clarify that I view SMTP as used on the Internet today as a > very large network of federated email servers speaking a common > protocol. As such the network is largely interdependent on various > other parts of the network, e.g. DNS. > > I was hoping that Fidonet (proper) as an FTN was still using Fido > protocol (nomenclature) such that it was largely independent from the > aforementioned SMTP network. > > Does the protocol separation make more sense now? I thought I was rather clear that one could use the SMTP protocol independently of the existing email network, but sure. - Dan C. From coff at tuhs.org Sat Aug 5 02:50:42 2023 From: coff at tuhs.org (segaloco via COFF) Date: Fri, 04 Aug 2023 16:50:42 +0000 Subject: [COFF] Inferno/Limbo Experiences Message-ID: So as I was searching around for literature I came across someone selling a 2 volume set of Inferno manuals. I had never seen print manuals so decided to scoop them up, thinking they'd fit nicely with a 9front manual I just ordered too. That said, I hate to just grab a book for it to sit on my shelf, so I want to explore Inferno once I've got literature in hand. Does anyone here know the best way of VMing Inferno these days, if I can just expect to find a copy of distribution media somewhere that'll work in VirtualBox or QEMU or if there's some particular "path of righteousness" I need to follow to successfully land in an Inferno environment. Second, and I hope I don't spin up a debate with this, but is this something I'm investing good time in getting familiar with? I certainly don't hear as much about Inferno as I do about Plan9, but it does feel like it's one of the little puzzle pieces in this bigger picture of systems theory and development. Have there been any significant Inferno-adjacent developments or use cases in recent (past 10-15) years? - Matt G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From edouardklein at gmail.com Sat Aug 5 03:06:41 2023 From: edouardklein at gmail.com (Edouard Klein) Date: Fri, 04 Aug 2023 19:06:41 +0200 Subject: [COFF] Inferno/Limbo Experiences In-Reply-To: References: Message-ID: <87fs4yvi6f.fsf@gmail.com> Hi Matt ! Inferno is friggin' awesome. 
You'll find plenty to read here: https://github.com/henesy/awesome-inferno This series of blog post will walk you through creating a basic grid: http://debu.gs/tags/inferno Inferno's documentation is (compared to Plan 9's) absolutely stellar. With Plan 9 I often have to resort to reading the sources, whereas Inferno man pages almost always explain things in context, as fully as I need. Also, If the 9p virus bites you, be sure to attend the next international workshop on plan 9: http://iwp9.org/ I learned more about Plan 9 in a few days than in years playing on and off with the system. Special thanks to the organizing committee and to Skip and Ron who took the time to answer my beginner questions, walked me through the codebase, and gave very valuable advice. I don't know if Charles Forsyth is reading here, but I asked him about use cases and he gave me interesting examples of Inferno use, maybe he'll pop up :) What I remember (I may be misremembering, so please don't take this as reliable information) is Inferno was used as the technical solution to solve client's problem, but clients did not care about inferno per se. So most of the use cases were quite ad-hoc and did not give rise to publications or directly shareable code. I've never used Inferno more than as a toy, but if the opportunity ever arise, I know I'll do it. I dream of creating an inferno powered smart home. It's somewhere on the todo list, down there... Please keep us updated on your explorations :) Cheers, Edouard. segaloco via COFF writes: > So as I was searching around for literature I came across someone selling a 2 volume set of Inferno manuals. I had never seen print manuals so decided to scoop them up, thinking > they'd fit nicely with a 9front manual I just ordered too. > > That said, I hate to just grab a book for it to sit on my shelf, so I want to explore Inferno once I've got literature in hand. Does anyone here know the best way of VMing Inferno these > days, if I can just expect to find a copy of distribution media somewhere that'll work in VirtualBox or QEMU or if there's some particular "path of righteousness" I need to follow to > successfully land in an Inferno environment. > > Second, and I hope I don't spin up a debate with this, but is this something I'm investing good time in getting familiar with? I certainly don't hear as much about Inferno as I do about > Plan9, but it does feel like it's one of the little puzzle pieces in this bigger picture of systems theory and development. Have there been any significant Inferno-adjacent developments > or use cases in recent (past 10-15) years? > > - Matt G. From scj at yaccman.com Sat Aug 5 11:17:06 2023 From: scj at yaccman.com (scj at yaccman.com) Date: Fri, 04 Aug 2023 18:17:06 -0700 Subject: [COFF] What Happened to Interdata? In-Reply-To: References: Message-ID: Sorry for the year's delay in responding... I wrote the compiler for the Interdata, and Dennis and I did much of the debugging. The Interdata had much easier addressing for storage: the IBM machine made you load a register, and then you had a limited offset from that register that you could use. I think IBM was 10 bits, maybe 12. But all of it way too small to run megabyte-sized programs. The Interdata allowed a larger memory offset and pretty well eliminated the offsets as a problem. I seem to recall some muttering from Dennis and Ken about the I/O structure, which was apparently somewhat strange but much less weird than the IBM. 
Also, IBM and Interdata were big-endian, and the PDP was little-endian. This gave Dennis and Ken some problems since it was easy to get the wrong endian, which blew gaskets when executed or copied into the file system. Eventually, we got the machine running, and it was quite nice: true 32-bit computing, it was reasonably fast, and once we got the low-level quirks out (including a famous run-in with the "you are not expected to understand this" code in the kernel, which, it turned out, was a prophecy that came true). On the whole, the project was so successful that we set up a high-level meeting with Interdata to demo and discuss cooperation. And then "the bug" hit. The machine would be running fine, and then Blam! it had leapt into low memory and aborted with no hint as to what or where the fault was. We finally tracked down the problem. The Interdata was a microcode machine. And older Unix system calls would return -1 if they failed. In V7, we fixed this to return 0, but there was still a lot of user code that used the old convention. When the Interdata saw a request to load -1, it first noticed that the integer load was not on an address divisible by 4, and jumped to a location in the microcode and executed a couple of microinstructions. But then it also noticed that the address was out of range and entered the microcode again, overwriting the original address that caused the problem and freezing the machine with no indication of where the problem was. It took us only a day or two to see what the problem was, and it was hardware, and they would need to fix it. We had our meeting with Interdata, gave a pretty good sales pitch on Unix, and then said that the bug we had found was fatal and needed to be fixed or the deal was off. The bottom line: they didn't want to fix the bug in the hardware. They did come out with a Unix port several years later, but I was out of the loop for that one, and the Vax (with the UCB paging code) had become the machine of choice... --- On 2023-07-25 16:23, segaloco via COFF wrote: > So I've been studying the Interdata 32-bit machines a bit more closely lately and I'm wondering if someone who was there at the time has the scoop on what happened to them. The Wikipedia article gives some good info on their history but not really anything about, say, failed follow-ons that tanked their market, significant reasons for avoidance, or anything like that. I also find myself wondering why Bell didn't do anything with the Interdata work after springboarding further portability efforts while several other little streams, even those unreleased like the S/370 and 8086 ports seemed to stick around internally for longer. Were Interdata machines problematic in some sort of way, or was it merely fate, with more popular minis from DEC simply spacing them out of the market? Part of my interest too comes from what influence the legacy of Interdata may have had on Perkin-Elmer, as I've worked with Perkin-Elmer analytical equipment several times in the chemistry-side of my career and am curious if I was ever operating some vague descendent of Interdata designs in the embedded controllers in say one of my mass specs back when. > > - Matt G. > > P.S.
Looking for more general history hence COFF, but towards a more UNIXy end, if there's any sort of missing scoop on the life and times of the Bell Interdata 8/32 port, for instance, whether it ever saw literally any production use in the System or was only ever on the machines being used for the portability work, I'm sure that could benefit from a CC to TUHS if that history winds up in this thread. -------------- next part -------------- An HTML attachment was scrubbed... URL: From coff at tuhs.org Sat Aug 5 11:46:19 2023 From: coff at tuhs.org (segaloco via COFF) Date: Sat, 05 Aug 2023 01:46:19 +0000 Subject: [COFF] What Happened to Interdata? In-Reply-To: References: Message-ID: Steve thank you for the recollections, that is precisely the sort of story I was hoping to hear regarding your Interdata work. I had found myself quite curious why it would have wound up on a shelf after the work involved, and that makes total sense. That's a shame too, it sounds like the 8/32 could've picked up quite some steam, especially beating the VAX to the punch as a UNIX platform. But hey, it's a good thing so much else precipitated from your work! Also, those sorts of microarchitectural bugs keep me up at night. For all the good in RISC-V there are also now maaaaany fabs with more license than ever to pump out questionable ICs. Combine that with questionable boards with strange bus architectures, and gee our present time sure does present ripe opportunities to experiment with tackling those sorts of problems in software. Can't say I've had the pleasure but it would be nice to still be able to fix stuff with a wire wrap in the field... - Matt G. P.S. TUHS cc as promised, certainly relevant information re: Interdata 8/32 UNIX. ------- Original Message ------- On Friday, August 4th, 2023 at 6:17 PM, scj at yaccman.com wrote: > Sorry for the year's delay in responding... I wrote the compiler for the Interdata, and Dennis and I did much of the debugging. The Interdata had much easier addressing for storage: the IBM machine made you load a register, and then you had a limited offset from that register that you could use. I think IBM was 10 bits, maybe 12. But all of it way too small to run megabyte-sized programs. The Interdata allowed a larger memory offset and pretty well eliminated the offsets as a problem. I seem to recall some muttering from Dennis and Ken about the I/O structure, which was apparently somewhat strange but much less weird than the IBM. > > Also, IBM and Interdata were big-endian, and the PDP was little-endian. This gave Dennis and Ken some problems since it was easy to get the wrong endian, which blew gaskets when executed or copied into the file system. Eventually, we got the machine running, and it was quite nice: true 32-bit computing, it was reasonably fast, and once we got the low-level quirks out (including a famous run-in with the "you are not expected to understand this" code in the kernel, which, it turned out, was a prophecy that came true. On the whole, the project was so successful that we set up a high-level meeting with Interdata to demo and discuss cooperation. And then "the bug" hit. The machine would be running fine, and then Blam! it has lept into low memory and aborted with no hint as to what or where the fault was. > > We finally tracked down the problem. The Interdata was a microcode machine. And older Unix system calls would return -1 if they failed. In V7, we fixed this to return 0, but there was still a lot of user code that used the old convention. 
When the Interdata saw a request to load -1 it first noticed that the integer load was not on an address divisible by 4, and jumped to a location in the microcode and executed a couple of microinstructions. But then it also noticed that the address was out of range and entered the microcode again, overwriting the original address that caused the problem and freezing the machine with no indication of where the problem was. It took us only a day or two to see what the problem was, and it was hardware, and they would need to fix it. We had our meeting with Interdata, gave a pretty good sales pitch on Unix, and then said that the bug we had found was fatal and needed to be fixed or the deal was off. The bottom line, they didn't want to fix the bug in the hardware. They did come out with a Unix port several years later, but I was out of the loop for that one, and the Vax (with the UCB paging code) had become the machine of choice... > > --- > > On 2023-07-25 16:23, segaloco via COFF wrote: > >> So I've been studying the Interdata 32-bit machines a bit more closely lately and I'm wondering if someone who was there at the time has the scoop on what happened to them. The Wikipedia article gives some good info on their history but not really anything about, say, failed follow-ons that tanked their market, significant reasons for avoidance, or anything like that. I also find myself wondering why Bell didn't do anything with the Interdata work after springboarding further portability efforts while several other little streams, even those unreleased like the S/370 and 8086 ports seemed to stick around internally for longer. Were Interdata machines problematic in some sort of way, or was it merely fate, with more popular minis from DEC simply spacing them out of the market? Part of my interest too comes from what influence the legacy of Interdata may have had on Perkin-Elmer, as I've worked with Perkin-Elmer analytical equipment several times in the chemistry-side of my career and am curious if I was ever operating some vague descendent of Interdata designs in the embedded controllers in say one of my mass specs back when. >> >> - Matt G. >> >> P.S. Looking for more general history hence COFF, but towards a more UNIXy end, if there's any sort of missing scoop on the life and times of the Bell Interdata 8/32 port, for instance, whether it ever saw literally any production use in the System or was only ever on the machines being used for the portability work, I'm sure that could benefit from a CC to TUHS if that history winds up in this thread. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scj at yaccman.com Sat Aug 5 13:35:53 2023 From: scj at yaccman.com (scj at yaccman.com) Date: Fri, 04 Aug 2023 20:35:53 -0700 Subject: [COFF] Bell Labs vs "East Coast" Management style of AT&T In-Reply-To: References: Message-ID: OK, you asked for it... Let me first say that the management style in the Unix Research area was pragmatic and, in many ways, ideal: * We were told that the work we were doing this year would probably take several years before it could be evaluated. This freed us to take 3 months on a project, and, even if the project itself failed it often inspired other people to "do it right". The management structure was very static -- organizations would remain unchanged in mission for several years, with supportive managers up the line. 
For example, the year I wrote Yacc (probably one of my most productive years) I got a rather blah review from my manager (his exact words were "Why would anyone want to do that?"). The next year I got a massive raise, and the year after my Boss and I made multiple trips to "sell" Unix to Bell Labs and AT&T organizations that could make use of it." In the early 1980s, the V7 port of Unix, which I had been working on for two years, was out and successful. A new language, C++, was being developed and showed promise. The Portable C Compiler had been ported to dozens of different machines, and front ends for FORTRAN and other languages were becoming available. And AT&T decided to divest the "thriving" computer company to go out and change the world. I could see the technology development that was possible and had always enjoyed delivering useful things to people who needed them. So I offered to transfer to the Commercial arm as Department Head of a group of 30 people, growing over two years to nearly 60. My supervisors were a great bunch, and when I told them that, I was sure I could do the job. However, though, they needed to teach me what was most important and how to do it right. They were a little surprised by this, but soon we were technically in sync. The debugger was one piece of software that was a kluge, and we redefined the formats to handle multiple languages and delivered C, Fortran, Ada, and Pascal compilers. However, the business was being run by people who had only worked for a monopoly, and they did not understand the first thing about marketing. They didn't know what the languages were, who used them, and what they did, and, in particular, we had an urgent need for a FORTRAN optimizer (because DEC's was excellent). My marketing support was one two-hour meeting every other week. It was always the last person hired into the marketing group and frequently had not even heard of the languages we were delivering. So we would talk about what we were doing in places like Usenix and Universities. After four years, we had built all the promised languages except for the FORTRAN optimizer, which was written and working but was held up for documentation. The word had come down that all the documentation was to be written in a small office in a small Southern state with no technical footprint whatsoever. The first draft was worse than you could possibly imagine. Somehow, they had gotten the idea that an optimizer was a piece of hardware! After quite a bit of heated discussion, they set out to fix it. But I had had it. I'd done the job I came to do, doubled the size of the department, and much of the software from that time is still alive and well today. But a California headhunter made me an offer I couldn't refuse, and I didn't. A couple of years in Silicon Valley taught me a lot about marketing -- I worked with some superb folks and settled into my post-Bell career. I also developed a deep interest in the craft of management and am co-authoring a book on how you turn programmers, doctors, accountants, etc. into managers. I have had many mistakes to learn from, as well as many successes. On 2023-05-29 00:28, steve jenkin wrote: > I was wondering if anyone close to Early Unix and Bell Labs would > offer some comments on the > evolution of Unix and the quality of decisions made by AT&T senior > managers. > > Tom Wolfe did an interesting piece on Fairchild / Silicon Valley, > where he highlights the difference between SV’s management style > and the “East Coast” Management style. 
> > [ Around 2000, “Silicon Valley” changed from being ‘chips & hardware’ > to ’software’ & systems ] > [ with chip making, every new generation / technology step resets > competition, monopolies can’t be maintained ] > [ Microsoft showed that Software is the opposite. Vendor Lock-in & > monopolies are common, even easy for aggressive players ] > > Noyce & Moore ran Fairchild Semiconductor, but Fairchild Camera & > Instrument was ‘East Coast’ > or “Old School” - extracting maximum profit. > > It seems to me, an outsider, that AT&T management saw how successful > Unix was > and decided they could apply their size, “marketing knowhow” and client > lists > to becoming a big player in Software & Hardware. > > This appears to be the reason for the 1984 divestiture. > > In another decade, they gave up and got out of Unix. > > Another decade on, AT&T had one of the Baby Bells, SBC, buy it. > > SBC had understood the future growth markets for telephony was “Mobile” > and instead of “Traditional” Telco pricing, “What the market will > bear” p[lus requiring Gross Margins over 90%, > SBC adopted more of a Silicon Valley pricing approach - modest Gross > Margins > and high “pass through” rates - handing most/all cost reductions onto > customers. > > If you’re in a Commodity market, passing on cost savings to customers > is “Profit Maximising”. > It isn’t because Commodity markets are highly competitive, but Volumes > drive profit, > and lower prices stimulate demand / Volumes. [ Price Elasticity of > Demand ] > > Kenneth Flamm has written a lot on “Pass Through” in Silicon Chip > manufacture. > > Just to close the loop, Bells Labs, around 1966, hired Fred Terman, > ex-Dean of Stanford, > to write a proposal for “Silicon Valley East”. > The AT&T management were fully aware of California and perhaps it was > a long term threat. > > How could they replicate in New Jersey the powerhouse of innovation > that was happening in California? > > Many places in many countries looked at this and a few even tried. > Apparently South Korea is the only attempt that did reasonably. > > I haven’t included links, but Gordon Bell, known for formulating a law > of computer ‘classes’, > did forecast early that MOS/CMOS chips would overtake Bipolar - used > by Mainframes - in speed. > It gave a way to use all those transistors on a chip that Moore’s Law > would provide, > and with CPU’s in a few, or one, chip, the price of systems would > plummet. > > He forecast the cutover in 1985 and was right. > The MIPS R2000 blazed past every other chip the year it was released. > > And of course, the folk at MIPS understood that building their own > O/S, tools, libraries etc > was a fool’s errand - they had Unix experience and ported a version. > > By 1991, IBM was almost the Last Man Standing of the original 1970’s > “IBM & the BUNCH”, > and their mainframe revenues collapsed. In 1991 and 1992, IBM racked > up the largest > corporate losses in US history to the time, then managed to survive. > > Linux has, in my mind, proven the original mid-1970’s position of > CSRC/1127 > that Software has to be ‘cheap’, even ‘free’ > - because it’s a Commodity and can be ’substituted’ by others. 
> > ================================= > > 1956 - AT&T / IBM Consent decree: 'no computers, no software’ > > 1974 - CACM article, CSRC/1127 in Software Research, no commercial > Software allowed > 1984 - AT&T divested, doing commercial Software & Computers > 1994 - AT&T Sells Unix > 1996 - “Tri-vestiture", Bell Labs sold to Lucent, some staff to AT&T > Research. > 2005 - SBC buys AT&T, long-lines + 4 baby bells > > 1985 - MIPS R2000, x2 throughput at same clock speed. Faster than > bipolar, CMOS CPU's soon overtook ECL > > ================================= > > Code Critic > John Lions wrote the first, and perhaps only, literary criticism of > Unix, sparking one of open source's first legal battles. > Rachel Chalmers > November 30, 1999 > https://www.salon.com/test2/1999/11/30/lions_2/ > > "By the time the seventh edition system came out, the company had > begun to worry more about the intellectual property issues and trade > secrets and so forth," Ritchie explains. > "There was somewhat of a struggle between us in the research group > who saw the benefit in having the system readily available, > and the Unix Support Group ... > Even though in the 1970s Unix was not a commercial proposition, > USG and the lawyers were cautious. > At any rate, we in research lost the argument." > > This awkward situation lasted nearly 20 years. > Even as USG became Unix System Laboratories (USL) and was half > divested to Novell, > which in turn sold it to the Santa Cruz Operation (SCO), > Ritchie never lost hope that the Lions books could see the light of > day. > He leaned on company after company. > > "This was, after all, 25-plus-year-old material, but when they > would ask their lawyers, > they would say that they couldnt see any harm at first glance, > but there was a sort of 'but you never know ...' attitude, and > they never got the courage to go ahead," he explains. > > Finally, at SCO [ by July 1996 ], Ritchie hit paydirt. > He already knew Mike Tilson, an SCO executive. > With the help of his fellow Unix gurus Peter Salus and Berny > Goodheart, Ritchie brought pressure to bear. > "Mike himself drafted a 'grant of permission' letter," says > Ritchie, > "'to save the legal people from doing the work!'" > > Research, at last, had won. > > ================================= > > Tom Wolfe, Esquire, 1983, on Bob Noyce: > The Tinkerings of Robert Noyce | Esquire | DECEMBER 1983.webarchive > http://classic.esquire.com/the-tinkerings-of-robert-noyce/ > > ================================= > > Special Places > IEEE Spectrum Magazine > May 2000 > Robert W. Lucky (Bob Lucky) > https://web.archive.org/web/20030308074213/http://www.boblucky.com/reflect/may00.htm > https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=803583 > > Why does place matter? Why does it matter where we live and work today > when the world is so connected that we're never out of touch with > people or information? > > The problem is, even if they get da Vinci, it won't work. > There's just something special about Florence, and it doesn't travel. > Just as in this century many places have tried to build their own > Silicon Valley. > While there have been some successes in > Boston, > Research Triangle Park, Austin, and > Cambridge in the U.K., > to name a few significant places, most attempts have paled in > comparison to the Bay Area prototype. > > In the mid-1960s New Jersey brought in Fred Terman, the Dean at > Stanford and architect of Silicon Valley, and commissioned him to > start a Silicon Valley East. 
> [ Terman reited from Stanford in 1965 ] > > ================================= > > -- > Steve Jenkin, IT Systems and Design > 0412 786 915 (+61 412 786 915) > PO Box 38, Kippax ACT 2615, AUSTRALIA > > mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From sjenkin at canb.auug.org.au Sat Aug 5 23:57:52 2023 From: sjenkin at canb.auug.org.au (steve jenkin) Date: Sat, 5 Aug 2023 23:57:52 +1000 Subject: [COFF] [TUHS] Re: Re: What Happened to Interdata? In-Reply-To: References: Message-ID: <8243C758-EEB5-4F30-8E25-C61F9B250380@canb.auug.org.au> Tom, it stands up very well, 1977 to 2023. > On 5 Aug 2023, at 13:46, Tom Lyon wrote: > > Here's my summer activity report on my work porting V6 code to the Interdata, working closely under Steve and Dennis. I left before the nasty bug was discovered. (I think). > https://akapugsblog.files.wordpress.com/2018/05/inter-unix_portability.pdf -- From scj at yaccman.com Sun Aug 6 04:47:03 2023 From: scj at yaccman.com (scj at yaccman.com) Date: Sat, 05 Aug 2023 11:47:03 -0700 Subject: [COFF] [TUHS] Re: Unix game origins - stories similar to Crowther's Adventure In-Reply-To: References: Message-ID: <0229665988f70d07b4e73381118efc6c@yaccman.com> I was responsible for two games on Unix: Ching and GoFish. Ching was a fortune-telling game: you would type a question or situation as a text. The program would hash the text, convert it into yarrow sticks, and use display the "fortune". I copied the fortunes from a book, so I don't think ching actually was ever part of a Unix distribution because of the copyright. Some hand-carried versions got out, though, I think. The other game I wrote for my son: it played the game "Go Fish". It was amazingly hard to win, even for an adult, because it used a simple card-counting strategy: if the opponent asked for a 6 and I didn't have one I'd remember that the opponent had a 6, and when I drew one I immediately asked for it. A number of my co-workers tried the game out, and most of them lost badly. GoFish was distributed, and I actually was accused in public by someone who was sure the game cheated! A couple of years ago, a co-worker was showing me a "Unix on a chip" machine, and I saw that it had the sources for everything. I looked at the source for it, which was in C -- one of the first C programs I wrote. As I read the code, I discovered a bug: a type mismatch when calling a function. It was a bug, but didn't affect the behavior. The other thing I noticed was that the program had three GOTO's in it. I blushed... Steve --- On 2023-02-01 15:24, Dan Cross wrote: > [TUHS to Bcc] > > On Wed, Feb 1, 2023 at 3:23 PM Douglas McIlroy > wrote: >> > In the annals of UNIX gaming, have there ever been notable games that have operated as multiple processes, perhaps using formal IPC or even just pipes or shared files for communication between separate processes >> >> I don't know any Unix examples, but DTSS (Dartmouth Time Sharing >> System) "communication files" were used for the purpose. For a fuller >> story see https://www.cs.dartmouth.edu/~doug/DTSS/commfiles.pdf > > Interesting. This is now being discussed on the Multicians list (which > had a DTSS emulator! Done for use by SIPB). Warren Montgomery > discussed communication files under DTSS for precisely this kind of > thing; apparently he had a chess program he may have run under them. > Barry Margolin responded that he wrote a multiuser chat program using > them on the DTSS system at Grumman. 
> > Margolin suggests a modern Unix-ish analogue may be pseudo-ttys, which > came up here earlier (I responded pointing to your wonderful note > linked above). > >> > This is probably a bit more Plan 9-ish than UNIX-ish >> >> So it was with communication files, which allowed IO system calls to >> be handled in userland. Unfortunately, communication files were >> complicated and turned out to be an evolutionary dead end. They had >> had no ancestral connection to successors like pipes and Plan 9. >> Equally unfortunately, 9P, the very foundation of Plan 9, seems to >> have met the same fate. > > I wonder if there was an analogy to multiplexed files, which I admit > to knowing very little about. A cursory glance at mpx(2) on 7th > Edition at least suggests some surface similarities. > > - Dan C. From sauer at technologists.com Wed Aug 9 06:25:29 2023 From: sauer at technologists.com (Charles H Sauer (he/him)) Date: Tue, 8 Aug 2023 15:25:29 -0500 Subject: [COFF] 1987 - Apollo Computer Inc Network Computing System VHS discovered thanks to 512 byte C compiler In-Reply-To: References: Message-ID: https://www.youtube.com/watch?v=WZBejHtLC-Y backstory on how https://xorvoid.com/sectorc.html led to this rediscovery at https://notes.technologists.com/notes/2023/08/07/koko-keeping-warm/ Charlie -- voice: +1.512.784.7526 e-mail: sauer at technologists.com fax: +1.512.346.5240 Web: https://technologists.com/sauer/ Facebook/Google/LinkedIn/Twitter: CharlesHSauer From sauer at technologists.com Wed Aug 9 06:25:33 2023 From: sauer at technologists.com (Charles H Sauer (he/him)) Date: Tue, 8 Aug 2023 15:25:33 -0500 Subject: [COFF] October 2, 1991 - The Apple/IBM Alliance VHS discovered thanks to 512 byte C compiler In-Reply-To: <4c2f1e6a-fbeb-52b1-77ca-e1546d7acd5a@technologists.com> References: <4c2f1e6a-fbeb-52b1-77ca-e1546d7acd5a@technologists.com> Message-ID: https://www.youtube.com/watch?t=1&v=5HXLxtIfQhI backstory on how https://xorvoid.com/sectorc.html led to this rediscovery at https://notes.technologists.com/notes/2023/08/07/koko-keeping-warm/ Charlie -- voice: +1.512.784.7526 e-mail: sauer at technologists.com fax: +1.512.346.5240 Web: https://technologists.com/sauer/ Facebook/Google/LinkedIn/Twitter: CharlesHSauer From sjenkin at canb.auug.org.au Sun Aug 13 10:16:45 2023 From: sjenkin at canb.auug.org.au (Steve Jenkin) Date: Sun, 13 Aug 2023 10:16:45 +1000 Subject: [COFF] Bell Labs vs "East Coast" Management style of AT&T In-Reply-To: References: Message-ID: <69DE1503-3849-4027-A5C3-3DC34BC664CF@canb.auug.org.au> Steve, thanks for the wonderful account of history. you were at the heart of it all, very kind of you to answer my Q. You exactly described the problem at the divested AT&T: monopolists who didn’t understand ‘marketing’, especially not of commodity goods, where 90% gross margins kill sales volumes & profits. =================== I know I've read a comment about BTL's CSRC being "collegiate" even "collaborative”. Was that your experience? In 1971, Jerry Weinberg published a book with “Egoless Programming”. I wouldn’t phrase his concept that way, perhaps “Code Quality comes First” not just “performant” but well designed, well coded, well documented and easily maintained. I wrote & discarded two additional responses, included ‘below the fold’ if anyone wants to rip into them :) cheers steve > On 5 Aug 2023, at 13:35, scj at yaccman.com wrote: > > OK, you asked for it... 
> > Let me first say that the management style in the Unix Research area was pragmatic and, in many ways, ideal: > * We were told that the work we were doing this year would probably take several years before it could be evaluated. > This freed us to take 3 months on a project, and, even if the project itself failed it often inspired other people to "do it right". > The management structure was very static -- organizations would remain unchanged in mission for several years, with supportive managers up the line. > For example, the year I wrote Yacc (probably one of my most productive years) > I got a rather blah review from my manager (his exact words were "Why would anyone want to do that?"). > The next year I got a massive raise, and the year after my Boss and I made multiple trips to "sell" Unix to Bell Labs and AT&T organizations that could make use of it. > In the early 1980s, the V7 port of Unix, which I had been working on for two years, was out and successful. > A new language, C++, was being developed and showed promise. > The Portable C Compiler had been ported to dozens of different machines, and front ends for FORTRAN and other languages were becoming available. > And AT&T decided to divest the "thriving" computer company to go out and change the world. > I could see the technology development that was possible and had always enjoyed delivering useful things to people who needed them. > So I offered to transfer to the Commercial arm as Department Head of a group of 30 people, growing over two years to nearly 60. > My supervisors were a great bunch, and when I told them that, while I was sure I could do the job, > they needed to teach me what was most important and how to do it right. > They were a little surprised by this, but soon we were technically in sync. > The debugger was one piece of software that was a kluge, and we redefined the formats to handle multiple languages and delivered C, Fortran, Ada, and Pascal compilers. > However, the business was being run by people who had only worked for a monopoly, and they did not understand the first thing about marketing. > They didn't know what the languages were, who used them, and what they did, and, in particular, we had an urgent need for a FORTRAN optimizer (because DEC's was excellent). > My marketing support was one two-hour meeting every other week. > It was always the last person hired into the marketing group and frequently had not even heard of the languages we were delivering. > So we would talk about what we were doing in places like Usenix and Universities. > After four years, we had built all the promised languages except for the FORTRAN optimizer, which was written and working but was held up for documentation. > The word had come down that all the documentation was to be written in a small office in a small Southern state with no technical footprint whatsoever. > The first draft was worse than you could possibly imagine. > Somehow, they had gotten the idea that an optimizer was a piece of hardware! > After quite a bit of heated discussion, they set out to fix it. > But I had had it. > I'd done the job I came to do, doubled the size of the department, and much of the software from that time is still alive and well today. > But a California headhunter made me an offer I couldn't refuse, and I didn't. > > A couple of years in Silicon Valley taught me a lot about marketing -- I worked with some superb folks and settled into my post-Bell career.
> I also developed a deep interest in the craft of management and am co-authoring a book on how you turn programmers, doctors, accountants, etc. into managers. > I have had many mistakes to learn from, as well as many successes. =================== Source Code, especially when Portable, puts power in the hands of users/ consumers, taking it away from Vendors exploiting need: - Can be no "Vendor Lock-In", used extensively to 'mine' user bases for revenue. Economics has the notions of goods being excludable (owners/vendors control who uses a good) and rivalrous (only one customer can consume the good, thus preventing others). Source Code is 'non-excludable' - if you've got all the code for something, an 'owner' can't stop you running it. Neither is it 'rivalrous' - my using the Code doesn't stop anyone else using it vs I eat an Apple, you can't eat it. - Users can't be coerced into 'upgrades' or 'orphaned' when products are dropped or a company goes away. Having the source means a vendor never has to say 'sorry'. maybe. With the invention of Portable Systems and Apps, a whole new layer of the Computing Industry appeared: ISV's like Oracle (Indep Software Vendors) They got to exploit Customers (denying customers access to own data), while hardware vendors lost "use us or else" control. I think I've identified three things that Doug Mcilroy's group did / knew, apart from his 4 part "Unix Philosophy" (of building reusable Tools - allowing not-as-bright ppl like me to "Stand on the Shoulders of Giants") A: Artefacts: Code, distribution, self-hosting tools & 'toolchain', basic Unix Userland tools & online documentation B: Process: Collegiate, Collaborate, 2-way Sharing of ideas/code. "All of Us is better than any one of Us" C: Software Economics: a) Bill Gates Law, “it’s about volume”: selling '000's of units, cost/unit is v. low, sell millions: invoicing costs more than code. b) Code developers need to invent reasonable ways to get paid for their work. Android is given away, but Google make money on the "Play Store" Sometimes, there's no obvious substitute to "Pay for Support / License" :( [ the Economics rules are the same for Moore's Law. ] [ Sell lots, drop prices and High Elasticity of Consumer Products guarantees higher profits ] Points where GNU and Unix differ fundamentally: - for Unix, it's all about working Code: 'show me yours, don't just criticise', Clean, high quality, 'minimal' code was normalised + some doco - BTL researchers, mostly, weren't inflating their ego or looking for power. They collaborated and shared ideas, improved others work. - BTL understood 'things cost money' and their work was intended to lead to 'products'. They didn't agree with the 'management' Business Model, but weren't "Freedom!" Zealots... Cheap & 'everywhere' is good :) - BTL Research chose "Talent" very well, including "self-starters": Curious intellects able to take Initiative & challenge norms. - BTL researchers had the luxury of "all the time you need" to do 'step-wise refinement', rewrites, explore multiple alternatives, profile & 'tune', and modify core concepts / tools / languages. Iterating their way to great software. Deadlines and Deliverables are central to Business Projects - but anathema to Research, where you don't know the Question even. 
=================== Gordon Bell notes that: in 1984 there were 91 computer manufacturers in the USA; in 1990, just four of them were left: IBM, HP, DEC and Data General. Within a decade, only IBM & HP were left, with IBM having declared ‘largest losses in US Corporate history’, twice, 1991 & 1992. The MOS / CMOS microprocessors were speeding up at 60%/year, multiples faster than ECL & TTL improved. Bell saw this early. Presume one of the reasons he left DEC in 1983. Gordon Moore and his team solved a Profit Maximisation problem with "Moore's Law". Old School AT&T management didn't understand either: - they were competing in Commodity markets for hardware & software - their timing on hardware could not have been worse. The Intel juggernaut & CMOS & RISC vendors were about to over-run ’traditional’ firms. Unix, C and the new toolchains created an entirely new class of Systems & Software: Portable. Which created the ISV's (Indep. Software Vendors). Oracle and SAP have done very, very well off the back of your invention :-/ Hardware and Software are Symbiotic: neither thrives without the other. Customers buy Software and what it can do for them. But they have to run it on hardware. Portable code allowed selection of "Best Option". The Open Systems and Portability revolution, begun at Bell Labs, took this to a whole new level. Customers wanted & needed zero barriers to entry (and exit) - their data and systems had to ‘just run’… Which made 1980's computing competitive in a way it had never been for the previous two decades. Vendors still sought ways to "Lock in" customers, but it wasn't hardware anymore. The original AT&T management who thought they'd all get rich off Unix in 1984 can't be blamed for missing this trend. It was entirely outside their experience. I wonder what the Shareholders thought? =================== -- Steve Jenkin, IT Systems and Design 0412 786 915 (+61 412 786 915) PO Box 38, Kippax ACT 2615, AUSTRALIA mailto:sjenkin at canb.auug.org.au http://members.tip.net.au/~sjenkin From coff at tuhs.org Fri Aug 18 01:55:44 2023 From: coff at tuhs.org (segaloco via COFF) Date: Thu, 17 Aug 2023 15:55:44 +0000 Subject: [COFF] Commonality of 60s-80s Print Standards Message-ID: Good morning folks, I'm hoping to pick some brains on something that is troubling me in my search for some historical materials. Was there some policy prior to mass PDF distribution with standards bodies like ANSI that they only printed copies of standards "to order" or something like that? What has me asking is when looking for programming materials prior to when PDF distribution would've taken over, there's a dearth of actual ANSI print publications. I've only come across one actual print standard in all my history of searching, a copy of Fortran 77 which I guard religiously. Compare this with PALLETS'-worth, like I'm talking warehouse wholesale levels of secondary sources for the same things. I could *drown* in all the secondary COBOL 74 books I see all over the place but I've never seen a blip of a suggestion of a whisper of an auction of someone selling a legitimate copy of ANSI X3.23-1974. It feels like searching for a copy of the Christian Bible and literally all I can find are self help books and devotional readers from random followers. Are the standards really that scarce, or was it something that most owners back in the day would've thrown in the wood chipper when the next edition dropped, leading to an artificial narrowing of the number of physical specimens still extant?
To summarize, why do print copies of primary standards from the olden days of computing seem like cryptids while one can flatten themselves into a pancake under the mountains upon mountains of derivative materials out there? Why is filtered material infinitely more common than the literal rule of law governing the languages? For instance the closest thing to the legitimate ANSI C standard, a world-changing document, that I can find is the "annotated" version, which thankfully is the full text but blown up to twice the thickness just to include commentary. My bookshelf is starting to run out of room to accommodate noise like that when there are nice succinct "the final answer" documents that take up much less space but seem to virtually not exist... - Matt G. From paul.winalski at gmail.com Fri Aug 18 06:22:48 2023 From: paul.winalski at gmail.com (Paul Winalski) Date: Thu, 17 Aug 2023 16:22:48 -0400 Subject: [COFF] Commonality of 60s-80s Print Standards In-Reply-To: References: Message-ID: On 8/17/23, segaloco via COFF wrote: > > To summarize, why do print copies of primary standards from the olden days > of computing seem like cryptids while one can flatten themselves into a > pancake under the mountains upon mountains of derivative materials out > there? Why is filtered material infinitely more common than the literal > rule of law governing the languages? I worked as a software engineer in the 1980s and '90s in Digital Equipment Corporation's unit that developed tools for programmers, including the compilers. I don't recall the policies and procedures of the various ANSI computer language standards committees regarding publication of the standards. I think the reason that there aren't many extant copies of them out there is that not many people actually cared what the standard said. What was important was the details of the particular implementation of the language that you as a programmer had to use. Even within DEC's compiler group, there were only a couple of copies of the ANSI standard document for any particular language. A typical compiler group has only one engineer tasked with standard interpretation and compliance. The rest of the compiler developers work from the specification for the upcoming release. > For instance the closest thing to the > legitimate ANSI C standard, a world-changing document, that I can find is > the "annotated" version, which thankfully is the full text but blown up to > twice the thickness just to include commentary. My bookshelf is starting to > run out of room to accommodate noise like that when there are nice succinct > "the final answer" documents that take up much less space but seem to > virtually not exist... For a compiler developer, that isn't "noise". Official specifications for computer languages often contain--despite the best efforts of the committee members to prevent them--errors, inconsistencies, vague language, and outright contradictions. It's the compiler developers--especially those working on incoming bug reports--who have to deal with problems in the standard. It helps to have an idea of what the committee members' intentions were, and what their rationale was, for particular verbiage in the standard. I know DEC's representatives to the C standard committee, and in the case of the C and Fortran standards the extra verbiage was completely intentional. In case law, the Judge's decision in a trial usually is a page long, sometimes only a sentence or two.
But there may be 80 pages of legal reasoning explaining just why the judge came to that conclusion. Compiler developers end up being language lawyers. When a problem comes up regarding a language feature, they want to know the committee's intentions and rationale for why the standard says what it does say (or appears to say). -Paul W. From coff at tuhs.org Fri Aug 18 08:11:27 2023 From: coff at tuhs.org (segaloco via COFF) Date: Thu, 17 Aug 2023 22:11:27 +0000 Subject: [COFF] Commonality of 60s-80s Print Standards In-Reply-To: References: Message-ID: > In case law, the Judge's decision in a trial usually is a page long, > sometimes only a sentence or two. But there may be 80 pages of legal > reasoning explaining just why the judge came to that conclusion. > Compiler developers end up being language lawyers. When a problem > comes up regarding a language feature, they want to know the > committee's intentions and rationale for why the standard says what it > does say (or appears to say). > > -Paul W. That's actually a very, very good comparison, it certainly helps me see it in a different way. For some of the background on the angle I'm approaching this from: I've worked in the EPA-regulated US environmental sector since late 2012, first as a chemist for 4 years until enough patches and suggestions up to our data system team got me on their radar enough to jump over the fence. Back in the lab days, our governing literature came from only a handful of sources: - EPA SW-846 ("Waste") - EPA Clean Water Act (NPDES, "Wastewater") - ASTM - Standard Methods for the Examination of Water and Wastewater Each of these groups maintains (the former two, gratis, the latter two for licensing fees...) a plethora of chemical analysis methodology, often quite prescriptive, describing the legal expectations of running that method. In all my time working with the methods, we always, and I mean *always* had a copy of the legally binding document at arm's length, often copies littered throughout the lab (although any formal investigation or dispute required a "controlled" signed and dated copy from the QA office.) On the flip side, I don't recall seeing *any* sort of general literature books around the lab akin to the computing books we see everywhere that were derivatives and digestions *of* the lore that was the legally binding method text. Then our work is usually broken up by program (and by matrix i.e. solid, water, organic waste) and the appropriate method for the program, permit, and sample matrix must be strictly adhered to. For instance, if you are running anything related to the Clean Water Act in wastewater for mid-level heavy metals analysis, it must be EPA method 200.7, no ifs ands or buts about it. As such, working from literally any other document is just setting yourself up for disaster because then, what if that author left out that you need to perform a filtration on the field-preserved sample before it goes on the instrument or it isn't valid 200.7 analysis? A book on ICP-AES may not tell you that, some random work someone wrote commenting on their experiences or observations with heavy metals analysis may not mention that little bit, but you can sure as heck bet that it will be precisely in section whatever subsection whatever paragraph whatever of the legally binding document. If you didn't read the standard and skipped this step, at the very least your data gets recalled, at most, you wind up in court for data fraud. Am I getting into apples vs oranges here?
Is the difference here that standards like the ANSI standards here are more like "you must conform to this to say that you conform to it" but you do not need to conform to this to say that you are programming in a given programming language, or to sell products on a specific platform or in a specific industry, or something like that. Perhaps what I'm missing is the difference between the regulatory teeth involved in the EPA's expectation of data quality vs the fact that "quality" in off the shelf computing products on the private market is a suggestion, not a legal requirement to even operate? Is it that standards existed as a way to give products a nice marketing banner to fly under if they so chose and way to signal quality and compatibility to customers with the confidence that others won't go parading around like they're also comparable when they really aren't? That would certainly explain the difference between what I see in my chemistry side of things and what I see in computing if the expectation of computing standards is just "you don't have to follow this, but if you do you can flaunt it"? To put it even shorter, as a chemist working with regulatory EPA methodology, my bookshelf better be full of those legal documents and my work better *perfectly* match it or I can find myself in all sorts of hot water. For most bookshelves of programming books I've seen in stores, libraries, professors offices, etc. I scarcely *ever* see governing documents at all despite countless languages being legally defined and yet everyone hums along business as usual. Thanks for entertaining this question by the way, it's kinda "out there" but this is like the only circle of folks I've found that I consistently hear good insights on this sort of stuff from, which I appreciate. I wish I could articulate what I'm getting at more succinctly but it is what it is. - Matt G. From stuff at riddermarkfarm.ca Fri Aug 18 08:52:44 2023 From: stuff at riddermarkfarm.ca (Stuff Received) Date: Thu, 17 Aug 2023 18:52:44 -0400 Subject: [COFF] Commonality of 60s-80s Print Standards In-Reply-To: References: Message-ID: <94a15687-0349-c05b-6e5d-7ad6f1d74ab7@riddermarkfarm.ca> On 2023-08-17 18:11, segaloco via COFF wrote (in part): > Am I getting into apples vs oranges here? Yes. In some areas (such as crypto, whence I came), if you did not follow the standards, then you would not interoperate. There were no testing labs for compliance (as there was for FIPS, for example). I recall compliance tests for ANSI C but that stopped with the adoption of ISO C (if my aging memory is correct). S. From coff at tuhs.org Fri Aug 18 09:49:13 2023 From: coff at tuhs.org (segaloco via COFF) Date: Thu, 17 Aug 2023 23:49:13 +0000 Subject: [COFF] Commonality of 60s-80s Print Standards In-Reply-To: <94a15687-0349-c05b-6e5d-7ad6f1d74ab7@riddermarkfarm.ca> References: <94a15687-0349-c05b-6e5d-7ad6f1d74ab7@riddermarkfarm.ca> Message-ID: > > Am I getting into apples vs oranges here? > > > Yes. In some areas (such as crypto, whence I came), if you did > not follow the standards, then you would not interoperate. There were > no testing labs for compliance (as there was for FIPS, for example). > I recall compliance tests for ANSI C but that stopped with the > adoption of ISO C (if my aging memory is correct). > > S. Okay I think it's making sense to me now. 
So the apples of programming standards would come down to: - If you want to advertise/contract for interoperability with standards-compliant components, then your component must likewise adhere or you are lying to your customers and liable. - Otherwise, if you just want to push something but don't want to pad your market performance with attracting vendors interested in said interoperability, you're free to do so and don't face legal ramifications, you're just selling what could be assessed as a sub-par product, but you're within your legal right to do so as long as you don't suggest otherwise. Whereas the oranges of EPA standards I'm trying to compare it to are: - If you want to produce legal regulatory information that can be used in EPA-related disputes, you must adhere to these legally binding regulations put out by the EPA. - You can tell someone yeah I'll test your water for lead, but if they intend to use that number in litigation, a formal environmental survey, or some other legally-binding case, then you're held to the higher standard. In this case the particulars do matter because you're not selling a random product on the market, you're specifically selling regulatory acceptability as a factor of the product. I presume the only situations then where adherence to a programming standard by ANSI or another body could actually play some legal role in their operation are either: - The vendor is under contract to ensure the product they're producing is conformant (i.e. USDoD requiring NT to present POSIX calls) - The vendor cites the standard in published material as applying to their product. But in both cases their due diligence is to prove that they're meeting the standards *for customers that expect it* not that there is any legal requirement that they do this in absence of said expectation. - Matt G. P.S. I'll disclaim for anyone that answers that I'm not seeking legal advice by the way, I don't want anyone to feel like they're under the microscope on this :P If the day comes I'm citing COFF in a court of law I'll just try and fly to Jupiter because bizarro world is probably upon us. From paul.winalski at gmail.com Sat Aug 19 00:07:29 2023 From: paul.winalski at gmail.com (Paul Winalski) Date: Fri, 18 Aug 2023 10:07:29 -0400 Subject: [COFF] Commonality of 60s-80s Print Standards In-Reply-To: References: Message-ID: On 8/17/23, segaloco wrote: > > Is the difference here that > standards like the ANSI standards here are more like "you must conform to > this to say that you conform to it" but you do not need to conform to this > to say that you are programming in a given programming language, or to sell > products on a specific platform or in a specific industry, or something like > that. Here I think you have found the crux of the matter. A big difference between the ANSI standards for programming languages and the EPA regulations is the legal requirement for conformance. Nobody who uses a programming language is under any sort of legal requirement to conform to the ANSI standard--neither those who write compilers for the language, nor those who write programs in that language. The only "enforcement" of the standard that exists is consumer fraud law. If you market a compiler claiming that it is ANSI standard-compliant, and it isn't, you could be liable for civil or possibly criminal fraud. But that's it. Very different from the EPA regulations that you cited, where failure to conform to the regulations can have very severe legal and financial consequences.
Hence the lack of widespread use of the standards documents. Proper ANSI language standard compliance is important to compiler vendors who claim that their product conforms to the standard. The compiler groups I have worked in at DEC and Intel have had someone on the front end team for each language who is an expert on the fine print of the standard and whose job it is to see to it that the product (the compiler) stays standard-conformant. This person does have a complete copy of the ANSI standard on their desk. Users of compilers claiming to be standard-conformant are under no legal obligation whatsoever to write standard-conformant programs. There is thus no reason for them to have a copy of the standard readily at hand. However, every IT shop I've worked in does have their own in-house standard for programming practices, and this includes which features of the programming language are allowed to be used and sometimes how they are to be used. In a well-run programming shop, these rules are written down in detail, every programmer has a copy of them, and they are rigidly enforced--you can't check in your code if you've violated them. Failure to follow the in-house programming regulations can also have negative career advancement implications. -Paul W. From crossd at gmail.com Sat Aug 26 01:40:19 2023 From: crossd at gmail.com (Dan Cross) Date: Fri, 25 Aug 2023 11:40:19 -0400 Subject: [COFF] Fwd: [SDF] Computer Museum In-Reply-To: <202308250110.37P1AaQB018617@sdf.org> References: <202308250110.37P1AaQB018617@sdf.org> Message-ID: I thought folks on COFF and TUHS (Bcc'ed) might find this interesting. Given the overlap between SDF and LCM+L, I wonder what this may mean for the latter. - Dan C. ---------- Forwarded message --------- From: SDF Membership Date: Thu, Aug 24, 2023 at 9:10 PM Subject: [SDF] Computer Museum To: We're in the process of opening a computer museum in the Seattle area and are holding our first public event on September 30th - October 1st. The museum features interactive exhibits of various vintage computers with a number of systems remotely accessible via telnet/ssh for those who are unable to visit in person. If this interests you, please consider replying with comments and take the ascii survey below. You can mark an X for what interests you. I would like to know more about: [ ] visiting the museum [ ] how to access the remote systems [ ] becoming a regular volunteer or docent [ ] restoring and maintaining various vintage systems [ ] curation and exhibit design [ ] supporting the museum with an annual membership [ ] supporting the museum with an annual sponsorship [ ] funding the museum endowment [ ] day to day administration and operations [ ] hosting an event or meet up at museum [ ] teaching at the museum [ ] donating an artifact Info on our first public event can be found at https://sdf.org/icf From clemc at ccc.com Sat Aug 26 02:04:05 2023 From: clemc at ccc.com (Clem Cole) Date: Fri, 25 Aug 2023 12:04:05 -0400 Subject: [COFF] Fate of CERN's PL-11 Language for the PDP-11 Message-ID: C, BLISS, BCPL, and the like were hardly the only systems programming languages that targeted the PDP-11. I knew about many system programming languages of those times and used all three of these, plus a few others, such as PL/360, which Wirth created at Stanford in the late 1960s to develop the Algol-W compiler. Recently, I was investigating something about early systems programming languages, and a couple of questions came to me that I could use some help finding answers (see below). 
In 1971, RD Russell of CERN wrote a child of Wirth's PL/360 called PL-11: Programming Language for the DEC PDP-11 Computer in Fortran IV. It supposedly ran on CERN's IBM 360 as a cross-compiler and was hosted on DOS-11 and later RSX. [It seems very 'CARD' oriented if you look at the manual - which makes sense, given the time frame]. I had once before heard about it but knew little. So, I started to dig a little. If I understand some of the history correctly, PL-11 was created/developed for a real-time test jig that CERN needed. While BLISS-11 was available in limited cases, since it required a PDP-10 to cross-compile, it was not considered (I've stated earlier that some poor marketing choices at DEC hurt BLISS's ability to spread). Anyway, a friend at CERN later in the 70s/80s told me they thought that as soon as UNIX made it onto the scene there, C was quickly preferred as the system programming language of choice since it was interactive and more accessible. However, a BCPL that had come from somewhere in the UK was also kicking around. So, some questions WRT PL-11: 1. Does anyone here know any (more) of the story -- Why/How? 2. Do you know if the FORTRAN source survives? 3. Did anything interesting/lasting get written using it? Tx Clem From coff at tuhs.org Tue Aug 29 07:29:47 2023 From: coff at tuhs.org (segaloco via COFF) Date: Mon, 28 Aug 2023 21:29:47 +0000 Subject: [COFF] Pipeable MCS6500 Family Disassembler Message-ID: Howdy folks, just wanted to share a tool I wrote up today in case it might be useful for someone else: https://gitlab.com/segaloco/dis65 This has probably been done before, but this is a bare-bones one-pass MOS 6500 disassembler that does nothing more than convert bytes to mnemonics and parameters, so no labeling, no origins, etc. My rationale is that as I work on my Dragon Quest disassembly, there are times I have to pop a couple bytes through the disassembler again because something got misaligned or some other weird issue. My disassembler through the project has been da65, which does all the labeling and origin stuff but as such, requires a lot of seeking and isn't really amenable to a pipeline, which has required me to do something like: printf "\xAD\xDE\xEF\xBE" > temp.bin && da65 temp.bin && rm temp.bin to get the assembly equivalent of 0xDEADBEEF. Enter my tool, it enables stuff like: printf "\xAD\xDE\xEF\xBE" | dis65 instead. A longer term plan is to then write a second pass that can do all the more sophisticated stuff without having to bury the mnemonic generation down in there somewhere, plus that second pass could then be architecture-agnostic to a high degree. Anywho, feel free to do what you want with it, it's BSD licensed. One "bug" I need to address is that all byte values are presented as unsigned, but in the case of indirects and a few other circumstances, it would make more sense for them to be signed. Probably won't jump on that ASAP, but know that's a coming improvement. While common in disassemblers, I have no intention of adding things like printing the binary bytes next to the opcodes. Also this doesn't support any of the undocumented opcodes, although it should be trivial to add them if needed. I went with lower-case since my assembler supports it, but you should have a fine time piping into tr(1) if you need all caps for an older assembler. - Matt G.
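P.S. For anyone curious what "one pass, bytes in, mnemonics out" boils down to in practice, here is a minimal sketch of the idea in C. To be clear, this is not the dis65 source, just an illustration: the opcode table is cut down to a handful of documented opcodes, and anything not in the table falls out as a .byte directive so nothing is silently dropped.

    /*
     * Sketch of a one-pass, pipe-friendly 6502 disassembler loop.
     * Not the dis65 source; the opcode table is truncated to a few
     * entries purely for illustration.
     */
    #include <stdio.h>

    struct op {
        const char *fmt;    /* printf format for mnemonic and operand */
        int nbytes;         /* operand bytes that follow the opcode */
    };

    int main(void)
    {
        static const struct op tab[256] = {
            [0xA9] = { "lda #$%02x\n", 1 },     /* LDA immediate */
            [0xAD] = { "lda $%02x%02x\n", 2 },  /* LDA absolute  */
            [0x8D] = { "sta $%02x%02x\n", 2 },  /* STA absolute  */
            [0xEA] = { "nop\n", 0 },
            [0x60] = { "rts\n", 0 },
        };
        int c, lo, hi;

        while ((c = getchar()) != EOF) {
            const struct op *o = &tab[c & 0xFF];

            if (o->fmt == NULL) {               /* not in the table */
                printf(".byte $%02x\n", c);
            } else if (o->nbytes == 0) {
                printf("%s", o->fmt);
            } else if (o->nbytes == 1) {
                lo = getchar();
                printf(o->fmt, lo & 0xFF);
            } else {                            /* 6502 operands are little-endian */
                lo = getchar();
                hi = getchar();
                printf(o->fmt, hi & 0xFF, lo & 0xFF);
            }
        }
        return 0;
    }

Run it as a filter the same way, e.g. printf "\xa9\x41\x60" | ./a.out prints lda #$41 followed by rts. A real table carries all 151 documented opcodes and their addressing modes; a later second pass can then layer labels and origins on top of this output without touching the byte-to-mnemonic step.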
From coff at tuhs.org Wed Aug 30 10:05:36 2023 From: coff at tuhs.org (segaloco via COFF) Date: Wed, 30 Aug 2023 00:05:36 +0000 Subject: [COFF] Japanese Computing History Books/Memoirs? Message-ID: Hello, today I received in the mail a book I ordered apparently by one of the engineers at Sega responsible for their line of consoles. It's all in Japanese but based on the little I know plus tables in the text, it appears to be fairly technical and thorough. I'm excited to start translating it and see what lies within. In any case, it got me thinking about what company this book might have as far as Japanese literature concerning computing history there, or even just significant literature in general regarding Japanese computer history. While we are more familiar with IBM, DEC, workstations, minis, etc. the Japanese market had their own spate of different systems such as NEC's various "PCs" (not PC-compats, PC-68, PC-88, PC-98), Sharp X68000, MSX(2), etc. and then of course Nintendo, Sega, NEC, Hudson, and the arcade board manufacturers. My general experience is that Japanese companies are significantly more tight-lipped about everything than those in the U.S. and other English-speaking countries, going so far as to require employees to use pseudonyms in any sort of credits to prevent potential poaching. As such, first-party documentation for much of this stuff is incredibly difficult to come by, and secondary materials and memoirs and such, in my experience at least, are virtually non-existent. However, that is also from my perspective here across the seas trying to research an obscure, technical subject in my non-native tongue. Anyone here have a particular eye for Japanese computing? If so, I'd certainly be interested in some discussion, doesn't need to be on list either. - Matt G.