Wednesday, December 30, 2020

Are Mailing Lists Toast?

Definitely Toast
 

From the very beginning, when IIM (Cisco's email authentication draft) was merged with DK (DomainKeys from Yahoo!) to become DKIM, we both envisioned a sender signing policy module which allowed a domain to say "we sign all of our mail, so if you get mail purportedly from us that is unsigned or has a broken signature, that's not cool". Since we were all experienced with internet standards it was plain as day that there was a serious deployment problem, since mailing lists mangle messages and thus break signatures. Mailing lists would thus throw a wrench into the policy gears. This was 16 years ago.

Our effort at Cisco was driven primarily by phishing, and spear phishing in particular. We had heard tell of some exec at another company falling for a spear phishing attack, and we didn't figure our execs were any more clueful so that was pretty frightening. Since Cisco had exactly no presence in email at all, it also gave some plausible deniability as to what we were up to. We weren't looking to get into the email biz, but we weren't not looking either. 

We formed a group which included Jim Fenton and me, and created a project to sign all of the mail at Cisco with a goal of being able to annotate suspicious email purporting to be coming from Cisco employees. This required finding and signing all legitimate Cisco email. So off we went trying to find all of the sources of unsigned email in the company so that we could route it through the DKIM signing infrastructure. We didn't have the nifty reporting feature of DMARC so it wasn't the easiest thing to figure out. It was made much worse because Cisco had tons of acquisitions, so there was a lot of legacy infrastructure left over from them, and who knew whether they were still using their own mail servers or not. This was very slow going to say the least.

Most of the DKIM working group was pretty cavalier and dismissive about the mailing list problem. A DKIM signature from the mailing list would somehow solve the problem. We just needed to somehow trust that mailing list. Coming back after a dozen years and a lot of skiing under my belt, it seems that the previously unsolved problem remains unsolved: nobody knows how to "trust" a mailing list.

When I was still at Cisco I used a bunch of heuristics trying to answer the question of whether the originating domain signature could actually be reconstructed and verified. It was met with a lot of derision and hysteria from the usual attack poodles, but I didn't care and kept trying to improve my recovery rate, ultimately achieving north of 90% recovery. It was interesting and tantalizing that the false positive rate was close enough to acceptable that it was worth considering marking suspicious mail up with warnings. The next step would have been to differentiate what that broken signature traffic was, where it was coming from, what it was doing that was not reversible, and ultimately whether we cared about that case enough. There was no silver bullet that we could find, and we definitely didn't know how to "trust" mailing lists.

So as we slogged on with hunting down our infrastructure, and me still hacking on the heuristics, Cisco decided that it was interested in the email security angle that we had pioneered. We did due diligence on a bunch of companies and settled on Ironport, just down the peninsula from me. Cisco bought them, and the group I was with decided -- without my input -- that it was switching to some wacky telephony thing that I had no interest in. When Ironport wouldn't transfer me, I wandered around a while and then decided it was time to quit and ski. The one thing I regret is that we didn't get a chance to finish off the research side of the problem, especially with mailing lists, to separate out the size of the remaining problem, which seemingly nobody knows to this day.

Fast forward 12 years and I found these curious creatures called ARC-Signature and ARC-Seal in my email headers. The signature looked pretty much identical to a DKIM signature, which I found very odd, and it also had an ARC version of the Authentication-Results header. So what's going on? I found that ARC is an experimental RFC which was seemingly trying to solve the mailing list problem. Again. But using even more machinery in the form of the ARC-Seal. What is the Seal's purpose? I determined it is so that it can bind the ARC auth-res to the ARC signature. Why are they doing that? Because it's supposed to be of some value to the receiving domain to know what the original assessment of the originating DKIM signature was. But DKIM can already do that if a mailing list resigns the email. It was all a mystery.
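To make that concrete, the machinery in a message that has passed through one intermediary looks roughly like this (domains and values invented for illustration; the header names come from the experimental RFC, which spells the signature out as ARC-Message-Signature):

    ARC-Seal: i=1; a=rsa-sha256; cv=none; d=lists.example.org; s=arc; b=...
    ARC-Message-Signature: i=1; a=rsa-sha256; d=lists.example.org; s=arc;
        h=from:to:subject:date; bh=...; b=...
    ARC-Authentication-Results: i=1; lists.example.org;
        dkim=pass header.d=example.com; spf=pass; dmarc=pass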

So wondering whether there was some secret sauce that I had completely overlooked all those years ago, I posted on the DMARC working group list for the first time asking what that secret sauce might be. "It requires the receiver to trust the mailing list," I was told. I said that you can do that right now if the mailing list resigns the mail with DKIM, which I assume most do at this point (and screw them if they don't). Why does this require a different signature than a DKIM signature? They wanted to bind the ARC auth-res to the signature, I was told. Why can't you just add a new tag to DKIM, assuming that is a problem at all, which I am not very convinced it is? Never got a good answer for that. And most importantly, why does the receiver care about the original auth-res in the first place? Never got a good answer for that either.

So at its heart this all boils down to trusting a mailing list. It's not clear to me whether some receivers are using the list's DKIM signature to bind it to some whitelist or reputation service. Somebody as big as Google could easily roll their own and just not tell anybody about it, since it's completely proprietary just like the rest of their spam filtering. So on the outside we really don't know whether mailing list reputation is a Thing or not, but the assumption that the working group seems to be operating on is that it is not a Thing and remains a previously unsolved problem. I am willing to believe that assumption since it seemed like a hard problem all those years ago too. That, and Google itself is participating with ARC, which suggests they aren't any better off than anybody else. But who knows? That's part of the problem with being opaque: nobody on the outside can scrutinize whether there is some magical thinking going on, or whether there actually is some there there.

So here we are, over a decade and a half after DKIM's inception, right back where we started. As far as I can tell, ARC doesn't bring much of anything new to the table, and the original auth-res doesn't address the fundamental problem of trusting the mailing list or not. Whether the original signature verified at the list or not seems completely beside the point: if I trust the mailing list, why do I even care whether it verified or not? If they are forwarding messages without valid originating signatures, that is a very good reason to not trust them for starters. Any reputation system needs to take into account a lot of factors, and requiring signature verification at the mailing list seems like table stakes.

Mailing lists. Again. We are completely wrapped around the axle: we can provide pretty good forgery protection against malicious domains, but domains can't seem to pull the trigger on asking receiving domains to toss mail without valid signatures because it would cause mailing list traffic to be discarded as well. There have been other drafts beyond my experiment on signature recovery that have not been well received, and have languished as probably they should. The worst part of this problem is that there is no way to determine what success looks like. How many false positives are acceptable? How can we assess quality and quantity? Cisco, for example, would grind to a halt if its upper level engineers working on IETF standards ceased to work because it implemented a p=reject policy (I just checked: it's p=quarantine with pct=0, which is basically p=none with a little bit of attitude).
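For reference, that policy is just a DNS TXT record; a quarantine-at-zero-percent policy looks something like this (domain and report address invented for illustration):

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; pct=0; rua=mailto:dmarc-reports@example.com"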

Been toast a little too long

So pretty much we're in the same trenches as we were at the beginning, with no forward progress and no forward progress on the horizon. It caused me to question the unthinkable: are mailing lists actually worth saving? They are ancient technology that never had security constraints from the beginning, and they are operating in a world where security is a Must requirement. Likewise, it's not like there aren't alternatives to mailing lists. Web based forums have been around for decades, so it doesn't even require new infrastructure. Lots of things that once were a Thing are now gone. Take Usenet for example. Usenet was revolutionary in many ways and provided the first thing we would recognize as social media on the nascent internet. It's been sidelined by the new social media companies, and one of its biggest problems was that it couldn't adapt to spam for whatever reason. Probably not technical and more likely neglect, but it is basically dead now. The world moved on, and now it's just a relic of the past. It's not even that the new tech has to be particularly good: Reddit is the most similar to Usenet of the social media platforms, and it is a terrible and buggy reimplementation of Usenet, yet it is popular and Usenet is done.

So are mailing lists a relic of the past too? Going by the total volume of email, it sounds like they're down in the noise from what I've heard -- like 1% -- but it would be good if some of the big providers stepped up with some concrete numbers. For many companies, perhaps most, losing access to external mailing lists wouldn't even be noticed. Those of us in the internet community are profoundly attached to mailing lists because that's how business is done. But let's be serious here: we are complete outliers. Changing over to some off the shelf forum software, while not trivial, is certainly well within the capability of the internet community if it needed to. The same goes for other lists. It's quite possible that a hybrid system could even be run in transition where mail is gatewayed to the forums or some such.

In conclusion, the bad security properties of mailing lists and the like are causing better security to not be deployed. So yes, we should just ignore mailing lists and let the people who run them adapt however they see fit. After 15+ years since the advent of DKIM-like technologies and the ability to determine who is sending mail and discover what that domain desires on receipt of mail that doesn't verify, we need to just move on and accept that the desire for verifiable mail from domains is more important than a legacy technology with ample alternatives.

This is not to say that mailing lists need to be burned to the ground or anything drastic. I've noticed that some lists have taken to re-writing From headers, seemingly for domains with a DMARC policy other than "none". This is pretty awful from a security standpoint because it further trains people to rely on the pretty name rather than the email address, but to be fair that is just building on an existing problem, and MUA's are the real culprit of a lot of this since they do little to help people know who they are actually talking to. That said, none of this would be necessary if we used a more modern technology.

Sorry mailing lists. But it's come to this.

Sunday, December 27, 2020

How to build a laser printer from nothing at all

Foreword

This is completely from my perspective. Of course there was a lot more going on than just what I did and I don't want to diminish anybody else's experience. This was an accomplishment of a lifetime for a whole lot of people I suspect, and it's not my intention to take anything from that. Also: my timeline memory is horrible. There are bound to be mistakes.

The 1794 Laser Printer Controller

Gordian is born

In 1985 Gregg got a contract with a company named Talaris down in San Diego to build a laser printer controller. Gregg, Andy, another engineer named Paul, and I then formed a new company called Gordian. We had a Microvax-1 and that was about it. Gregg and Andy were hardware engineers, and Paul was mainly absent, so it was just me holding down the software front. The design was pretty ambitious: a dual processor design with a main processor to interpret printer languages (HP PCL, PostScript...) into graphics primitives and a graphics processor to render the graphics into the bitmap, plus enough video ram to feed it to the printer. It was full of IO devices including a parallel port, a serial port and, most importantly, a SCSI bus which allowed us to attach things like disks and, at first, a SCSI ethernet adapter. That misadventure eventually led to adding a LANCE chip for native ethernet. There were also two ports on the printer to attach storage in the form of a card full of eproms which Talaris used to ship packs of fonts.

So there was a lot going on here, and it was just me at first. Well, not exactly just me, because the agreement we had was that we'd provide the infrastructure and API for the hardware and graphics, and they'd provide the printer language interpreters. There were no deep discussions between us hammering out API's or anything like that. I mostly decided what I was going to do, and they wrote to that. I'd end up in San Diego to talk with some of their engineers about what was going on -- usually Rick and Barry, but I knew quite a lot of them, and of course Cal the CEO was the tester and torturer in chief. It was fairly hard for them to argue on the main processor side though, because I decided quite organically that the easiest thing to do was to just emulate libc and Unix IO, so that was something of a no-brainer. The graphics kernel was a different story: there was no standard graphics API at the time, though X Windows was starting to come into the picture. I didn't know enough about it though, and X11 was a couple of years away. Looking back, it seems hard to imagine a medium sized company putting so much trust into a barely formed startup and its one software guy to invent an API that they could write to with not much back and forth, but I guess that's just how it was back then: we were all just making things up as we went along, none of us quite grasping that what we were doing was actually hard and would cause bigger companies to go into insane Mythical Man Month mode.

So Gregg and Andy went about designing the hardware while I, as the title states, started from nothing at all. We had compilers for the two processors (but just barely for the GSP since TI was still working on it), but other than that not a line of code that could be imported, and no internet to beg, borrow, or steal from. Another thing to realize is that the debugger, Punix, graphics kernel, and storage were the original deliverables and they were -- get this because it never happens -- on a tight deadline. I forget how long we were given, but a year sounds about right including designing and debugging the hardware as well as Andy designing a gate array. Gate arrays weren't what they are these days where you can rent one out by the hour on AWS. It kept Andy up many many nights. Both Andy and I were working 100 hour weeks. I don't know how we managed any amount of life at all.

The Debugger, monXX

While the hardware was being designed, I decided that I had a lot of work ahead of me and that I could use some of the lag time before board bring up to build a debugger, since the tool chain obviously didn't have one. The obvious choice was to make use of the serial line. I went on to invent a protocol for debugging remotely, for things like downloading code, examining memory, setting breakpoints, calling functions, performance analysis and the like. Most importantly, I figured out how to digest the symbol table so that you could print not only variables but entire structures using normal C syntax, like foo->bar.baz.
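I no longer have the source, so the framing below is a guess at the general shape rather than the actual mon32 wire format: a small request/reply packet over the serial line with an opcode for each of the operations above.

    /* Illustrative only -- not the real mon32 protocol. */
    #include <stdint.h>

    enum mon_op {
        MON_READ_MEM,    /* examine memory                        */
        MON_WRITE_MEM,   /* download code / poke data             */
        MON_SET_BREAK,   /* set a breakpoint                      */
        MON_CALL         /* call a function on the target         */
    };

    struct mon_request {
        uint8_t  op;        /* one of mon_op                      */
        uint8_t  seq;       /* sequence number to match replies   */
        uint32_t addr;      /* target address                     */
        uint16_t len;       /* payload length in bytes            */
        uint8_t  data[256]; /* code being downloaded, values, ... */
    };

    struct mon_reply {
        uint8_t  seq;       /* echoes the request                 */
        uint8_t  status;    /* 0 = ok                             */
        uint16_t len;
        uint8_t  data[256]; /* memory contents, return values, ...*/
    };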

It turned out to be immensely helpful not just for me, but for Talaris too, since they were in the same boat as me, trying to figure out how to bootstrap themselves to write embedded code, which their engineers had not done before to my knowledge. There were actually two distinct debuggers, one for the 32000 (mon32) and one for the GSP (monGSP). The GSP debugger had to be gatewayed through the 32000 to get to the GSP. This all happened behind the scenes on the serial line, which was still an operational serial port used to send the printer pages to be printed.

The debugger was ported to many other processor architectures over time including the Mot 68k and the MIPS architecture. The most significant improvement, however, was using the network. At first I just used my own raw ethernet frames because that was the quickest and easiest way to do something on Vaxes running VMS. After a while, however, we got MIPS workstations which ran Unix and inexplicably they didn't support raw ethernet access which pissed me off. So I was forced to port it all over to IP as well. The first time I remotely debugged something over the internet -- I think it was in New Zealand -- was nothing short of amazing. Today you'd call that a backdoor, but there was no such thing as security back then.

Punix, the little OS that could

The National 32000

Every programmer wants to write their own multitasking operating system, right? Well, at least back then it was on the todo list of many a programmer looking for a feather in their cap. What does Punix mean? It stands for Puny-Unix. Everything a normal Unix has that doesn't require an MMU (memory management unit). MMU's were rare in embedded systems generally, and not terribly helpful when there is no guarantee of a disk. It would have been handy for the debugger for watchpoints, but we managed.

As I said above, I chose a Unix environment because it already had documentation, and even better allowed Talaris to start writing their code: if it was in libc, it was in Punix. That meant that I had to write libc from scratch. All of the string functions, malloc/free/alloca, stdio, ctype and the like. It wasn't a full implementation of libc since for one we didn't have a FPU (floating point unit), but it had quite a bit of it. If I or Talaris needed something, I'd divert whatever I was doing and write it. Suffice it to say that it was enough for our purposes, though I suspect that if Talaris found something they needed they'd often just roll their own instead of bugging me. Remember: no internet, no git pull request.

At the time I started writing Punix, I was not actually familiar with the Unix operating system itself. All of my experience was on VAX/VMS. So the kernel itself looked a lot more like VMS than Unix, but for the most part it looked like something entirely different and entirely me. The main thing I took from VMS was their class driver/port driver architecture. Port drivers drove the physical hardware for DMA, interrupts, etc. and provided a standardized interface to the class drivers. A class driver, on the other hand, used port drivers to do the low level work and handled the hardware independent work. Think file systems as an example. The little font cartridges used the same file system that I laid down on the SCSI disks, for example. The main class drivers are called out later.
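Roughly, and with names invented since the real Punix structures are long gone, the split looked something like this:

    /* Sketch of the class driver / port driver split, not the actual code. */
    struct port_driver {                     /* knows the hardware          */
        const char *name;
        int  (*start_io)(void *unit, void *buf, unsigned len, int write);
        void (*intr)(void *unit);            /* interrupt service routine   */
    };

    struct class_driver {                    /* hardware independent layer  */
        const char *name;
        struct port_driver *port;            /* e.g. SCSI or eprom cartridge */
        int (*open)(const char *path);
        int (*read)(int fd, void *buf, unsigned len);
        int (*write)(int fd, const void *buf, unsigned len);
    };

The point of the split is that the same file system class driver could sit on top of the SCSI port driver or the font cartridge port driver without caring which.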

VMS had another thing that I latched on to, which was that it used process based helpers for networking. It was the perfect excuse to make the OS multi-tasking, or what today we would call multi-threaded since it was all running in the same address space. Threads, the term, had not been invented yet. It was a fully preemptive OS in that there was a current process, a run queue, and a wait queue. Since there was no thread interface to emulate, I just emulated fork(2). This may sound sensible, but remember there was no MMU and thus the need to clone enough of the stack in the child process to allow the child process to return. The first time implementing this definitely caused many an explosion until I got it right. The remaining difficulty was context switching. To get into the wait queue, i.e. block, there was a function called sched() that backed up the registers into the process header and selected the appropriate process from the run queue to make it the current process. When an interrupt came in from a device, the port driver would know who was waiting for that device and move it over to the run queue to possibly be context switched. The last thing was the timer interrupt to switch out processes at equal priority so they could get cpu time too.
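A compressed sketch of the idea -- not the real code, and with the machine-dependent register save/restore stubbed out, since on the 32000 that part was assembly:

    /* Sketch only: queues are singly linked lists, names invented. */
    struct proc {
        void        *saved_sp;   /* registers live in the "process header" */
        int          priority;
        struct proc *next;       /* run queue / wait queue linkage          */
    };

    static struct proc *current;    /* currently running process            */
    static struct proc *run_queue;  /* processes ready to run               */

    /* Stand-in for the assembly that saved and restored the registers. */
    static void swap_context(struct proc *from, struct proc *to)
    {
        (void)from; (void)to;
    }

    static struct proc *pick_next(void)
    {
        struct proc *p = run_queue;     /* real code: highest priority entry */
        if (p) run_queue = p->next;
        return p;
    }

    /* Block the current process (it has already been parked on a driver's
     * wait queue), pick the next runnable one and switch to it.  Also
     * called from the timer interrupt for round-robin at equal priority. */
    static void sched(void)
    {
        struct proc *prev = current;
        current = pick_next();
        swap_context(prev, current);
    }

    /* Called from a port driver's interrupt handler when the device a
     * process was waiting on completes: back onto the run queue it goes. */
    static void wakeup(struct proc *p)
    {
        p->next = run_queue;
        run_queue = p;
    }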

The last notable part is that it had resource locking (ResLock), which implemented a mutex around a critical section of code. I get the impression that a lot of kernels use very coarse mutexes to lock out other processes while in kernel mode, but Punix was very fine grained in that each driver would have its own read and write mutex. That, and we didn't have an MMU, so there was no kernel mode in the first place. This would go on to cause quite a bit of hilarity with race conditions, but fortunately I have a knack for visualizing race conditions so they were usually dispatched relatively easily. Would that were the case with memory corruption and the Heisenbugs it caused. Those were the source of much, much anguish, and are a reason I would rather deal with garbage collected systems and strings that don't overflow these days.
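In the same sketchy spirit, a per-driver ResLock boils down to something like this; the proc type here is pared down to just the queue linkage, and the real code masked interrupts around the test to make it atomic:

    /* Sketch of a per-driver resource lock, not the original ResLock code. */
    struct proc { struct proc *next; };

    extern struct proc *current;          /* from the scheduler sketch above */
    extern void sched(void);              /* block: run somebody else        */
    extern void wakeup(struct proc *p);   /* move p back onto the run queue  */

    struct reslock {
        struct proc *owner;               /* process holding the resource    */
        struct proc *waiters;             /* processes blocked on it         */
    };

    void res_lock(struct reslock *l)
    {
        /* real code masked interrupts here for the test-and-set */
        while (l->owner != 0) {
            current->next = l->waiters;   /* park ourselves on this lock     */
            l->waiters = current;
            sched();                      /* switch away until released      */
        }
        l->owner = current;
    }

    void res_unlock(struct reslock *l)
    {
        l->owner = 0;
        if (l->waiters) {
            struct proc *p = l->waiters;
            l->waiters = p->next;
            wakeup(p);
        }
    }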

The Graphics Kernel

The TI GSP

The GSP (Texas Instruments Graphics System Processor) was an odd beast. First off it was bit addressable rather than the more customary byte addressable. It also had a bunch of graphics primitives in its microcode which were quite convenient, like being able to move a character in a font onto the bitmap with one instruction, clipped and masked if needed. This was tremendously helpful for me as I had absolutely no background in graphics of any kind. It was also impressively fast for the times: a 50 MHz clock, which caused Gregg to speculate that surely this madness cannot keep going on because there are laws of physics to be obeyed. Since TI expected it to be used mainly as a co-processor, it had a host interface where another processor could reach inside its memory space and read and write it, as well as interrupt the GSP to tell it that something happened, which is what the 32000 did. For bitmap graphics you need to draw the pixels into something, so it had a bank of memory the size of the bitmap rounded up to the next power of two. Since the GSP was bit addressed, you could treat the bitmap as a literal rectangle and the processor would figure out how that translates to actual ram addresses.
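To give a feel for the bit addressing, here's an illustrative conversion from an (x, y) pixel to a bit address. The numbers are for a generic 300 dpi letter-size page, not anything Talaris specific, and the leftover strip to the right of the page comes up again a couple of paragraphs down:

    #include <stdint.h>

    /* 8.5 inches at 300 dpi is 2550 pixels; the pitch (bits per scan
     * line) gets rounded up to the next power of two. */
    #define PAGE_WIDTH_BITS   2550
    #define PITCH_BITS        4096

    /* Bit address of pixel (x, y) in a 1 bit-per-pixel page bitmap. */
    static uint32_t pixel_bit_addr(uint32_t base, int x, int y)
    {
        return base + (uint32_t)y * PITCH_BITS + (uint32_t)x;
    }

    /* The strip from PAGE_WIDTH_BITS..PITCH_BITS-1 on every scan line
     * never goes to the printer -- that's the space that ended up
     * holding the font cache. */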

Did I mention that I had no clue about graphics? Paul -- the one who was mostly absent -- had a bunch of literature on raster graphics and we would sometimes talk when he showed up, but I was mostly on my own. I had to figure out and write Bresenham's algorithm (drawing lines) and the various algorithms for curves like ellipses and circles. Getting to understand non-zero winding numbers for filling objects with patterns was another new thing, and of course just understanding the tools that the GSP brought to bear. But the base level graphics were only part of the problem.
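For the curious, the line algorithm in question is the standard textbook version below. The GSP code was obviously its own thing, but the integer-only error term is the whole trick:

    #include <stdlib.h>

    /* Plot one pixel; on the real hardware this was a write into the
     * page bitmap (or a single GSP instruction). */
    extern void set_pixel(int x, int y);

    /* Bresenham's line algorithm, integer arithmetic only. */
    void draw_line(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                /* error term */

        for (;;) {
            set_pixel(x0, y0);
            if (x0 == x1 && y0 == y1)
                break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }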

The first order of business was to get commands and data in and out of the GSP. Remote Procedure Calls a la Sun's NFS were all the rage at the time, so I'm like, what the heck. So I designed a queuing mechanism for the RPC calls that were being generated by the Talaris designed language interpreters. They'd keep piling data into the queue and the GSP code would busily empty the queue by executing the RPC's on it. If the queue got too full, the 32000 would back off and wait for the GSP to tell it that it was ok to start sending again. This led to an ominous debugging message, "Queue Stalled", but all was working as intended -- it just meant that they were sending me complicated stuff that takes a while to render. The front end of the RPC's was the actual API I made for Talaris to work against. There were a lot of primitives like line, polygon, text string, circle, change fonts, etc., that would be immediately obvious to somebody who's worked on an HTML Canvas element.
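In spirit it was a producer/consumer ring in the GSP's memory with a high water mark; something along these lines, with the names and sizes invented:

    #include <stdint.h>

    #define QUEUE_SLOTS   256
    #define HIGH_WATER    (QUEUE_SLOTS - 16)   /* back off before it's full */

    struct rpc_cmd {
        uint16_t opcode;        /* line, polygon, text string, ...          */
        uint16_t nargs;
        int32_t  args[16];      /* coordinates, font ids, etc.              */
    };

    struct rpc_queue {          /* lives in GSP memory; the 32000 writes it
                                   through the host interface               */
        volatile uint32_t head; /* next slot the 32000 will fill            */
        volatile uint32_t tail; /* next slot the GSP will execute           */
        struct rpc_cmd    cmd[QUEUE_SLOTS];
    };

    static unsigned queue_depth(const struct rpc_queue *q)
    {
        return (q->head - q->tail) % QUEUE_SLOTS;
    }

    /* Host side: returns 0 and does nothing if the queue is stalled;
     * the caller waits for the GSP's "ok to send" interrupt and retries. */
    int rpc_enqueue(struct rpc_queue *q, const struct rpc_cmd *c)
    {
        if (queue_depth(q) >= HIGH_WATER)
            return 0;                          /* "Queue Stalled"           */
        q->cmd[q->head % QUEUE_SLOTS] = *c;
        q->head++;
        return 1;
    }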

Another aspect of the design involved fonts. Fonts can take up a lot of memory and you need to have them around in order to draw with them. For this, I designed a faulting mechanism where the GSP determined whether it had the font loaded and, if not, interrupted the 32000 to fetch the font from whatever storage it was on, either the eprom disks or a hard drive. This worked really well, as the fonts on the GSP were really just a cache and it could decide if it wanted to toss a font to make room for another. Keeping track of that on the 32000 side would have been a nightmare regardless of who wrote it (which most likely would have been me), so it saved me a lot of work.
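The fault path on the GSP side amounted to something like this sketch; all of the helper names here are invented:

    /* Font fault sketch, GSP side.  Not the original code. */
    struct font;                                     /* glyph data in the cache */

    extern struct font *font_cache_lookup(int font_id);            /* NULL = miss */
    extern struct font *font_cache_evict_and_reserve(int font_id); /* make room   */
    extern void interrupt_host_load_font(int font_id, struct font *slot);
    extern void wait_for_host(void);

    struct font *font_get(int font_id)
    {
        struct font *f = font_cache_lookup(font_id);
        if (f != 0)
            return f;                    /* hit: already in the cache          */

        /* Miss: make room (maybe tossing an old font), then fault to the
         * 32000, which reads the font from the eprom cartridge or the SCSI
         * disk and writes it into GSP memory through the host interface.      */
        f = font_cache_evict_and_reserve(font_id);
        interrupt_host_load_font(font_id, f);
        wait_for_host();
        return f;
    }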

One of the cute side effects of the GSP is that since it treated memory for the bitmap as a rectangle, there was memory available off to the side of the page, depending on the width of the paper, since the actual dimensions were rounded to the next power of 2. This turned out to be an ideal place to stash the fonts and came to be known as "Condo Memory", a term I believe Barry at Talaris coined. It could hold a lot of fonts at once and was perfectly suited for the GSP architecture to blit the characters into the bitmap at very high speed.

Probably the most clever thing about the design was something I called the band manager. With a laser printer, you send each line bit by bit to the printer serially. There is a kind of ram called video ram that, unsurprisingly, does that for video displays, so the deal was that you build the bitmap in regular memory and transfer a band at a time through the video memory, which then serializes it and sends it to the laser's beam. Normally for high speeds you'd want two bitmaps so that while you're outputting the current page you can be working on the next page. But memory was expensive in those days so that wasn't an option.

I had gone skiing alone for my sanity one day, and after I came back I decided to go to a bar (a gay bar even! I was still single and mingling) and unwind. I started thinking about the double buffering problem (or lack thereof) and came upon the idea that I could actually write behind the beam of the laser printer to speed things up a lot. I literally used a cocktail napkin to sketch out the design so that I didn't forget the next day. It went like this: when a graphic drawing command from the queue was executed, the first thing it did was figure out its Y extent, beginning and ending, and queue it on the topmost band for drawing. If it ended up in a band that was closed because the laser hadn't got there yet, it just waited until the beam was past. The other key thing is that it didn't wait until it could draw the whole object, but instead drew what it could when a band became open as it was transferred to the video ram for output. It did this by setting up a clipping window that matched the part of the bitmap that was open to being drawn in. If it didn't complete, it was queued up on the next band, still closed, to be executed again. This may sound wasteful but it wasn't: the first thing you need to do is clear the bitmap from the previous page, and if you waited until you could do that for the whole page it would defeat the entire point of writing behind the beam, as it were. This sped the printer up enormously and is why it could drive a 100 page per minute printer at full speed.
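Reconstructing the napkin from memory, the core of it looked roughly like the sketch below; the structure and names are illustrative, not the GSP code:

    /* Band manager sketch: each band of the page bitmap is closed until
     * the previous page's contents have gone out to the video ram. */
    #define NBANDS       16
    #define BAND_HEIGHT  256                      /* scan lines per band     */

    struct draw_cmd {
        int               y_top, y_bottom;        /* Y extent of the object  */
        struct draw_cmd  *next;
        int             (*draw)(struct draw_cmd *, int clip_top, int clip_bot);
                                                  /* returns 1 when finished */
    };

    static struct draw_cmd *band_work[NBANDS];    /* pending work per band   */

    extern void clear_band(int band);             /* wipe the previous page  */

    /* Queue a command from the RPC queue on the topmost band it touches. */
    void band_queue(struct draw_cmd *c)
    {
        int b = c->y_top / BAND_HEIGHT;
        c->next = band_work[b];
        band_work[b] = c;
    }

    /* Called once band b of the previous page has been transferred to the
     * video ram: the band is now open, so clear it and draw whatever falls
     * inside its clip window, re-queueing anything not yet finished for
     * the next band down. */
    void band_open(int b)
    {
        int clip_top = b * BAND_HEIGHT;
        int clip_bot = clip_top + BAND_HEIGHT - 1;
        struct draw_cmd *c = band_work[b];
        band_work[b] = 0;

        clear_band(b);
        while (c) {
            struct draw_cmd *next = c->next;
            if (!c->draw(c, clip_top, clip_bot) && b + 1 < NBANDS) {
                c->next = band_work[b + 1];
                band_work[b + 1] = c;
            }
            c = next;
        }
    }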

So yeah, the GSP was a nifty little package and full of cool tools, but there is a lot more than just some graphics primitives to designing a graphics kernel on a co-processor. All of those I needed to work out on my own, and while I had help later for a bunch of other tasks, the GSP code was a no-go zone as far as everybody else was concerned. That was my baby and my problem alone.

Storage

The storage subsystem was relatively simple. Since it was mainly for fonts and things like that, speed wasn't really much of an issue. The file system I created was a straightforward implementation of a hash table for the directory -- I didn't care about sorting because nobody was doing an ls -- and I don't think it allowed for sub directories. The main challenge was getting the SCSI driver to work. SCSI has what is called an initiator and a target. The initiator is the OS code, and the target is the device. I had this notion that the initiator should be in control of the flow, and that if the device didn't flow as expected something was wrong. It took me a long time to understand that the target was the one that controls the flow and that the initiator had to follow it. It wasn't until the printer was well into production that I figured out my error, and things became a lot better. I did put some effort into speeding it up, but it wasn't much of a priority. The only thing I didn't do is boot up from the disks. Maybe I'm misremembering with the flash based disk I designed, but a firmware upgrade involved putting a fresh set of eproms on the motherboard.
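The directory idea was about as simple as it sounds; a sketch of the lookup, not the actual on-disk layout:

    #include <string.h>

    #define DIR_SLOTS  128

    struct dirent {
        char     name[28];                       /* empty string = free slot */
        unsigned first_block;                    /* where the file starts    */
    };

    static struct dirent directory[DIR_SLOTS];

    static unsigned name_hash(const char *name)
    {
        unsigned h = 0;
        while (*name)
            h = h * 31 + (unsigned char)*name++;
        return h % DIR_SLOTS;
    }

    /* Hash to a slot, then probe linearly.  No sorting, no subdirectories:
     * nobody was running ls on a printer. */
    struct dirent *dir_lookup(const char *name)
    {
        unsigned start = name_hash(name);
        for (unsigned i = 0; i < DIR_SLOTS; i++) {
            struct dirent *d = &directory[(start + i) % DIR_SLOTS];
            if (d->name[0] == '\0')
                return 0;                        /* hit a free slot: absent  */
            if (strcmp(d->name, name) == 0)
                return d;
        }
        return 0;
    }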

Networking

My old foe: the Kinetics Box
 

The really cool thing about the Talaris laser printer was the networking aspect. We designed it with networking in mind from the very beginning, and we were the first laser printer that could be accessed over ethernet. For all I know, we were the first example of the Internet of Things (IoT). The networking part started as a next phase as I recall, but it was very soon after the initial shipment that we started working on it. We didn't have an onboard ethernet solution at the time, but we found this box from a company called Kinetics that was a SCSI-to-ethernet adapter about the size of a shoe box, with about the same amount of charm. It got the job done, but given my wrong interpretation of how SCSI is supposed to flow, it gave me fits. I'm sure it was buggy in its own right, but I was not helping the situation. We eventually designed a version of the controller with the AMD LANCE chip onboard, and much rejoicing ensued.

One of the interesting concepts I came up with was the idea of borrowing buffers from the OS. We were very focused on the speed of the networking because although our initial targeted printer was 15 pages a minute, we were planning to target a 100 page a minute printer that needed to be fed at a rate that could keep up with the print engine. So the networking needed to be very fast and efficient so the interpreters had enough CPU ticks to get their job done too. Remember that the 32000 was basically a 1 MIPS box in a world of 500,000 MIPS processors now. Drivers would have a set of buffers big enough to receive an ethernet packet (1536 bytes), and you'd need to keep enough of them around to buffer any burst. The naive Unix way to transfer the payload is using read(2), but that incurs a copy of the buffer to the user's buffer, and of course the allocation of the user's buffer altogether. For the network drivers, which were just ordinary processes, that seemed needlessly wasteful since they were just an intermediary. So in came the readsysbf and freesysbf ioctl's to the rescue. The network driver would borrow the buffer to do its protocol things and give it back when it was properly queued up for delivery to the process doing the printer interpreter job.
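The exact ioctl numbers and calling convention are long gone, so this is only the shape of the usage pattern as seen from a protocol process:

    /* Sketch of the buffer-borrowing pattern; request codes invented. */
    #define READSYSBF  1            /* borrow the driver's receive buffer */
    #define FREESYSBF  2            /* hand it back when done             */

    struct sysbf {
        unsigned char *data;        /* the driver's own ethernet buffer   */
        unsigned       len;         /* bytes received                     */
    };

    extern int ioctl(int fd, int request, void *arg);   /* Punix entry point */

    /* A protocol process pulling frames off the ethernet driver without
     * the copy (and the extra buffer) that read(2) would have cost. */
    void net_loop(int enet_fd)
    {
        struct sysbf bf;

        for (;;) {
            if (ioctl(enet_fd, READSYSBF, &bf) < 0)
                continue;           /* nothing pending                    */
            /* ... protocol work on bf.data / bf.len, queueing the payload
             * for the interpreter process ... */
            ioctl(enet_fd, FREESYSBF, &bf);
        }
    }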

One interesting thing is that apparently neither I nor Talaris was aware of BSD sockets, so making connections and listening on a socket were done in completely different ways. Had I known about connect, listen, and bind I would have emulated them, but I didn't.

TLAP -- The Talaris Printer Protocol

TLAP was the world's first ethernet protocol to drive a printer. Let's say I was sort of naive at first. Years before, I had written networking for a point of sale terminal and its controller, and knew that it required retransmit logic because the serial drivers (rs429, rest in hell) were flaky. For some reason I had forgotten that, and we went some time before realizing that, golly, ethernet can drop packets too! Oops. I don't really remember a lot about the protocol other than it being pretty simple minded. Since I don't have the source any more, I don't have much to jog my memory. It did require protocol agreement between me and the engineers at Talaris, since they were writing the host part of the protocol which attached into the print queuing systems for VMS and Unix. If I recall correctly, this was one of those cases where I mostly told them what I did and let them have at it after. Since there was no internet or email at the time, meetings required hour long trips to San Diego, so we tried to keep those to a minimum, usually once a month or so. Again, in a world today where everybody gets their $.02 worth of kibitzing in, it's pretty surprising how much I was empowered to figure things out. The protocol ended up working just fine and drove that 100ppm laser printer like a champ.

TCP/IP

By the time we started working on TCP/IP there really was a we. We had several customers at the time beyond Talaris and they wanted TCP for their applications, in particular a terminal server we were working on. A terminal server is a box that takes user keystrokes on the keyboard of a computer terminal and sends them to the host, which sends back the output, so you can do remote logins. Talaris didn't need that, but they were interested in compatibility with lpr, which is the Unix command to send files to printers, and which had a way to do that remotely using TCP. Lpr as a network protocol was rather clunky and not especially great at its task. I thought very hard about bringing this up at a new fangled thing called an IETF meeting which was happening at USC around the time we were working on it. I very nearly drove up there, but for reasons I don't remember -- inertia most likely -- I didn't go. I have no idea how receptive folks would have been at the time because there was so much to do. But it was pretty novel for a device that wasn't a host to be on the internet, so they may well have been intrigued. Of course, IETF hadn't even got around to designing PPP and instead had SLIP for IP over serial lines, so we're talking about a metric shitton of things that needed to be engineered, and fast.

My other engineers did a lot of the initial work, but I got my fingers in it too, trying to deal with speed and many other problems. So it was definitely a collaborative affair, which was sort of new for me. There was one memorable bug we couldn't reproduce, so I got flown out to San Antonio to see it for myself. What was memorable is that the place was basically an airline retrofit shop, and one of the retrofits in the hangar was one of the 747's that shuttled the space shuttle. I was very impressed. The bug, however, was less impressive: some edge case with a TCP checksum as I recall.

DEC LAT 

DEC's LAT protocol was mainly designed for terminal servers, which we were working on at the time for another company (our terminal servers still sit in data centers to this day, apparently). LAT was really optimized for terminals and the way they behaved, but it also had provisions for printers, so this was a win for Talaris as they could natively support VMS without any need for a protocol driver as was the case for TLAP. LAT wasn't especially fast though, because terminals maxed out at about 19.2k baud, so speed wasn't a priority there. TLAP still had a big advantage when you needed speed, so it remained.

SMB, Novell Netware, and Appletalk

We built a lot of network stacks. I was involved with all of them, but the Novell one was mostly mine. Novell was a pretty simple minded protocol as I recall, and it had an actual protocol to talk to printers, which I implemented. I can't remember whether Talaris implemented it -- they must have because I don't think we'd do it on spec -- but the thing I do remember is that we licensed our code to Apple for their LaserWriters. I was never quite clear whether that made it into a shipping product.

Conclusion

It's sort of amazing what you can get done in a year or two when you are given free rein and few meetings. The first three pieces -- monXX, Punix and the graphics kernel -- were mostly up and running and out the door by about 1987 as I recall (we started in June 1985, so not bad all things considered). There were a lot of things beyond the laser printer going on that took up my time, including the terminal server I spoke of, but also a graphical terminal that was a knock off of a DEC VT340. The industry was transitioning at the time from terminals to X Window terminals, but then very quickly to workstations, so that ended up being a huge amount of effort without much to show for it. The laser printer did really well though, and sustained Talaris for years to come. Not bad for starting out with nothing at all.