SMTP won because it was simpler, but it's probably good to look at why it was simpler.

SMTP handled routing by piggybacking on DNS. When an email arrives, the SMTP server looks at the domain part of the address, does a DNS query (an MX lookup), and then attempts to transfer the message to the hosts that query returns.

Very simple. And, it turns out, immensely scalable.
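
To make that concrete, here's a minimal sketch of the lookup side in Python, using the third-party dnspython package (the domain and the output are just placeholders):

    # Sketch: how a sending mail server finds where to deliver for a domain.
    # Needs the third-party "dnspython" package (pip install dnspython).
    import dns.resolver

    def mx_hosts(domain):
        # Ask DNS for the domain's MX records...
        answers = dns.resolver.resolve(domain, "MX")
        # ...and sort them lowest preference value (highest priority) first.
        return [r.exchange.to_text()
                for r in sorted(answers, key=lambda r: r.preference)]

    print(mx_hosts("example.com"))  # hypothetical output: ['mail.example.com.']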

You don't need to maintain any routing information unless you're overriding DNS for some reason - perhaps an internal secure mail transfer method between companies that are close partners, or are in a merger process.

By contrast, X.400 requires your mail infrastructure to have defined routes to other organisations. No route? No transfer.

I remember setting up X.400 connectors for both Lotus Notes/Domino and for Microsoft Exchange in the mid to late 90s, but I didn't do it very often - because SMTP took over incredibly quickly.

An X.400 infrastructure would gain new routes slowly and methodically. That was a barrier to expanding the use of email.

Often X.400 was just a temporary patch during a mail migration - you'd create an artificial split in the X.400 infrastructure between the two mail systems, with the old product on one side and the new target platform on the other. That would allow you to route mails within the same organisation whilst you were in the migration period. You got rid of that the very moment your last mailbox was moved, as it was often a fragile thing...

The only thing worse than X.400 for email was the "workgroup" level of mail servers like MS Mail/cc:Mail. If I recall correctly they could sometimes be set up so your email address was effectively a list of hops on the route. This was because there was no centralised infrastructure to speak of - every mail server was just its own little island. It might have connections to other mail servers, but there was no overarching directory or configuration infrastructure shared by all servers.

If that was the case then your email address would be "johnsmith @ hop1 @ hop2 @ hop3" on one mail server, but for someone on the mail server at hop1 your email address would be "johnsmith @ hop2 @ hop3", and so on. It was an absolute nightmare for big companies, and one of the many reasons that those products were killed off in favour of their bigger siblings.
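
If it helps to picture it, each hop effectively stripped itself off the front of the route before passing the message on - a toy sketch in Python, with made-up separators (the real products had their own formats):

    # Toy sketch of hop-list addressing: each relay pops the next hop
    # off the front of the route and forwards what's left.
    def relay(address):
        user, _, route = address.partition(" @ ")
        next_hop, _, rest = route.partition(" @ ")
        remaining = user + ((" @ " + rest) if rest else "")
        return next_hop, remaining

    addr = "johnsmith @ hop1 @ hop2 @ hop3"
    hop, addr = relay(addr)
    # deliver to hop1, where the address is now "johnsmith @ hop2 @ hop3"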


> ... why it was simpler.

In the early 90s I implemented a gateway between Novell email and X.400. What amused me the most was that X.400 specified an exclusive enumerated list of reasons why email couldn't be delivered, including "recipient is dead". At the X.400 protocol level this was a binary number. SMTP uses a 3-digit number for the general category, followed by a free-form line of text. Many other Internet standards, including HTTP, use the same pattern.
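
For comparison, here's what that pattern looks like on the wire, with a hypothetical bounce reply (a toy sketch in Python - real MTAs also handle multi-line replies):

    # Toy sketch: an SMTP reply is a 3-digit code plus free-form text,
    # and the first digit alone gives you the general category.
    reply = "550 5.1.1 No such user here"  # hypothetical server response

    code, _, text = reply.partition(" ")
    category = {"2": "success", "3": "more input needed",
                "4": "transient failure", "5": "permanent failure"}
    print(code, "->", category[code[0]], "-", text)
    # 550 -> permanent failure - 5.1.1 No such user here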

It was already obvious at the time that the X.400 field was insufficient, and also impractical for mail administrators to keep complete and correct.

That was the underlying problem with X.400 and similar standards: they tried to cover everything in advance as part of the spec, while Internet standards were more pragmatic.


> so your email address was effectively a list of hops on the route

Who can forget addresses like "utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!rms@mit-prep"


I am still trying to forget setting up sendmail.cf in that era.

Ehhh... This is a bit revisionist, for a couple of reasons.

1. SMTP predates DNS - or really even most of the internet as we know it. It was originally designed to work over UUCP.

2. Early SMTP used bang paths (remember those?), where the route or partial route was baked into the address.


A bit, perhaps, but not much.

At the time of bang paths, SMTP was just one of several email protocols in use. And X.400 was absolutely a competitor at the time.

A decade or two later, when it was clear that SMTP had become the least common denominator between all email systems, SMTP absolutely used DNS and even had its own record type, MX.

So I don't think it is wrong to say that a large part of why it won out over all other protocols was that you didn't have to mess with email routing once MX records were universally accepted.


Of course, for reliability, you could even bake multiple paths into the envelope address.

What I really liked about ZoneAlarm wasn't just that it was a very nice piece of technology - and it was - but also that it got the user expectations and training right from a very early stage.

It was quite insistent on the fact that it would be "noisy" at first as it queried all the programs you ran, but would then quieten down once it had been "trained". It got that across in clear, simple language.

I think it was so successful because it got the soft side of its security job right as well as the hard part. It's certainly why I recommended it to anyone at the time...


Was working as an IT consultant. We got a call from an international manufacturer in the area for support. The local lead IT manager had taken down the firewall, which let an infection spread across their computer network around the world. All they wanted were bodies to help clean systems and apply OS updates.

My personal computer had ZoneAlarm on it. It became ground zero for reporting on infected systems. They ignored systems they thought were safe - the Cisco phone system running on a Windows server and other backend devices. The company then bought a few licenses to run on their own laptops.

It is such a shame that Microsoft destroyed _ERD Commander_ and other quality tools which assisted in the clean-up.


Yes and no.

Yes. The incentives for writing reliable, robust code were much higher. The internet existed so you could, in theory, get a patch out for people to download - but a sizeable part of any user base might have limited access, so would require something physical shipped to them (a floppy or CD). Making sure that your code worked and worked well at time of shipping was important. Large corporate customers were not going to appreciate having to distribute an update across their tens of thousands of machines.

No. The world wasn't as connected as it is today, which meant that the attack surface to reasonably consider was much smaller. A lot of the issues that we had back then were due to designs and implementations that assumed a closed system overall - but often allowed very open interoperability between components (programs or machines) within the system. For example, Outlook was automatable, so that it could be part of larger systems and send mail in an automated way. This makes sense within an individual organisation's "system", but isn't wise at a global level. Email worms ran rampant until Microsoft was forced to reduce that functionality via patches, which were costly for their customers to apply. It damaged their reputation considerably.

An extreme version of this openness was SQL Slammer - a worm which attacked SQL Servers and development machines. Imagine that - enough organisations had their SQL Servers or developer machines directly accessible that an actual worm could thrive on a relational database system. That is mind-boggling to think about these days, but it really happened - see https://en.wikipedia.org/wiki/SQL_Slammer for details.

I wouldn't say that the evidence points to software being better in the way that we would think of "better" today. I'd say that the environment it had to exist in was simpler, and that the costs of shipping & updating were higher - so it made more sense to spend time creating robust software. Also nobody was thinking about the possible misuse or abuse of their software except in very limited ways. These days we have to protect against much more ingenious use & abuse of programs.

Furthermore today patching is quick and easy (by historical comparison), and a company might even be offering its own hosted solution, which makes the cost of patching very low for them. In such an environment it can seem more reasonable to focus on shipping features quickly over shipping robust code slowly. I'd argue that's a mistake, but a lot of software development managers disagree with me, and their pay packet often depends on that view, so they're not going to change their minds any time soon.

In a way this is best viewed as the third age of computing. The first was the mainframe age - centralised computer usage, with controlled access and oversight, so mistakes were costly but could be quickly recovered from. The second was the desktop PC age - distributed computer usage, with less access control, so mistakes were often less costly but recovering from them was potentially very expensive. The third is the cloud & device age, with a mix of centralised and distributed computer use, a mix of access control, and potentially much lower costs of recovery. In this third age if you make the wrong decisions on what to prioritise (robustness vs speed of shipping), it can be the worst of both the previous ages. But it doesn't have to be.

I hope that makes sense, and is a useful perspective for you.


Yo. Firstly, thanks for the trip down memory lane - well written, engaging, fun. My mind is still stuck in those days even after finishing the article, as you can tell from my anachronistic greeting.

Secondly, as someone who spent 15 years working with Lotus Notes, I can assure you that you can run it standalone. Obviously it makes no real sense for a Groupware product, but it can be done. To the Notes client opening a database locally or on a mail server is largely the same.

The main issue is that people used Notes to communicate and collaborate. So you could just go creating new Address Books, Discussion databases, Document Libraries and so on, but what exactly are you proving with that? It'd be like just firing up the Microsoft Mail client and only looking at the address book...

Whilst I'm aware that there's plenty in Notes that people didn't like, I do think that there are some gems hidden in there which it would have been nice to have kept. The Notes dialect of Rich Text had a couple of niceties (programmable buttons, collapsible/expandable Sections). The database engine itself was unparalleled at the time, and in some ways it still hasn't been bettered.

But the issue remains that you'd need to set up a Notes/Domino Server (depending on your version - 4.5 onwards it's called Domino), and a small network. And that's a ball-ache that nobody wants. It can speak IPX/SPX and NetBIOS, so it doesn't have to be as complicated as TCP/IP, but it's still a lot of prep work before you even get to start looking at the usage. :-(

That having been said, I was a Principal Certified Lotus Professional on the Sysadmin track for about three versions of Notes, from 4.6 to 6, and can definitely help if you ever did want to do that. Feel free to email me at phil [at] philipstorry.net if you're ever so lacking in subjects that you feel forced into this last resort.


Not a bad article - thanks!

Others are pointing out that you cannot understand everything - and that's true enough.

But you only need to understand what's important. The experience of a good expert helps you to find that out.

As a systems administrator, I'd say the recent AWS outage in the Middle East is the best example. There will be roughly three types of companies, separated by their understanding:

- Don't Understand - these companies thought that the cloud would handle this kind of thing for them, and are probably going to be doing a lot of finger-pointing in the near future.

- Do Understand, Don't Care - these companies did understand that high availability meant going multi-region, but decided against it for whatever reason. Probably cost vs perceived likelihood. These companies know that they've made a mistake. Short term they're wondering how to survive it, long term they'll be re-assessing their risk acceptance. Many may decide to stay single-region, but at least understand why.

- Do Understand, Do Care - these companies will simply be checking that their procedures worked for any manual parts of their failover, plus possibly looking at any improvements they can make given the real-life experience they've gained.

An LLM is just going to tell you how to implement it. It's not going to be thinking "what sort of availability do we require?" - it may never start that conversation unless explicitly prompted. And even then it's going to return consensus opinions, which may not be what you want when evaluating risk.

I'd love to think a lot of companies will be looking at this event and updating their own risk register or justifying their existing risk decisions for hosting. But let's be honest - most won't even have thought about it, and won't until it goes wrong.


Quite the nostalgia blast for me!

I'm honestly not sure I had a machine with more than 2 fixed disks until well into the days of Windows 7 and SATA. The exception would be logical disks such as Stacker or similar compressed volumes - but I wasn't using them until later either.

If I recall correctly, before SATA we had IDE, which only had two devices (master & slave) per channel, and usually only two channels on a motherboard. Given the physical size of disks back then, you'd probably just have a boot disk, maybe a data disk, and then perhaps two optical drives. So it's absolutely believable that nobody found the bug simply because nobody had a machine configured that way.

Sure, you could have SCSI for more disks. But if you did, then you were probably doing something that required a lot of CPU grunt - at which point you might just leave the PC behind and go to a UNIX workstation anyway.

OK, now I'm starting to get flashbacks to just how bad SCSI support was on the PC, and it's stripping the rose-tint from my glasses. Time to go!


There were also MFM and RLL hard drives. I don't recall if they were pre-IDE or something different altogether. It's been a long time.


Yes, they were. IDE stands for Integrated Drive Electronics: the controller was integrated onto the drive itself, so it could connect directly to the ISA bus, vs. having to put a separate MFM or RLL controller card on the bus between the machine and the disk.


I used MFM in the first PC I built from scavenged parts; it used an ISA MFM controller card. IDE came later - those drives had an integrated controller ("integrated drive electronics"), so the PC didn't need to know how to do the low-level control for the drive. Luxury. And fewer jumpers to faff about with.


Also ESDI.


> before SATA we had IDE

I had the original IBM PC with two 5.25" floppy drives, and I think that was all the room there was on the disk controller. Dad bought a 10MB Hardcard to expand it; that went in an ISA slot, if I remember correctly. The disk controller might have been in an ISA slot, too.

I think the pre-AT era would have constrained DOS <5.0 more than the IDE/SATA/SCSI eras did.


I had added a 2nd disk to my 386sx, but I guess it was after the DOS 5.0 era. I didn't realize it wasn't allowed before v5.

Not long afterwards I ended up on Coherent OS, fun times.


I had 3+ fixed disks somewhere around 1997, but that was on a Mac (so built-in SCSI), and the drives were all hand-me-downs I got for free, that I could just plug in to add a few hundred more megs of storage.


And expensive, really expensive.


I was gonna say, "MORE THAN TWO HARD DISKS? IN THE DAYS OF DOS?! Anything else we could get for you, your majesty?"


I have piles of old but still functioning HDs now. I was looking at them yesterday and thought about how cheap these things have become.

I had to save up to buy floppies in the 80s!


Not to mention the weight, power demands and (to some extent) noise!

At one point I did have two hardcards plugged into my Amstrad 8086 machine which felt pretty decadent. (Or maybe it was a hardcard plus the internal hard drive?) In total it wasn't even 100MB of storage. https://en.wikipedia.org/wiki/Hardcard


That, and the IBM PC 5150 had, what, 130 watts in the stock power supply?

