Posted by maguay 1 day ago
Yes - and this is actually really important! It's true of most of the important early internet technologies. It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts, while internet standards let individual decentralized admins hook their sites together.
Did any of the ITU standards win? In the end, the internet swallowed telephones and everything is now VoIP. I think the last of the X standards left is X.509?
Anyone remember the promise of ATM networking in the 90's? It was telecom-grade, circuit-switched networking that would handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.
The funny part is this has the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-sized infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies - which is what TSN is: reinventing ATM's determinism. On top of that we now have OTN, yet another protocol layered over the others to mux everything down a big fat pipe to the other end, which allows Ethernet (and IP/ATM/etc.) to ride deterministically between data centers.
Without getting too far into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.
It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.
Seeing that the tech would never be good enough, they sold off the whole thing for cheap. Years later, they bought it back for way, way more money because they desperately needed to get into the cell phone business that was clearly headed to the moon.
I totally understand the pride they had in the reliability of their system, but it turns out that dropped calls just aren't that big of a deal when you can quickly redial and reconnect.
Those old phones had a long range. It was hard to make small ones because the old AT&T towers were much farther apart, up to 40km. Meanwhile, their competitors focused on smaller coverage areas (e.g. 2km or less for PCS) and better tech (CDMA), and it seemed to pay off.
There's likely an element of the "layering TCP on TCP" problem going on, too.
The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/
However, actually building a functional routing infrastructure that supported QoS was pretty intractable. That was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).
edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. It does mean that we have to know the link delay and the target bandwidth and have per-flow queue accounting, which isn't a whole lot better really. Some work was done with statistical queue methods that had simpler hardware controllers - but the whole thing was indeed a mess.
I love this. Ethernet is such shit. What do you mean the only way to handle a high speed to lower speed link transition is to just drop a bunch of packets? Or sending PAUSE frames, which work so poorly that everyone disables flow control.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.
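For a rough sense of scale, here's a back-of-the-envelope bandwidth-delay product calculation using the example figures above (a sketch, not a model of any particular TCP stack):

    # Bandwidth-delay product: roughly how much data is "in flight" on the
    # path, i.e. how much the sender must have outstanding before an ACK returns.
    bandwidth_bps = 1_000_000_000   # 1 gigabit/s link
    rtt_s = 0.020                   # 20 ms round-trip latency
    bdp_bytes = bandwidth_bps / 8 * rtt_s
    print(f"window needed: {bdp_bytes / 1e6:.1f} MB")  # ~2.5 MB

Any receive window much smaller than that and the sender sits idle waiting for ACKs instead of saturating the link.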
Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and transfer rate, to collapse. The server will then send packets with a small window so it goes through. Then the window will slowly grow until there’s packet loss again. Rinse and repeat. That’s what causes the saw-tooth pattern you see on the linked page.
Once those requirements dropped (partially because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where Ethernet sometimes shows up as framing around the IP packet but has little relation to Ethernet otherwise.
ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
That approach of course didn’t age well when voice almost became a niche application.
I think standards are important, and I'm sad that no one bothers anymore, but stuff like this and the inclusion of interlace in digital video for that little 3 year window when it might have mattered does really sour one on the process.
BTW, I searched Kagi for "tolerable latency without echo cancellation in France" and saw your comment. Wow. I didn't realize web crawlers were that current these days.
We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.
I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.
Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it eventually was just removed entirely in favor of 1gbit links with VLANs for priority/normal traffic.
It was complete garbage.
Another lab of theirs made a Winsock that would use ATM SVCs instead of TCP, and proudly made a brochure extolling their achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s
The Bellhead vs. Nethead was a real thing back then. To paraphrase an old saying about IBM, Telcos think if they piss on something, it improves the flavor.
One of the jobs I applied for out of college was to lead Schengen's central police database (think stolen car reports, arrest warrants, etc.) which would federate national databases. For some unfathomable reason, they chose X.400 as the messaging bus for that replication, and endured massive delays and cost overruns for that reason. I guess I dodged a bullet by not going there.
Have you ever tried to implement an ITU standard from just reading the specs? It's hard. Firstly you have to spend a lot of money just to buy the specs. Then you find the spec is written by somebody who has a proprietary product, and is tiptoeing along a line that reveals enough information to keep the standards body happy (i.e., enough info to make it worthwhile to purchase the specification), while not revealing the secret sauce in their implementation.
I've done it, and it's an absolute nightmare. The IETF RFCs are a breath of fresh air in comparison. Not only can you read the source, there are example implementations!
And if you think that didn't lead to a better outcome, you're kidding yourself. The ITU process naturally leads to a small number of large engineering orgs publishing just enough information so they can interoperate, while keeping enough hidden that the investment discourages the rise of smaller competitors. The result is, even now I can (and do) run my own email server. If the overly complicated bureaucratic ITU standards had won the day, I'm sure email would have been run by a small number of CompuServe-like rent-seeking parasites for decades.
I don't think that's IETF policy. Individual IETF working groups decide whether to request publication of an RFC, and the availability of open source implementations is a strong argument in favour of publication, but not a hard requirement.
If the IETF standards are sometimes useful, it's more a matter of culture than of policy.
Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were a dying, if not dead, technology when the model was developed in the first place.
Using ITU voice codecs!
I’ve never been a fan
Another was NIH in some quite important places.
Yet another was that ITU standards promoted the use of compilers generating serialization code from a schema, and that required having that compiler. One common issue I found out from trying to rescue some old Unix OSI code was that the most popular option in use at many universities was apparently total crap.
In comparison, you could plop a grad student down with telnet to experiment with SMTP. Nobody cared that it was shitty, because it was not supposed to be used for long. And then nobody wanted to invest in anything better.
(Presentation and Session are currently taught in terms of CSS and cookies in HTML and HTTP, respectively. When the web stack became Officially Part of the Officiously Official Network Stack is quite beyond me, and rather implies that you must confound the Web and the Internet in order to get the Correct Layering.)
https://computer.rip/2021-03-27-the-actual-osi-model.html - The Actual OSI Model
> I have said before that I believe that teaching modern students the OSI model as an approach to networking is a fundamental mistake that makes the concepts less clear rather than more. The major reason for this is simple: the OSI model was prescriptive of a specific network stack designed alongside it, and that network stack is not the one we use today. In fact, the TCP/IP stack we use today was intentionally designed differently from the OSI model for practical reasons.
> The OSI model is not some "ideal" model of networking, it is not a "gold standard" or even a "useful reference." It's the architecture of a specific network stack that failed to gain significant real-world adoption.
A two-or-more order-of-magnitude reduction in a problem seems like a good start and a worthwhile step, not something to disregard because it's not 100%…
Funnily enough, if collusion is prohibited, the goal of such a law would be more competition, but the result is more mergers and monopolies, up until the point where antitrust kicks in and limits the monopoly ad hoc, so each industry ends up with 1 bidder, or 2-3 tops.
Sounds like a really fast way to kill a network instead of grow it into a 4B daily active user staple like email is today. You'd basically ensure that email would ONLY be spam, because marketers would be the only ones willing to spend money to reach people.
Every time I see someone suggest micropayments on HN I have to wonder if people here have any understanding of how actual humans are. Turning every action on your network into a purchase decision is a good way to ensure nobody ever does anything on your network and thus it never becomes a network.
Humans will always gravitate toward the lowest friction way to achieve their goals. So immediately some private company would introduce a free communication channel as a loss leader instead, theirs would grow faster, and then they'd monetize via ads once their network reached critical mass (see also, whatsapp). Killing the more egalitarian decentralized protocol in the process.
My primary goal is not to send e-mail for free -- my primary goal is to have reliable, low-overhead communication with humans. Having this sponsored by spammers is a fine start, but even if I paid a dollar a year or so, that would be much lower overhead than even a day's worth of looking through spam is today (at the rate I value my time -- but even if you value your time orders of magnitudes less, the payoff is there).
https://jacobfilipp.com/MSJ/1993-vol8/qawindows.pdf
By 1995, the “Internet” e-mail address was the only remaining one.
That would be a very annoying way to write e-mail and no less prone to typosquatting (if anything, more).
Both standards lacked the hindsight we have today, but X.400 would just be added complexity (as years of tacked-on extensions built upon it) that makes robust parsing harder.
Immutability is one of the best things about email.
SMTP handled routing by piggybacking on DNS. When an email arrives, the SMTP server looks at the domain part of the address, does an MX query, and then attempts to transfer it to the hosts returned by that query.
Very simple. And, it turns out, immensely scalable.
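As a rough illustration of how little is involved (a sketch assuming the third-party dnspython package, with example.com as a placeholder domain):

    # Minimal sketch of the MX lookup an SMTP server does for the domain
    # part of an address such as user@example.com.
    import dns.resolver  # third-party "dnspython" package

    answers = dns.resolver.resolve("example.com", "MX")
    for rdata in sorted(answers, key=lambda r: r.preference):
        # Lowest preference value gets tried first.
        print(rdata.preference, rdata.exchange)

The sending server just connects to the best-preference host and hands over the message.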
You don't need to maintain any routing information unless you're overriding DNS for some reason - perhaps an internal secure mail transfer method between companies that are close partners, or are in a merger process.
By contrast X.400 requires your mail infrastructure to have defined routes for other organisations. No route? No transfer.
I remember setting up X.400 connectors for both Lotus Notes/Domino and for Microsoft Exchange in the mid to late 90s, but I didn't do it very often - because SMTP took over incredibly quickly.
An X.400 infrastructure would gain new routes slowly and methodically. That was a barrier to expanding the use of email.
Often X.400 was just a temporary patch during a mail migration - you'd create an artificial split in the X.400 infrastructure between the two mail systems, with the old product on one side and the new target platform on the other. That would allow you to route mails within the same organisation whilst you were in the migration period. You got rid of that the very moment your last mailbox was moved, as it was often a fragile thing...
The only thing worse than X.400 for email was the "workgroup" level of mail servers like MS Mail/cc:Mail. If I recall correctly they could sometimes be set up so your email address was effectively a list of hops on the route. This was because there was no centralised infrastructure to speak of - every mail server was just its own little island. It might have connections to other mail servers, but there was no overarching directory or configuration infrastructure shared by all servers.
If that was the case then your email address would be "johnsmith @ hop1 @ hop2 @ hop3" on one mail server, but for someone on the mail server at hop1 your email address would be "johnsmith @ hop2 @ hop3", and so on. It was an absolute nightmare for big companies, and one of the many reasons that those products were killed off in favour of their bigger siblings.
In the early 90s I implemented a gateway between Novell email and X.400. What amused me the most was that X.400 specified an exclusive enumerated list of reasons why email couldn't be delivered, including "recipient is dead". At the X.400 protocol level this was a binary number. SMTP uses a 3-digit number for the general category, followed by a free-form line of text. Many other Internet standards including HTTP use the same pattern.
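For illustration, the SMTP shape looks something like this (the reply strings here are just examples of the style, not an exhaustive list):

    # SMTP replies: a 3-digit numeric category plus free-form human-readable
    # text, so new failure reasons don't need a spec change.
    replies = [
        "250 OK",
        "550 No such user here",
        "452 Insufficient system storage",
    ]
    for line in replies:
        code, _, text = line.partition(" ")
        print(code, "->", text)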
It was already obvious at the time that the X.400 field was insufficient, and also impractical for mail administrators to keep complete and correct.
That was the underlying problem with X.400 and similar standards: they tried to cover everything in advance as part of the spec, while Internet standards were more pragmatic.
Who can forget addresses like "utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!rms@mit-prep"
1. SMTP predates DNS, or really even most of the internet. It was originally designed to work over UUCP.
2. Early SMTP used bang paths (remember those?), where the route or partial route was baked into the address.
Just seeing that X.400 notation is giving me bad memories!
- poor Internet fit, assuming managed, trusted networks
- some promises depended on all participating systems behaving honestly
- once a message reaches another server, you cannot guarantee it isn't copied, backed up, or logged
- X.400 read receipts: more reliable but also more privacy invasive
- X.400 metadata: carries a lot of routing, classification, and organizational info leading to potential privacy leaks
- SMTP is ugly but observable, you don't need a standard specialist to debug issues
Yes, it is a pain to manage. Yes, it is all still mostly running on 20+-year-old hardware and software.
It is slightly ironic that the main way we communicate X.400 addresses between parties is through modern email.
I see that Wikipedia claims that "X.400 is quite widely implemented[citation needed], especially for EDI services", and that might once have been the case - but I doubt it was particularly widespread even at the time that article was first written. It's worth noting that that [citation needed] tag dates from October 2008!
For example from 2023: X.1095: Entity authentication service for pet animals using telebiometrics