Posted by signa11 2 days ago

Wirth's Revenge (jmoiron.net)
201 points | 97 comments
lateforwork 1 day ago|
Take a look at what was possible in the late 1980s with 8 MB of RAM: https://infinitemac.org/1989/NeXTStep%201.0

You can run NeXTStep in your browser by clicking the above link. A couple of weeks ago you could run FrameMaker as well. I was blown away by what FrameMaker of the late 1980s could do. Today's Microsoft Word can't hold a candle to FrameMaker of the late 1980s!

Edit: Here's how you start FrameMaker:

In Finder go to NextDeveloper > Demos > FrameMaker.app

Then open the demo document and browse its pages. Prepare to be blown away. You could do that in 1989 with like 64 MB of RAM??

In the last 37 years the industry has gone backwards. Microsoft Word has been stagnant for the last few decades due to a lack of competition.

drfuchs 1 day ago||
Well, ackchyually, the first releases of FrameMaker were created on Sun 3/50 workstations with 4 MB of (unexpandable, soldered-in) RAM on a 16 MHz 68020. Most customers had the same model, and could work on modestly-sized documents with ease.

But it's not a lot of space for documents of hundreds of pages, so typical customers who were using FrameMaker to write user manuals for their products had to use "book" files to tie together individually edited chapter files. Then, once in a while you'd have to push the "generate" button on the book to get all the page numbers consistent between chapters and all the cross-references updated, and to generate the updated Table of Contents, Index, etc. You're welcome.

But there's a potential degenerate case where Chapter 1 might have a forward reference to Chapter 2 ("see page 209"), but due to some editing in Chapter 2, the referenced material is now on page 210. Well, in some fonts, "209" is wider than "210" (since "1" can be skinny). So, during the Generate operation, the reference becomes "see page 210". But there's some tiny chance that this skinnier text causes the containing paragraph to take one less line, so there's some tinier chance that Chapter 1 takes one less page, so Chapter 2 starts one page earlier, and now the referenced material is back on page 209. So now we're in a loop.
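In pseudocode terms, Generate is a fixed-point iteration with cycle detection. A toy Python sketch of the idea (illustrative, not the real FrameMaker code):

    def where_target_lands(ref_text):
        # "210" is skinnier than "209", so writing it shortens the
        # paragraph by a line and pulls the target back to page 209;
        # writing "209" widens it and pushes the target to 210 again.
        return 210 if ref_text == "209" else 209

    def generate(ref="209"):
        seen = set()
        while ref not in seen:
            seen.add(ref)
            actual = where_target_lands(ref)
            if str(actual) == ref:
                return ref            # fixed point: references consistent
            ref = str(actual)         # rewrite the "see page N" text
        return "Degenerate"           # revisited a state: no fixed point exists

    print(generate())  # -> Degenerate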

This was such an unlikely edge case that nobody else even noticed that it existed, much less that it was detected. I didn't bother with a fancy error message; it would just give a little one-word popup: "Degenerate". Years later, mild panic ensues when a customer calls in, irate that the software is calling them a degenerate. (And it wasn't even a real example, just some other bug that triggered it.)

lateforwork 1 day ago||
You were on FrameMaker development team? That's so awesome!
ctmnt 17 hours ago||
Yeah, wow, talk about burying the lead.
canucker2016 1 day ago|||
MS Word and FrameMaker were never considered competitors in the same market.

Aldus PageMaker was a closer competitor to FrameMaker, but PageMaker's bread and butter was the lower end of the market.

See this review of MS Word for Windows 1.0; the competitors listed in their benchmarks include Ami Pro and the DOS versions of WordPerfect and MS Word.

Review: https://computerhistory.org/wp-content/uploads/2019/08/Infow...

canucker2016 19 hours ago||
For more history on FrameMaker, here's a writeup by one of the founders - https://walden-family.com/david-murray/frame-posted.pdf

The article mentions Interleaf as their main competitor.

ferguess_k 1 day ago|||
I think back then, due to the scarcity of RAM and HDD space, developers -- especially elite developers working for Apple/Microsoft/Borland/whatever -- really went the last mile to squeeze out as much performance as they could. Or at least they spent far more time on it than modern developers do, even for the same applications (e.g. some native Windows programs on Win 2000 vs. the rewritten programs on Win 11).

Nowadays businesses simply don't care. They have already achieved the feudal-ish bastion they dreamed about, and there is no "business value" in spending too much time on it, unless ofc it is something performance-related, like AI or supercomputing.

On the other hand, hardware today is 100X more complicated than the NeXTStep/Intel i486 days. Greybeards starting from the 70s/80s could gradually adapt to the complexity, while newcomers simply have to swim or die -- there is no "training" because any training on a toy computer or a toy OS is useless compared to the massive architecture and complexity we face today.

I don't know. I wish the evolution of hardware were slower, but it was going to get to this point anyway. I recently completed the MIT xv6 labs and thought I was good enough to hack the kernel a bit, so I took another Linux device driver class, and OMG the complexity is unfathomable -- even the Makefile and KBuild stuff is way, way beyond my understanding. But hey, if I had started from Linux 0.95, or maybe even Linux 1.0, I'd have much less trouble drilling into a subsystem and gradually adapting. That's why I think I need to give myself a year or two of training: roll back to maybe Linux 0.95, focus on just a simpler device driver (e.g. keyboard), and read EVERY evolution. There is no other way for commoners like us.

lateforwork 1 day ago|||
Here's a screenshot of FrameMaker I just took: https://imgur.com/a/CG8kZk8

Look at the fancy page layout that was possible in the late 1980s. Can Word do this today?

kjellsbells 1 day ago|||
I didn't have defending Word on my todo list today... but Word would totally be the wrong tool for this, so it isn't fair to compare.

The tragedy is that serious large-document authoring systems died with the invention of hypertext and the CD-ROM. Instead of an elegant set of FrameMaker or Interleaf documents for print, you got a CD-ROM with a private site. And then once the web took off, just a site. Something got lost in that transition beyond the pallet of manuals showing up on your loading dock when you bought a system.

Sadly, because Word won, technical authors still try to produce some content with it, but (not their fault) it's a horrible, broken experience for both writer and reader. One example is the 3GPP specs that define how the mobile phone network works: giant 200-page Word docs that take minutes to open and paginate.

ferguess_k 1 day ago||
I still wish manuals were written in the old way, like this one: https://archive.org/details/gwbasicusersmanual_202003

It is not just a dump of functions; it also has examples for each one of them. I think the Go one is pretty good: https://go.dev/doc/

WillAdams 13 hours ago||
What is the new way in which manuals should be written?

I've been trying via Literate Programming:

http://literateprogramming.com/

and applying the concepts of:

https://diataxis.fr/

(originally developed at: https://docs.divio.com/documentation-system/) which divides documentation along two axes:

- Action (Practical) vs. Cognition (Theoretical)

- Acquisition (Studying) vs. Application (Working)

resulting in a matrix of four things

- Tutorials

- How-to Guides

- Explanation (of the code)

- Reference (of the code)

which seems to be working well for my current project: https://github.com/WillAdams/gcodepreview/blob/main/gcodepre...

ferguess_k 13 hours ago||
Yeah I think that's pretty good. I'd say Reference + Explanation + Examples is good enough, like the GWBASIC example -- it doesn't give tutorial-size code, just snippets for each function. But having a tutorial section is definitely more helpful.
digitalPhonix 1 day ago||||
I think Publisher would be the equivalent of FrameMaker in the Office suite. Publisher from Office ~2016 could definitely do that.

Unfortunately I think Publisher has fared even worse than Word in terms of stagnation, and now looks to be discontinued?

lateforwork 1 day ago|||
Publisher is the equivalent of InDesign. It was meant for brochures and so on. If you want to write a long technical manual today most people use Word. In that respect we are using less powerful software today than our grandparents.

Note: Adobe bought FrameMaker and continues to sell FrameMaker. But Word has captured the market not because of its technical merit but because of bundling.

ferguess_k 1 day ago||
I have never written any technical manuals, but I'm surprised that Word is the tool of choice. How does one easily embed e.g. code in the document? I feel there must be a better way to do it, maybe some kind of Markdown syntax? LaTeX?
lateforwork 1 day ago||
> How does one embed e.g. code easily in the document?

You don't. For APIs and such, documentation is published online, and you don't need Word for that. Word is used in some industries where a printed manual is needed.

ferguess_k 1 day ago||
What about the printed manuals? I think they still had some of those not too long ago (e.g. Intel manuals). What was the chosen tool? Very curious to know.

Or, maybe a legacy example -- how were the printed manuals of Microsoft C 6.0 written? That was in the early 90s I think.

WillAdams 1 day ago||
Framemaker.
ferguess_k 1 day ago||
Thanks, thought MSFT was using its own tools.
WillAdams 1 day ago||
If you don't make it, you can't use it.

Microsoft has never made a technical publishing package, so it has to be outsourced.

ferguess_k 1 day ago||
Yeah, agreed. Kinda miss the old days with thick manuals. I bought one for gdb a couple of years ago and love it -- despite it being just the paper version of the online one.
WillAdams 1 day ago|||
Correct, it is going away as of October this year.
WillAdams 1 day ago||||
Yes, Word could do that, but it wouldn't be pleasant to set up or maintain or print (it would re-flow, badly, every time one changed print drivers). Moreover, there are only two states for long Word documents which include graphics, in my experience: corrupt, and not-yet corrupt.
socalgal2 1 day ago|||
now paste some Chinese and Thai in there and a few high-res jpegs
WillAdams 1 day ago|||
There were also

- TeXview.app which at least inspired the award-winning TeXshop.app

- Altsys Virtuoso which became Macromedia Freehand (having been created as a successor to Freehand 3) --- these days one can use Cenon https://cenon.info/ (but it's nowhere near as nice/featureful)

- WriteNow.app --- while this was also a Mac application, the NeXT implementation was my favourite --- WN was probably the last major commercial app written in Assembly (~100,000 lines)

Still sad my NeXT Cube stopped booting up....

projektfu 1 day ago||
Also Lotus Symphony
layla5alive 21 hours ago|||
I'm sure they absolutely did not have 64 MB of RAM in their workstations in 1989 :)
vanderZwan 19 hours ago||
Haha, right? I remember meeting up with some friends after school back in 1996 at the home of the one kid whose dad was a surgeon, and him showing off their Pentium Pro with a mindboggling 32 MiB of RAM! And then we tried playing SimIsle! Which actually managed to run on their computer! Very, very slowly! Unbelievable! :)
senderista 1 day ago|||
64 MB in 1989? That wasn’t too shabby in 1999!
lateforwork 1 day ago||
FrameMaker 1.0 for the NeXTcube required between 8 and 16 MB of RAM.
canucker2016 1 day ago||
A web search shows that FrameMaker 1.0 cost $2,500.

Wikipedia says that the Windows version, released in 1992, was priced at $500, which cannibalized sales on other platforms.

m463 1 day ago||
That fat thing wouldn't fit on my 360k floppy!
satvikpendem 1 day ago||
While the author says that much of it can be attributed to the layers of software in between that make it more accessible to people, in my experience most cases are about people being lazy in developing their applications.

For example, there was the case of how Claude Code uses React to figure out what to render in the terminal, which in itself causes latency, and its devs lament how they have "only" 16.7 ms to achieve 60 FPS. On a terminal. Which has been capable of far more than that since its inception. Primeagen shows an example [0] of how even the most change-filled terminal applications run much faster, such that there is no need to diff anything -- just display the new change!

[0] https://youtu.be/LvW1HTSLPEk
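To put rough numbers on it: rewriting an entire frame through ANSI escapes is already so fast that per-cell diffing buys little on a local terminal. A crude benchmark sketch in Python (the frame size is an arbitrary assumption):

    import sys, time

    ROWS, COLS, N = 50, 200, 1000
    frame = ("x" * COLS + "\n") * ROWS   # a full screen's worth of text

    start = time.perf_counter()
    for _ in range(N):
        # "\x1b[H" homes the cursor; then rewrite the whole frame, no diffing
        sys.stdout.write("\x1b[H" + frame)
    sys.stdout.flush()
    elapsed = time.perf_counter() - start
    print(f"{N} full redraws: {elapsed / N * 1000:.3f} ms/frame")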

Cthulhu_ 1 day ago||
It makes me wish more graphics programmers would jump over to application development - 16.7ms is a huge amount of time for them, and 60 frames per second is such a low target. 144 or bust.
moring 1 day ago|||
I don't think graphics devs changing over would change much. They would probably not lament over 16ms, but they would quickly learn that performance does not matter much in application development, and start building their own abstraction layer cake.

It's not even that performance is unimportant in absolute terms, but rather that the general state of software is so abysmal that performance is the least of your problems as a user, so you're not going to get excited over it.

pjmlp 1 day ago||||
No need for graphics programmers; anyone who is still around coding since the old days remembers how to make use of data structures and algorithms, and how to do much with little hardware resources.

Maybe the RAM prices will help bring those skills back.

socalgal2 1 day ago||
Those practices make things hard to modify and update. As one example, I used to use binary formats for game data. The data is loaded into memory and it's ready to use; at most a few offsets need to be changed to pointers, in place.

Good:

- no extra memory needed
- can read directly from CD/DVD/HD to memory, not through buffered I/O
- no parsing time

Bad:

- change or add a single field in any part of the saved data and all previous files are now invalid

I'd solve this with a version number and basically print a message "old version in file, rebuild your data"
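A minimal sketch of that version gate in Python (hypothetical file layout, just to show the shape of it):

    import struct

    MAGIC, VERSION = b"GDAT", 3      # hypothetical magic bytes / format version
    HEADER = struct.Struct("<4sI")

    def save(path, payload: bytes):
        with open(path, "wb") as f:
            f.write(HEADER.pack(MAGIC, VERSION))
            f.write(payload)         # raw in-memory layout, no serialization step

    def load(path) -> bytes:
        with open(path, "rb") as f:
            magic, version = HEADER.unpack(f.read(HEADER.size))
            if magic != MAGIC:
                raise ValueError("not a game data file")
            if version != VERSION:
                # any field change bumps VERSION and invalidates old files
                raise ValueError("old version in file, rebuild your data")
            return f.read()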

Modern apps use JSON/Protobufs, etc. They are way slower to load and take way more memory, as there are usually two copies of the data: the data in the file and the parsed data your app is actually using. And they take time to parse.

But, they continue to work even with changes and they can more trivially be used across languages and apps.

That's just one example of hundreds where the abstraction is slower and uses more memory but provides flexibility the old ways didn't.

pjmlp 1 day ago||
No need for such extremes, it would already be enough to stop using Electron crap, or React on the terminal.

Followed by actually reading a data structures and algorithms book.

Finally, using compiled languages instead of interpreted ones.

No need to count bytes to fit into 48 KB.

socalgal2 1 day ago||
I disagree. I use all of those because they let me get work done faster. I've shipped Electron apps to clients that are effectively just a wrapper around a command-line app because (1) I knew the client would much prefer a GUI over a command line; (2) I was on Mac and they were on Windows, and I knew I could more easily get a cross-platform app; (3) I could make the app look good more easily with HTML/CSS than with native; (4) iteration time is instant (change something/refresh). The clients don't care. Interpreted languages are fine.
pjmlp 1 day ago||
And that is how we get the mess we are in today.
bccdee 1 day ago||
Exactly. It turns out there are strong business incentives that favour tall towers of abstractions. Then everyone's using Electron, so it doesn't really matter if it's slow. People just come to perceive computers as slow. I hate it.
jacquesm 1 day ago||||
And embedded too. But then again, they do what they do precisely because in that environment those skills are appreciated, and elsewhere they are not.
ferguess_k 1 day ago||||
It's mostly on the business side. If business doesn't care then developers have no choice. Ofc the customers need to care too, looks like we don't care either...in general.
stoneforger 21 hours ago||
This is the state of the world, uncaring.
GuB-42 1 day ago||||
One of the tradeoffs graphics programmers make is about security. They typically work with raw pointers, using custom memory allocation strategies; memory safety comes after performance. There is not much in terms of sandboxing, bounds checking, etc... these things are costly in terms of performance, so they don't do them if they don't have to.

That's because performance is critical to games (where the graphics programmers usually are), and if the game crashes, no big deal as long as it doesn't happen so often as to seriously impact the normal gameplay experience. Exploits are to be expected, and are sometimes kept deliberately if they lead to interesting gameplay -- they are a staple of speedruns. Infinite money is fun in a game, but not in serious banking software...

I am all for performance, and I think the current situation is a shame, but there are tradeoffs. We need people who care about both performance and security -- maybe embedded software developers who work on critical systems -- but expect a 10x increase in costs.

bluGill 1 day ago|||
That wouldn't make any difference. Graphics programmers spend a lot of effort on performance because spending a lot of $$$$ (time) can make an improvement that people care about. For most applications nobody cares enough about speed to pay the $$$ needed to make it fast.

Many application programmers could make things faster - but their boss says "good enough, ship it, move on to a new feature that is worth far more to me."

elliotec 1 day ago|||
Yeah, I think a lot of this can be attributed to institutional and infrastructural inertia, abstraction debt, second+-order ignorance, and narrowing of specialty. People now building these things are probably good enough at React etc. to do stuff that needs to be done with it almost anywhere, but their focus needs to be ML.

The people that could make terminal stuff super fast at a low level are retired on an island, dead, or don't have the other specialties required by companies like this, and users don't care as much about 16.7 ms on a terminal when the thing is building their app 10x faster, so the trade-off is obvious.

mountain_peak 1 day ago||
Interestingly (or possibly not), since my very first computers had ~4K of RAM, I became adept at optimizations of all kinds, which came in handy for my first job - coding 360 mainframe assembly. There, we wouldn't be able to implement our changes if our terminal applications (accessing DB2/IMS) responded in anything greater than 1s. Then, the entire system was replaced with a cloud solution where ~30s of delay was acceptable.

I think the Internet made 'waiting' for a response completely normalized for many applications. Before then, users flew through screens using muscle memory. Now, when I see how much mouse clicking goes on at service counters, I always think back to those ultra-fast response time standards. I still see a few AS/400 or mainframe terminal windows running 'in the wild' and wonder what new employees think about those systems.

projektfu 1 day ago||
It's getting ridiculous. I know SPAs aren't to blame specifically, but it feels like whenever a 2003 page-based web interface is replaced with a modern SPA, each action takes forever to load or process. I was just noticing this on FedEx's site today.
tasty_freeze 1 day ago||
This sounds like how curses did things, a 1980 technology.

On the other hand, if the guy in the video ran his app over a remote connection with limited bandwidth, diffing would probably perform better. I have a one-Gbps Google Fiber connection to my job, but at times my VPN bandwidth can choke down to a couple hundred kbps, and sometimes worse.
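That diffing is exactly what curses gives you: it keeps a virtual screen and, on refresh(), transmits only the cells that changed since the last update, which is what you want on a thin pipe. A minimal Python sketch:

    import curses, time

    def main(stdscr):
        curses.curs_set(0)  # hide the cursor
        stdscr.addstr(2, 0, "static text: sent to the terminal only once")
        for i in range(100):
            stdscr.addstr(0, 0, f"count: {i:3d}")
            # refresh() diffs the virtual screen against what's already
            # displayed and sends only the changed cells down the wire
            stdscr.refresh()
            time.sleep(0.05)

    curses.wrapper(main)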

nickm12 1 day ago||
I'm not sure what the high-level point of the article is, but I agree with the observation that we (programmers) should generally prefer having AI agents write correct, efficient programs to do what we want, rather than having the agents do that work themselves.

Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather have an AI model sort 5000 numbers "in its head" or write a program to do the sort and execute that program?
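For scale, the "write a program" path is both exact and essentially free -- a sketch of the point (not any particular agent's tool call):

    import random, time

    nums = [random.random() for _ in range(5000)]
    start = time.perf_counter()
    result = sorted(nums)
    elapsed = time.perf_counter() - start

    # deterministic and verifiable, in well under a millisecond
    assert all(a <= b for a, b in zip(result, result[1:]))
    print(f"sorted {len(nums)} numbers in {elapsed * 1000:.3f} ms")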

I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are a GenAI adopter.

xnorswap 1 day ago||
An interesting article and it was refreshing to read something that had absolutely no hallmarks of LLM retouching or writing.

It contains a helpful insight that there are multiple modes in which to approach LLMs, and that helps explain the massive disparity of outcomes using them.

Off topic: This article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?

delichon 1 day ago||
> LLMs are still intensely computationally expensive. You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting ... But the computer you have in front of you can perform this calculation a billion times per second.

This is a flip side of the bitter lesson. If all attention goes into the general AI algorithm, and none goes into the specific problem in front of you, the efficiency is abysmal and Wirth gets his revenge. At any scale larger than epsilon, whenever possible, LLMs are better leveraged to generate not the answer but the code that generates it. The bitter lesson remains valid, but at a layer of remove.

WillAdams 1 day ago||
Surprised no one has mentioned Oberon in this discussion:

https://oberon.org/en

Really wish that there were drivers available to make that run nicely/natively on a Raspberry Pi rather than in an emulator:

http://pascal.hansotten.com/niklaus-wirth/project-oberon/obe...

lproven 14 hours ago|
> Really wish that there were drivers available to make that run nicely/natively on a Raspberry Pi rather than in an emulator:

So very much this.

The Pi range has redefined public expectations of what a small cheap computer is and can do. But it runs a cut-down version of a huge, complex OS.

We should give it its own small OS, one simple enough for a single ordinary person to read and understand.

The core of Project Oberon is about 4000 lines of code, and that's a compiler, an editor, a windowing TUI OS to run it in, and enough OS to save source, create and store binaries, and run them.

The successor to Oberon is A2 with the Bluebottle GUI. This is SMP-aware, has a TCP/IP network stack and a simple HTTP web browser, can do email and so on.

Its kernel is about 8000 lines of code.

I believe the Ultibo bare-metal FreePascal system for the Raspberry Pi:

https://ultibo.org/

... is the work of one developer, an Australian chap. I feel confident that with such knowledge he could do a native A2 port, and it probably wouldn't take very long either if he can reuse Ultibo code and drivers.

I wonder how much money he'd want, and if there would be any way to crowdfund it...

WillAdams 14 hours ago||
Tell me where there is a Patreon or Kofi link for this, and I'm in.
whoisthemachine 1 day ago||
Lots of good thoughts in here.

> You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting, a few milliliters of water and enough power to watch 5% of a TikTok video on a television, it will tell you.

This might be what many of the companies that host and sell time with an LLM want you to do, however. Go ahead, drive that monster truck one mile to pick up fast food! The more that's consumed, the more money goes into the pockets of those companies....

> The instincts are for people to get the AI to do work for them, not to learn from the AI how to do the work themselves.

Improving my own learning is one of the few things I find beneficial about LLMs!

stego-tech 1 day ago||
Just a genuinely excellent essay written to a broader technical audience than simply those software engineers who live in the guts of databases optimizing hyper-specific edge-cases (and no disrespect to you amazingly talented people, but man your essays can be very chewy reads sometimes). I hope the OP’s got some caching ready, because this is going to get shared.
cadamsdotcom 1 day ago||
The actual constraint is how long people are willing to wait for results.

If the results are expected to be really good, people will wait a seriously long time.

That’s why engineers move on to the next feature as soon as the thing is working - people simply don’t care if it could be faster, as long as it’s not too slow.

It doesn’t matter what’s technically possible - in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.

topaz0 1 day ago||
> people simply don't care

I don't think that's right, even for laypeople. It's just that the pain of things that take 5 seconds when they could take 50 ms is subtle and can be discounted or ignored until you are doing a hundred things in a row that take 5 seconds instead of 50 ms. And if you don't know that it should be doable in 50 ms then you don't necessarily know you should be complaining about that pain.

wat10000 1 day ago||
It's also that the people who pay the price for slowness aren't the people who can fix it. Optimizing a common function in popular code might collectively save centuries of time, but unless that converts to making more money for your employer, they probably don't want you to do it. https://www.folklore.org/Saving_Lives.html
deepserket 1 day ago||
> It doesn’t matter what’s technically possible- in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.

Recently I found myself falling for this preconception when an LLM started to spit out text just a couple of seconds after a complex request.

pocksuppet 1 day ago||
https://thedailywtf.com/articles/The-Slow-Down-Loop
emsign 1 day ago|
LLMs are a very cool and powerful tool once you've learned how to use them effectively. But most people probably haven't, and thus use them in a way that produces unsatisfying results while maximizing resource and token use.

The cause of that is that the companies with the big models are actually in the token-selling business, marketing their models as all-around problem solvers and life improvers.
