You can run NeXTStep in your browser by clicking the link above. A couple of weeks ago you could run FrameMaker as well. I was blown away by what FrameMaker of the late 1980s could do. Today's Microsoft Word can't hold a candle to it!
Edit: Here's how you start FrameMaker:
In Finder go to NextDeveloper > Demos > FrameMaker.app
Then open the demo document and browse its pages. Prepare to be blown away. You could do that in 1989 with like 64 MB of RAM??
In the last 37 years the industry has gone backwards. Microsoft Word has been stagnant due to no competition for the last few decades.
But that's not a lot of space for documents of hundreds of pages, so typical customers who used FrameMaker to write user manuals for their products had to use "book" files to tie together individually edited chapter files. Then, once in a while, you'd push the "Generate" button on the book to get all the page numbers consistent between chapters and all the cross-references updated, and to regenerate the Table of Contents, Index, etc. You're welcome.
But there's a potential degenerate case where Chapter 1 might have a forward reference to Chapter 2 ("see page 209"), but due to some editing in Chapter 2, the referenced material is now on page 210. Well, in some fonts, "209" is wider than "210" (since "1" can be skinny). So, during the Generate operation, the reference becomes "see page 210". But there's some tiny chance that this skinnier text makes the containing paragraph one line shorter, so there's some tinier chance that Chapter 1 takes one less page, so Chapter 2 starts one page earlier, and now the referenced material is back on page 209. So now we're in a loop.
This was such an unlikely edge case that nobody else noticed that it even existed, much less that it was detected. I didn't bother with a fancy error message; it would just give a little one-word popup: "Degenerate". Years later, mild panic ensues when a customer calls in, irate that the software is calling them a degenerate. (And it wasn't even a real example, just some other bug that triggered it.)
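For the curious, here's a toy sketch (mine, not FrameMaker's actual code) of the fixed-point loop Generate is running; every width and page count is invented just to force the 209/210 oscillation:

    # Toy model of the cross-reference "Generate" loop described above.
    # All numbers are made up; the point is the fixed-point iteration
    # and the cycle check that produces the infamous one-word popup.

    def ref_page(ch1_pages: int) -> int:
        # Hypothetical: the referenced material lands on 209 or 210
        # depending on whether Chapter 1 takes 11 or 12 pages.
        return 209 if ch1_pages == 11 else 210

    def ch1_page_count(ref_text: str) -> int:
        # Hypothetical: "209" is wide enough to wrap the containing
        # paragraph onto an extra line, pushing Chapter 1 onto one
        # more page.
        return 12 if ref_text == "see page 209" else 11

    def generate(max_passes: int = 10) -> str:
        seen = set()
        ref_text = "see page 209"      # initial guess
        for _ in range(max_passes):
            new_text = f"see page {ref_page(ch1_page_count(ref_text))}"
            if new_text == ref_text:
                return ref_text        # pagination reached a fixed point
            if new_text in seen:
                return "Degenerate"    # 209 -> 210 -> 209 -> ... cycle
            seen.add(new_text)
            ref_text = new_text
        return "Degenerate"

    print(generate())  # with these toy numbers: "Degenerate"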
Aldus PageMaker was a closer competitor to FrameMaker, but PageMaker's bread and butter was at the lower end of the market.
See this review of MS Word for Windows 1.0. The competitors listed in its benchmarks include Ami Pro and the DOS versions of WordPerfect and MS Word.
Review: https://computerhistory.org/wp-content/uploads/2019/08/Infow...
The article mentions Interleaf as their main competitor.
Nowadays businesses simply don't care. They have already achieved the feudal-ish bastion they dreamed about, and there is no "business value" in spending much time on it, unless of course it is something performance related, like AI or supercomputing.
On the other hand, hardware today is 100x more complicated than in the NeXTStep/Intel i486 days. Greybeards who started in the 70s/80s could gradually adapt to the complexity, while newcomers simply have to swim or die -- there is no "training", because any training on a toy computer or a toy OS is useless compared to the massive architecture and complexity we face today.
I don't know. I wish the evolution of hardware were slower, but it was going to get to this point anyway. I recently completed the MIT xv6 labs and thought I was good enough to hack on the kernel a bit, so I took another class, on Linux device drivers, and OMG, the complexity is unfathomable -- even the Makefile and Kbuild stuff is way, way beyond my understanding. But hey, if I had started from Linux 0.95, or maybe even Linux 1.0, I'd have had much less trouble drilling into a subsystem and gradually adapting. That's why I think I need to give myself a year or two of training: go back to maybe Linux 0.95, focus on just a simple device driver (e.g. keyboard), and read EVERY step of its evolution. There is no other way for commoners like us.
Look at the fancy page layout that was possible in the late 1980s. Can Word do this today?
The tragedy is that serious large-document authoring systems died with the invention of hypertext and the CD-ROM. Instead of an elegant set of FrameMaker or Interleaf documents for print, you got a CD-ROM with a private website. And then once the web took off, just a website. Something got lost in that transition, beyond the pallet of manuals showing up on your loading dock when you bought a system.
Sadly, because Word won, technical authors still try to produce such content with it, but (not their fault) it's a horribly broken experience for both writer and reader. One example is the 3GPP specs that define how the mobile phone network works: giant 200-page Word docs that take minutes to open and paginate.
It is not only a dump of functions; it also has examples for each one of them. I think the Go one is pretty good: https://go.dev/doc/
I've been trying via Literate Programming:
http://literateprogramming.com/
and applying the concepts of:
Diátaxis (originally developed at https://docs.divio.com/documentation-system/), which divides documentation along two axes:
- Action (Practical) vs. Cognition (Theoretical)
- Acquisition (Studying) vs. Application (Working)
resulting in a matrix of four things
- Tutorials
- How-to Guides
- Explanation (of the code)
- Reference (of the code)
which seems to be working well for my current project: https://github.com/WillAdams/gcodepreview/blob/main/gcodepre...
Unfortunately I think Publisher has fared even worse than Word in terms of stagnation, and now looks to be discontinued?
Note: Adobe bought FrameMaker and continues to sell FrameMaker. But Word has captured the market not because of its technical merit but because of bundling.
You don't. For APIs and such, documentation is published online, and you don't need Word for that. Word is used in some industries where a printed manual is needed.
Or, maybe a legacy example -- how were the printed manuals of Microsoft C 6.0 written? That was in the early 90s I think.
Microsoft has never made a technical publishing package, so it has to be outsourced.
- TeXview.app, which at least inspired the award-winning TeXShop.app
- Altsys Virtuoso which became Macromedia Freehand (having been created as a successor to Freehand 3) --- these days one can use Cenon https://cenon.info/ (but it's nowhere near as nice/featureful)
- WriteNow.app --- while this was also a Mac application, the NeXT implementation was my favourite --- WN was probably the last major commercial app written in Assembly (~100,000 lines)
Still sad my NeXT Cube stopped booting up....
Wikipedia says that the Windows version, released in 1992, was priced at $500, which cannibalized sales on other platforms.
For example, there was a case where Claude Code uses React to figure out what to render in the terminal, and that in itself causes latency; its devs lament that they have "only" 16.7 ms to achieve 60 FPS. On a terminal. Which has been able to do far more than that since its inception. The Primeagen shows an example [0] of how even the most change-heavy terminal applications can run so fast that there is no need to diff anything: just display the new frame!
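For a rough sense of how cheap a full repaint is, here's a throwaway sketch (mine, not from the video; the frame size and loop count are arbitrary) that rewrites an entire 200x60 frame every iteration and counts full redraws per second:

    # How cheap is rewriting the ENTIRE terminal every frame, with no
    # diffing at all? Run it in a real terminal and see.
    import sys, time

    COLS, ROWS, FRAMES = 200, 60, 600   # ~12 KB per frame

    def frame(n: int) -> str:
        # "\x1b[H" homes the cursor; then repaint every cell.
        line = (str(n % 10) * COLS) + "\n"
        return "\x1b[H" + line * ROWS

    start = time.perf_counter()
    for n in range(FRAMES):
        sys.stdout.write(frame(n))
    sys.stdout.flush()
    elapsed = time.perf_counter() - start
    sys.stdout.write(f"\n{FRAMES / elapsed:.0f} full redraws/sec\n")

On a modern machine and terminal emulator this typically lands far above 60, which is the point: 16.7 ms is a long time.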
It's not even that performance is unimportant in absolute terms, but rather that the general state of software is so abysmal that performance is the least of your problems as a user, so you're not going to get excited over it.
Maybe RAM prices will help bring those skills back.
Good:
- no extra memory needed
- can read directly from CD/DVD/HD into memory, not through buffered I/O
- no parsing time

Bad:
- change or add a single field in any part of the saved data, and all previous files are now invalid
I'd solve this with a version number and basically print a message "old version in file, rebuild your data"
Modern apps use JSON/Protobufs, etc. They are way slower to load and take way more memory, since there are usually two copies of the data: the data in the file and the parsed data your app is actually using. They also take time to parse.
But they continue to work even with changes, and they can be used across languages and apps far more easily.
That's just one example of 100s of others where the abstraction is slower and uses more memory but provides flexibility the old ones didn't.
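To make that trade-off concrete, here's a minimal sketch; the field names, layout, and version constant are all invented, and the error message is borrowed from the comment above. In C the "raw" load would be a single fread into a struct; Python's struct module stands in for that here:

    # Old-school: a fixed-layout binary record with a version header,
    # vs. JSON. Field names and the version constant are invented.
    import json, struct

    VERSION = 2
    LAYOUT = "<I32sII"   # version, name (32 bytes), width, height

    def save_raw(path, name, width, height):
        with open(path, "wb") as f:
            f.write(struct.pack(LAYOUT, VERSION, name.encode(), width, height))

    def load_raw(path):
        with open(path, "rb") as f:
            version, name, width, height = struct.unpack(LAYOUT, f.read())
        if version != VERSION:
            raise ValueError("old version in file, rebuild your data")
        return name.rstrip(b"\0").decode(), width, height

    # The JSON equivalent survives added or renamed fields, but pays
    # for parsing and holds two copies of the data (text + objects).
    def save_json(path, name, width, height):
        with open(path, "w") as f:
            json.dump({"name": name, "width": width, "height": height}, f)

    save_raw("doc.bin", "demo", 640, 480)
    print(load_raw("doc.bin"))   # ('demo', 640, 480)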
Followed by actually reading a data structures and algorithms book.
Finally, using compiled languages instead of interpreted ones.
No need to count bytes to fit into 48 KB.
That's because performance is critical to games (where the graphics programmers usually are), and if the game crashes, no big deal as long as it doesn't happen so often as to seriously impact normal gameplay experience. Exploits are to be expected and sometimes kept deliberately if it leads to interesting gameplay, it is a staple of speedruns. Infinite money is fun in a game, but not in serious banking software...
I am all for performance, and I think the current situation is a shame, but there are tradeoffs. We need people who care about both performance and security (maybe embedded software developers who work on critical systems), but expect a 10x increase in costs.
Many application programmers could make things faster, but their boss says: good enough, ship it, move on to a new feature that is worth far more to me.
The people who could make terminal stuff super fast at a low level are retired on an island, dead, or don't have the other specialties required by companies like this. And users don't care as much about 16.7 ms on a terminal when the thing is building their app 10x faster, so the trade-off is obvious.
I think the Internet made 'waiting' for a response completely normalized for many applications. Before then, users flew through screens using muscle memory. Now, when I see how much mouse clicking goes on at service counters, I always think back to those ultra-fast response time standards. I still see a few AS/400 or mainframe terminal windows running 'in the wild' and wonder what new employees think about those systems.
On the other hand, if the guy in the video ran his app over a remote connection with limited bandwidth, diffing would probably perform better. I have a one-Gbps Google Fiber connection to my job, but at times my VPN bandwidth can choke down to a couple hundred kbps, and sometimes worse.
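Back-of-the-envelope for that case (the screen size and the single changed cell are arbitrary): compare the bytes a full redraw sends per frame against a naive "resend only the changed lines" diff:

    # Why diffing wins on a constrained link: bytes for a full redraw
    # vs. resending only the lines that changed.
    COLS, ROWS = 200, 60

    old = [["."] * COLS for _ in range(ROWS)]
    new = [row[:] for row in old]
    new[30][100] = "X"                      # one cell changed

    full_bytes = ROWS * (COLS + 1)          # repaint everything
    diff_bytes = sum(
        len(f"\x1b[{r + 1};1H") + COLS      # move cursor, resend line
        for r in range(ROWS)
        if old[r] != new[r]
    )
    print(f"full redraw: {full_bytes} B, line diff: {diff_bytes} B")

A ~12 KB frame at 60 FPS is roughly 6 Mbps, which won't fit in a couple hundred kbps; a handful of changed lines easily will.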
Not that everything we want an agent to do is easy to express as a program, but we do know what computers are classically good at. If you had to bet on a correct outcome, would you rather an AI model sort 5000 numbers "in its head" or write a program to do the sort and execute that program?
I'd think this is obvious, but I see people professionally inserting AI models in very weird places these days, just to say they are a GenAI adopter.
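A minimal sketch of the "write a program, then run it" pattern; the generated source is hard-coded here to stand in for a model's response:

    # Instead of having the model sort "in its head", execute the
    # (trivial) program it would emit.
    import random, textwrap

    generated_source = textwrap.dedent("""
        def solve(numbers):
            return sorted(numbers)
    """)

    namespace = {}
    exec(generated_source, namespace)    # run the model's program

    numbers = [random.randint(0, 10**6) for _ in range(5000)]
    result = namespace["solve"](numbers)
    assert result == sorted(numbers)     # deterministic, verifiable
    print(result[:5], "...")

The program's output is deterministic and checkable, which is exactly why you'd bet on that side.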
It contains a helpful insight: there are multiple modes in which to approach LLMs, and that helps explain the massive disparity in outcomes when using them.
Off topic: This article is dated "Feb 2nd" but the footer says "2025". I assume that's a legacy generated footer and it's meant to be 2026?
This is a flip side of the bitter lesson. If all attention goes into the AI algorithm, and none goes into the specific one in front of you, the efficiency is abysmal and Wirth gets his revenge. At any scale larger than epsilon, whenever possible LLMs are better leveraged to generate not the answer but the code to generate it. The bitter lesson remains valid, but at a layer of remove.
Really wish that there were drivers available to make that run nicely/natively on a Raspberry Pi rather than in an emulator:
http://pascal.hansotten.com/niklaus-wirth/project-oberon/obe...
So very much this.
The Pi range has redefined public expectations of what a small cheap computer is and can do. But it runs a cut-down version of a huge, complex OS.
We should give it its own small OS, one simple enough for a single ordinary person to read and understand.
The core of Project Oberon is about 4000 lines of code, and that's a compiler, an editor, a windowing TUI OS to run it in, and enough OS to save source, create and store binaries, and run them.
The successor to Oberon is A2 with the Bluebottle GUI. This is SMP-aware, has a TCP/IP network stack and a simple HTTP web browser, can do email and so on.
Its kernel is about 8000 lines of code.
I believe the Ultibo bare-metal FreePascal system for the Raspberry Pi:
... is the work of one developer, an Australian chap. I feel confident that with such knowledge he could do a native A2 port, and it probably wouldn't take very long either if he can reuse Ultibo code and drivers.
I wonder how much money he'd want, and if there would be any way to crowdfund it...
> You can ask an AI what 2 * 3 is and for the low price of several seconds of waiting, a few milliliters of water and enough power to watch 5% of a TikTok video on a television, it will tell you.
This might be what many of the companies that host and sell time with an LLM want you to do, however. Go ahead, drive that monster truck one mile to pick up fast food! The more that's consumed, the more money that goes into the pockets of those companies....
> The instincts are for people to get the AI to do work for them, not to learn from the AI how to do the work themselves.
Improving my own learning is one of the few things I find beneficial with LLMs!
If the results are expected to be really good, people will wait a seriously long time.
That’s why engineers move on to the next feature as soon as the thing is working - people simply don’t care if it could be faster, as long as it’s not too slow.
It doesn't matter what's technically possible; in fact, a computer that works too fast might be viewed as suspicious. Taking a while to give a result is a kind of proof of work.
I don't think that's right, even for laypeople. It's just that the pain of things that take 5 seconds when they could take 50 ms is subtle and can be discounted or ignored until you are doing a hundred things in a row that take 5 seconds instead of 50 ms. And if you don't know that it should be doable in 50 ms then you don't necessarily know you should be complaining about that pain.
In recent times I've found myself falling for this preconception when an LLM starts to spit out text just a couple of seconds after a complex request.
The cause of that is that the companies with the big models are actually in the token-selling business, marketing their models as all-around problem solvers and life improvers.