Posted by quesomaster9000 12/29/2025

Show HN: Z80-μLM, a 'Conversational AI' That Fits in 40KB (github.com)
How small can a language model be while still doing something useful? I wanted to find out, and had some spare time over the holidays.

Z80-μLM is a character-level language model with 2-bit quantized weights ({-2,-1,0,+1}) that runs on a Z80 with 64KB RAM. The entire thing (inference, weights, chat UI) fits in a 40KB .COM file that you can run in a CP/M emulator and hopefully even on real hardware!
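
For a sense of scale, here's a minimal sketch of how 2-bit weights keep the footprint that small: four weights per byte, so roughly 30KB of weight data holds on the order of 120K parameters. The packing scheme below is just an illustration (the actual on-disk format may differ).

    # Illustrative 2-bit weight packing (the real format may differ):
    # codes 0..3 map onto the grid {-2, -1, 0, +1}, packed four per byte.
    GRID = [-2, -1, 0, 1]

    def pack_weights(weights):
        """Pack weights (each a value from GRID) into bytes, 4 per byte."""
        out = bytearray()
        for i in range(0, len(weights), 4):
            b = 0
            for j, w in enumerate(weights[i:i + 4]):
                b |= GRID.index(w) << (2 * j)      # 2 bits per weight
            out.append(b)
        return bytes(out)

    def unpack_weights(data, count):
        """Inverse of pack_weights: recover `count` weights from packed bytes."""
        return [GRID[(data[i // 4] >> (2 * (i % 4))) & 0b11] for i in range(count)]

    ws = [-2, -1, 0, 1, 1, 0, -1, -2]
    assert unpack_weights(pack_weights(ws), len(ws)) == ws   # round-trips cleanly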

It won't write your emails, but it can be trained to play a stripped-down version of 20 Questions, and is sometimes able to maintain the illusion of having simple but terse conversations with a distinct personality.

--

The extreme constraints nerd-sniped me and forced interesting trade-offs: trigram hashing (typo-tolerant, but loses word order), 16-bit integer math, and careful massaging of the training data so I could keep the examples 'interesting'.
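
Roughly, the trigram-hashing idea looks like this (a minimal sketch; the hash and table size here are made up for illustration and may differ from the actual code):

    # Typo-tolerant trigram hashing, illustrative sketch:
    # every character trigram is hashed into a fixed-size bag-of-features vector,
    # so "animal" and "animl" still share most features, but word order is lost.
    N_FEATURES = 256  # table size chosen arbitrarily for the sketch

    def trigram_features(text):
        feats = [0] * N_FEATURES
        s = "  " + text.lower() + "  "       # pad so short inputs still form trigrams
        for i in range(len(s) - 2):
            h = 0
            for ch in s[i:i + 3]:            # tiny multiplicative hash
                h = (h * 31 + ord(ch)) & 0xFFFF
            feats[h % N_FEATURES] = 1        # presence flag, not a count
        return feats

    a = trigram_features("is it an animal?")
    b = trigram_features("is it an animl?")  # typo still overlaps heavily
    print(sum(x & y for x, y in zip(a, b)), "shared features")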

The key was quantization-aware training that accurately models the inference code's limitations. The training loop runs both float and integer-quantized forward passes in parallel, scoring the model on how well its knowledge survives quantization. The weights are progressively pushed toward the 2-bit grid using straight-through estimators, with overflow penalties matching the Z80's 16-bit accumulator limits. By the end of training, the model has already adapted to its constraints, so there's no post-hoc quantization collapse.
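
In PyTorch pseudocode, the core of that loop looks roughly like this (a hedged sketch; the grid handling, loss weighting, and penalty coefficient are illustrative, not the actual training code):

    # Sketch of quantization-aware training with a straight-through estimator
    # and a 16-bit overflow penalty (illustrative, not the repo's code).
    import torch

    GRID = torch.tensor([-2.0, -1.0, 0.0, 1.0])
    ACC_LIMIT = 32767.0                       # signed 16-bit accumulator ceiling

    def quantize_ste(w):
        """Snap weights to the 2-bit grid; gradients pass straight through."""
        idx = torch.argmin((w.unsqueeze(-1) - GRID).abs(), dim=-1)
        return w + (GRID[idx] - w).detach()   # forward: quantized, backward: identity

    def training_loss(w, x, y, loss_fn):
        logits_float = x @ w                  # float forward pass
        logits_quant = x @ quantize_ste(w)    # integer-quantized forward pass
        loss = loss_fn(logits_quant, y) + 0.1 * loss_fn(logits_float, y)
        # Penalize pre-activation sums the Z80's 16-bit accumulator couldn't hold.
        overflow = torch.relu(logits_quant.abs() - ACC_LIMIT).mean()
        return loss + 1e-3 * overflow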

Eventually I ended up spending a few dollars on the Claude API to generate 20 Questions data (see examples/guess/GUESS.COM). I hope Anthropic won't send me a C&D for distilling their model against the ToS ;P

But anyway, happy code-golf season everybody :)

514 points | 122 comments
nineteen999 12/29/2025|
This couldn't be more perfectly timed... I have an Unreal Engine game with both VT100 terminals (for running coding agents) and Z80 emulators, and a serial bridge that allows coding agents to program the CP/M machines:

https://i.imgur.com/6TRe1NE.png

Thank you for posting! It's unbelievable how sometimes someone just drops something that fits right into what you're doing, however bizarre it seems.

quesomaster9000 12/29/2025||
Oh dear, it seems we've... somehow been psychically linked...

I developed a browser-based CP/M emulator & IDE: https://lockboot.github.io/desktop/

I was going to post that, but wanted a 'cool demo' instead, and fell down the rabbit hole.

stevekemp 12/29/2025|||
That is beautiful.

I wrote a console-based emulator, and a simple CP/M text-adventure game somewhat recently

https://github.com/skx/cpmulator/

At some point I should rework my examples/samples to become a decent test-suite for CP/M emulators. There are so many subtle differences out there.

It seems I could even upload a zipfile of my game, but the escape-codes for clearing the screen don't work, sadly:

https://github.com/skx/lighthouse-of-doom

jaak 12/29/2025||||
I've been playing the Z80-μLM demos in your CP/M emulator. Works great! However, I have yet to guess a correct answer in GUESS.COM! I'm not sure if I'm just not asking the right questions or I'm just really bad at it!
quesomaster9000 12/29/2025||
Don't tell anybody, but you sit on it
sailfast 12/29/2025||
Boris!!!
nineteen999 12/30/2025|||
Haha, I love it. Just imagine if, instead of DOS-based Windows, a CP/M-based alternative had evolved and taken over the PC industry. Nice one!
sixtyj 12/29/2025|||
Connections: Alternative History of Technology by James Burke documents these "coincidences".
TeMPOraL 12/29/2025||
Those "coincidences" in Connections are really no coincidence at all, but path dependence. Breakthrough advance A is impossible or useless without prerequisites B and C and economic conditions D, but once B and C and D are in place, A becomes obvious next step.
embedding-shape 12/29/2025|||
Some of those really are coincidences, like "Person A couldn't find their left shoe and ended up in London at a coffee house, where Person B accidentally ended up when their carriage hit a wall, which led to them eventually coming up with Invention C", for example.

Although from what I remember from the TV show, most of what he investigates/talks about is indeed path dependence in one way or another, although not everything was like that.

sixtyj 12/29/2025|||
That’s why I put the word in quotes :)
simonjgreen 12/29/2025||
Super intrigued but annoyingly I can’t view imgur here
abanana 12/29/2025||
Indeed, part of me wants to not use imgur because we can't access it, but a bigger part of me fully supports imgur's decision to give the middle finger to the UK after our government's censorship overreach.
wizzwizz4 12/29/2025|||
It was a really clever move on Imgur's part. Their blocking the UK has nothing to do with the Online Safety Act: it's a response to potential prosecution under the Data Protection Act, for Imgur's (alleged) unlawful use of children's personal data. By blocking the UK and not clearly stating why, people assume they're taking a principled stand about a different issue entirely, so what should be a scandal is transmuted into positive press.
homebrewer 12/29/2025|||
It blocks many more countries than just the UK because it's the lowest effort way of fighting "AI" scrapers.

imgur was created as a sort of protest against how terrible most image hosting platforms were back then, went down the drain several years later, and is now just like they were.

supern0va 12/29/2025||
It turns out that running free common internet infrastructure at scale is both hard and expensive, unfortunately. What we really need is a non-profit to run something like imgur.
rahen 12/29/2025||
I love it, instant GitHub star. I wrote an MLP in Fortran IV for a punched-card machine from the sixties (https://github.com/dbrll/Xortran), so this really speaks to me.

The interaction is surprisingly good despite the lack of an attention mechanism and the limitation of the "context" to trigrams from the last sentence.

This could have worked on 60s-era hardware and would have completely changed the world (and science fiction) back then. Great job.

noosphr 12/29/2025|
Stuff like this is fascinating. Truly the road not taken.

Tin foil hat on: I think a huge part of the massive buyout of RAM by AI companies is to keep people from realising that we are essentially at the home-computer-revolution stage of LLMs. I have a 1TB RAM machine which, with custom agents, outperforms all the proprietary models. It's private, secure, and won't let me be monetized.

Zacharias030 12/29/2025||
How so? Sounds like you are running Kimi K2 / GLM? What agents do you give it, and how do you handle web search and computer use well?
Dwedit 12/29/2025||
In before AI companies buy up all the Z80s and raise the prices to new heights.
nubinetwork 12/29/2025|
Too late, they stopped being available last year.
whobre 12/29/2025||
Kind of. There’s still the eZ80.
giancarlostoro 12/29/2025||
This is something I've been wondering about myself. What's the "Minimally Viable LLM" that can have simple conversations? Then my next question is, how much can we push it so it can learn by looking up data externally? Can we build a tiny model with an insanely large context window? I have to assume I'm not the only one who has asked or thought of these things.

Ultimately, if you can build an ultra tiny model that can talk and learn on the fly, you've just fully localized a personal assistant like Siri.

andy12_ 12/29/2025||
This is extremely similar to Karpathy's idea of a "cognitive core" [1]: an extremely small model with near-zero encyclopedic knowledge but basic reasoning and tool-use capabilities.

[1] https://x.com/karpathy/status/1938626382248149433

fho 12/29/2025|||
You might be interested in RWKV: https://www.rwkv.com/

Not exactly "minimal viable", but a "what if RNNs where good for LLMs" case study.

-> insanely fast on CPUs

giancarlostoro 12/30/2025||
My personal idea revolves around "can I run it on a basic smartphone, with whatever the memory 'floor' is for basic smartphones under, let's say, $300?" (Let's pretend RAM prices are normal.)

Edit: The fact this runs on a smartphone means it is highly relevant. My only question is, how do we give such a model an "unlimited" context window, so it can digest as much as it needs? I know some models know multiple languages; I wouldn't be surprised if sticking to only English would reduce the model size / need for more hardware and make it even smaller / tighter.

qingcharles 12/29/2025|||
I think what's amazing to speculate about is how we could have had some very basic LLMs in at least the 90s if we'd invented the tech earlier. I wonder what the world would be like now if we had?
Dylan16807 12/29/2025||
For your first question, the LLM someone built in Minecraft can handle simple conversations with 5 million weights, mostly 8 bits.

I doubt it would be able to make good use of a large context window, though.

andrepd 12/29/2025||
We should show this every time a Slack/Teams/Jira engineer tries to explain to us why a text chat needs 1.5GB of RAM to start up.
dangus 12/29/2025|
> It won't write your emails, but it can be trained to play a stripped down version of 20 Questions, and is sometimes able to maintain the illusion of having simple but terse conversations with a distinct personality.

You can buy a kid’s tiger electronics style toy that plays 20 questions.

It’s not like this LLM is a bastion of glorious efficiency; it’s just stripped down to fit on the hardware.

Slack/Teams handles company-wide video calls and can render anything a web browser can, and they run an entire App Store of apps, all from a cross-platform application.

Including Jira in the conversation doesn’t even make logical sense. It’s not a desktop application that consumes memory. Jira has such a wide scope that the word “Jira” doesn’t even describe a single product.

andrepd 12/29/2025|||
My Pentium 3 in 2005 could do chat and video calls and play chess and send silly emotes. There is no conceivable user-facing reason why in 20 years the same functionality takes 30× as many resources, only developer-facing reasons. But those are not valid reasons for a professional. If a bridge engineer claims he now needs 30× as much concrete to build the same bridge as he did 20 years ago, and the reason is his/her own convenience, that would not fly.
ben_w 12/29/2025|||
> If a bridge engineer claims he now needs 30× as much concrete to build the same bridge as he did 20 years ago, and the reason is his/her own convenience, that would not fly.

By itself, I would agree.

However, in this metaphor, concrete got 15x cheaper in the same timeframe. Not enough to fully compensate for the difference, but enough that a whole generation are now used to much larger edifices.

andrepd 12/29/2025|||
So it means you could save your client 93% of their money in concrete, but you choose to make it 2× more expensive! That only makes my metaphor stronger ahaha.
ben_w 12/29/2025|||
You could save 93% of the money in concrete, at the cost of ???* in the more-expensive-than-ever time of the engineer themselves who now dominates the sticker price.

(At this point the analogy breaks down because who pays for the software being slower is the users' time, not the taxes paid by a government buying a bridge from a civil engineer…)

* I don't actually buy the argument that the last decade or so of layers of "abstraction" save us developers any time at all; rather, I think they're now several layers deep of nested inner platforms that each make things more complicated. But that's an entirely separate thread, and blog post: https://benwheatley.github.io/blog/2024/04/07-21.31.19.html

beagle3 12/29/2025|||
But also, there is more traffic on the bridge.

The word processors of 30 years ago often had limits like "50k chapters" and required "master documents" for anything larger. Lotus 1-2-3 had far fewer columns and rows than modern Excel.

Not an excuse, of course, but the older tools are not usable anymore if you have modern expectations.

kiicia 12/30/2025|||
But it only shows how wasteful your new bridge is. Concrete being cheaper does not mean you somehow need to use more of it.
dangus 12/29/2025|||
I have great doubts that you were doing simultaneous screen sharing from multiple participants with group annotation plus HD video in your group calls, all while supporting chatting that allowed you to upload and view multiple animated gifs, videos, rich formatted text, reactions, slash command and application automation integrations, all simultaneously on your Pentium 3.

I would be interested to know the name of the program that did all that within the same app during that time period.

For some reason Slack gets criticism for being “bloated” when it basically does anything you could possibly imagine and is essentially a business communication application platform. Nobody can actually name a specific application that does everything Slack does with better efficiency.

andrepd 12/29/2025||
You're grasping at anything to justify the unjustifiable. Not only did I do most (not all, obviously) of those things on my Pentium 3, including video and voice chat, screen share, and silly animated gifs and rich text formatting, but also: that's beside the point. Let's compare like with like, then: how much memory does it take to have a group chat with a few people and do a voice/video call in MSN Messenger or the original Skype, and how much does Slack or Teams take? What about UI stutter? Load time? There's absolutely no justification for a worse user experience on a 2025 computer that would have been a borderline supercomputer in 2005.
dangus 1/1/2026|||
You bring up apps like Skype doing similar work in 2005, but Skype was barely out of its 2003 public alpha by then. Version 2.0 beta came out in 2005 and was the first version to support video, and only supported video calling between two people.

And you bring up things that are supposedly bad about Slack that are basically non-existent boogeymen: UI stutter, load time, excessive memory use. I can’t think of any time any of these things have existed at all or noticeably impacted my experience with Slack on a basic low-end laptop.

Those older apps like MSN Messenger and the original Skype didn’t actually do the things that Slack does now. I mean specifically multiple simultaneous screen shares plus annotations plus HD video feeds (with important features like blurred and replaced backgrounds, added by Skype in 2019) for all participants plus running an entire productivity app in the background at the same time.

Skype didn’t have screen sharing, at all, until 2009.

https://content.dsp.co.uk/history-of-skype

You call this situation “unjustifiable” but we would struggle to find any personal computing device sold at any price point that can’t handle the application smoothly. If I go back five years and buy a $200 mini PC or a $300 iPad or $500 laptop it’s going to run Slack just fine.

Specs are just arbitrary numbers on a box. It doesn’t matter that we got to the moon using a turd and a ham sandwich for a computer.

You can’t accept that the layperson doesn’t care that back in my day we walked uphill both ways for 15 miles on our dial-up connection. If it works, it works.

ben_w 12/29/2025||||
> Slack/Teams handles company-wide video calls and can render anything a web browser can, and they run an entire App Store of apps, all from a cross-platform application.

The 4th Gen iPod touch had 256 meg of RAM and also did those things, with video calling via FaceTime (and probably others, but I don't care). Well, except "cross platform", what with it being the platform.

dangus 12/29/2025||
Group FaceTime calls didn’t exist at the time. That wasn’t added until 2018 and required iOS 12.

Remember that Slack does simultaneous screen sharing from multiple participants, plus annotations, plus HD video feeds from all participants, while the entirety of the rest of the app continues to function as if you weren’t on a call at all.

It’s an extremely powerful application when you really step back and think about it. It just looks like “text” and boring business software.

ben_w 12/29/2025|||
> Group FaceTime calls didn’t exist at the time. That wasn’t added until 2018 and required iOS 12.

And CU-SeeMe did that in the early 90s with even worse hardware: https://en.wikipedia.org/wiki/File:CU-Schools.GIF

Even more broadly, group calls were sufficiently widely implemented to get themselves standardised 29 years ago: https://en.wikipedia.org/wiki/H.323

> It’s an extremely powerful application when you really step back and think about it. It just looks like “text” and boring business software.

The *entire operating system of the phone* is more powerful, and ran on less.

dangus 12/30/2025||
Why don’t you just go ahead and tell me what specs you think Slack should run on and link me to an example program that has 100% feature parity that stays within those specs?

Showing me a black and white <10FPS group video call with no other accompanying software running simultaneously in the 90s is pointless.

Showing me that someone thought of a protocol is pointless. Just look at the history of HDTV. We wouldn’t really describe HDTV as being available to consumers despite it existing in the early 1990s.

I’d also like you to show me a laptop SKU sold in the last 10 years that is incapable of running Slack. If Slack is so inefficient you should be able to find me a computer that struggles with it.

Finally, I’ll remind you that Slack for mobile is a different application that isn’t running in the same way as the desktop app and uses fewer resources. The latest version of it will run on very old phone hardware, going all the way back to the iPhone 8 (2GB RAM), and that’s assuming you even need the latest version for it to function.

ben_w 1/3/2026||
> Why don’t you just go ahead and tell me what specs you think Slack should run on

1 Ghz processor, 512 MB RAM (might even manage 256 MB), 1080p monitor. And "a graphics accelerator", "a sound card", and "a webcam and microphone".

Probably even less on the RAM and CPU.

> and link me to an example program that has 100% feature parity that stays within those specs?

Windows 2000. Or XP.

That's the point. The OS supports all the apps needed to do whatever.

Making Slack into a monolithic blob that does it all is just an example of the inner platform effect.

But if you insist: IE 7 would have been able to do all this. It's an app. It's also an example of the inner platform effect.

> Showing me a black and white <10FPS group video call with no other accompanying software running simultaneously in the 90s is pointless.

You should've thought of that before trying to "well akshually" me about which versions of FaceTime support multi-user video calling.

You want video calling? We had that 30 years ago on systems with total RAM smaller than current CPU cache, with internal busses whose bandwidth was less than your mobile's 5G signal, on screens smaller than the icon that has to be submitted to the App Store, with cameras roughly comparable to what we now use for optical mice, running over networks that were MacGyvered onto physical circuits intended for a single analogue voice signal.

Out of everything you list that Slack can do, the only thing that should even be remotely taxing is the HD video calling. Nothing else, at all. And the only reasons for even that to be taxing is correctly offloading work to the GPU and that you want HD. The GPU should handle this kind of thing trivially so long as you know how to use it.

All the "business logic" you mention in the other thread… if you can't handle the non-video business logic needed to be a server hosting 2000 simultaneous users on something with specs similar to a Raspberry Pi, you're not trying hard enough. I've done that. Business logic is the easy part for anything you can describe as "chat". Even if you add some minigames in there and the server is keeping track of the games, it should be a rounding error on a modern system.

fc417fc802 12/30/2025||||
If these applications only hogged memory when under stress (outgoing screencap plus video, multiple streams incoming, display to 3+ monitors) you might have a point. But that's not the case so you don't.

Meanwhile I can play back multiple 1080p videos on different monitors, run a high-speed curl download, saturate my gigabit LAN with a bulk transfer, and run a btrfs scrub in the background, all most likely without breaking 2 GB of RAM usage. MPV, VLC, and ffmpeg are all remarkably lightweight.

The only daily application I run that consumes a noticeable quantity of resources is my web browser.

dangus 12/30/2025||
If you didn’t babysit your task manager, would you know which program used more RAM or not?

This argument is just so endless and tiring.

Saturating my bandwidth or running a btrfs scrub isn’t accomplishing the business logic I need to do my job, that’s what my web browser is doing.

fc417fc802 12/30/2025||
So is it the "business logic" or is it the multiple HD streams that are supposed to account for the resource consumption? You've changed your story. But do please explain how the "business logic" to handle the chat box, UI, and whatever else is supposed to justify the status quo.

People making excuses for poorly designed software is what's tiring.

numpad0 12/31/2025|||
The problem with that kind of feature/benefit-based thinking is that it doesn't correlate well with code or computational footprints. That's like justifying the price of a car by its seatback materials. That's not where the costs are.

Modern chat apps like Slack, Discord, Teams, etc. are extremely resource-intensive solely because they are skinned Chrome rendering overbloated HTML. That's it. Most of the "actual" engineering is outsourced and externalized to Google, NVIDIA/Intel/AMD, Microsoft/Apple, etc.

messe 12/29/2025|||
> can render anything a web browser can

That's a bug, not a feature, and it's strongly coupled to the root cause of Slack's bloat.

dangus 12/29/2025||
One person’s “bloat” is another person’s “critical business feature.”

The app ecosystem of Slack is largely responsible for its success. You can extend it to do almost anything you want.

spopejoy 12/30/2025||
> app ecosystem of Slack is largely responsible for its success.

Is that true? Slack was one of the first private chats that was not painful to use, circa 2015. I personally hate the integrations and wish they'd just fix the bugs in their core product.

vedmakk 12/29/2025||
Suppose one were to train an actual secret (e.g. a passphrase) into such a model, one that a user would need to guess by asking the right questions. Could this secret be easily reverse engineered / inferred by having access to the model's weights, or would it be safe to assume that one could only get to the secret by asking the right questions?
Kiboneu 12/29/2025||
I don’t know, but your question reminds me of this paper which seems to address it on a lower level: https://arxiv.org/abs/2204.06974

“Planting Undetectable Backdoors in Machine Learning Models”

“ … On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. …”

ronsor 12/29/2025||
> this secret be easily reverse engineered / inferred by having access to the model's weights

It could with a network this small. More generally this falls under "interpretability."

bitwize 12/29/2025||
Don't be surprised if you're paid a visit by the SCP Foundation: https://scp-wiki.wikidot.com/scp-079

(edit: change url)

roygbiv2 12/29/2025||
Awesome. I've just designed and built my own Z80 computer, though right now it has 32KB of ROM and 32KB of RAM. This will definitely change on the next revision, so I'll be sure to try it out.
wewewedxfgdf 12/29/2025|
RAM is very expensive right now.
wickedsight 12/29/2025|||
I just removed 128 megs of RAM from an old computer and am considering listing it on eBay to pay off my mortgage.
nrhrjrjrjtntbt 12/29/2025||
I wonder in what past year 128MB of RAM would have paid off a mortgage. Maybe 1985.
tgv 12/29/2025|||
We're talking kilobytes, not gigabytes. And it isn't DDR5 either.
boomlinde 12/29/2025|||
Yeah, even an average household can afford 40k of slow DRAM if they cut down on luxuries like food and housing.
wewewedxfgdf 12/29/2025|||
Maybe the rich can but not all retro computer enthusiasts are rich.
charcircuit 12/29/2025||||
If you can afford to spend a few dollars without sacrificing housing or food, you are being financially irresponsible.
ant6n 12/29/2025|||
Better cut down on the avocado toast!
nrhrjrjrjtntbt 12/29/2025||
Then I can afford eggs, RAM and a studio apartment!
lacoolj 12/29/2025||
Maybe in Ohio
fuzzfactor 12/29/2025||
No apartment then, maybe just green eggs and RAM.
StilesCrisis 12/29/2025|||
thats-the-joke.gif
gcanyon 12/29/2025||
So it seems like, with the right code (and maybe a ton of future infrastructure for training?), ELIZA could have been much more capable back in the day.
antonvs 12/29/2025|
The original ELIZA ran on an IBM 7094 mainframe, in the 1960s. That machine had 32K x 36-bit words, and no support for byte operations. It did support 6-bit BCD characters, packed 6 per word, but those were for string operations, and didn't support arithmetic or logical operations.

This means that a directly translated 40 KB Z80 executable might be a tight squeeze on that mainframe, because 40K > 32K, counting words, not bytes. Of course if most of that size is just 2-bit weight data then it might not be so bad.
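
Back-of-envelope (with the illustrative assumption that roughly 30 of the 40 KB is packed 2-bit weight data):

    # Rough capacity arithmetic for the 7094 comparison (assumptions are mine).
    words        = 32 * 1024            # IBM 7094: 32K 36-bit words
    raw_bytes    = words * 36 // 8      # ~147,456 bytes of raw storage
    weight_bytes = 30 * 1024            # assume ~30KB of the .COM is 2-bit weights
    n_weights    = weight_bytes * 4     # 4 weights per byte on the Z80
    words_needed = -(-n_weights // 18)  # 18 two-bit weights fit per 36-bit word
    print(raw_bytes, n_weights, words_needed)  # ~147K bytes, ~123K weights, ~6.8K words

So the weight table alone would fit comfortably; it's the byte-oriented code and character handling that would be the awkward part.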

ELIZA running on later hardware would have been a different story, with the Z80 - released in 1976 - being an example.

gwern 12/29/2025|
So if it's not using attention and it processes the entire input into an embedding in one go, I guess this is neither a Transformer nor an RNN but just an MLP?
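
If that guess is right, the whole forward pass would be something like this (a hedged sketch of the guessed shape, with made-up sizes, not the actual code):

    # Guessed architecture, illustrative only:
    # bag-of-trigram features -> one hidden layer -> next-character logits.
    import numpy as np

    N_FEATURES, HIDDEN, N_CHARS = 256, 64, 96                   # sizes are made up
    rng = np.random.default_rng(0)
    W1 = rng.choice([-2, -1, 0, 1], size=(N_FEATURES, HIDDEN))  # 2-bit weights
    W2 = rng.choice([-2, -1, 0, 1], size=(HIDDEN, N_CHARS))

    def next_char_logits(features):
        """features: length-N_FEATURES 0/1 vector from trigram hashing."""
        h = np.maximum(np.asarray(features) @ W1, 0)   # ReLU hidden layer
        return h @ W2                                  # logits over the charset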