Top
Best
New

Posted by cypres 3 hours ago

A CPU that runs entirely on GPU (github.com)
61 points | 18 comments
bmc7505 1 hour ago|
As foretold six years ago. [1]

[1]: https://breandan.net/2020/06/30/graph-computation#roadmap

deep1283 1 hour ago||
This is a fun idea. What surprised me is the inversion where MUL ends up faster than ADD because the neural LUT removes sequential dependency while the adder still needs prefix stages.
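The inversion is easy to see in a toy sketch (my own illustration, not the project's code): a multiply realized as a lookup table is a single parallel gather, while a carry-propagate adder has an inherently sequential carry chain.

```python
# Toy sketch of why a LUT multiply can beat a carry-propagate add.
# Not the project's code; just the dependency structure in miniature.

def add_ripple(a: int, b: int, bits: int = 8) -> int:
    """Bit-serial addition: each carry depends on the previous one."""
    carry, result = 0, 0
    for i in range(bits):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry
        carry = (x & y) | (carry & (x ^ y))  # sequential dependency
        result |= s << i
    return result

# Precomputed multiplication table: one constant-time lookup at query
# time, no carry propagation (that cost was paid up front).
MUL_LUT = {(a, b): a * b for a in range(16) for b in range(16)}

def mul_lut(a: int, b: int) -> int:
    return MUL_LUT[(a, b)]
```

A real prefix adder cuts the chain to O(log n) stages, but it's still stages; the LUT has none.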
lorenzohess 2 hours ago||
Out of curiosity, how much slower is this than an actual CPU?
bastawhiz 2 hours ago|
Based on addition and subtraction, roughly 625,000x slower than a 2.5 GHz CPU
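As a sanity check on that figure (my own back-of-the-envelope arithmetic, assuming roughly one add retired per cycle):

```python
# Back-of-the-envelope: a 2.5 GHz CPU retiring ~1 add per cycle,
# slowed down 625,000x, lands at about 4,000 instructions per second.
cpu_hz = 2.5e9        # 2.5 GHz
slowdown = 625_000
effective_ips = cpu_hz / slowdown
print(effective_ips)  # 4000.0
```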
medi8r 8 minutes ago||
So it could run Doom?
sudo_cowsay 2 hours ago||
"Multiplication is 12x faster than addition..."

Wow. That's cool but what happens to the regular CPU?

adrian_b 1 hour ago|
This CPU simulator does not attempt to achieve the maximum speed that could be obtained when simulating a CPU on a GPU.

For that, a completely different approach would be needed, e.g. implementing something akin to qemu, where each CPU instruction would be translated into a shader program. On many older GPUs it is impossible or difficult to launch a GPU program from inside another GPU program (instead of from the CPU), but where this is possible one could obtain a CPU emulation many orders of magnitude faster than what is demonstrated here.
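The qemu-style idea, in a rough pure-Python illustration (no real GPU involved; the point is translate-once, execute-many instead of re-interpreting every step):

```python
# Rough sketch of dynamic binary translation: compile each guest
# instruction into a small host routine once, then run the cached
# routines repeatedly without re-decoding.

def translate(instr):
    op, dst, src = instr
    if op == "MOV":
        def run(regs): regs[dst] = regs[src]
    elif op == "ADD":
        def run(regs): regs[dst] = regs[dst] + regs[src]
    else:
        raise ValueError(f"unhandled op: {op}")
    return run

program = [("MOV", "r0", "r1"), ("ADD", "r0", "r2")]
compiled = [translate(i) for i in program]  # translate once

regs = {"r0": 0, "r1": 5, "r2": 7}
for fn in compiled:  # execute without re-decoding
    fn(regs)
print(regs["r0"])  # 12
```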

Instead of going for speed, the project demonstrates a simpler self-contained implementation based on the same kind of neural networks used for ML/AI, which might work even on an NPU, not only on a GPU.

Because it uses inappropriate hardware execution units, the speed is modest and the speed ratios between different kinds of instructions are weird, but it is nonetheless an impressive achievement to simulate the complete AArch64 ISA by such means.

5o1ecist 7 minutes ago||
> where each CPU instruction would be translated into a graphic shader program

You really think having a shader per CPU instruction is going to get you closer to the highest possible speed one can achieve?

nicman23 1 hour ago||
can i run linux on a nvidia card though?
micw 1 hour ago|
Linux runs everywhere
mrlonglong 58 minutes ago||
Now I've seen it all. Time to die... (meant humorously)
Surac 1 hour ago||
Well, GPUs are just special-purpose CPUs.
RagnarD 2 hours ago|
Being able to perform precise math in an LLM is important, glad to see this.
jdjdndnzn 2 hours ago||
Just want to point out this comment is highly ironic.

This is all a computer does :P

We need LLMs to be able to tap that, not add the same functionality a layer above and MUCH less efficiently.

Nuzzerino 2 hours ago||
> We need LLMs to be able to tap that, not add the same functionality a layer above and MUCH less efficiently.

Agents, tool-integrated reasoning, even chain of thought (limited, for some math) can address this.
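The tool-call route, as a minimal sketch (all names here are made up for illustration): the model emits a structured request and a plain calculator does the exact arithmetic, rather than the weights approximating it.

```python
# Hypothetical tool-dispatch pattern: the LLM emits a structured
# request; exact arithmetic runs in ordinary code, not in the weights.
import operator

TOOLS = {"add": operator.add, "mul": operator.mul}

def dispatch(tool_call: dict) -> int:
    """Pretend this fires when the model emits a tool call."""
    return TOOLS[tool_call["name"]](tool_call["a"], tool_call["b"])

print(dispatch({"name": "mul", "a": 123456789, "b": 987654321}))
```

RagnarD's point below is precisely that this indirection is what he'd like to avoid.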

RagnarD 36 minutes ago||
You're both completely missing the point. It's important that an LLM be able to perform exact arithmetic reliably without a tool call. Of course the underlying hardware does so extremely rapidly, that's not the point.
5o1ecist 6 minutes ago||
Why?