
Posted by albelfio 21 hours ago

Tinybox – A powerful computer for deep learning (tinygrad.org)
550 points | 318 comments
Buttons840 15 hours ago|
Oh, this is geohot's product?

He's an interesting guy. Seems to be one who does things the way he thinks is right, regardless of corporate profits.

zahirbmirza 19 hours ago||
$10 million today... $1k in 10 years. Are OpenAI and Anthropic overvalued?
Gigachad 18 hours ago|
Looking at these prices I’m just thinking that as a user it makes no sense to buy this when you can just use the subsidised stuff from AI companies and then buy it a few years later at a tiny % of the cost.
p0w3n3d 18 hours ago||
Quite an expensive little bastard. I wonder how much sense it makes to invest in such a device when you can get $0.40/mtok from Hyperbolic, for example.
sowbug 15 hours ago|
If you're OK letting them train on, and maybe keep, your data, then it's hard to beat cloud prices vs. local.
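The trade-off above can be put in rough numbers. A minimal break-even sketch, assuming a hypothetical $15,000 box price (the thread doesn't state the tinybox's price) against the $0.40 per million tokens API rate mentioned, ignoring electricity and depreciation:

```python
# Break-even sketch: local box vs cloud API at $0.40 per million tokens.
# The $15,000 box price is an illustrative assumption, not from the thread.
box_cost_usd = 15_000
api_price_per_mtok = 0.40

breakeven_mtok = box_cost_usd / api_price_per_mtok  # millions of tokens
print(f"Break-even: {breakeven_mtok:,.0f}M tokens")  # 37,500M = 37.5B tokens

# At a sustained 1,000 tokens/s, time until the box pays for itself:
tokens_per_day = 1_000 * 86_400  # ~86.4M tokens/day
days = breakeven_mtok * 1e6 / tokens_per_day
print(f"~{days:,.0f} days of continuous generation")  # ~434 days
```

Under these assumptions you'd need over a year of round-the-clock generation before local hardware beats the subsidised API price, which is the commenters' point; the calculus changes if privacy or training on your data matters.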
agnishom 13 hours ago||
Who is the intended customer for this product? I am genuinely curious.
moscoe 13 hours ago|
Anyone who wants to run/train/finetune a local llm.

“Not your weights, not your brain.”

heinternets 21 hours ago||
exabox:

720x RDNA5 AT0 XL, 25,920 GB VRAM, 23,040 GB System RAM

~$10 million

Who is the target market here?

LorenDB 21 hours ago||
I can't find sources but I think they are building it for Comma.ai (geohot's other company) so that Comma can scale up their training datacenter.
orochimaaru 21 hours ago|||
And... what about 20k lbs and 1360 cubic feet screams "tiny" :)
smoyer 21 hours ago||
That is very close to a half-length shipping container.
mayukh 21 hours ago|||
A non-trivial share of this market won’t show up in public data. That makes most estimates unreliable by default
spiderfarmer 21 hours ago|||
VC funded startups
dist-epoch 19 hours ago||
A company which doesn't want the big LLM providers to see its prompts or data: military, health, finance, research.
andai 20 hours ago||
Can someone explain the exabox? They say it "functions as a single GPU". Does anything like that currently exist?
wmf 20 hours ago||
An NVL72 rack or Helios rack also "functions as a single GPU".
progbits 20 hours ago||
TPU pods
sudo_cowsay 21 hours ago||
I always wonder about these expensive products: does the company make them once they're ordered, or do they just make them beforehand?
wmf 16 hours ago||
He builds a batch every few months.
cyanydeez 18 hours ago||
In this case, they're taking wire transfers, so they're definitely building them once they get the cash.
DeathArrow 4 hours ago||
I wonder how many he has sold.
operatingthetan 20 hours ago||
Are we at the point where 2x 9070XT's are a viable LLM platform? (I know this has 4, just wondering for myself).
oceanplexian 20 hours ago||
These things either don't have Flash Attention or have a really hacked-together version of it. Is it viable for a hobby? Sure. Is it viable for a serious workload with all the optimizations, CUDA, etc.? Not really.
cyanydeez 18 hours ago||
I'd go with Strix Halo if you're looking at tech that old.

The latest AMD GPUs are the RX 9070 XT w/32GB each

jgarzik 15 hours ago|
Skeptical of their engineering, with replies to questions like this: https://x.com/jgarzik/status/2031312666036146460?s=20
creddit 14 hours ago||
They answered your question with a pretty specific uptime target. Calling it a dodge and then moving the goalposts with a new question as your follow up doesn’t speak to you acting in good faith.
scratchyone 14 hours ago||
tbh they really didn't; tinygrad's reply was clearly a joke. They were not providing a real uptime target.
potamic 13 hours ago||
Can't see replies, what did they say?
Moduke 11 hours ago||
https://xcancel.com/jgarzik/status/2031312666036146460?s=20