
Posted by albelfio 18 hours ago

Tinybox – A powerful computer for deep learning(tinygrad.org)
527 points | 296 comments
algolint 4 hours ago|
The most interesting part of Tinybox isn't just the hardware, but the push for a more vertical integration with tinygrad. We've become so accustomed to the CUDA/PyTorch stack that seeing a serious attempt at a different software-hardware synergy is refreshing, even if the hardware specs or price point relative to DIY homelabs raise some eyebrows for power users. It's more about reducing the friction for researchers who want a "just works" environment without the nightmare of driver/toolkit version hell.
ekropotin 17 hours ago||
IDK, I feel it’s quite overpriced, even with the current component prices.

I'm almost sure it's possible to custom-build a machine as powerful as their red v2 within a 9k budget. And have a lot of fun along the way.

lostmsu 17 hours ago|
AMD now has 32 GiB Radeon AI Pro 9700. 4 of these (just under 2k each) would put you at 128 GiB VRAM
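A quick check of that math (card count, VRAM, and price taken from the comment above, treating "just under 2k" as $2,000):

```python
# 4x Radeon AI Pro 9700: total VRAM and rough cost
cards = 4
vram_gib_each = 32     # per card, per the comment
price_usd_each = 2000  # "just under 2k" rounded up
total_vram = cards * vram_gib_each
total_cost = cards * price_usd_each
print(total_vram, total_cost)  # 128 8000
```

which would also leave some headroom inside the 9k budget mentioned upthread.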
ekropotin 17 hours ago||
VRAM is not everything - GPU cores also matter (a lot) for inference
lostmsu 17 hours ago|||
4x Radeon will have significantly more GPU power than say Mac Studio or DGX Spark.
cyanydeez 16 hours ago|||
inference speed is like monitor Hz; sure, you go from 60 to 120Hz and thats noticeable, but unless your model is AGI, at some point you're just generating more code than you'll ever realistically be able to control, audit and rely on.

So, context is probably more $/programming worth than inference speed.

operatingthetan 18 hours ago||
The incremental price increases between products is funny.

$12,000, $65,000, $10,000,000.

znpy 17 hours ago||
I was more worried by the 600 kW power requirement... that's 200 houses at full load (3 kW) in southern Europe... which likely means 400 houses at half load.

the town near my hometown has 650 – 800 houses (according to chatgpt).

crazy.
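The arithmetic above checks out (a minimal sketch, assuming the 600 kW figure from the product page and a 3 kW residential supply contract):

```python
# How many contracted households does a 600 kW machine correspond to?
cluster_kw = 600          # claimed power requirement
house_kw_full = 3         # typical contracted residential supply
houses_full = cluster_kw / house_kw_full
houses_half = cluster_kw / (house_kw_full / 2)
print(houses_full, houses_half)  # 200.0 400.0
```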

nine_k 14 hours ago|||
Or it's two 300kW fast EV chargers working together.

A typical home just consumes rather little energy, now that LED lighting and heat pump cooling / heating became the norm.

delusional 4 hours ago|||
I think the above commenter is reflecting on the total energy use of a 600 kW load running 24/7. I suppose the more interesting observation is the ~14 MWh of daily consumption, enough to charge 100 Rivians every day.
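That daily figure is easy to verify (a back-of-envelope sketch; the ~135 kWh Rivian pack size is my assumption, not something stated in the thread):

```python
# Daily energy of a constant 600 kW load
daily_kwh = 600 * 24               # 14400 kWh = 14.4 MWh per day
rivian_pack_kwh = 135              # approximate large R1T pack (assumption)
charges_per_day = daily_kwh / rivian_pack_kwh
print(daily_kwh / 1000, round(charges_per_day))  # 14.4 107
```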
paganel 4 hours ago||||
> and heat pump cooling / heating became the norm.

We're not all solidly middle-class (especially in Southern and Eastern Europe) and as such we cannot afford those heat pumps. But we'll have to eat the increased energy costs brought by insane server configurations like the ones from the article, so, yeey!!!

znpy 4 hours ago|||
> now that LED lighting and heat pump cooling / heating became the norm.

My brother in Christ, you vastly overestimate southern Europe

nutjob2 6 hours ago||||
> at full load (3kw)

Do you live in a deprived rural village in a very poor country? Because you can't even run a heater and the oven with 3kW.

znpy 4 hours ago||
No it’s quite the norm actually.

Most power contracts give you a 3 kW supply for a residential home. That's the standard.

Bumping to 4.5 or 6 kW has to be requested explicitly and costs extra on the base power supply bill

ericd 14 hours ago||||
That’s surprising, 200 amp 240v service is pretty common in the US.
dist-epoch 16 hours ago|||
Your hometown also has public lighting, water pumps, and probably some other stuff.
sudo_cowsay 18 hours ago||
I mean the difference in performance is quite big too. However, the 10,000,000 is a little bit too much (imo).
DeathArrow 1 hour ago||
I wonder how many he has sold.
mciancia 15 hours ago||
Not sure why they stopped using 6 GPUs in their builds - with 4 GPUs, both the 9070 and the RTX 6000 come in 2-slot designs, so it's easy to build it yourself using a somewhat more expensive, but still fairly regular, motherboard.

With 6 GPUs you have to deal with risers, PCIe retimers, dual PSUs and a custom case, so the value proposition there was much better IMO

adi_kurian 13 hours ago||
https://en.wikipedia.org/wiki/Decoy_effect
mmoustafa 17 hours ago||
I would love to see real-life tokens/sec values advertised for one or various specific open source models.

I'm currently shopping for offline hardware and it is very hard to estimate the performance I will get before dropping $12K, and would love to have a baseline that I can at least always get e.g. 40 tok/s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
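In the absence of advertised numbers, a rough upper bound can be estimated from memory bandwidth, since single-stream decode is usually bandwidth-bound: each generated token has to stream the model's active weights out of VRAM once. A sketch with purely illustrative inputs (the active-parameter count, quantization, bandwidth and efficiency below are my assumptions, not Tinybox or GPT-OSS specs):

```python
# Crude upper bound on single-stream decode speed for a bandwidth-bound model.
def decode_tok_s(active_params_b, bytes_per_param, bandwidth_gb_s, efficiency=0.6):
    """tok/s ~= usable bandwidth / bytes of active weights read per token."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 * efficiency / bytes_per_token

# Illustrative only: ~5B active params (MoE), 4-bit weights (0.5 B/param),
# 1000 GB/s of aggregate bandwidth, 60% achieved efficiency.
print(round(decode_tok_s(5, 0.5, 1000)))  # 240
```

Real systems land well below such a bound once batching, interconnect, and kernel overheads are accounted for, which is exactly why published out-of-the-box numbers would be so useful.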

atwrk 2 hours ago||
For reference, 12k gets you at least 4 Strix Halo boxes each running GPT-OSS-120B at ~50tok/s.
hpcjoe 16 hours ago||
Look for llmfit on github. This will help with that analysis. I've found it reasonably accurate. If you have Ollama already installed, it can download the relevant models directly.
ks2048 14 hours ago||
"... and likely the best performance/$".

"likely" doesn't inspire much confidence. Surely they have those numbers, and if the claim were true, they'd publicize the comparisons.

SmartestUnknown 15 hours ago||
Regarding 2x faster than pytorch being a condition for tinygrad to come out of alpha:

Can they/someone else give more details on which workloads PyTorch runs at less than half the speed the hardware provides? Most papers use standard components, and I assume PyTorch is already pretty performant at implementing them, at 50+% of the extractable performance of typical GPUs.

If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that's a different issue.

wongarsu 18 hours ago|
Sounds like a solid prebuilt with well-balanced components and a pretty case

Not revolutionary in any way, but nice. Unless I'm missing something here?

eurekin 18 hours ago||
It's pretty close to what people have been frankenbuilding on r/LocalLLaMA... It's nice to have a prebuilt option.
speedgoose 18 hours ago|||
You could also order such configurations from a classic server reseller as far as I know. The case is a bit original there.
nextlevelwizard 18 hours ago|||
Tiny boxes are already several years old IIRC
llbbdd 14 hours ago||
If you wanted a box built by geohot, most recently known for signing on at Elon's Twitter and then bailing, it's for you
asadm 9 hours ago||
actually known for comma.ai