
Posted by ingve 1/15/2026

Raspberry Pi's New AI Hat Adds 8GB of RAM for Local LLMs (www.jeffgeerling.com)
256 points | 209 comments
esskay 1/15/2026|
What a pointless product to waste time making.
5ersi 1/15/2026||
This 8GB AI HAT is $150, while the better-performing 8GB Pi 5 is $100. It makes no sense to buy the AI HAT.
giantg2 1/15/2026||
Why not use a USB Coral TPU? It seems to do mostly the same stuff and is half the price.
geerlingguy 1/15/2026|
Coral is many times slower at this point (2 TOPS IIRC), but if it meets your needs, it's okay.
giantg2 1/18/2026||
Yeah, I think it's 4 TOPS, so I see.
huntercaron 1/15/2026||
Glad Jeff was critical here; it seems they need a bit of a wake-up call.
yawniek 1/15/2026||
I wonder how the Hailo 10H compares to the Axera AX8850. Add-on boards seem to be cheaper, and it's a full SoC that can also draw much more power.
Western0 1/17/2026||
Can you add Gemma 3n to the benchmark? It's smaller than the regular Gemma 3 but still good in limited memory.
xp84 1/15/2026||
Gigabytes?? In THIS economy?
rballpug 1/15/2026||
Catalog TG 211, 1000 Hz.
teekert 1/15/2026||
At this moment my two questions for these things are:

1. Can I run a local LLM that lets me control Home Assistant with natural language? Basic stuff like timers and to-do/shopping lists would be nice (see the sketch below).

2. Can I run object/person detection on local video streams?

I want some AI stuff, but I want it local.

Looks like the answer for this one is: Meh. It can do point 2, but it's not the best option.
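
For point 1, a minimal sketch of what that plumbing could look like, assuming a local Ollama server for the LLM and Home Assistant's REST API; the model name, token, and entity IDs are placeholders, not anything from the article:

    # Hypothetical sketch: parse a natural-language command with a local LLM
    # (via Ollama's REST API) and forward it to Home Assistant as a service call.
    import json
    import requests

    OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's chat endpoint
    HA_URL = "http://homeassistant.local:8123"      # your Home Assistant instance
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"       # created in the HA user profile

    SYSTEM_PROMPT = (
        'Convert the user request into one JSON object with keys "domain", '
        '"service", and "data" for a Home Assistant service call. JSON only.'
    )

    def command_to_service_call(utterance: str) -> dict:
        """Ask the local model to translate free text into a service call."""
        resp = requests.post(OLLAMA_URL, json={
            "model": "llama3.2",  # placeholder; any small local model
            "stream": False,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": utterance},
            ],
        }, timeout=120)
        resp.raise_for_status()
        return json.loads(resp.json()["message"]["content"])

    def call_home_assistant(call: dict) -> None:
        """POST the service call to Home Assistant's REST API."""
        url = f"{HA_URL}/api/services/{call['domain']}/{call['service']}"
        requests.post(url, json=call.get("data", {}),
                      headers={"Authorization": f"Bearer {HA_TOKEN}"},
                      timeout=10).raise_for_status()

    if __name__ == "__main__":
        # e.g. {"domain": "timer", "service": "start",
        #       "data": {"entity_id": "timer.kitchen", "duration": "00:10:00"}}
        call_home_assistant(command_to_service_call("set a kitchen timer for 10 minutes"))

Whether a model small enough for this HAT produces reliable JSON is exactly the open question in the thread.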

worksonmine 1/15/2026||
1. Probably, but not efficiently. But I'm just guessing; I haven't tried local LLMs yet.

2. That has been possible in real time since the first camera module was released, and it has most likely improved since. I did this years ago on the Pi Zero and it was surprisingly good.
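
For point 2, a rough CPU-only sketch with OpenCV's built-in HOG pedestrian detector; the camera index and frame size are assumptions:

    # Person detection on a video stream with OpenCV's HOG detector;
    # runs on the CPU alone, no NPU or HAT required.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # first attached camera, or an RTSP URL
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))  # smaller frames keep it near real time
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("people", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

An accelerator mainly buys you higher frame rates and heavier models, not the capability itself.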

noodletheworld 1/15/2026||
> Can I run a local LLM that allows me to control Home Assistant with natural language? Some basic stuff like timers, to do/shopping lists etc would be nice etc.

No. Get the larger Pi recommended by the article.

Quote from the article:

> So power holds it back, but the 8 gigs of RAM holds back the LLM use case (vs just running on the Pi's CPU) the most. The Pi 5 can be bought in up to a 16 GB configuration. That's as much as you get in decent consumer graphics cards.

> Because of that, many quantized medium-size models target 10-12 GB of RAM usage (leaving space for context, which eats up another 2+ GB of RAM).

…

> 8 GB of RAM is useful, but it's not quite enough to give this HAT an advantage over just paying for the bigger 16GB Pi with more RAM, which will be more flexible and run models faster.

The models shown for this device in the article are small, and not fit for purpose even for the relatively trivial use case you mentioned.

I mean, look: lots of people have lots of opinions about this (many of them wrong); it's cheap, you can buy one and try… but the OP really gave it a shot, and the results were kind of shit. The article is pretty clear.

Don’t bother.

You want a device with more memory to mess around with for what you want to do.
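
The arithmetic behind that quote is easy to check; a back-of-the-envelope sketch, assuming an illustrative 14B-parameter model at 4-bit quantization (both numbers are assumptions, not the article's benchmarks):

    # Rough RAM estimate for a quantized model (illustrative numbers).
    params = 14e9               # assumed 14B-parameter model
    bytes_per_weight = 0.5      # 4-bit quantization = half a byte per weight
    kv_cache_gb = 2.0           # context memory, per the article's "2+ GB"

    weights_gb = params * bytes_per_weight / 1e9
    print(f"weights: {weights_gb:.1f} GB, total: {weights_gb + kv_cache_gb:.1f} GB")
    # -> weights: 7.0 GB, total: 9.0 GB, already past the HAT's 8 GB
    #    before runtime overhead, which is why the 16 GB Pi 5 wins.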

Havoc 1/15/2026|
That seems completely and utterly pointless.

An NPU that adds to the price but underperforms the Pi's CPU?

You can get an SBC with 32 GB of RAM…

Never mind the whole mini-PC ecosystem, which will crush this.
