
Posted by ingve 1/15/2026

Raspberry Pi's New AI Hat Adds 8GB of RAM for Local LLMs (www.jeffgeerling.com)
256 points | 209 comments
syntaxing 1/15/2026
Interesting idea. I think the Jetson Orin Nano is a better purchase for this application. The main downside is that the RAM is shared, so you lose about 1GB to OS overhead.
endymion-light 1/15/2026
Can't wait to not be able to buy it, and also for it to be more expensive than a mini computer.

I buy a Raspberry Pi because I need a small workhorse. I understand adding RAM for local LLMs, but this is like a Raspberry Pi with a GPU: why do I need it when a normal mini PC will have more RAM, more compute capacity, and better specs for less?

rocketvole 1/15/2026
A lot of people buy RPis because they are the only reasonable option for connectivity with power. I'm not sure what other devices you can get that have GPIO and MIPI connectivity with the ability to (potentially) run VLMs and LLMs on them.

I daresay they could charge more than a comparably specced computer (if they don't already) and it would still be a viable purchase.

endymion-light 1/15/2026
Surely with this HAT you don't have any access to the GPIO?

Unless I'm missing something, which is why I wonder: why not just buy a NUC with similar RAM for far less?

Rohansi 1/15/2026
You do still have access to the GPIO. This HAT [1] stacks on top of the GPIO connector but passes through all the pins so you can still use them. This one is connected through PCIe so it shouldn't be blocking off any pins from use, unless you wanted an NVMe SSD hooked up!

[1] https://www.raspberrypi.com/news/introducing-raspberry-pi-ha...
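
To make "passes through all the pins" concrete: with the HAT stacked, pin access from userspace should be unchanged, e.g. via gpiozero. A minimal sketch; GPIO17 is an arbitrary pin chosen for illustration:

    # Blink an LED on GPIO17 with the AI HAT stacked on top.
    # This works because the HAT talks over PCIe and only passes
    # the GPIO header through rather than claiming any pins.
    from gpiozero import LED
    from signal import pause

    led = LED(17)      # BCM numbering; any free pin works
    led.blink()        # toggles in a background thread
    pause()            # keep the script alive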

endymion-light 1/16/2026
Ah, great! That makes the benefit a lot clearer. I'm used to HATs that seem to lock you out of GPIO use.
moffkalast 1/15/2026
> The Pi's built-in CPU trounces the Hailo 10H.

Case closed. And that's extremely slow to begin with; the Pi 5 only gets what, a 32-bit memory bus? Laughable performance for a purpose-built ASIC that costs more than the Pi itself.

> In my testing, Hailo's hailo-rpi5-examples were not yet updated for this new HAT, and even if I specified the Hailo 10H manually, model files would not load

Laughable levels of support too.

As another data point, I recently managed to get the 8L working natively on Ubuntu 24 with ROS, but only after significant shenanigans involving recompiling the kernel module and building their Python library for 3.12, which Hailo for some reason does not provide for anything other than 3.11. They only support Pi OS (like anyone would use that in prod) and even that is very spotty. Like, why would you not target the most popular robotics distro for an AI accelerator? Who else is going to buy these things, exactly?
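
For anyone attempting the same build, a quick smoke test confirms the hand-built bindings actually match your interpreter before you go debugging deeper in ROS. This is just an ABI check, nothing Hailo-specific beyond the import:

    # After building the HailoRT Python bindings for 3.12 yourself,
    # an ABI mismatch surfaces here as an ImportError.
    import sys
    print("interpreter:", sys.version.split()[0])  # expect 3.12.x

    import hailo_platform  # the HailoRT Python package
    print("bindings loaded from:", hailo_platform.__file__)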

phito 1/15/2026
Sounds like some PM just wanted to shove AI marketing where it doesn't make sense.
nottorp 1/15/2026
Hmm. Can this "AI" hardware - or any other "AI" hardware that isn't a GPU - be used for anything other than LLMs?

YOLO for example.

dismalpedigree 1/15/2026
Yes. The Hailo chips are mainly for AI vision models; this is the first time I have seen them pushed for LLMs. They are very finicky and difficult to set up outside of the examples. Documentation is inconsistent, and the models have to be converted to a different format to run. It is possible to run a custom YOLOv8 model, but it is challenging.
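
For the curious, "converted to a different format" means compiling the model offline into a .hef file and running it through HailoRT. A rough sketch of loading a compiled YOLOv8, from memory of Hailo's Python inference tutorial; exact names can shift between HailoRT releases, and yolov8s.hef is a placeholder:

    # Run a compiled .hef on a Hailo device via the HailoRT Python API.
    # Sketch from memory of Hailo's tutorial; verify against the HailoRT
    # version you actually have installed.
    import numpy as np
    from hailo_platform import (HEF, VDevice, HailoStreamInterface,
                                ConfigureParams, InferVStreams,
                                InputVStreamParams, OutputVStreamParams)

    hef = HEF("yolov8s.hef")  # placeholder path to a pre-compiled model

    with VDevice() as device:
        params = ConfigureParams.create_from_hef(
            hef, interface=HailoStreamInterface.PCIe)
        network_group = device.configure(hef, params)[0]

        in_params = InputVStreamParams.make(network_group)
        out_params = OutputVStreamParams.make(network_group)
        in_info = hef.get_input_vstream_infos()[0]

        # Dummy frame with the model's expected input shape.
        frame = np.zeros((1, *in_info.shape), dtype=np.uint8)

        with InferVStreams(network_group, in_params, out_params) as pipeline:
            with network_group.activate(network_group.create_params()):
                results = pipeline.infer({in_info.name: frame})
        print({name: out.shape for name, out in results.items()})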
_ea1k 1/15/2026
I'd expect that kind of thing to be the primary use case, tbh. Maybe even running Whisper models?

If it could run Whisper, it'd be a solid addition to a Pi-based home assistant setup.
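
For a sense of the bar it has to clear: Whisper already runs CPU-only on a Pi 5 via faster-whisper. A minimal sketch; the model size and audio path are placeholders:

    # CPU-only Whisper baseline (no accelerator involved). An int8
    # "tiny" model fits comfortably in a Pi 5's RAM.
    from faster_whisper import WhisperModel

    model = WhisperModel("tiny", device="cpu", compute_type="int8")

    # "audio.wav" is a placeholder; 16 kHz mono works best.
    segments, info = model.transcribe("audio.wav", beam_size=1)

    print(f"language: {info.language} (p={info.language_probability:.2f})")
    for seg in segments:
        print(f"[{seg.start:6.2f}s -> {seg.end:6.2f}s] {seg.text}")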

joelthelion 1/15/2026
8GB is really low.

That said, perhaps there is a niche for slow LLM inference for non-interactive use.

For example, if you use LLMs to triage your emails in the background, you don't care about latency. You just need the throughput to be high enough to handle the load.
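
As a sketch of that pattern: point a classifier prompt at a local OpenAI-compatible endpoint (llama.cpp's server and Ollama both expose one) and let it churn through the inbox from cron. The URL, model name, and label set below are illustrative assumptions:

    # Latency-insensitive email triage against a local LLM server.
    # Endpoint, model name, and labels are assumptions for illustration.
    import json
    import urllib.request

    ENDPOINT = "http://localhost:8080/v1/chat/completions"
    LABELS = ("urgent", "personal", "newsletter", "spam")

    def triage(subject: str, body: str) -> str:
        payload = {
            "model": "local",  # llama.cpp ignores this; Ollama needs a real name
            "temperature": 0,
            "messages": [
                {"role": "system", "content":
                    f"Classify the email as one of: {', '.join(LABELS)}. "
                    "Reply with the label only."},
                {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
            ],
        }
        req = urllib.request.Request(
            ENDPOINT, data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            label = json.load(resp)["choices"][0]["message"]["content"]
        label = label.strip().lower()
        return label if label in LABELS else "unknown"

    # A few seconds per email is irrelevant when this runs from cron
    # overnight; only aggregate throughput matters.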

imtringued 1/15/2026
This looks pretty nice for what it is. However, the RAM is oversized for the vast majority of applications that will run on this, which gives a misleading impression of what it is useful for.

I once tried to run a segmentation model based on a vision transformer on a PC. That model used somewhere around 1 GB for the parameters and several gigabytes for the KV cache, and it was almost entirely compute-bound. You couldn't run that type of model on previous AI accelerators because they only supported model sizes in the megabytes range.
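
To make the KV-cache point concrete, the usual back-of-the-envelope formula is 2 (K and V) x layers x tokens x hidden_dim x bytes per element. With hypothetical ViT-ish numbers (illustrative, not the model from the anecdote):

    # Back-of-the-envelope KV-cache sizing; all dimensions hypothetical.
    layers = 24
    hidden_dim = 1024
    tokens = 32768     # high-res images tokenize into many patches
    bytes_per = 2      # fp16

    kv_bytes = 2 * layers * tokens * hidden_dim * bytes_per
    print(f"KV cache: {kv_bytes / 2**30:.1f} GiB")  # 3.0 GiB at these sizes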

Lio 1/15/2026
I've seen the AI-8850 LLM Acceleration M.2 Module advertised as an alternative RPi accelerator (you need an M.2 HAT for it).

That's also limited to 8GB of RAM, so again you might be better off with a larger 16GB Pi and using the CPU, but at least the space is heating up.

With a lot of this stuff it seems to come down to how good the software support is. Raspberry Pis generally beat everything else for that.

janalsncm 1/15/2026
Excited to see more hardware competition at this level. Models that can run on this amount of RAM are right in the sweet spot of small enough to train on consumer-grade GPUs (e.g. 4090) but big enough to do something interesting (simple audio/video/text processing).

The price point is still a little high for most tasks but I’m sure that will come down.

sxzygz 1/15/2026
My primary concern with this offering is the vendor of the enabling silicon. I think it's important to consider who you do business with. The industrial uptake of RPi boards has poisoned their mandate and made them forget about making a world where all children can discover the magic of computing, not just the special chosen ones.