
Posted by TORcicada 1 day ago

CERN uses ultra-compact AI models on FPGAs for real-time LHC data filtering (theopenreader.org)
267 points | 124 comments | page 2
WhyNotHugo 1 day ago|
Intuitively, I’ve always had the impression that using an analogue circuit would be feasible for neural networks (they're just matrix multiplication!). These should provide instantaneous output.

Isn’t this kind of approach feasible for something so purpose-built?
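The "just matrix multiplication" point can be sketched in a few lines of NumPy: a dense layer is a matrix–vector product plus a bias and a nonlinearity, which is exactly the operation an analog crossbar array computes in one step (weights as conductances, inputs as voltages). All shapes and values below are illustrative.

```python
import numpy as np

# A single dense layer: y = ReLU(W @ x + b).
# An analog crossbar computes W @ x physically via Ohm's and
# Kirchhoff's laws, which is why analog inference is attractive.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))    # weight matrix (conductances in analog)
b = np.zeros(4)                    # bias
x = np.array([1.0, -2.0, 0.5])     # input vector (voltages in analog)

y = np.maximum(W @ x + b, 0.0)     # ReLU(Wx + b)
print(y.shape)                     # (4,)
```

A full network is just a stack of these layers, so the core compute really is repeated matrix multiplication.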

elcritch 22 hours ago||
https://futurism.com/scientists-create-ai-glass
incognito124 1 day ago||
You might wanna look at https://taalas.com/
lsaferite 23 hours ago||
They aren't using analog circuits, are they?
v9v 1 day ago||
Do they actually have ASICs or just FPGAs? The article seems a bit unclear.
Aegis_Labs 1 day ago||
This is the spirit. I'm doing something similar: scaling a 1.8T logic system using a budget mobile device as the primary node. Just hit 537 clones today. It's all about how you structure the logic, not the CPU power.
rakel_rakel 1 day ago||
Hey Siri, show me an example of an oxymoron!

> CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).

sh3rl0ck 1 day ago||
There's no mention of SLMs or LLMs, though.

> This work represents a compelling real-world demonstration of “tiny AI” — highly specialised, minimal-footprint neural networks

FPGAs for Neural Networks have been a thing since before the LLM era.

100721 1 day ago|||
Huh? The first paragraph literally says they are using LLMs

> [ GENEVA, SWITZERLAND — March 28, 2026 ] — CERN is using extremely small, custom large language models physically burned into silicon chips to perform real-time filtering of the enormous data generated by the Large Hadron Collider (LHC).

SiempreViernes 1 day ago|||
The site might have fixed it; to me it says "artificial intelligence" instead of LLM. Still bad, but not "steaming pile of poo on your bank statement" bad.
progval 1 day ago||
They changed it from AI to LLM then back to AI: https://theopenreader.org/index.php?title=Journalism:CERN_Us... and https://theopenreader.org/index.php?title=Journalism%3ACERN_...
msla 1 day ago||
Are they some ancient small-scale integration VLSI design? Do they broadcast on a low-frequency VHF band? Face it: Oxymorons like those are part of the technical world. "VLSI" was a current term back when whole CPUs were made out of fewer transistors than we use for register files now, and "VHF" is low frequency even by commercial broadcasting standards.
rakel_rakel 1 day ago||
haha, yea they are part of it for sure, and I'm not dunking on the use of them, but I do smile a bit when I stumble upon them.

Like (~9K) Jumbo Frames!

randomNumber7 1 day ago||
Does string theory finally make sense when we add AI hallucinations?
konfusinomicon 22 hours ago||
turns out we still need more vibes
quantum_state 1 day ago||
This is a good one
quantum_state 1 day ago||
CERN has been doing HEP experiments for decades. What did it use before the current incarnation of AI? The AI label seems to be more marketing and superficial than substantial. It’s a bit sad that a place like CERN feels the need to make it public that it is on the bandwagon.
jeffreygoesto 20 hours ago||
https://madoc.bib.uni-mannheim.de/809/ is one of a gazillion papers you can find with ancient technology called web search.
FarmerPotato 23 hours ago|||
About ten years ago I worked on an oscilloscope for CERN with an FPGA trigger. You were able to update the trigger portion of the bitstream at any time, without a reset. Typically that was a FIR filter, but it could be anything.

Like anything else, once you work with a system, it gives you ten ideas where to go next...
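For context on the trigger above: a FIR filter is just a sliding dot product of the incoming samples with a fixed set of coefficients ("taps"), which is why the taps can be swapped on an FPGA without resetting anything else. A minimal software sketch with illustrative smoothing taps (not CERN's actual coefficients):

```python
import numpy as np

# FIR filter: each output sample is a dot product of the last N input
# samples with fixed tap coefficients. On an FPGA the taps sit in
# registers feeding multiply-accumulate units.
taps = np.array([0.25, 0.5, 0.25])            # simple 3-tap smoothing filter
samples = np.array([0.0, 0.0, 4.0, 0.0, 0.0]) # an impulse in the stream

out = np.convolve(samples, taps, mode="valid")
print(out)  # [1. 2. 1.] -- the impulse smeared by the taps
```

Replacing `taps` changes the trigger's frequency response without touching the rest of the pipeline, which matches the "update without a reset" behaviour described.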

eqvinox 1 day ago||
It doesn't say LLM anywhere.
quantum_state 1 day ago||
Good catch. Corrected. Thanks!
Janicc 1 day ago||
I think chips having a single LLM directly on them will be very common once LLMs have matured/reached a ceiling.
porridgeraisin 19 hours ago||
The library they used (or used to use) is `hls4ml`. https://github.com/fastmachinelearning/hls4ml

I hacked on it a while back, added Conv2DTranspose support to it.
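Roughly, what hls4ml does is map a trained network onto fixed-point FPGA arithmetic. As a toy illustration of that core idea only (this is not hls4ml's API, just a hand-rolled quantizer), snapping weights to an 8-bit fixed-point grid looks like:

```python
import numpy as np

# Toy fixed-point quantizer in the spirit of HLS ap_fixed types:
# represent each value as an integer scaled by 2^-frac_bits, with
# saturation at the representable range.
def to_fixed(x, total_bits=8, frac_bits=6):
    scale = 2 ** frac_bits
    lo = -(2 ** (total_bits - 1))          # most negative integer code
    hi = 2 ** (total_bits - 1) - 1         # most positive integer code
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale                       # value the hardware actually uses

w = np.array([0.1, -0.73, 1.999, -3.0])
print(to_fixed(w))  # each weight snapped to the nearest 1/64 step, saturated
```

The interesting tuning problem (which hls4ml exposes as per-layer precision settings) is picking `total_bits`/`frac_bits` small enough to fit the FPGA while keeping the network's accuracy.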

seydor 1 day ago||
cern has been using neural networks for decades
mentalgear 1 day ago|
That's what Groq did as well: burning the Transformer right onto a chip (I have to say I was impressed by the simplicity, but afterwards less so by their controversial Kushner/Saudi investment).
NitpickLawyer 1 day ago|
> That's what Groq did as well: burning the Transformer right onto a chip

Are you perhaps confusing Groq with the Etched approach? IIUC Etched is the company that "burned the transformer onto a chip". Groq uses LPUs that are more generalist (they can run many transformers and some other architectures) and their speed comes from using SRAM.

More comments...