Posted by HenryNdubuaku 19 hours ago

Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model (github.com)
Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.

We'd long been frustrated by how little effort goes into agentic models that run on budget phones, and our investigation led to an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
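
To make the retrieval-and-assembly view concrete, here is a toy sketch (illustrative only, not Needle's actual code; the bag-of-words "embedding" and last-word extraction are crude stand-ins for learned attention):

    import json

    def embed(text):
        # Stand-in "embedding": a bag of lowercase words. The real model
        # uses learned embeddings and attention scores here.
        return set(text.lower().split())

    def call_tool(query, tools):
        # "Retrieval": score each tool by overlap with the query; this is
        # the role cross-attention plays in the model.
        best = max(tools, key=lambda t: len(embed(t["name"]) & embed(query)))
        # "Assembly": copy argument values out of the query. The real model
        # extracts spans; here we just grab the trailing word.
        args = {k: query.split()[-1] for k in best["args"]}
        return json.dumps({"name": best["name"], "arguments": args})

    tools = [{"name": "set timer", "args": ["duration"]},
             {"name": "send message", "args": ["recipient"]}]
    print(call_tool("set a timer for 10m", tools))
    # {"name": "set timer", "arguments": {"duration": "10m"}}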

We call the architecture Simple Attention Networks: the entire model is just attention and gating, with no MLPs anywhere. Needle is an experimental run of this idea, built for single-shot function calling on consumer devices (phones, watches, glasses...).
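
A minimal PyTorch sketch of what one such block might look like (hypothetical shapes and naming, not the released architecture; see the writeup linked below for the real design):

    import torch
    import torch.nn as nn

    class AttnGateBlock(nn.Module):
        # One block: attention plus a learned gate, no feed-forward network.
        def __init__(self, dim, heads):
            super().__init__()
            self.norm = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.gate = nn.Linear(dim, dim)  # gating stands in for the usual MLP

        def forward(self, x, ctx):
            # Cross-attend from query tokens to tool/context tokens,
            # then gate the result and add the residual.
            h = self.norm(x)
            a, _ = self.attn(h, ctx, ctx)
            return x + torch.sigmoid(self.gate(h)) * a

    x = torch.randn(1, 8, 64)     # query tokens
    ctx = torch.randn(1, 32, 64)  # tool-schema tokens
    print(AttnGateBlock(64, 4)(x, ctx).shape)  # torch.Size([1, 8, 64])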

Training:

- Pretrained on 200B tokens across 16 TPU v6e (27 hours)

- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)

- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)
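
For a sense of the data, here is a hypothetical example of what one synthesized sample might look like (the actual schema lives in the repo; this format is an assumption for illustration):

    # One (hypothetical) training sample: tool schema in, JSON call out.
    sample = {
        "tools": [{
            "name": "set_timer",
            "parameters": {"duration_minutes": "integer"},
        }],
        "query": "remind me to check the oven in 20 minutes",
        "target": {"name": "set_timer",
                   "arguments": {"duration_minutes": 20}},
    }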

You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle

The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...

We found that the "no FFN" result generalizes beyond function calling to any task where the model has access to external structured knowledge, such as tool use and retrieval-augmented generation (RAG). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.
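
A toy illustration of that point (for intuition only, not our evaluation code): once the knowledge sits in the context, answering reduces to attention-style lookup rather than recall from weights.

    context = {"capital of France": "Paris", "capital of Peru": "Lima"}

    def answer(question, context):
        # Attention-as-lookup: pick the context entry that best matches
        # the question; no world knowledge is stored in the function.
        words = set(question.lower().split())
        key = max(context, key=lambda k: len(set(k.lower().split()) & words))
        return context[key]

    print(answer("what is the capital of Peru", context))  # Lima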

While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, and LFM2.5-350M on single-shot function calling, those models have broader scope and capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.

This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544

Everything is MIT licensed. Weights: https://huggingface.co/Cactus-Compute/needle GitHub: https://github.com/cactus-compute/needle

522 points | 156 comments
ac29 17 hours ago|
FYI, distilling Gemini is explicitly against the ToS:

"You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."

Havoc 17 hours ago||
Yeah I think Google should shove that somewhere. They effectively distilled all the internet's knowledge into these models...without asking & without permission
HenryNdubuaku 17 hours ago|||
Thanks! Needle doesn't compete with those tools, though, and the distillation process did not access the weights.
ilaksh 17 hours ago|||
I think GLM 5.1 or Kimi 2.6 could be substituted for this kind of purpose.
iAMkenough 16 hours ago|||
FYI, Gemini was developed using stolen copyrighted works without author consent. The double standard is striking.
ForHackernews 17 hours ago|||
So is copying all the books in the world.
vablings 17 hours ago|||
Oh no! They stole the model weights! Distillation "attacks" is such bullshit
xgulfie 17 hours ago||
This is being downvoted but it's worth noting if only for the "be careful" aspect.

That said, we need more people distilling models IMO, just be ready for a C&D and a ban