Posted by bundie 5 days ago

Introducing Gemma 3n (developers.googleblog.com)
403 points | 190 comments
bravetraveler 5 days ago|
Updated Ollama to use this; now neither the old nor the new model works - much productivity
rvnx 5 days ago|
Well, look at it the other way, there is something positive: commenters here on HN claim that AI is useless. You can now join the bandwagon of people with free time.
refulgentis 4 days ago||
My post politely describing how this blog post does not match Google's own app, running inference on a Pixel, was downvoted to -1, below dead posts with one-off short jokes.

I am posting again because I've been here 16 years now, and it is very suspicious that this happened. Given the replies to that post, we now know this blog post is false.

There is no open model that you can download today and run at even 1% of the claims in the blog post.

You can read a reply from someone indicating they have inside knowledge on this, who notes this won't work as advertised unless you're Google (i.e. internally, they have it binding to a privileged system process that can access the Tensor core, and this isn't available to third parties. Anyone else is getting 1/100th of the speeds in the post)

This post promises $150K in prizes for on-device multimodal apps and tells you it runs at up to 60 fps, when they know it runs at 0.1 fps. Engineering says it is because they haven't prioritized third parties yet, and somehow Google is getting away with this.

kccqzy 5 days ago||
It seems way worse than other small models, including responding with complete non sequiturs. I think my favorite small model is still DeepSeek distilled with Llama 8B.
oezi 4 days ago|
The key here is multimodal.
Workaccount2 5 days ago||
Anyone have any idea on the viability of running this on a Pi5 16GB? I have a few fun ideas if this can handle working with images (or even video?) well.
gardnr 5 days ago||
The 4-bit quant weighs 4.25 GB, and then you need additional memory for the rest of the inference process (KV cache, activations, runtime buffers). So yes, you can definitely run the model on a Pi; you may just have to wait a while for results.

https://huggingface.co/unsloth/gemma-3n-E4B-it-GGUF
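A rough back-of-the-envelope RAM budget for the Pi 5 question above, as a sketch: the 4.25 GB weights figure comes from the quant linked above, but the KV-cache and overhead numbers here are my assumptions, not measured values.

```python
# Rough RAM budget for running the E4B 4-bit GGUF on a 16 GB Pi 5.
# Only model_gb comes from the thread; the other figures are assumptions.
model_gb = 4.25      # unsloth gemma-3n-E4B-it-GGUF 4-bit quant size
kv_cache_gb = 1.0    # assumed KV cache at a modest context length
overhead_gb = 0.5    # assumed runtime buffers and scratch space
total_gb = model_gb + kv_cache_gb + overhead_gb
print(f"~{total_gb:.2f} GB needed; fits in 16 GB: {total_gb < 16.0}")
```

So memory is not the bottleneck on a 16 GB board; CPU throughput is what makes you wait.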

refulgentis 5 days ago||
See here, long story short, this is another in a series of blog posts that would lead you to believe this was viable, but it isn't :/ https://news.ycombinator.com/item?id=44389793
Brajeshwar 4 days ago||
Google needs a table somewhere that lists its product names alongside descriptions of what each one does.
ghc 5 days ago||
I just tried gemma3 out and it seems to be prone to getting stuck in loops where it outputs an infinite stream of the same word.
sigmoid10 5 days ago|
Sounds a lot like an autoregressive sampling problem. Maybe try different temperature and repeat-penalty settings.
ghc 4 days ago||
You're right, I should have checked the model settings. For some reason the default model profile in Ollama had temperature set to 0. Changing the temperature and repeat penalty worked much better than it did when I tried to correct similar behavior in the smallest phi4 reasoning model.
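For anyone hitting the same looping issue: a minimal sketch of setting those two options per-request through Ollama's HTTP generate API. The model tag "gemma3n" and the exact option values are assumptions; the point is that a nonzero temperature re-enables sampling and repeat_penalty discourages the loop.

```python
import json

# Per-request sampling options for Ollama's /api/generate endpoint.
# Assumptions: model tag "gemma3n", values 0.7 / 1.1 chosen for illustration.
payload = {
    "model": "gemma3n",
    "prompt": "Explain attention in one sentence.",
    "stream": False,
    "options": {
        "temperature": 0.7,     # 0 caused the looping above; any positive value samples
        "repeat_penalty": 1.1,  # mildly penalize recently generated tokens
    },
}
body = json.dumps(payload)
# POST `body` to http://localhost:11434/api/generate with Ollama running.
print(body)
```

The same options can also be baked into a Modelfile with `PARAMETER temperature 0.7` so you don't have to pass them on every request.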
jeffmcjunkin 4 days ago||
Thank you, this was affecting me too.
kgwxd 4 days ago||
Can popular sci-fi go 30 seconds without some lame wad naming themselves or a product after it?
rvnx 5 days ago||
Is there a chance we'll see an uncensored version of this?
throwaway2087 5 days ago|
Can you apply abliteration? I'm not sure whether their MatFormer architecture is compatible with current techniques.
tgtweak 5 days ago||
Any readily-available APKs for testing this on Android?
refulgentis 5 days ago|
APK link here: https://github.com/google-ai-edge/gallery?tab=readme-ov-file...
tgtweak 5 days ago||
Ah, I already had edge installed and it had gemma 3n-e4b downloaded... is this the same model that was previously released?
makeramen 5 days ago||
Seems like that was a preview model, unknown if this released version is different
tgtweak 5 days ago||
I think it's only pulling the older model - I see it's using the liteRT models from May.
eabeezxjc 4 days ago|
How do you use ASR with this? Can it transcribe voice to text?