Posted by john-doe 5 hours ago
A language model should be an OS service.
Also, "average" doesn’t mean 50% are lower and 50% are higher; that's the median.
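The mean/median distinction is easy to see with a quick sketch (the numbers below are made up purely for illustration):

```python
from statistics import mean, median

# A few large values pull the mean well above the point
# that actually splits the data 50/50 (the median).
storage_used_gb = [10, 12, 15, 18, 500]

print(mean(storage_used_gb))    # arithmetic average, dragged up by the outlier
print(median(storage_used_gb))  # half the values are below this, half above
```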
It's the tech company's problem to convince me they are trying to do something useful to me. Come to think of it, it's their problem to convince me they still understand "useful to the customer" first.
> The roughly 4 GB × N devices of disk-storage cost, sustained, on user hardware. SSDs have a per-GB embodied carbon cost of approximately 0.16 kg CO2e per GB of NAND manufactured [18]
The estimated environmental cost of the download also seems overblown, included for sensational effect. Numbers like these are always hand-wavy, and I had to look no further than the quote above for evidence. Reference [18], "The Dirty Secret of SSDs: Embodied Carbon", actually links to "Toward Carbon-Aware Networking", which makes no mention of the environmental cost of SSDs. When I looked up "The Dirty Secret of SSDs: Embodied Carbon" myself, I found exactly the methodology I expected [1].
> We conducted an analysis encompassing 94 Life Cycle Assessment (LCA) reports, which collectively quantify the embodied cost of SSDs. Owing to the scarcity of direct and up-to-date LCA studies focused specifically on SSDs. We compiled a dataset comprising LCA reports pertaining to Server, Workstation, Desktop, Laptop, and Chromebook products, all of which feature SSDs
All these studies rely on metrics extrapolated from layered assumptions, yet they end up being cited as if they were objective numbers.
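For what it's worth, the headline figure is just a chain of multiplications, which is exactly how the layered assumptions compound; a back-of-the-envelope sketch (the install base is a made-up assumption, 0.16 kg CO2e/GB is taken from the quote without verification, and this ignores any amortization over drive lifetime):

```python
# Back-of-the-envelope embodied-carbon estimate.
# All inputs are assumptions for illustration only.
MODEL_SIZE_GB = 4.0      # size of the downloaded model
KG_CO2E_PER_GB = 0.16    # embodied carbon per GB of NAND, per the quoted article
N_DEVICES = 100_000_000  # hypothetical install base

total_kg = MODEL_SIZE_GB * KG_CO2E_PER_GB * N_DEVICES
print(f"{total_kg / 1000:.0f} tonnes CO2e")
```

Change any one assumption by a factor of two and the headline moves by the same factor, which is the point about hand-waviness.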
Which apparently means it'll never install, by the way, even if I were to run Chrome. Another comment said Chrome checks for 22 GB of free space.
4 GB definitely isn’t a negligible amount of space on most people’s devices.
The MacBook Neo, quite successful it would seem, has 256 GB of storage in its base configuration.
A MacBook Air and a basic sub-$1000 Dell laptop both start at 512 GB.
> To put 4gb in context, I currently have 2 tabs open that nearly take up 4gb.
You are conflating disk and memory.
> The fact Chrome also has a way to disable this makes it kind of a nothingburger in my opinion.
There’s a reason they picked an opt-out model for this, and not an opt-in approach.
But I also see the point of it. We recently did a hackathon where I considered relying on Gemma 4 for privacy reasons: the local model could interpret the user’s natural-language request and derive less privacy-revealing requests from it.
But then, a web app that shows people a loading screen while it downloads a 4GB model probably wouldn’t be a best-selling UX.
I never conflated anything. I said it's a negligible amount of space for current hardware, which I still believe.
If anything, the fact that I'd consider that amount of space acceptable even relative to the RAM a modern laptop has only strengthens the point.
> There’s a reason they picked an opt-out model for this, and not an opt-in approach.
That's the approach they take for most of their features.
> But then, a web app that shows people a loading screen while it downloads a 4GB model probably wouldn’t be a best-selling UX.
Which seems to be the motivation of having these local models embedded in the browser's available resources: https://developer.chrome.com/docs/ai/prompt-api
So if you see this as just a new feature that provides some on-device AI, the reaction is a bit: so what? A new feature? The last GT7 or Flight Sim patch was bigger than this, what's the big deal, and so on.
However, that's not really what's going on. In theory, Chrome gives you a local LLM that can power local AI features. In practice, everything gets sent to the cloud anyway, so the local LLM seems mostly to exist as a disguise for that, which is shady AF.
As others have pointed out, the solution is https://www.firefox.com/. And whilst it's been trendy on HN for several years to slag off Firefox and Mozilla, I went back to Firefox as my daily driver several years ago, and Chrome's high-handed enforcement of Manifest V3 (meaning no full-fat uBlock Origin) has only served to cement that decision.
It's mostly been great. The only downside is that some sites don't work properly on Firefox, and I'm 99.999% sure that's not Firefox's fault.
For example, Paypal's post-login verification step breaks, so every time I want to buy something with Paypal I have to switch to Chrome. And no, disabling uBlock Origin and other extensions on Paypal doesn't help; I've already tried that. Seriously, Paypal, it's been months: will you please just fix signing in and paying on Firefox?
And many sites will assume you're a bot first and ask questions later if you hit them with anything other than Chrome or Safari... which is also extremely lame and scummy.
https://developer.chrome.com/docs/ai/prompt-api
>With the Prompt API, you can send natural language requests to Gemini Nano in the browser.
Also, the next version of Gemini Nano will be based directly on Gemma 4 (so not distilled, not Gemini at all except for the name)[2].
So no, it's not a frontier model. Those don't run on your phone or in your browser.
[1]: https://developer.android.com/blog/posts/ml-kit-s-prompt-api...
[2]: https://android-developers.googleblog.com/2026/04/AI-Core-De...
Also, distillation is how most of these smaller models are made from the biggest models. That process largely defines the frontier along most of the curve.
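At its core, distillation just means training the small model on the large model's softened output distribution rather than on hard labels. A minimal sketch of the loss (pure Python; the logits and temperature below are made up for illustration):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; higher T gives softer targets."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened distribution."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# The student is nudged toward the teacher's full distribution,
# including the relative probabilities of the "wrong" answers.
teacher = [4.0, 1.0, 0.5]
student = [2.0, 1.5, 1.0]
print(distillation_loss(teacher, student))
```

The softened targets carry more information per example than one-hot labels, which is why a small model distilled from a frontier model can land so close to it.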