
Posted by john-doe 5 hours ago

Google Chrome silently installs a 4 GB AI model on your device without consent (www.thatprivacyguy.com)
313 points | 323 comments | page 2
jbverschoor 1 hour ago|
And that will be 4GB per Chrome instance, I assume? (Not profiles, instances.) And what happens with each Electron app that uses Chrome?

A language model should be an OS service.

kgeist 46 minutes ago|
Electron uses Chromium and nothing prevents them from disabling it, if it ever ends up there.
ponyous 4 hours ago||
The site is currently unavailable (503), so I can't read it. But I wonder: what should you consent to? Every dependency? Every dependency above 1GB?
scorpioxy 4 hours ago||
Maybe consent is not the appropriate term. Perhaps an acknowledgement, plus a way to say "I don't want this", would be a more suitable approach. I feel like a flag to turn off LLMs is useful; Firefox added something like this in a recent release. I don't know how much they're downloading or how often they run it, nor would I be a good judge of whether it's necessary, but I don't want that functionality in my browser, so I turned it off.
derangedHorse 1 hour ago|||
There's a setting in `chrome://flags`, mentioned in the post, that allows users to turn this off. I guess people want opt-in consent rather than opt-out consent, which there's always debate about. Some people say opt-in degrades the experience for the majority of users, who would have opted in anyway, for the sake of the few who were probably already detractors.
cwillu 4 hours ago||||
Isn't that asking for consent?
oriettaxx 3 hours ago|||
This subject was settled many years ago and is thoroughly covered by EU privacy regulations. Google knows it very well, and in great detail, and I have no doubt they will be fined for this, however much the fine gets reduced thanks to their lobbying (and corruption too, in my very personal opinion). This is exactly why EU fines are based on a company's income.
socalgal2 1 hour ago||
Why would they be fined for this? If anything, a local LLM is the exact opposite of a privacy concern: the answer is generated locally and never uploaded to a server.
oriettaxx 1 hour ago||
[dead]
nottorp 4 hours ago|||
Extra power and ram usage without your permission, for example.
whizzter 4 hours ago|||
Exactly. For all the hate Windows gets, I could at least just look for shit named Copilot and uninstall it for a pretty nice experience on my new computer. Phones aren't always as straightforward (especially jarring since "Google services" are required in Sweden on Android for things like mobile identity systems).
StingyJelly 3 hours ago||
This is so absurd... I have to keep an old phone (rooted in order to hide that adb is enabled) connected to my home server just to use such apps, because GrapheneOS without Google services is apparently not secure enough.
izacus 3 hours ago||||
Does that include the CPU burning cat girl captchas or not?
mightysashiman 4 hours ago||||
Don't install chrome in the first place then
nottorp 4 hours ago||
I'm logged in to work in Chrome and to personal stuff in Firefox :)
cluckindan 3 hours ago||||
Hello iOS upgrade.
trvz 4 hours ago||||
Read the article, it's not about that, but a mere 4GB of storage.
paganel 2 hours ago|||
4GB of storage is not a “mere” thing, to the contrary.
socalgal2 1 hour ago||
It is in 2026. Average daily household usage is at ~25gig. That's average, so 50% are more than that
trvz 50 minutes ago||
It sounds like you’re talking about network usage, but this is about storage.

Also, average doesn’t mean 50% lower and 50% higher.
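A toy illustration of that distinction (the numbers here are made up purely for illustration, not real usage data):

```javascript
// Skewed "usage" data: nine light users, one heavy user.
const dailyGB = [1, 1, 1, 1, 1, 1, 1, 1, 1, 91];

// The mean is pulled far above what most users actually use.
const mean = dailyGB.reduce((a, b) => a + b, 0) / dailyGB.length;
const aboveMean = dailyGB.filter((x) => x > mean).length;

console.log(mean);      // 10 -- the average
console.log(aboveMean); // 1  -- only 10% of users sit above it
```

With a heavily skewed distribution, far fewer than half of users can be above the average.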

nottorp 4 hours ago|||
Oh and why is it there? Do you really think it's not loaded and executed automatically by default, so some Google executive can justify their "AI" spend?
joegibbs 4 hours ago||
I don’t. Do you have any actual evidence they’re doing that beyond the vibe?
nottorp 1 hour ago||
Do I look like law enforcement? I don't have to do innocent until proven guilty.

It's the tech company's problem to convince me they are trying to do something useful to me. Come to think of it, it's their problem to convince me they still understand "useful to the customer" first.

KeplerBoy 4 hours ago|||
That ship sailed on the web a long time ago.
tdeck 4 hours ago||
Somebody's promotion packet depended on pushing this through the approval process.
pezgrande 4 hours ago||
If anything, I'm glad about a bit of a shift to local LLMs. Their Gemma 4 is pretty powerful for such a small model, so I guess that's what they are delivering.
dwedge 3 hours ago||
Man the longer all this crap goes on the more I realise Stallman was right
kushalpatil07 2 hours ago||
I worked on on-device AI for 3 years. This was the prime idea we were exploring: how can someone undercut the OS providers and ship an LLM that other apps can also use on-device? If Meta decided to do this, it could serve an API for an on-device LLM to all mobile app companies long before the OS gets there. This is Google's way of reaching LLM distribution on laptops, since they don't have their own.
derangedHorse 1 hour ago||
Does anyone else find the writing in the article overdramatic? 4GB is a negligible amount of space for current hardware, and Chrome is not known as the browser to run on resource-constrained devices. To put 4GB in context, I currently have 2 *tabs* open that nearly take up 4GB. The fact that Chrome also has a way to disable this makes it kind of a nothingburger in my opinion.

> The roughly 4 GB × N devices of disk-storage cost, sustained, on user hardware. SSDs have a per-GB embodied carbon cost of approximately 0.16 kg CO2e per GB of NAND manufactured [18]

The estimated environmental impact of the download also seems like an overblown point, included for sensationalism. There are always hand-wavy numbers involved, and I had to look no further than the quote above for evidence. Reference [18], "The dirty secret of SSDs: embodied carbon", incorrectly links to "Toward Carbon-Aware Networking", which makes no mention of the environmental cost of SSDs. After looking up "The Dirty Secret of SSDs: Embodied Carbon" myself, I found it uses exactly the kind of methodology I was expecting [1].

> We conducted an analysis encompassing 94 Life Cycle Assessment (LCA) reports, which collectively quantify the embodied cost of SSDs. Owing to the scarcity of direct and up-to-date LCA studies focused specifically on SSDs. We compiled a dataset comprising LCA reports pertaining to Server, Workstation, Desktop, Laptop, and Chromebook products, all of which feature SSDs

All these studies rely on metrics extrapolated from layered assumptions, yet they end up being cited as if they were objective numbers.

[1] https://arxiv.org/abs/2207.10793
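For what it's worth, taking the article's quoted figure at face value, the back-of-envelope arithmetic behind the claim looks like this (the install count below is a made-up illustration, and the 0.16 kg/GB figure is exactly the number in dispute):

```javascript
// Article's quoted embodied-carbon figure for NAND (disputed above).
const kgCO2ePerGB = 0.16;
const modelSizeGB = 4; // size of the downloaded model

// Per-device embodied cost, then scaled to a hypothetical fleet.
const perDeviceKg = kgCO2ePerGB * modelSizeGB; // 0.64 kg CO2e
const installs = 1e9;                          // made-up install count
const totalTonnes = (perDeviceKg * installs) / 1000;

console.log(perDeviceKg); // 0.64
console.log(totalTonnes); // 640000 tonnes CO2e under these assumptions
```

The arithmetic itself is trivial; the entire result hinges on the per-GB assumption being meaningful in the first place.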

Zekio 1 hour ago||
4GB isn't really a negligible amount, given the number of desktops and laptops sold with just a 256GB SSD.
Aachen 25 minutes ago||
Exactly. NAND is expensive. I upgraded what my laptop came with, but after installing a few games, cloning repositories over the years, various projects I've done, and other regular use, it's perpetually full. 4GB is probably about half the space I have free at any given time.

Which apparently means it'll never install, by the way, even if I were to run Chrome: another comment said they check for 22GB of free space.

ElFitz 1 hour ago||
> Including a 4gb is a negligible amount of space for current hardware and Chrome is not known as the browser to run on resource constrained devices.

4gb definitely isn’t a negligible amount of space on most people’s devices.

The, it would seem, quite successful MacBook Neo has 256GB of storage in its base configuration.

A MacBook Air and a basic sub-$1000 Dell laptop start at 512GB.

> To put 4gb in context, I currently have 2 tabs open that nearly take up 4gb.

You are conflating disk and memory.

> The fact Chrome also has a way to disable this makes it kind of a nothingburger in my opinion.

There’s a reason they picked an opt-out model for this, and not an opt-in approach.

But I also see the point of it. We recently did a hackathon, and I considered relying on Gemma 4 for privacy reasons: the local model could interpret the user's natural-language request and derive less privacy-revealing requests to send based on it.

But then, a web app that shows people a loading screen while it downloads a 4GB model probably wouldn’t be a best-selling UX.

derangedHorse 1 hour ago||
> You are conflating disk and memory.

I never conflated anything. I said it's a negligible amount of space for current hardware, which I still believe.

If anything, the fact that I find the amount of space acceptable even compared to the amount of RAM a modern laptop has only strengthens the point.

> There’s a reason they picked an opt-out model for this, and not an opt-in approach.

That's the approach they take for most of their features.

> But then, a web app that shows people a loading screen while it downloads a 4GB model probably wouldn’t be a best-selling UX.

Which seems to be the motivation of having these local models embedded in the browser's available resources: https://developer.chrome.com/docs/ai/prompt-api
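For reference, use of that API looks roughly like this. This is a sketch based on the linked docs; the exact surface has changed between Chrome versions, and the guard makes it a no-op (returning null) anywhere the built-in model isn't exposed:

```javascript
// Sketch of Chrome's built-in Prompt API (per the linked docs).
// Assumes a recent Chrome exposing a global LanguageModel; in any
// other environment this resolves to null.
async function localPrompt(text) {
  if (typeof LanguageModel === "undefined") {
    return null; // built-in model not exposed here
  }
  const availability = await LanguageModel.availability();
  if (availability === "unavailable") {
    return null;
  }
  // create() is what may trigger the multi-GB model download on first use.
  const session = await LanguageModel.create();
  return session.prompt(text);
}
```

Which is presumably why Chrome wants the model pre-downloaded: the first page to call create() shouldn't have to sit on a multi-gigabyte fetch.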

bartread 2 hours ago||
On one level, I can't figure out how bent out of shape to get over this (but read on). Software I use downloads updates all the time, adds new features all the time, and I mostly don't ask for any of it.

So if you see this as just a new feature that provides some on-device AI, it's a bit of a "so what?". A new feature? The last GT7 or Flight Sim patch was bigger than this; what's the big deal, etc.

However, that's not really what's going on. In theory, Chrome gives you a local LLM that can provide local AI-powered features. In practice, everything gets sent to the cloud anyway, so the local LLM seems mostly to exist as a disguise for that, which is shady AF.

As others have pointed out, the solution is https://www.firefox.com/. And whilst it's been trendy on HN for several years to slag off Firefox and Mozilla, I went back to Firefox as my daily driver several years ago, and Chrome's high-handed enforcement of Manifest V3 extensions (meaning no full fat uBlock Origin) has only served to cement that decision.

It's mostly been great. The only downside is that some sites don't work properly on Firefox, and I'm 99.999% sure that's not Firefox's fault.

For example, PayPal's post-login verification step breaks, so every time I want to buy something using PayPal I have to switch to Chrome. And no, disabling uBlock Origin and other extensions on PayPal doesn't help; I've tried that already. Seriously, PayPal, it's been months: will you please just fix signing in and paying on Firefox?

And many sites will assume you're a bot first and ask questions later if you hit them with anything other than Chrome or Safari... which is also extremely lame and scummy.

projektfu 49 minutes ago|
Weird, I access PayPal through FF all the time. It's probably one of those weird geographical differences or something. One thing I did see is that at least one site (AliExpress) doesn't initiate the redirect after the payment, but still accepted the payment.
sigmoid10 3 hours ago|
One upside to this is that it doesn't use Gemma and instead uses Gemini. So at least for Gemini Nano (apparently called XS internally by Google) it means that the weights are now de facto open and you no longer need a current Android phone to get the latest and best model in this class. This also makes it the only open American frontier-level model right now.
HumanOstrich 3 hours ago||
Can you provide any sources for that? I'd like to learn more about this open frontier model.
sigmoid10 2 hours ago||
Sources for what? The pareto frontier of LLMs? How Google is pretty much on the line with most of their LLM products? Or this particular model? For the first two you need to look for size/cost vs. accuracy charts. There are tons of them floating around. For the latter there is not much official info except what you can infer by analyzing the weights.bin file that Chrome downloads. But it does mention Gemini in there, so it seems pretty obvious that it is from their proprietary line of models.
lxgr 2 hours ago|||
Just because it's called Gemini doesn't mean that it's somehow automatically as comparable with the frontier of small models as well, does it?
sigmoid10 2 hours ago||
All Gemini models sit around the frontier, especially if you go to smaller sizes. Google is actually more invested into efficiency than size unlike some of the other big providers.
lxgr 2 hours ago||
Do you have any benchmark details on the on-device Gemini models? I haven't found a lot of public information on these.
HumanOstrich 2 hours ago|||
Sources for your claim that the model being downloaded to Android/Chrome is Gemini instead of Gemma. Other than downloading the bin file myself and analyzing it lol.
sigmoid10 2 hours ago||
How about Google itself?

https://developer.chrome.com/docs/ai/prompt-api

>With the Prompt API, you can send natural language requests to Gemini Nano in the browser.

HumanOstrich 2 hours ago|||
Thanks. Looks like the current Gemini Nano is actually a separate model with the Gemma 3n architecture that has been distilled from Gemini 2.5 Flash[1].

Also, the next version of Gemini Nano will be based directly on Gemma 4 (so not distilled, not Gemini at all except for the name)[2].

So no, it's not a frontier model. Those don't run on your phone or in your browser.

[1]: https://developer.android.com/blog/posts/ml-kit-s-prompt-api...

[2]: https://android-developers.googleblog.com/2026/04/AI-Core-De...

sigmoid10 6 minutes ago||
Oh, now I see your problem. You confused the pareto frontier with the pure scale frontier. They are very much not the same.

Also, distillation is how most of these smaller models are made from the biggest models. That process largely defines the frontier along most of the curve.
