Posted by robin_reala 3/29/2025
According to https://chromium.googlesource.com/chromium/src/+/main/servic... it is just inference.
> These functionalities are entirely on device and do not send any data to network or store on disk.
There is also this description in the Chrome OS source code:
> ScreenAI is a binary to provide AI based models to improve assistive technologies. The binary is written in C++ and is currently used by ReadAnything and PdfOcr services on Chrome OS.
You can make up any conspiracy theory you want, but there's no evidence for it.
It’s a stupid feature for Google to enable by default on systems that are generally very low-spec and badly made, but it’s not some evil data slurp. One of the most obnoxious things about enshittification is the corrosive effect it seems to have had on technical users’ curiosity: instead of researching and fixing problems, people now seem very prone to jump to “the software is evil and bad” and give up on doing any kind of actual investigation.
Not yet, anyway. We’ve just seen Amazon change how all Echo/Alexa devices operate: processing had been local-only for years and years, but now they want the audio data, so they’ve changed the Terms of Service. There’s no reason to believe Google won’t do the same thing sometime in the future.
Note to parent: it is strictly unfair to lump Google in with Amazon (and if you demonize a good actor long enough, eventually they'll acquiesce, since they are already paying the reputational price). However, given that they are American corporations operating on similar incentives during the Wild West (or World War) of AI aka WWAI, it makes sense to be suspicious. Heaven knows "reputational downside" is just about the only countervailing incentive left, since Trump has stripped consumers and investors of virtually all legal protection (see: CFPB elimination; SEC declines Hawk Tuah coin grift prosecution; Trump pardons Trevor Milton). I think it is an excellent time for all of us to be extremely careful with the software we use.
This puts a dangerous amount of trust in a company which has very clearly and explicitly signaled to everyone, for decades, that it does not care one iota about you, your privacy, or your safety.
Assuming that Google isn't doing anything malicious is a very unwise and ill-informed stance to take. If it isn't malicious now, it will be very soon. Absolutely no exceptions.
Enter Google 2025!
No longer just terrible search due to lack of care and conflicts of interest.
Instead, now terrible search due to AI, terrible everything due to AI, pushed everywhere and everyplace, degrading and reducing capabilities ecosystem-wide.
Ridiculous and often just plain wrong AI gibberish on search pages, Android camera apps that blur people's faces when trying to "enhance" the pics you take, and of course the replacement of OCR that worked well with half-finished, buggy AI junk.
From their doctored and made-up AI demos to an inability to make anything stable or of quality, Google has turned from world class to Nikola in just a couple of years.
There's little here worth being curious about. Tech companies made sure of that. They mostly aren't doing anything particularly groundbreaking in situations like these - they're doing the stupid or the greedy thing. And, on the off chance that the tech involved is in any way interesting, it tends to have decades of security research behind it applied to mathematically guarantee we can't use it for anything ourselves - and in case that isn't enough, there's also decades of legal experience applied to stop people from building on top of the tech.
Nah, it's one thing to fix bugs for the companies back when they tried or pretended to be friendly; these days, when half the problems are intentional malfeatures or bugs in those malfeatures, it stops being fun. There are other things to be curious about, that aren't caused by attempts to disenfranchise regular computer users.
I’m all for OP returning the computer Google broke, as sibling comments have suggested, but the curiosity route would have been fruitful for them too; I’m pretty sure the flag I posted or one of the adjacent ones will fix their issue.
I also personally found this feature kind of interesting in itself; I didn’t know that Google were doing model-based OCR and content extraction.
> on the off chance that the tech involved is in any way interesting, it tends to have decades of security research behind it applied to mathematically guarantee we can't use it for anything ourselves
My current profession and hobby is literally breaking these locks and I’m still not quite sure what you mean here. What interesting tech do you feel you can’t use or apply due to security research?
> there's also decades of legal experience applied to stop people from building on top of the tech.
Again… I’m genuinely curious what technology you feel is locked up in a legal and technical vault?
I feel that we’ve really been in a good age lately for fundamental technologies, honestly - a massive amount of AI research is published, almost all computing related sub-technologies I can think of are growing increasingly strong open-source and open-research communities (semiconductors all the way from PDK through HDL and synthesis are one space that’s been fun here recently), and with a few notable exceptions (3GPP/mobile wireless being a big one), fewer cutting edge concepts are patent encumbered than ever before.
> There are other things to be curious about, that aren't caused by attempts to disenfranchise regular computer users.
If anything I feel like this is a counter-example? It’s an innocuous and valuable feature with a bug in it. There’s nothing weird or evil going on to intentionally or even unintentionally disenfranchise users. It’s something with a feature toggle that’s happening in open source code.
> it's one thing to fix bugs for the companies back when they tried or pretended to be friendly
Here, we can agree. If a company are going to ship automatic updates, they need to be more careful about regressions than this, and they don’t deserve any benefit of the doubt on that.
That's what a decade of enshittification gets them.
I remember talking to someone from Microsoft around that time (who were an enemy of the open-source world back then). They said the shine would wear off, and everyone would get annoyed and distrustful of Google too. I remember my conscious brain agreeing. But my emotional mind loved Google - we all did. I just couldn’t imagine it.
Well. It’s pretty easy to imagine now.
15 years ago now I think Google were at their worst. Google were doing a good job in my eyes until roughly the time of the DoubleClick acquisition, when they pivoted away from "we're going to do ads the Good Way with AdWords" and into "screw it, we're just going to do ads," picked up the infamous DoubleClick cookie and their general "we profile people using every piece of data we can possibly think of" approach, and started making insane product decisions like public-contacts-by-default Google Buzz.
Since then, through a combination of courts forcing them to and what seems like a somewhat genuine internal effort, Google have been adding privacy controls back in many places. I certainly don't agree with the model still, but I think that Google in 2025 are actually much less of a privacy threat than 2010 Google were.
Outside Google, 15 years ago was also the peak Browser Toolbar and Installer Wrapper Infostealers era, where instead of building crypto scams or AI-wrapper companies, the hustle bros were busy building flat-out spyware instead.
I know I'm outside of the majority on HN recently, but I generally feel that the corporate _notion_ of user privacy has actually gotten a lot better since the early 2000s, while the _implementation_ has gotten worse. That is to say, companies, especially large ones, care much more about internal controls and have much less of a "we steal lots of data and figure out how to sell it later" model. Unfortunately, at the same time, we've seen the rise of "data driven" product management, always on updates, and "product telemetry," which erode the new attitude towards privacy at a technical level by building easily exploitable troves of sensitive information.
Of course, in exchange for large companies becoming more conscious about privacy, we now have a million smaller companies working to fill the "we steal all the data" shoes. It's still a battle that's far from won.
If what you say is correct then the device is (a) not fit for purpose, and (b) you may be able to claim damages on the basis that the manufacturer has changed its modus operandi without your permission or consent and the device is now incompatible with the way you work, etc., etc.
If Google reckons it had the right to alter your device because you agreed to its EULA, then it seems you'd still have a case on grounds that it no longer functions as it should.
There are only two things that will stop these bastards: them realizing such behavior is draining money from their hip pockets, and proper consumer and privacy legislation.
But forget the latter, democracy is stuffed, and Big Tech has it by the balls anyway.
Not everywhere. Here in Vic, Australia, I can return a product for defects any time within its “expected product lifetime”. How long is that? It’s never specified explicitly! So yeah, it kinda doesn’t matter how old a laptop is if the manufacturer pulls stunts like this. You can still give them a headache if you want to.
Europe also has great consumer protection laws. And this domain is .nz - I wouldn’t be surprised if New Zealand has decent consumer protection laws too.
The US’s democracy is stuffed. But thankfully the world is much bigger than the United States.
> But forget the latter, democracy is stuffed
What does consumer and privacy legislation have to do with democracy?
They may both be important, but I see no connection between the two other than the fact that those democratically elected would be the ones making that legislation (as they make any legislation).
When entities other than ordinary citizens get their way - as they do - then citizens are disadvantaged. That ought to be pretty damn obvious; if not, take a look at the world around you.
For starters, examine the myriad pieces of legislation beneficial to ordinary citizens that have been blocked or neutered by Big Tech/Business. Citizens may have the vote but they don't hold the power.
Democracy would have worked perfectly fine if democratically elected officials made decisions and passed legislation that they were legally allowed to pass. We may disagree with what was passed, but that's a concern about the outcome rather than the process by which those people were elected.
I very much agree with you with regards to the problems of big tech, big business, and lobbying in general. They are technically operating within the laws created by democratically elected officials, though. That's the problem.
We need a smaller government with less reach and fewer powers. We don't need to claim that those who were democratically elected somehow escaped democracy while working within the bounds of the rules they were given, we need to limit the rules.
Two issues: first, they may be technically operating within the law, but if the legislation which enacted the law was achieved by processes/means that were biased/not truly democratic (i.e., ones that benefit them), then citizens are disadvantaged. Unequal representation is undemocratic.
Second, laws may be on the books, but if the State does not prosecute when they are violated then it makes a mockery of the law. Big Tech/Business has used political power and influence to stop the State prosecuting. For example, the Sherman Antitrust Act (and its successors) has been on the books since the 1890s, and the State has done essentially nothing to rein in the monopolistic practices of these companies.
That's just for starters. By any objective measure, democracy in the US is essentially non-functional. One only has to look at the polarized political divide, which is widening further by the day, to see that.
https://support.google.com/chromebook/thread/286204300/utili...
https://old.reddit.com/r/chrome/comments/1et9y0m/what_is_scr...
This is apparently the source code:
https://chromium.googlesource.com/chromium/src/+/refs/tags/1...
With regard to your last sentence, I think a good first step would be to require at least security and other critical updates to be provided within the full warranty period. This would make sense even without the (limited) warranty extension, and I actually consider it more important.
Yes, of course. It may be hard to distinguish, though: the device getting hot may create additional stress on the mainboard, RAM, or other parts, causing them to fail sooner.
However, this would be a great way of separating hardware and software products - and would that be so bad?
https://www.consumerprotection.govt.nz/general-help/consumer...
It’s not “training an AI model on screen contents without consent.”
It is a stupid feature for Google to enable by default: likely what’s making OP’s machine useless is that it’s running an OCR inference model on the OP’s images to index them for search.
Go to chrome://flags and disable “Enable OCR For Local Image Search” and I bet the problem goes away. The AI Service does have a few other features, but that’s the one that’s likely to be cooking the machine.
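On a desktop build you can also do this from the command line with Chromium’s standard --disable-features switch. Fair warning: the feature name below is my guess at what that flag toggles, not something I’ve verified against the source; check the entry in chrome://flags for the exact name your build uses:

    # hypothetical feature name - verify in chrome://flags first
    chrome --disable-features=LocalImageSearchOcr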
As for the other comments on this thread, I doubt there’s anything to do with GDPR here. It’s all local.
CPU time is indeed cheaper than dev time, especially if it's your users' CPUs and not yours.
* Performs OCR on images, generically. This is then used for several features: “I type a word in the search box and it can look through my screenshots and photos,” “I’m in one of those horrible scanned image-only PDFs and I want to search,” and so on.
* Performs “main content extraction” on websites by using a screenshot of the website _alongside_ the accessibility tree for that website’s structure. It basically says “given this tree of elements and screenshot, can you prune the tree to just the elements a user would care about” (see the sketch below). The fact that this is necessary is more an indictment of the DOM than this feature, IMO :)
[1] https://windowsreport.com/chromes-new-feature-makes-scanned-...
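For anyone curious what “prune the tree” means mechanically, here’s a minimal sketch of the idea in C++. To be clear, these types and the relevance check are my own stand-ins, not ScreenAI’s actual API; in the real service the per-node decision comes from model inference over the screenshot plus the tree:

    #include <algorithm>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for an accessibility-tree node.
    struct AXNode {
      std::string role;
      std::string text;
      std::vector<std::unique_ptr<AXNode>> children;
    };

    // Placeholder for the model's judgment: does this node itself look
    // like main content? Illustrative heuristic only.
    bool IsRelevant(const AXNode& node) {
      return node.role == "main" || node.role == "article" ||
             (node.role == "paragraph" && !node.text.empty());
    }

    // Prunes the tree in place: a node survives if it is relevant itself
    // or still has a surviving descendant. Returns whether the parent
    // should keep `node`.
    bool PruneToMainContent(AXNode& node) {
      auto& kids = node.children;
      kids.erase(std::remove_if(kids.begin(), kids.end(),
                                [](const std::unique_ptr<AXNode>& child) {
                                  return !PruneToMainContent(*child);
                                }),
                 kids.end());
      return IsRelevant(node) || !kids.empty();
    }

The detail worth noticing is that whether a node is kept depends on its whole subtree, which is presumably why the feature wants the full accessibility tree up front rather than filtering elements one at a time.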
I seem to remember a time when they produced one of those every week.
Lately it seems to be mostly the kind of fuck-up and misstep this article talks about.
And I'm not even mentioning the ones where the misbehavior is actually willful.