Posted by erohead 11/13/2025
Manually installing an app via adb must, of course, be permitted. But that is not sufficient.
> Keeping users safe on Android is our top priority.
Google's mandatory verification is not about security, but about control (they want to forbid apps like ReVanced that could reduce their advertising revenue).
When SimpleMobileTools was sold to a shady company (https://news.ycombinator.com/item?id=38505229), the new owner was able to push any user-hostile changes they wanted to all users who had installed the original app through Google Play (that's the very reason why the initial app could be sold in the first place, to exploit a large, preexisting user base that had the initial version installed).
That was not the case on F-Droid, which blocked the new user-hostile version and recommended the open source fork (Fossify Apps). (see also this comment: https://news.ycombinator.com/item?id=45410805)
The only way to fight is to indoctrinate the next generation, at home and in school, to use FOSS. People tend to stick with whatever they used in childhood. We software engineers should volunteer to give talks to students about this. It is much easier to sell ideologies to young people while they are still rebellious toward institutions.
And I tried to tell my kids. And it failed mostly.
But in the long run (a decade), what is exceptional and proprietary will become common FOSS. And everybody will benefit.
I cannot think of a more detached and idiotic ruling than that.
On the other hand, in the US, Apple's App Store was not found to be a monopoly in the first place. Other cases about abuse of a dominant position also didn't go far.
No one seems to care that Apple has never allowed freedom on their devices. Even the comments here don't seem to mention it. Google was at least open for a while.
Or maybe no one mentions it just because the closed iPhone is a fait accompli at this point.
And if you're going to say this:
> Because that's the law, like it or not.
I would ask you to point me to the text in the statute requiring the courts to do that.
That's just stupid, because being anti-competitive is an emergent outcome, rather than anything specific.
Apple is definitely anti-competitive, but they exploited such a ruling so that they can skirt it. Owning a platform that no other entrants are allowed onto is anti-competitive, whether you're small or large. It's only when you're large that you should become a target for enforcement under antitrust laws. This allows small players to grow while always facing the threat of enforcement, which makes them wary of pushing their advantage too far, and that results in better consumer outcomes.
What are you even saying?
Whereas Google was letting Bosch sell vacuums in their megamall, but only if they use Google dust filters, people buy only Google-made dust filters, and Bosch isn't allowed to sell their own dust filters in the megamall.
[1] https://som.yale.edu/sites/default/files/2022-01/DTH-Apple-n...
But how is an agreement prohibiting people from patronizing competitors not an antitrust violation? It's not a matter of who agreed to it, it's matter of what they're requiring you to agree to.
So, a lease.
Moreover, did people buying iPhones on "day 1" think they were buying them or leasing them? Did Apple call it a sale or a rental agreement?
And their mall is monopolistic if it is only for Karcher products. However, because a competitor can easily open a mall next door, it means this Karcher mall is small, and so the enforcers should leave it be. Until the day Karcher buys up all the mall space, in which case, they (regulators) start purging their mall monopoly.
The threat of being purged because you've acquired a large enough monopoly should _always_ be there. It's part of doing business in a fair environment.
How is this not even more anti-competitive?
It's fine to be mad at Google for being duplicitous, but treachery is in the nature of false advertising or breach of contract. Antitrust is something else.
"You can monopolize the market as long as you commit to it from the start" seems like the text of a law a supervillain would be trying to pass in order to destroy the world.
I didn't say I liked the ruling, just that it's correct. The opposite conclusion would be absurd, that you can invent a market where there isn't one and claim a company has a monopoly over it. You would be asking the court to declare that every computing device is a de facto marketplace for software that could run on it and that you can't privilege any specific software vendor. I would love if that were true but you can hopefully agree that such a thing would be a huge stretch legally.
There is no such thing as "there is no market". There is always a market. The question is, what's in the market? The typical strategy is to do the opposite -- have Nintendo claim that they're competing with Sony and Microsoft in the same market to try to claim that it isn't a monopoly.
But then the question is, are they the same market? So to take some traditional examples, third party software that could run on MS-DOS could also run on non-Microsoft flavors of DOS. OS/2 could run software for Windows. The various POSIX-compliant versions of Unix and Linux could run the same software as one another. Samsung phones can run the same apps as Pixel phones. Which puts these things in the same market as each other, because they're actually substitutes, even though they're made by different companies.
Conversely, you can't run iOS apps on Android or get iOS apps from Google Play or vice versa. It's not because they're different companies -- both of them could support both if they wanted to -- it's that they choose not to and choices have consequences.
If you intentionally avoid competing in the same market as another company then you're not competing in the same market as that company and the absurdity is trying to have it both ways by doing that and then still wanting to claim them as a competitor.
Since that is a legally permissible action it would be an odd thing for a court to declare that doing such a thing is anticompetitive. If they did they would be declaring all locked-down hardware effectively illegal. And while that might be nice it's a bit of a pipe dream. Where Google fucked up is that they did license their software to 3rd parties—good for them. But then Google had some regrets and didn't like the fact that they didn't have control over those 3rd parties. But they did have some leverage in the form of Google Play and GMS (Google Mobile Services), because users expect them to be there on every Android phone. And then they used that leverage. That's the fuckup. They used Google Play and GMS access to make 3rd parties preinstall Chrome and kill 3rd-party Android forks. They used anticompetitive practices on their competitors—other Android device manufacturers.
This situation can't occur for Apple or Nintendo because there aren't other iOS/Switch device manufacturers and they don't have to allow them to exist. They can be anticompetitive for other reasons but not this.
There is a market for these things. Nintendo sells hardware that can play Nintendo Switch games and people buy it. That's a market.
It seems like you're trying to claim that a monopoly isn't a market, but how can that possibly be how antitrust laws work? Your argument is that they don't apply to something if it is a monopoly?
> And they are legally allowed to do that.
That's just assuming the conclusion. Why should it be legal for them to exclude competitors from selling software to their customers? The obviously anti-competitive thing should obviously be a violation of any sane laws prohibiting anti-competitive practices. The insanity is the number of people trying to defend the practice.
Consider what it implies. 20th century GE could have gone around buying houses, installing a GE electrical panel and then selling the houses with a covenant that no one could use a non-GE appliance in that house ever again, or plug in any device that runs on electricity without their permission. They could buy and sell half of all the housing stock in the country and Westinghouse the other half and each add that covenant and you're claiming it wouldn't be an antitrust violation.
Apple wouldn't have been able to get their start because they'd have needed permission from GE or Westinghouse for customers to plug in an Apple II or charge an iPhone and they wouldn't get it because those companies were selling mainframes or flip phones and wouldn't want the competition. If that's not an antitrust violation then we don't have antitrust laws.
> If they did they would be declaring all locked down hardware effectively illegal.
It's fine for hardware to be locked down by and with the specific permission of the person who owns it. But how is it even controversial for the manufacturer locking down hardware for the purpose of excluding competitors to be a violation of the laws against inhibiting competition? It's exactly the thing those laws are supposed to be prohibiting.
Fortunately, those fighting, albeit a minority, have done great work in protecting this. No reason to stop now.
Stallman / StallManned: abusing the principle of the Slippery Slope to discredit perfectly rational predictions.
One mandated by the establishment and one mandated by visions and freedom.
But it would be a great start.
On my work laptop I am mandated to use Windows 11, but I run (and, when I have time, develop) FOSS.
How does Google know if someone has sold off their app? In most cases, F-Droid couldn't know either. A developer transferring their accounts and private keys to someone else is not easily detected.
F-Droid is quite restrictive about what kinds of apps they accept: they build the apps from source code themselves, and the source code must be published under a FLOSS license. They also have checks that each new version of an app has to pass.
Although it's possible for a developer to transfer their accounts and private keys to someone shady, F-Droid's checks and open source requirements limit the damage the new developer can do.
Many times I've seen the IzzyOnDroid repository recommended, but that repo explicitly gives you the APKs from the original developers, so you don't get these benefits.
Anybody slightly competent can put horrendous back doors into any code, in such a way that they will pass F-Droid's "checks", Apple's "checks", and Google's "checks". Source code is barely a speed bump. Behavioral tests are a joke.
The fortunate thing is that 99% of people won't bother trying to break your app if it's not dead simple. Advanced security mechanisms to check for backdoors are probably something only billionaire tech companies need to worry about.
... and there's always a tradeoff in terms of how much of a deterrent anything is. The app store checks are barely measurable.
But at some point there needs to be some level of trust in anything you install. You can't rely on institutions to make sure everything is squeaky clean. They can't even do that on content platforms (or at least, they choose not to afford it).
1. The Android OS does not allow installing app updates if the new APK uses a different signing key than the existing one. It will outright refuse, and this works locally on device. There's no need to ask some third-party server to verify anything. It's a fundamental part of how Android security works, and it has been like this since the first Android phone was ever released.
2. F-Droid compiles all APKs on its store, and signs them with its own keys. Apps on F-Droid are not signed by the developers of those apps. They're signed by F-Droid, and thus can only be updated through and by F-Droid. F-Droid does not just distribute APKs uploaded by random people, it distributes APKs that F-Droid compiled themselves.
So to answer your question, a developer transferring their accounts/keys to someone else doesn't matter. It won't affect the security of F-Droid users, because those keys/accounts aren't used by F-Droid. The worst that can happen is that the new owner tries injecting malware into the source code, but F-Droid builds apps from source and is thus positioned to catch those types of things (which is more than can be said about Google's ability to police Google Play).
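The same-signer rule from point 1 can be sketched conceptually like this (an illustration only, not Android's actual implementation, which compares the full signing certificates recorded by the package manager):

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    # SHA-256 fingerprint of a signing certificate (DER bytes).
    return hashlib.sha256(cert_der).hexdigest()

def update_allowed(installed_cert: bytes, update_cert: bytes) -> bool:
    # Android refuses an update unless the new APK is signed with the
    # same certificate as the already-installed package. The check is
    # purely local; no server is consulted.
    return cert_fingerprint(installed_cert) == cert_fingerprint(update_cert)

# Hypothetical certificate bytes, for illustration only.
fdroid_cert = b"f-droid-signing-cert"
assert update_allowed(fdroid_cert, fdroid_cert)        # same signer: update OK
assert not update_allowed(fdroid_cert, b"other-cert")  # key changed: refused
```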
And finally,
> How does Google know if someone has sold off their app?
Google should not know anything about the business dealings of potential competitors. Google is a monopoly[1], so there is real risk for developers and their businesses if Google is given access to this kind of information.
[1]: https://www.google.com/search?q=is+google+a+monopoly%3F&udm=...
For most programs I use, they just publish the developer's built (and signed) APK. They do their own build in parallel and check that the result is the same as the developer's build (thanks to reproducible builds), but they still end up distributing the developer's APK.
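That verification step boils down to a byte-for-byte comparison; a minimal sketch, with the caveat that the real process also strips the developer's signature block before comparing, which is glossed over here:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Stream the file so large APKs don't have to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_reproduce(dev_apk: str, own_build_apk: str) -> bool:
    # With reproducible builds, the independent build from source should
    # be byte-for-byte identical to the developer's APK; only then is the
    # developer-signed APK published.
    return sha256_of(dev_apk) == sha256_of(own_build_apk)
```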
It's like we're supposed to save the page and grep it or something. Doesn't work in my Firefox.
Who is F-Droid? Why should I trust them?
How do I know they aren’t infiltrated by TLAs? (Three Letter Agencies), or outright bad-actors.
Didn’t F-Droid have 20 or so apps that contained known vulnerabilities back in 2022?
Who are all these people? Why should I trust them, and why do most of them have no link to a bio or repository, or otherwise no way to verify they are who they say they are and are doing what they claim to be doing in my best interests?
> Didn’t F-Droid have 20 or so apps that contained known vulnerabilities back in 2022?
Idk what specific incident you're referring to, but since they build apks themselves in an automated way, if a security patch to an app breaks the build, that needs to be fixed before the update can go out (by F-Droid volunteers, usually). In that case, F-Droid will warn about the app having known unpatched vulnerabilities.
Again, this is above and beyond what Google does in their store. Google Play probably has more malware apps than F-Droid has lines of code in its entire catalog.
You're calling it an incident like it was an attack or something, but it just seems like everyday software development. Google Play and the App Store don't let me know when apps have known vulnerabilities. I think F-Droid is coming out way ahead here.
So to my reading F-Droid comes out ahead on every metric you've listed: It has no known associations with US government agencies. They do inform you when your apps have known vulnerabilities. I'm not aware of any cases of scams or malware being distributed through F-Droid.
I highly recommend it. It's the main store I've been using on my phone for probably more than a decade now.
You cannot apply this logic to almost anyone else. Apple, Google, etc. can only give you empty promises.
For the same reason you trust many things. They have a long track record of doing the right thing. As gaining reputation for doing the wrong thing would more or less destroy them, it's a fair incentive to continue doing the right thing. It's a much better incentive that many random developers of small apps in Google's play store have.
However, that's not the only reason to trust them. They also follow a set of processes, starting with a long list of criteria saying what apps they will accept (https://f-droid.org/docs/Inclusion_Policy/). That doesn't mean malware won't slip past them on occasion, but if you look at the amount of malware that slips past F-Droid and projects with similar policies like Debian, and compare them to other app stores like Google's, Apple's, and Microsoft's, there is no comparison. Some malware slips past Debian's defences once every few years. I would not be surprised if new malware isn't uploaded to Google's app store every few minutes. The others aren't much better.
The net outcome of all that is that open source distribution platforms like F-Droid and Debian, which have procedures in place like tight acceptance policies and reproducible builds, are by a huge margin the most reliable and trustworthy on the planet right now. That isn't to say they are perfect, but rather that if Google's goal is to keep their users safe, they should be doing everything in their power to protect and promote F-Droid.
> How do I know they aren’t infiltrated by TLAs? (Three Letter Agencies), or outright bad-actors.
You don't know for sure, but F-Droid's policies make it possible to detect if a TLA did something nefarious. The combination of reproducible builds, open source, and open source's tendency to use source code management systems that provide an audit trail showing who changed every line shines a lot of sunlight into the area. Sunlight that those TLAs you're so paranoid about hate.
This is the one thing that puzzles me about F-Droid opposition in particular. Google is taking a small step here towards increasing accountability of app developers. But a single person signing an app is in reality a very small step. There are likely tens if not hundreds of libraries underpinning it, developed by thousands of people. That single developer can't monitor them all, and consequently libraries with malware inserted from upstream repositories like NPM or PyPi regularly slips through. Transparency the open source movement mostly enforces is far greater. You can't even modify the amount of whitespace in a line without it being picked up by some version control system that records who did it, why they did it, and when. So F-Droid is complaining about a small increase in enforced transparency from Google, when they demand far, far more from their contributors.
I get that Google's change probably creates some paper-cuts for F-Droid, but I doubt it's something that can't be worked around if both sides collaborate. This blog post sounds like Google is moving in that direction. Hear, hear!
How is this an argument in favour of being able to run whatever software you want on hardware you own?
Get a grip. Yes it might be possible the world is out to get you. But it's also possible Google is trying to do exactly what they say on the tin - make the world a safer place for people who don't know shit from clay. In this particular case, if they are trying to restrict what a person with a modicum of skillz can do on their own phone it's a piss poor effort, so I'm inclined to think it's the latter. They aren't even removing the adb app upload hole.
> 1. The Android OS does not allow installing app updates if the new APK uses a different signing key than the existing one. It will outright refuse, and this works locally on device
You missed the and private keys part of the original claim.
E.g. my currently perfectly fine QR reader already has access to the camera (obvious), media (to read a QR code in an image file or photo), and network (enhanced security by checking the URL for me on demand and showing an OG preview etc., so I can make a more informed choice about opening the URL).
But it could now start sending all my photos off to train an LLM, or secretly take pictures of the inside of my home, or start mining crypto, or whatnot. Without me noticing.
Your QR reader requires no media permission if it uses the standard file dialogs. Then it can only access files you select, during that session.
Similarly for the camera.
And in fact, it should have no network access whatsoever (and network should be a user controllable permission, as it used to be — the only reason that was removed is that people would block network access to block ads)
Sure, a QR code scanner can work fine without network. E.g. it could use the network to check a scanned URL against the "safe browsing API" or to pre-fetch the URL and show me a nice OG preview. You are correct to say you may not need nor want this. But I and others may like such features.
The point is not to discuss whether a QR scanner should have network access, but to say that once a permission is there for obvious or correct reasons, it can in future easily get abused for other reasons. Without changing the permissions.
My mail app needs network. Nothing prohibits it from abusing this after an update to pull in ads, or send telemetry to third parties. My sound recording app needs microphone permissions. Nothing prohibits it from "secretly" recording my conversations after an update (detectable, since an LED and icon will light up).
If you want to solve "app becoming malicious after an update", permissions aren't the tool. They are a tiny piece of that puzzle, but "better permissions" aren't the solution either. Nor is "better awareness of permissions by users".
> Your QR reader requires no media permission if it uses the standard file dialogs. Then it can only access files you select, during that session.
On the one hand, yes, good point, but it runs into the usual problem with strict sandboxing – it works for the simple default use case, but as soon as you want to do more advanced stuff, offer a nicer UI, etc. etc. it breaks down.
E.g. barcode scanners – yes, technically you could send a media capture intent to ask the camera app to capture a single photo without needing the camera permission yourself, but then you run into the problem that maybe the photo isn't suitable enough for successful barcode detection, so you have to ask the user to take another picture, and perhaps another, and another, and…
So much nicer to request the camera permission after all and then capture a live image stream and automatically re-run the detection algorithm until a code has been found.
That's the most baffling thing to me. There is simply no option to remove network permissions from any app on my Pixel phone.
It's one of the reasons why I avoid using mobile apps whenever I can.
I believe that would theoretically allow exfiltration of data but I don't understand all of the details behind this behavior and how far it goes.
If people always answer yes, they grow tired and eventually don't notice the question. I've seen it happen with "do you want to overwrite the previous version of the document you're editing, which you saved two minutes ago?" At that point your question is just poisoning the well. Makes sense, but still, hearsay alert.
A while ago I wanted to scan the NFC chip in my passport. Obviously, I didn't want this information to leave my device.
There are many small utility apps and games that have no reason to require network access. So "need" is not quite the right word here. They _want_ network access and they _want_ to be able to bully users into granting it.
That's a weird justification for granting it by default. But I wouldn't care if I could disable it.
Did you find a suitable app? I don't really remember, but https://play.google.com/store/apps/details?id=com.nxp.taginf... might suit you.
If a manufacturer doesn't follow the Android CDD (https://source.android.com/docs/compatibility/cdd), Google will not allow them to bundle Google's closed source apps (which include the Google Play store). It was originally a measure to prevent fragmentation. I don't know whether this particular detail (not exposing this particular permission) is part of the CDD.
INTERNET is a "normal permission", automatically granted at install time if declared in the manifest. OEMs cannot change the grant behavior without breaking compatibility because:
The CDD explicitly states that the Android security model must remain intact. Any deviation would fail CTS (Compatibility Test Suite) and prevent Play certification.
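For reference, this is all the declaration amounts to in an app's manifest; because INTERNET has protection level "normal", the grant happens silently at install time, and stock Android offers no per-app toggle to revoke it:

```xml
<!-- AndroidManifest.xml: INTERNET is a normal-level permission.
     It is granted automatically at install; there is no runtime
     dialog and no stock settings toggle to revoke it per app. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app">
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```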
As an OEM you want carriers to sell your device above everything else, because they are able to sell large volumes.
Carriers make money using network traffic, Google is paying Revenue-Share for ads to Carriers (and OEMs of certain size). Carriers measure this as part of the average revenue per user (ARPU).
--> The device would be designed to create less ARPU for the Carrier and Google and thus be less attractive for the entire ecosystem.
E.g. TrackerControl https://github.com/TrackerControl/tracker-control-android can do it, it is a local vpn which sees which application is making a request and blocks it.
You can write your own version of it if you don't trust them.
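The core decision such a local-VPN firewall makes can be sketched like this (package names and tracker hosts below are made up for illustration):

```python
# A local VPN sees every outgoing connection, attributes it to the app
# that opened the socket, and checks the destination against a blocklist.
TRACKER_HOSTS = {"graph.facebook.com", "app-measurement.com"}  # illustrative

def should_block(app_package, dest_host, per_app_allow):
    if app_package in per_app_allow:   # user explicitly allowed this app
        return False
    return dest_host in TRACKER_HOSTS

allowed_apps = {"org.example.trusted"}  # hypothetical user choices
assert should_block("com.example.game", "app-measurement.com", allowed_apps)
assert not should_block("org.example.trusted", "app-measurement.com", allowed_apps)
assert not should_block("com.example.game", "example.org", allowed_apps)
```

The hard part in practice is the attribution step (mapping a socket to an app), which on Android is done via the connectivity APIs; the filtering logic itself is this simple.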
And neither Android nor iOS is safer than modern desktop systems. On the contrary, because leaking data is its own security issue.
https://securityonline.info/androids-secret-tracking-meta-ya...
Search string for DDG: Meta proxy localhost data exfiltration
From their email pitch:
> We’re now offering from $500 to $2000 for a one-time purchase of a developer account that includes apps, or a rental deal starting from $100.
> No hidden conditions — quick process, secure agreement, and immediate payment upon verification.
> We’re simply looking for reliable accounts to publish our client apps quickly, and yours could be a perfect match.
But I don't think it is enough, or it is the right model. In other cases, when the app has dangerous permissions already, auto-update should be a no-go.
...in the absence of sandbox escape bugs.
F-Droid is not just a repository and an organization providing the relevant services, but a community of like-minded *users* that report on and talk about such issues.
Maybe that's the mistake right there?
It is a good practice only as long as you can trust the remote source for apps. Illustration: it is a good security practice for a Debian distro, not so much for a closed source phone app store.
They don't know if the person who signed the app is the developer, but should the app happen to be a scam and there is a police investigation, that is the person who will have to answer questions, like "who did you transfer these private keys to?".
This, according to Google and possibly regulators in countries where this will be implemented, will help combat a certain type of scam.
It shouldn't be a problem for YouTube Vanced, at least in the proposed form. The authors, who are already identified, just need to sign their APK. AFAIK, what they are doing is not illegal or they would have been shut down long ago. It may be a problem for others though, particularly F-Droid: because F-Droid recompiles apps, they can't reasonably be signed by the original author.
The F-Droid situation can resolve itself if F-Droid is allowed to sign the apps it publishes, and in fact, doing that is an improvement in security as it can be a guarantee that the APK you got is indeed the one compiled by F-Droid from publicly available source code.
for now
I stopped developing for mobile systems ages ago because it just isn't fun anymore and the devices are vastly more useless. As a user, I don't use apps anymore either.
But you can bet I won't ever id myself to Google as a dev.
These are not compatible, but only because the first half is simply false. Allowing a developer to send updates is not "good" but "bad" security practice.
Which shows that the whole 'security' rigmarole by google is bullshit.
But this costs money, and the lack of it is proof google doesn't really care about user security. They're just lying.
Funnily enough, I am installing Google Drive for desktop right now (macOS). I had to download a .pkg and basically sideload the app, which is not published on the App Store.
Why the double standard, dear Google?
You mean install the app? The fact that Apple and Google wish to suggest that software from outside their gardens is somehow subnormal doesn't mean other people need to adopt their verbiage.
Correct, I mean install the app.
Sideloading is the corporate jargon for "installing an app".
That's the funny part.
They do stuff they want to prohibit to other developers because "safety".
But we all know that Google can do massively more harm than scammers pushing their scammy apps to a few hundreds people.
For example, in today's news "Google hit with EU antitrust investigation into its spam policy".
There's a bit of irony in it and a lot of hypocrisy, IMO.
Somebody tell them that I do not want to be kept safe by Big Brother.
... and our business partners. And app developers that grab your clipboard. And their business partners. and a few more levels of data brokers. The spi^H^H^H data-vacuum must flow
Curation (and even patching) by independent, third-party volunteers with strong value commitments does protect users from this (and many other things). Code signing is still helpful for F/OSS distributions of software, but the truth is that most of the security measures related to app installation serve primarily to solve problems with proprietary app markets like Google's Play Store and Apple's App Store. Same thing with app sandboxing.
It's unfortunate but predictable when powerful corporations taint genuine security features (like anti-tampering measures, built-in encryption devices, code signing, sandboxing, malware scanning, etc.) by using them as instruments of control to subdue their competitors and their own users.
It was shady as fuck on Kaputa's part, especially given ZipoApps is an Israeli adware company, a.k.a. surveillance company, and given Israel's track record with things like using Pegasus against journalists/activists or blowing up civilian-owned beepers, this should automatically be a major security incident and at least treated as seriously as the TikTok debacle.
Kaputa should be extremely ashamed of himself and ousted from the industry. I and many others would have gladly paid a yearly subscription for continued updates of the suite instead of a one-time fee, but instead of openly discussing such a model with his userbase, he went for the dirtiest money he could find.
Why not let the user decide
Letting someone else decide has potential consequences
Using F-Droid app ("automatic updates") is optional, as it should be
"Automatic updates" is another way of saying "allow someone else to remotely install software on this computer"
Some computer owners might not want that. It's their decision to make
I disable internet access to all apps by default, including system apps
When source code is provided I can remove internet access before compilation
Anyway, the entire OS is "user-hostile" requiring constant vigilance
It's controlled by an online ad services company
Surveillance as a business
The problem is the vast majority of users want this on by default; they don't want to be bothered with looking at every update and deciding if they should update or not.
It's the developers who don't want the headache of not having automatic updates.
Given the frequent complaints about the former, the notion of "permission" is dubious
That's actually possible, though app stores need to implement the modern API which F-Droid doesn't seem to do quite well (the basic version of F-Droid (https://f-droid.org/eu/packages/org.fdroid.basic/) seems to do better). Updating from different sources (i.e. downloading Signal from GPlay and then updating it from F-Droid or vice versa) also causes issues. But plain old alternative app stores can auto-update in the background. Could be something added in a relatively recent version of Android, though.
If this Verified bullshit makes it through, I expect open source Android development to slowly die off. Especially for smaller hobbyist-made apps.
There cannot exist an easy way for a typical non-technical user to install "unverified apps" (whatever that means), because the governments of countries where such scams are widespread will hold Google responsible.
Meanwhile this very fact seems fundamentally unacceptable to many, so there will be no end to this discourse IMO.
But that kind of privacy based security model is anathema to Google because its whole business model is based on violating its users' privacy. And that's why they have come with such convoluted implementation that further give them control over a user's device. Obviously some government's too may favour such an approach as they too can then use Google or Apple to exert control over their citizens (through censorship or denial of services).
Note also that while they are not completely removing sideloading (for now) they are introducing further restrictions on it, including gate-keeping by them. This is just the "boil the frog slowly" approach. Once this is normalised, they will make a move to prevent sideloading completely, again, in the future.
It could be an alternative SMS app like TextSecure. One of the best features of Android is that even built-in default applications like the keyboard, browser, launcher, etc can be replaced by alternative implementations.
It could also be a SMS backup application (which can also be used to transfer the whole SMS history to a new phone).
Or it could be something like KDE Connect making SMS notifications show up on the user's computer.
> One of the best features of Android is that even built-in default applications like the keyboard, browser, launcher, etc can be replaced by alternative implementations.
When sideloading is barred all that can easily change. If you are forced to install everything from the Google Play Store, Google can easily bar such things, again in the name of "security" - alternate keyboards can steal your password, alternate browsers can have adware / malware, alternate launcher can do many naughty things etc. etc.
And note that if indeed giving apps access to SMS / RCS data is really such a desirable feature, Google could have introduced gate-keeping on that to make it more secure, rather than gate-keeping sideloading. For example, their current proposal says that they will allow sideloading with special Google Accounts. Instead of that, why not make it so that an app can access SMS / RCS only when that option is allowed when you have a special Google Account?
The point is that they want to avoid adding any barriers where a user's private data can't be easily accessed.
Because then you still need a special Google Account to install your app when it needs to access SMS / RCS.
How about solving this problem in a way that doesn't involve Google rather than the owner of the device making decisions about what they can do with it? Like don't let the app request certain permissions by default, instead require the user to manually go into settings to turn them on, but if they do then it's still possible. Meanwhile apps that are installed from an app store can request that permission when the store allows it, so then users have an easy way to install apps like that, but in that case the app has been approved by Google or F-Droid etc. And the "be an app store" permission works the same way, so you have to do it once when you install F-Droid but then it can set those permissions the same as Google Play.
It's not Google's job to say no for you. It's only their job to make sure you know what you're saying yes to when you make the decision yourself.
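The permission model proposed above (sensitive permissions denied by default, manually enabled in settings for sideloaded apps, or granted via an approved store) can be sketched in a few lines. This is a minimal illustration with invented names, not any real Android API:

```python
# Hypothetical sketch of the proposed model: sensitive permissions are only
# requestable if an approved store vouched for the app, or if the user went
# into settings and flipped the switch manually. All names are invented.

def may_request(permission, source, user_enabled_in_settings):
    """Decide whether an app may even *request* a sensitive permission.

    permission: e.g. "READ_SMS" or "BE_APP_STORE"
    source: "store" if installed via an approved store, "sideload" otherwise
    user_enabled_in_settings: True if the user manually opted in
    """
    SENSITIVE = {"READ_SMS", "BE_APP_STORE"}
    if permission not in SENSITIVE:
        return True                      # ordinary permissions: normal flow
    if source == "store":
        return True                      # the store already vetted the app
    return user_enabled_in_settings      # sideloaded: manual opt-in required
```

Note how "be an app store" is itself just another sensitive permission here, so installing F-Droid once via the manual path then lets it grant the same permissions Google Play can.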
They clearly addressed this option in the post: under sufficient social engineering pressure these settings will easily be circumvented. You'd need at least a 24h timeout or similar to mitigate the social pressure.
"Under sufficient social engineering pressure" is the thing that proves too much. A 24h timeout can't withstand that either. Nor can the ability for the user to use their phone to send money, or access their car or home, or read their private documents, or post to their social media account. What if someone convinces them to do any of those things? The only way to stop it is for the phone to never let them do it.
By the time you're done the phone is a brick that can't do anything useful. At some point you have to admit that adults are responsible for the choices they make.
Absolutely this! It's just nanny state all over again.
Markets are supposed to be better because you can switch to a competitor but that only applies when there is actually competition. Two companies both doing the same thing is not a competitive market.
And despite that, you're assuming that dev verification means no malware. The Play Store requires developers to register with the same verification measures we're talking about, and malware is hardly unheard of there.
It's plausible that Google has done some of these things, like data mining everything that you type (stealing your passwords), and many official Google apps have ads if you don't pay them.
It stands to reason that financial service industry peak bodies are in conversation with governments and digital service providers, including data providers, to try to better protect users.
There are obvious conflicting goals, and the banks / governments can’t really appear to be doing nothing.
And technical users almost certainly lack a representative at the table, and are the group that has the least at stake. Whacko fringe software-freedom extremists, they probably call us.
Permissions should ~always be "accept (with optional filters)", "deny", and "lie". If the game wants contacts access and won't take no for an answer, I should be able to feed it a lie: empty and/or fake and/or sandboxed data. It's my phone and my data, not the app's.
We had it over a decade ago, xposed supported filtered and fake data for many permissions. It's strictly user-hostile that Android itself doesn't have this capability.
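The "accept / deny / lie" idea above amounts to a third permission state that returns plausible fake or empty data instead of an error. A minimal sketch of what that could look like (hypothetical function, not a real Android or Xposed API):

```python
# Hypothetical "three-state" permission check: "accept" hands over real data,
# "deny" raises, and "lie" feeds the app empty (or fake/sandboxed) data so it
# can't tell it was refused. Names are invented for illustration.

def read_contacts(real_contacts, mode):
    """Return contact data according to the user's per-app choice."""
    if mode == "accept":
        return list(real_contacts)
    if mode == "lie":
        return []        # could also be synthetic or sandboxed entries
    raise PermissionError("contacts access denied")
```

The key property is that from the app's point of view, "lie" is indistinguishable from a user with an empty address book, so the app has no "won't take no for an answer" lever to pull.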
so no, it's not necessary at all. and many apps identify OTPs and give you an easy "copy to clipboard" button in the notification.
but that isn't all super widely known and expected (partly because not all apps or messages follow it), so it's not something you can rely on users denying access to.
https://www.google.com/search?q=ars+technica+playstore+malwa...
What was the process? Enable developer mode and grant ’can install apps’ to a browser or file browser?
Am I remembering this correctly?
The only other step is to download a file from the internet, or otherwise receive one. That’s not a technical-knowledge step though
I think that is the part that should be fixed: users should be able to allow a one-time exception, to avoid leaving that permission activated by mistake. I don't need to permanently allow a web browser to install apps.
The comment I replied to tried to tell us some technical knowledge is required.
Doesn’t sound like it?
This blog post is specifically saying there will be a way to bypass the gatekeeping on Google-blessed Android builds, just as we wanted.
> But that kind of privacy based security model is anathema to Google because its whole business model is based on violating its users' privacy.
Despite this, they sell some of the most privacy-capable phones available, with the Pixels having unlockable bootloaders. Even without unlocking the bootloader to install something like GrapheneOS, they support better privacy than the other mass market mobile phones by Samsung and Apple, which both admittedly set a low bar.
If they are concerned about malware then one of the obvious solutions would be safeguarding their Play Store. There is significantly less scam on iPhone because Apple polices its App Store. Meanwhile, scam apps that I reported are still up on the Google Play Store.
Then you'd have the other "screaming minority" on HN show up, the "antitrust all the things" folks.
https://today.yougov.com/economy/articles/47798-most-america...
https://www.antitrustinstitute.org/wp-content/uploads/2024/1...
And that most Americans believe that bigger companies tend to have lower prices than smaller ones.
It’s not particularly clear then that there should be a lot of motivation to change things.
> more than 50% of Americans believe there is at least some competition, or a lot of competition in every sector of the economy that would be relevant to this discussion.
We're talking about Google and Apple but the relevant category would be "technology companies". Do phone platforms or mobile app distribution stores have "a lot of competition"? It's hard to see how anybody could think that. Do games and AI and web hosting? Sure they do. But they're lumping them all together.
They're also using "some competition" as the second-to-highest amount of competition even though that term could reasonably apply to a market where one company has 90% market share but not 100%, and it's confusingly similar to "not much competition". And they're somehow showing oil and gas as having less competition than telecommunications when oil and gas is a textbook fungible commodity and telecommunications is Comcast. That question has issues.
> And that most Americans believe that bigger companies tend to have lower prices than smaller ones.
This is the thing where Walmart has lower prices than the mom and pop. That doesn't imply that Walmart has better quality or service than a smaller company, and it doesn't imply that Walmart is operating in a consolidated market. Retail is objectively competitive in most areas.
Whereas when a big company is in a consolidated market, "big companies tend to have lower prices" doesn't hold and you get Google and Apple extracting 30%.
Moreover, the relevant part of that link was this part: More than two thirds of people, including the majority of both parties, support antitrust laws, six times as many people think they're not strict enough than think they're too strict and significantly more people agree with "the government should break up big tech" than disagree.
Then we could argue how high speed rail would have been cheaper if the railways had been broken up.
PS I appreciate your thoughtful response, and your contributions to HN more generally.
Eh. The rails themselves are a natural monopoly in the same way roads are. It's one of the few things it makes sense to have the government build, or at least contract to have someone build, and then provide to everyone without restriction.
Meanwhile train cars and freight hauling and passenger service aren't any more of a natural monopoly than taxis or trucks. They get monopolized if someone is allowed to leverage a monopoly over the tracks into a monopoly over the rest of it, but that's unnecessary and undesirable. Separating them out allows the market that can be competitive to be competitive. Which is the same reason you don't want a tech monopoly leveraging it into control over ancillary markets that could otherwise be competitive.
There are two main reasons train service in the US is a shambles. The first is that the population density is too low, especially in the west. How many people do you expect to be riding a train from Boise to Des Moines on a regular basis? And the second is that truck drivers don't like freight rail, car companies don't like passenger rail and oil companies don't like either one, and they all lobby against anything that would make it better in the parts of the country where it could actually work. It's hard to make something good when there are millions of voters and billions of dollars trying to get it to suck.
Can you imagine the outrage from all the exact same people who are currently outraged about developer verification if Google said they were cutting off any third-party app access to SMS/RCS?
Just look at everything they've done to break yt-dlp over and over again. In fact their newest countermeasure is a frontpage story right beside this one: https://news.ycombinator.com/item?id=45898407
But having seen how things work at large companies including Google, I find it less likely for Google's Android team to be allocating resources or making major policy decisions by considering the YouTube team. :-) (Of course if Android happened to make a change that negatively affected YouTube revenue, things may get escalated and the change may get rolled back as in the infamous Chrome-vs-Ads case, but those situations are very rare.) Taking their explanation at face value (their anti-malware team couldn't keep up: bad actors can spin up new harmful apps instantly. It becomes an endless game of whack-a-mole. Verification changes the math by forcing them to use a real identity) seems justified in this case.
My point though was that whatever the ultimate stable equilibrium becomes, it will be one in which the set of apps that the average person can easily install is limited in some way — I think Google's proposed solution here (hobbyists can make apps having not many users, and “experienced users” can opt out of the security measures) is actually a “least bad” compromise, but still not a happy outcome for those who would like a world where anyone can write apps that anyone can install.
One way to achieve this is to only allow sideloading in "developer mode", which could only be activated from the setup / onboarding screen. That way, power users who know they'll want to sideload could still sideload. The rest could enjoy the benefits of an ecosystem where somebody more competent than their 80-year-old nontechnical self can worry about cybersecurity.
Another way to do this would be to enforce a 48-hour cooldown on enabling sideloading, perhaps waived if enabled within 48 hrs of device setup. This would be enough time for most people to literally "cool off" and realize they're being scammed, while not much of an obstacle for power users.
In other words, it's not any quality of Linux other than how niche it is.
Of course that's a side effect Google probably wouldn't be sad about.
An 80-year-old nontechnical person can easily operate machines and devices that are much more complex and easily more dangerous than a smartphone.
And yet we're here pretending that those same people will install apps without even thinking about it.
Careless people are careless, we know that, we don't make them safer by treating everyone else like toddlers with a gun in their hands.
Yea no. Now companies have to supply two phones, one for dev and one for calling. It is hard enough to get one...
Google is rolling this out in only a small number of countries, so protecting against YouTube ReVanced can't be the motive. That's an illogical conclusion to draw from the facts.
Also, it's not SIDE loading. It's installing an app.
I'm not on the side of locking people out, but this is a poor argument.
Debian is already sideloaded at the graciousness of Microsoft's UEFI bootloader keys. Without that key, you could not install anything other than MS Windows.
Hence you don't realize how good of an argument it is, because you even bamboozled yourself without realizing it.
It becomes an even worse argument if we want to discuss Qubes and other distributions that are actually focused on security, e.g. via firejail, hardened kernels or user namespaces to sandbox apps.
This is only true if you use Secure Boot. It is unnecessary and insecure anyway, so it should be turned off. Then any OS can be installed.
You can only turn off Secure Boot because Microsoft allows it. In the same way Android has its CDD with rules all OEMs must follow (otherwise they won't get Google's apps), Windows has a set of hardware certification requirements (otherwise the OEM won't be able to get Windows pre-installed), and it's these certification requirements that say "it must be possible to disable Secure Boot". A future version of Windows could easily have in its hardware certification requirements "it must not be possible to disable Secure Boot", and all OEMs would be forced to follow it if they wanted Windows.
And that already happened. Some time ago, Microsoft mandated that it must not be possible to disable Secure Boot on ARM-based devices (while keeping the rule that it must be possible to disable it on x86-based devices). I think this rule was changed later, but for ARM-based Windows laptops of that era, it's AFAIK not possible to disable Secure Boot to install an alternate OS.
Turning off UEFI secure boot on a PC to install another "unsecure distribution"
vs.
Unlocking fastboot bootloader on Android to install another "unsecure ROM"
... is the exact same type of language, which isn't really about security but about absolute control of the device.
The parallels are astounding, given that Microsoft's signing process of binaries also meanwhile depends on WHQL and the Microsoft Store. Unsigned binaries can't be installed unless you "disable security features".
My point is that it has absolutely nothing to do with actual security improvements.
Google could've invested that money instead into building an EDR and called it Android Defender or something. Everyone worried about security would've installed that Antivirus. And on top of it, all the fake Anti Viruses in the Google Play Store (that haven't been removed by Google btw) would have no scamming business model anymore either.
> The parallels are astounding, given that Microsoft's signing process of binaries also meanwhile depends on WHQL and the Microsoft Store. Unsigned binaries can't be installed unless you "disable security features".
> My point is that it has absolutely nothing to do with actual security improvements.
I agree. It is the same type of language.
> It is already not needed and insecure so should be turned off.
You know what's even less secure? Having it off.
Oh, you don't use <thing literally named ‘Secure [Verb]’>?? You must not care about being secure, huh???
Dear Microsoft: fuck off; I refuse to seek your permission-via-signing-key to run my own software on my own computer.
Also, Secure Boot is vulnerable to many types of exploits. Having it enabled can be a danger in itself, as it can be used to infect the OS that relies on it.
No one is stopping you from installing your own keys, though?
I also dual-boot Windows and that's a whole additional can of worms; not sure it would even be possible to self-key that. Microsoft's documentation explicitly mentions OEMs and ODMs and not individual end users: https://learn.microsoft.com/en-us/windows-hardware/manufactu...
Securing the boot chain protects against a whole range of attacks, so yes, it is objectively better from a security POV.
Malware developers know how to avoid this facade of an unlocked door.
Users do not.
That's the problem. It's not about development, it's about user experience. Most users are afraid to open any Terminal window, let alone aren't even capable of typing a command in there.
If you argue about good intent from Microsoft here, think again. It's been 12 years since Stuxnet, and the malware samples still work today. Ask yourself why, if the reason isn't utter incompetence on Microsoft's part. It was never about securing the boot process, otherwise this would've been fixed within a day back in 2013.
Pretty much all other bootkits also still work btw, it's not a singled out example. It's the norm of MS not giving a damn about it.
The linked post is full of fluff and low on detail. Google doesn't seem to have the details themselves; they're continuing with the rollout while still designing the flow that will let experienced users install apps like normal.
the real pain in the butt in my present is Patreon because I can't be arsed to write something separate for it. as-is, I subscribe to people on Patreon and then never bother watching any of the exclusive content because it's too much work. some solutions like Ghost (providing an API for donor content access) get part of the way to a solution, but they are not themselves a video host, and I've never seen anyone use it.
That's not real DRM then. The real DRM is sending the content such that it flows down the protected media path (https://en.wikipedia.org/wiki/Protected_Media_Path) or equivalent. Userspace never sees decrypted plaintext content. The programmable part of the GPU never sees decrypted plaintext content. Applying some no-op blur filter would be pointless since anything doing the blur couldn't see the pixels. It's not something you can work around with clever CSS. To compromise it, you need to do an EoP into the ordinarily non-programmable scanout of the GPU, or find bad cryptography or a side channel that lets you get the private key that can decode the frames. Very hard.
Is this how YT works today? Not on every platform. Could it work this way? Definitely. The only thing stopping them is fear of breaking compatibility with a long tail of legacy devices.
without opening it up physically there is no way to make it stop or get the raw stream before it's displayed
Naturally that got broken too, and even worse, broken when it's only supported by a minority of devices and content, because the more devices and content it's used for the easier it is to break and the larger the incentive to do it.
If you tried to require that for all content then it would have to be supported by all devices, including the bargain bin e-waste with derelict security, and what do you expect to happen then?
That’s why a lot of low end Android devices often have problems playing DRMed content on the Web: their keyboxes got cracked open and leaked wide enough for piracy that they got revoked and downgraded down to L3.
If I'm going to live in a walled garden, it's going to be the fanciest.
Because the hardware is so constrained, an iPhone lasts forever compared to a similar Android. My two-year-old Pixel is slow now, but I know people completely happy with a five-year-old iPhone. Pause, I checked: the oldest iPhone that receives updates is the iPhone 11, which is the exact model I had before going back to Android.
Of course I would be much happier if I didn't need to use Shizuku in the first place.
[0]: https://play.google.com/store/apps/details?id=moe.shizuku.pr...
Not only are users not connected to WiFi all the time, but in many developing countries people often have no WiFi at home and rely on mobile data instead. It's a solution, but not a solution for everyone or a solution that works all the time.
I think the number of people caring about alternative app stores, F-Droid or whatever is very similar to the number of people willing to use adb if necessary, so rather small.
I beg to differ:
> In early discussions about this initiative, we've been encouraged by the supportive initial feedback we've received.
> the Brazilian Federation of Banks (FEBRABAN) sees it as a “significant advancement in protecting users and encouraging accountability.” This support extends to governments as well
> We believe this is how an open system should work
Google isn't "hinting" that they're doing this under pressure, that announcement makes it quite clear that this is Google's initiative which the governments are supportive of because it's another step on a ratcheting mechanism that centralizes power.
> because the governments of countries where such scams are widespread will hold Google responsible
Your comment is normalizing highly problematic behavior. Can we agree that vague "pressure from the government" shouldn't be how policies and laws are enacted? They should make and enforce laws in a constitutional manner.
If you believe that it's normal for these companies and government officials to make shadow deals that bypass the rule of law, legal procedures, separation of powers and the entire constitutional system of governance that our countries have, then please drop the pretense that you stand for democracy and the rule of law (assuming that you haven't already).
Otherwise we need to be treating it for what it is - a dangerous, corrupt, undemocratic shift in our system of governance.
What, the same way they hold Microsoft responsible for the fact that you can install whatever you want in Windows?
Obviously, there can exist an easy way for a non-technical user to install unverified apps, because there has always been one.
Assuming this is true (ignore if you disagree), why is that? Is it that PCs never became as widespread as phones (used by lots of people who are likely targets for scammers and losing their life savings etc), or technology was still new and lawmakers didn't concern themselves with it, or PCs (despite the name) were still to a large extent "office" devices, or the sophistication of scammers was lower then, or…? Even today PCs are being affected by ransomware (for example) but Microsoft doesn't get held responsible, so why are phones different?
Once Apple created the walled garden every other company realized how good it could be for their bottom lines and attempted to do the same thing.
So, to answer your question, Microsoft got blamed for viruses and made fun of but there wasn't a better way in the mainstream. There is one now.
PCs will resist this trend for a while because it's also mainstream that they are used to do work. Many people use a PC every day with some native application from a company they have a direct contract with. For example: accounting software. Everybody can add another example from their own experience. Those programs don't come from the Windows store and it will be a long term effort to gatekeep everything into the store or move them into a web browser.
The .NET MAUI technology we had a post about yesterday is one of the bricks that can build the transition.
I don't think App Store is a better way.
From my point of view, people keep mistaking the actual progress - generalised sandboxing and reduced API surface - with the major regression - controlled distribution. At the beginning of the App Store, when the sandboxing and APIs were poor, there were frequent security issues.
Apple marketing magic is somehow convincing people that it's their questionable vetting which made things secure, and not the very real security innovations.
It was into this void that the "everything seems new" iPhone stepped and ventured out on a different course. I'm neither speaking for nor against Apple's normalization of an App Store as a primary source of updates, just recalling the way things were, and positing that Apple was trying a different approach that initially offered a computing platform that wasn't the hellscape that the MS platform was quickly becoming.
Windows NT / OS2 did have more security as it was meant for shared environments, but even there, corporations ended up using stuff like Novell NetWare to get the actual networking services.
Windows 2000 was the first version of consumer windows based on the NT kernel instead of the DOS / Windows 95/98/ME based systems. I still remember running around the office updating windows 2000 machines to service pack 4 to protect us against the first real massive virus "ILOVEYOU".
Edit: Still on first coffee, sorry about the ramblings
The era of United States companies using common sense United States principles for the whole world is coming to an end.
Of course there are no good options for open hardware, but that is a related but separate problem.
Spoken like someone who has never ever worked with any hardware manufacturers. They do not need reasons for that. They all believe their mundane shit is the most secret-worthy shit ever. They have always done this. This predates google, and will outlive it.
People should have the right to run whatever software they like on the computing hardware they own. They should have the right to repair it.
The alternative is that everything ends up like smart-tvs where the options are "buy spyware ridden crap" or "don't have a tv"
There is absolutely nothing "natural" about trading your pile of government promises for the right to call government men with guns and sticks if you are alienated from the option to physically control an object. Your natural right is to control what you can defend.
Rights are what we decide them to be. Or rather, what people in power decide them to be, i.e. people who hold and issue large amounts of government promises, and recruit and direct the most men with guns and sticks.
Do what you please and get enough people to do it with you, and no one can stop you.
Note that adding "full stop" pointlessly to the end of sentences does not strengthen your argument.
You’re still missing the point the comment is making: In countries where governments are dead set on holding Google accountable for what users do on their phones, it doesn’t matter what you believe to be your natural right. The governments of these countries have made declarations about who is accountable and Google has no intention of leaving the door open for that accountability.
You can do whatever you want with the hardware you buy, but don’t confuse that with forcing another company to give you all of the tools to do anything you want easily.
I’m amazed at how gullible some people are but that’s how it is.
You can also view this as a "tragedy of the commons" situation. Unverified apps and sideloading are actively abused by scammers right now.
> Meanwhile this very fact seems fundamentally unacceptable to many, so there will be no end to this discourse IMO.
I get that viewpoint and I'm also very glad an opt-out now exists (and the risk that the verification would be abused is also very real), but yeah, more information on what to do against scammers would also be needed.
Moreover, it's not possible to provide a path for advanced users that a stupid person won't use by accident, either.
These are what drive many instances of completely missing paths for advanced users. It's not possible to stop coercion or accidents. It is literally impossible. Any company that doesn't want to take the risk can only leave advanced users completely out of the picture. There's nothing else they can do.
Google will fail to prevent misuse of this feature, and advanced users will eventually be left in the dust completely as Google learns there's no way to safely provide for them. This is inevitable.
That immediately takes the pressure off people who are being told that their bank details are at immediate risk.
And, to prevent the scammer from simply calling back once the 24 hours are gone, make it show a couple of warnings (at random times so they can't be predicted by the scammer) explaining the issue, with rejecting these warnings making the cooling off timer reset (so a new attempt to enable would need another full 24 hours).
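The mechanism described above (a 24h cooling-off period, with randomly-timed warnings whose rejection resets the clock) is simple enough to sketch. This is a hypothetical illustration of the proposed logic, not anything that exists in Android:

```python
import time

COOLDOWN = 24 * 3600  # 24 hours, in seconds

class SideloadCooldown:
    """Hypothetical sketch of the cooling-off mechanism described above:
    enabling sideloading only takes effect 24h after it was requested, and
    dismissing any of the randomly-timed scam warnings restarts the clock."""

    def __init__(self, now=time.time):
        self.now = now          # injectable clock, for testing
        self.requested_at = None

    def request_enable(self):
        self.requested_at = self.now()

    def reject_warning(self):
        # User dismissed a scam warning: a new full 24h period is required,
        # so a scammer can't simply call back once the timer expires.
        self.requested_at = self.now()

    def is_enabled(self):
        return (self.requested_at is not None
                and self.now() - self.requested_at >= COOLDOWN)
```

The warning times themselves would be drawn at random within the window so a scammer coaching the victim over the phone can't predict and pre-empt them.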
I actually think you might be wrong about this? Imagine if Google forced you to solve a logic puzzle before sideloading. The puzzle could be very visual in nature, so even if a scammer asked the victim to describe the puzzle over the phone, this usually wouldn't allow the scammer to solve it on the victim's behalf. The puzzle could be presented in a special OS mode to prevent screenshots, with phone camera disabled so the puzzle can't be photographed in a mirror, and phone call functionality disabled so a scammer can't talk you through it as easily. Scammers would tell the victim to go find a friend, have the friend photograph the puzzle, and send the photo to the scammer. At which point the friend hopefully says "wait, wtf is going on here?" (Especially if the puzzle has big text at the top like "IF SOMEONE ASKS YOU TO PHOTOGRAPH THIS, THEY ARE LIKELY VICTIM OF AN ONGOING SCAM, YOU SHOULD REFUSE", and consists of multiple stages which need to be solved sequentially.)
In addition to logic puzzles, Google could also make you pass a scam awareness quiz =) You could interleave the quiz questions with logic puzzle stages, to help the friend who's photographing the puzzle figure out what's going on.
I guess this could fail for users who have two devices, e.g. a laptop plus a phone, but presumably those users tend to have a little more technical sophistication. Maybe display a QR code in the middle of the puzzle which opens up scam awareness materials if photographed?
Or, instead of a "scam awareness quiz" you could could give the user an "ongoing scam check", e.g.: "Did a stranger recently call you on the phone and tell you to navigate to this functionality?" If the user answers yes, disable sideloading for the next 48 hours and show them scam education materials.
If the user lacks full mental faculties, they are part of the userbase we need to protect from scams. Most likely, a user without full mental faculties who is trying to sideload will be a scam victim.
If the user lacks the necessary physical faculties to "solve a puzzle on their phone", they probably get help from friends regularly; a friend should be able to help with sideloading. Enabling sideloading should be a one-time operation right?
Scammers have already convinced victims to:
- install remote desktop software
- run commands in the windows terminal
- withdraw cash from the bank
- lie to the bank teller about their purpose
- insert their cash into a bitcoin ATM at a gas station
- ignore warnings about scams which appear on the screen of the ATM
- insert the scammers bitcoin address into the machine
It isn't a stretch to imagine they could convince the victim to install adb and sideload an app.
I'm surprised they didn't think of doing that sooner.
Warning about scams is fine, as is taking steps to make it harder, but once you start trying to completely remove the agency of mentally sound adults "for their own good" then we have a problem.
This is either a move towards tighter control of the platform or a government request. And somewhat ironic, given that iOS is being pressured to be a bit more open.
But it is perfectly fine to sell crypto and other complex financial assets to kids and other people who do not understand what they are, through apps in the Play Store.
If "safety" takes control from you, then it gets implemented. If real safety puts profits in danger, then it gets fought against. Quite a dystopia.
And also, I'm the owner of my device. Not my country.
Autocratic Albania banned ads on YouTube by law, so if you are in Albania (or your VPN is - wink! wink!) you get to watch YouTube without ads, legally
I, too, hate those autocratic countries where governments act for the good of the people, instead of ruling in favour of greedy billionaires
I'm sure some private actors (for example, banks) would love that smartphones are as tight as possible (reason: [0]). Perhaps the same reason applies to Google [1]. But no, "Brazil" isn't demanding that from Google.
[0]: consider that some virus (insecure apps, for example) could somehow steal information from bank apps (even as simple as capture login information). The client might sue the bank and the bank might have to prove that their app is secure and the problem was in the client's smartphone.
[1]: the client, the bank etc might complain to Google that their Android is insecure
In ye goode olde times, the US would have threatened invasion and that would have been the end of it.
Half /s, because it actually used to be the case that the US government exercised its massive influence (and not just militarily) over other countries for the benefit of its corporations and/or its citizens... these days, the geopolitical influence of the US has been reduced to shreds, and the executive's priorities aren't set by doing what is (perceived as being) right but by whoever pays the biggest bribes.
Seems more appropriate.
Imagine a situation in which a frightened, stressed user sees such a message on their screen. Meanwhile, a very convincing fake police officer or bank representative is telling them over the phone that they must ignore this message due to specific dangerous emergency situation to save the money in their bank account. Would the user realize at that moment that the message is right and the person on the phone is a thief? I'm not so sure.
However, there is still a danger that scammers will call after 12 hours, and they will be more convincing than educational material (or the user may not have read it).
It is unlikely it will work. Scammers are talking all the time and creating a sense of urgency, people have issues to think and listen at the same time, and they tend to drop thinking completely when in a haste. 12 hours of a break will give the victim time to think at least. Probably it will give time to talk about it with someone, or to google things.
How many virus infections and scams was Microsoft held responsible for? What about Red Hat, or Debian?
And at least let Google plainly state this, instead of inventing legal theories based on vague hints from their press releases, to explain why their self-serving user-hostile actions are actually legally mandatory.
This argument is FUD at this point.
Sovereign governments have ways to make clear what they want: they pass laws, and there needs to be no back deal or veiled threats. If they intend to punish Google for the rampant scams, they'll need a legal framework for that. That's exactly how it went down with the DMA, and how other countries are dealing with Google/Apple.
Otherwise we're just fantasizing on vague rumors, exchanges that might have happened but represent nothing (some politicians telling bullshit isn't a law of the country that will lead to enforcement).
This would be another story if we were discussing exchanges with the mafia and/or private parties, but here you're explicitly mentioning governments.
Not really. It should, but Google operates in a bunch of countries without proper rule of law.
This is the unsurprising consequence of trying to hold big companies accountable for the things people do with their devices: The only reasonable response is to reduce freedoms with those devices, or pull out of those countries entirely.
This happened a lot in the early days of the GDPR regulations when the exact laws were unclear and many companies realized it was safer to block those countries entirely. Despite this playing out over and over again, there are still constant calls on HN to hold companies accountable for user-submitted content, require ID verification, and so on.
The government(s) have to treat the middlemen as middlemen. Otherwise they are forced to act as gatekeepers.
is not covered by GDPR.
And it's a bit hard to believe that these several startups functioned without ever collecting names, emails, IP, phone number, or address of any lead or customer ever.
security = 1/convenience
or in this case: security = 1/freedom (or agency)

The word "sideload" makes it sound like you're smuggling something you shouldn't onto the system. Subtle word tricks like this can sneak poison into your mind; be watchful.
Sideloading was getting something to work, because it should work, when the system hadn't yet caught up to the fact that it should.
https://www.google.com/books/edition/CNET_Do_It_Yourself_IPo...
I happen to remember "sideload" as a term of art for some online file locker sites to mean saving it to your cloud drive instead of downloading it to your computer. A cool usage, but it never caught on.
I think nomenclature as it exists in the PC software universe is closest in spirit on all fronts, in describing running software as, well, running software, and describing installing as installing. While a little conspiratorial in tone they're not wrong that "sideload" pushes the impression that controlling what software you run on your phone should be understood as non-default.
The buried lede:
> a dedicated account type for students and hobbyists. This will allow you to distribute your creations to a limited number of devices without going through the full verification
So a natural limit on how big a hobby project can get. The example they give, where verification would require scammers to burn an identity to build another app instead of just being able to do a new build whenever an app gets detected as malware, shows that apps with few installs are where the danger is. This measure just doesn't add up
> We are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified
Also, this will kill any impetus that was growing on the Linux phone development side, for better or worse. We get to live in this ecosystem a while longer; let's see if people keep Damocles' sword in mind, and we might see more efforts towards cross-platform builds, for example
If Google wants a walled garden, let it wall off its own devices, but what right does it have to command other manufacturers to bow down as well? At this stage we've got the choice of dictato-potato phone prime, or misc flavour of peasant.
If you want walled garden, go use apple. The option is there. We don't need to bring that here.
This is the first sign we're getting old :) new language features feel new. The language features I picked up in school, that my parents remarked upon, were simply normal to me, not new at all. I notice it pretty strongly nowadays with my grandma, where I keep picking up new terms in Dutch (mainly loan words) but she isn't exposed to them and so I struggle to find what words she knows. Not just new/updated concepts like VR, gender-neutral pronouns, or a new word for messages that are specifically in an online chat, but also old concepts like bias. It's always been there but I'd have no idea what she'd use to describe that concept
There is also the same thing with L for loss/loser. "that's an L take", "L [person]", "take the L here", etc.
They are pretty straightforward in their meaning, basically what you described. I believe it comes from sports but they are used for any good or bad outcome regardless of whether it was a contest.
We no longer own our devices.
We're in a worse state than we were in before. Google is becoming a dictator like Apple.
Sure, they'll keep building it forever — this is just a delay tactic.
Macs blocked launching apps from unverified devs, but you can override in settings. I thought they could just do something along those lines.
Maybe this sounds dark but see also how the net is tightening around phones that allow you to run open firmware after you've bought the hardware for the full and fair price. We're slowly being relegated to crappy hobbyist projects once the last major vendors decide on this as well, and I don't even understand what crime it is I'm being locked out for
We're too small a group for commercial vendors to care. Switching away isn't enough, especially when there's no solidarity, not even among hackers. Anyone who uses Apple phones votes with their wallet for locking down the ability to run software of your choice on hardware of your choice. It's as anti-hacker as you can get but it's fairly popular among the HN audience for some reason
If not even we can agree on this internally, what's a bank going to care about the fifty people in the country that can't use a banking app because they're obstinately using dev tools? What are they gonna do, try to live bankless?
Of course, so long as we can switch away: by all means. But it's not a long-term solution
It seems like a finite solution though. Having a second phone is not something most people will do, so the apps that are relegated to run on such devices will become less popular, less maintained, less and less good
Currently, you can run open software alongside e.g. government verification software. I think it's important to keep that option if somehow possible
Wow, this really pulls back the veil. This Vendor (google) is only looking out for numero uno.
A simple yes/no alert box is not "[...] specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer". In fact, AFAIK we already have exactly that alert box.
No, what they want is something so complicated that no muggle could possibly enable it, either by accident or by being guided on the phone.
The angry social media narratives have been running wild from people who insert their own assumptions into what’s happening.
It’s been fairly clear from the start that this wasn’t the end of sideloading, period. However that doesn’t get as many clicks and shares as writing a headline claiming that Google is taking away your rights.
No, until this post, Google had said that it wouldn't be possible to install an app from a developer who hadn't been blessed by Google completely on your device. That is unacceptable. This blog post contains a policy change from Google.
There may have been exaggerations in some cases but these hand wavy responses like "you can still do X but you just can't do Y and Z is now mandatory" or "you can always use Y" is how we got to this situation in the first place.
This is just the next evolution of SafetyNet & play integrity API. Remember how many said use alternatives. Not saying safetynet is bad but I don't believe their intentions were to stop at just that.
I suspect they mean you have to create an Android developer account and sign the binaries; this new policy just allows you to proceed without completing the identity verification on that account.
As long as this is a one-time flow: Good, great, yes, I'll gladly scroll through as many prompts as you want to enable sideloading. I understand the risks!
But I fear this will be no better than Apple's flow for installing unsigned binaries in macOS.
Please do better.
Maybe you've just been drinking the propaganda? "Sideloading" to me rolls off the tongue no worse than "hotswapping" or "overclocking".
I fully understand that language matters and if this was an attempt by Google to de-legitimize this way of installing, that's no good. But for Christ's sake, having different names for different things is not inherently malicious.
Installing from the play store involves exactly zero knowledge of what an apk even is.
I want to flip the question around and ask you: How are you not seeing that there is a distinction?
keytool -genkeypair -keystore mykey.jks -alias myalias -keyalg RSA
The public testkey certificate is also accepted so you don't even need to generate one.

Perhaps make you do it again after each major OS update, or once a year or something.
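For context on what signing means here: an APK is just a zip archive, and the classic v1 (JAR) signing scheme that the testkey cert satisfies stores its manifest and certificate as entries under META-INF/. A minimal sketch (pure Python, no Android tooling assumed; the entry contents are placeholders, not a real signature):

```python
import io
import zipfile

# Build a toy "APK" in memory. A real APK is a zip; the v1 (JAR) signing
# scheme stores its manifest, signature file, and certificate under
# META-INF/. (The newer v2/v3 schemes live in a gap before the zip's
# central directory instead, so they would not appear as entries at all.)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("classes.dex", b"\x00")                       # app code (placeholder)
    z.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    z.writestr("META-INF/CERT.SF", "Signature-Version: 1.0\n")
    z.writestr("META-INF/CERT.RSA", b"\x30\x82")             # DER cert blob (placeholder)

# Inspect where the v1 signature data lives.
with zipfile.ZipFile(buf) as z:
    sig_entries = [n for n in z.namelist() if n.startswith("META-INF/")]

print(sig_entries)
# ['META-INF/MANIFEST.MF', 'META-INF/CERT.SF', 'META-INF/CERT.RSA']
```

This is also why "signed with the public testkey" is no security guarantee: anyone can produce these entries, so signing in this scheme only establishes update continuity (same key as the installed version), not developer identity.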
I highly doubt this is your "top" priority. Or if it is, then you've gotten there by completely ignoring Google account security.
> intercepts the victim's notifications
And who controls these notifications and forces application developers to use a specific service?
> bad actors can spin up new harmful apps instantly.
Like banking applications that use push or SMS for two factor authentication. You seem to approve those without hesitation. I guess their "top" priority is dependent on the situation.
> And who controls these notifications and forces application developers to use a specific service?
Am I alone in being alarmed by this? Are they admitting that their app sandboxing is so weak that a malicious app can exfil data from other unaffiliated apps? And they must instead rely on centralized control to disable those apps after the crime? So.. what’s the point of the sandboxing - if this is just desktop level lack of isolation?
Glossing over this "detail" is not confidence inspiring. Either it's a social engineering attack, in which case an app should have no meaningful advantage over traditional comms like web/email/social media impersonation. Or, it's an issue of exploits not being patched properly, in which case it's Google and/or vendor responsibility to push fixes quickly before mass malware distribution.
The only legit point for Google, to me, is apps that require very sensitive privileges, like packet inspection or OS control. You could make an argument that some special apps probably could benefit from verification or special approvals. But every random app?
An app can read the content of notifications if the appropriate permissions are granted, which includes 2FA codes sent by SMS or email. That those are bad ways to provide 2FA codes is its own issue.
I want that permission to exist. I use KDE Connect to display notifications on my laptop, for example. Despite the name, it's not just for KDE or Linux - there are Windows and Mac versions too.
Do apps generally do this? I've never run into one that doesn't expect me to type in the number sent via SMS or email, rather than grabbing it themselves.
I don't use a lot of apps on my android phone, though, so maybe this is a dumb question to those who do.
Powerful stuff has room for abuse. I don't really think there's much of a way to make that not the case. It's especially true for anything that you grant accessibility-level access to, and "you cannot build accessibility tools" is a terrible trade-off.
(personally I think there's some room for options with taint analysis and allowing "can read notifications = no internet" style rules, but anything capable enough will also be complex enough to be a problem)
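The "can read notifications = no internet" idea is essentially a capability-exclusion policy: certain permission combinations are denied as a pair even though each is fine alone. A minimal sketch (the capability names are invented for illustration, not real Android permission strings):

```python
# Hypothetical capability-exclusion policy: an app may hold the
# notification-reading capability or the network capability, but never
# both at once, which removes the exfiltration path for stolen 2FA codes.
EXCLUSIVE_PAIRS = [
    frozenset({"READ_NOTIFICATIONS", "INTERNET"}),
]


def grants_allowed(requested):
    """Return True if no mutually exclusive pair is fully contained
    in the set of requested capabilities."""
    requested = set(requested)
    return not any(pair <= requested for pair in EXCLUSIVE_PAIRS)


assert grants_allowed({"READ_NOTIFICATIONS"}) is True          # reader alone: fine
assert grants_allowed({"INTERNET", "CAMERA"}) is True          # network alone: fine
assert grants_allowed({"READ_NOTIFICATIONS", "INTERNET"}) is False  # both: denied
```

The hard part, as the comment notes, is that real apps route data through helpers and IPC, so a static rule like this is easy to launder around without actual taint tracking.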
Google's proposal was to require everyone to verify in order to publish any app through any channel. That would be the equivalent of a web browser enforcing a whitelist of websites, because one scam site asked for access to something bad.
If scam apps use an API designed by Google to steal user data, then they should fix that, without throwing the baby out with the bathwater.
It's not news, both iOS and Android sandboxing are Swiss cheese compared to a browser.
People should only install apps from trusted publishers (and not everything from the store is trusted, as the store just does very basic checks)
Protecting their app store revenues from competition exposes them to scrutiny from competition regulators and might be counter productive.
Many governments are moving towards requiring tech companies to enforce verification of users and limit access to some types of software and services or impose conditions requiring software to limit certain features such as end to end encryption. Some prominent people in big tech believe very strongly in a surveillance state and we are seeing a lot of buy in across the political spectrum, possibly due to industry lobbying efforts. Allowing people to install unapproved software limits the effectiveness of surveillance technologies and the revenues of those selling them. If legal compliance risks are pushing this then it is a job for voters, not Google to fix.
Certainly voters need to have their say, but often their message is muffled by the layers of political and administrative material it passes through.
- Just yesterday there was a story on here about how Google found esoteric bugs in FFMPEG, and told volunteers to fix it.
- Another classic example of how Google doesn't give a stuff about their users' security is the scam ads they allow on YouTube. Google knows these are scams, but doesn't care because there isn't regulation requiring oversight.
Fixed that for you. Google's public service was both entirely appropriate and highly appreciated.
Not by the maintainers it wasn't, Mr. Google.
I'd highly appreciate even if the maintainers never did anything with the report, because in that case I would know to stop using ffmpeg on untrusted files.
Again, if YOU highly appreciate their service, that's great, but FFMPEG isn't fixing a codec for a decades old game studio, so all Google has done is tell cyber criminals how to infect your Rebel Assault 2. I'm glad you find that useful.
See the POC in the report by google, the command they run is just `./ffmpeg -i crash.anim -f null /dev/null -loglevel repeat+trace -threads 1` and the only relevant part of that for being vulnerable is that crash.anim is untrusted.
Edit: And to be clear, it doesn't care about the extension. You can name it kittens.mp4 instead of crash.anim and the vulnerability works the same way.
How much they spend is no indicator of how and where they spend it, so is hardly a compelling argument.
Of course, I'm not saying we shouldn't push to improve things, but I don't think this is the right reaction either.
I think a better compromise would have been for Google to require developer verification, but also allow third-party app stores like f-droid that don't require verification but are still required to "sign" the APKs, instead of users enabling wide-open APK sideloading. That way, hobbyists can still publish apps in third-party stores, and it is a couple more steps harder for users to fall for social engineering, because they now have to install/enable f-droid, and then find the right malicious app and download it. The APK downloaded straight from the malicious site won't be loaded no matter what.
Google can then require that third-party app stores highlight things like number of downloads and developer reputation, and maybe even require an inconsistent set of steps to search and find apps, to make it harder to social-engineer people (like names of buttons, UX arrangements, number of clicks, etc. - randomize it all).
What frustrated me on this topic from the beginning is that solutions like what I'm proposing (and better ones) are possible. But the HN prevailing sentiment (and elsewhere) is pitchforks and torches. Ok, disagree with google, but let's discuss about how to solve the android malware problem that is hurting real people, it is irresponsible to do otherwise.
- 1: Separate verification type for "student and hobbyist"
- 2: "advanced flow" for "power users" that allows sideloading of unverified apps - I imagine this is some kind of scare-screen, but we'll see.
What you describe as "worst of both worlds" is about point 1.
I'm not sure point 2 is powerful enough to support things like f-droid, but again, we'll see.

It's acceptable to build a system where human error can lead to catastrophic consequences, even death. Every time you go outside you encounter many of these systems.
Not everything in life can be made 100% safe, but that's no reason to stop living.
Swindlers working that way is a story as old as time. Even snake-oil salesmen were good at distracting people from obvious signs of false promises and warnings. People often greatly overestimate their own capabilities, just as there are no bad drivers on the road when you ask people about themselves.
Society must be aware we are balancing "protection" and "responsibility". If you want some freedom you must have some responsibility.
I do not mind offering to some people more "protection" if it is clear they give up some "freedom". Some might accept the risks, some will not.
Also for the specific scam they mentioned, why do apps even have permission to intercept all notifications?? Just fix that!
I fear "fixing" it would mean removing the feature entirely, which breaks many workflows. Primarily this is used for accessibility (and is controlled in the accessibility settings), but applications such as KDE Connect also make good use of it.
F-droid doesn't want to track number of installs because that is an invasion of privacy.
> require developer verification, but also allow third party appstores like f-droid that don't require verification
Now you've moved the problem from Google gatekeeping apps to Google gatekeeping app stores. We don't want either.
Yeah, Google gets to have rules over what happens in apps that have its seal of approval. That's how seals of approval work. You're not entitled to these things. You don't have the right to publish to the Android platform; if Google, wary of anti-trust suits, allows a 3rd-party app store, it can institute reasonable requirements.
If an app store is willingly hosting malware, should Google still provide its seal of approval? That was supposed to be rhetorical, but I wouldn't be surprised if you told me that it should.
This is willful ignorance, I only hope you educate yourself on the harms caused by malware and malicious actors and consider taking a practical approach to finding solutions instead of dying on every single hill.
I want to distribute apps (someone might also want to simply sell them), not publish them
I don't need a publisher, internet is a publishing media already
> you don't have the right to publish to the android platform
then let me install an alternative OS on the HW i legally bought and own or pay me back.
> the harms caused by malware and malicious actors
life is full of people doing harms and malicious actors, but we don't let Google or any other company gatekeep our lives
Yeah, you're certainly not speaking for malware victims here. android is not your life, so google gatekeeping android (actually only google approved builds) is not gatekeeping your life.
You certainly should be able to load an alternative OS. isn't that what lineage and other android distributions do already?
Not device integrity (locked bootloader, signed image, which can be done with an alternative OS) but "play integrity", i.e. approved by Google. In other words, you can't run Android without Google's services and Google's built-in ads.
And the alternative is iOS.
Your freedoms are not the subject of this topic, not even remotely. Google isn't even banning you from doing anything on android phone, this is strictly about approving android builds by phone vendors, you're not even the subject here. Google doesn't want to approve android builds that allow sideloading. You can still install lineage.
Your argument here is actually "fascist authoritarian": you want to impose your views on the general public, namely that sideloading should be enabled. Having an option for yourself and other willing people to just not use vendor-built Android is not enough; you want the public to also leave the gates open so you can sideload your random APKs.
Oh, and for the record, my post was about finding a compromise, not a false dichotomy as you presented. If you made a car without a seatbelt it wouldn't be allowed on the roads; if a phone vendor builds an unsafe Android where random devs can sideload APKs, that shouldn't be allowed either. Forget Google, governments should be enforcing the sideload ban lol.
You don't appreciate your freedoms and insist on abusing them, so actual freedoms end up being taken away!
How are people being obtuse for refusing to compromise for solutions on a problem which doesn’t exist?
You can’t misrepresent the situation, establish that one American company having absolute control on what people do with their devices is somehow the norm and then complain that people won’t meet you halfway.
I'll give you the benefit of doubt and assume you're just not well informed.
Millions of people are losing billions of dollars. Women are having their private media published to the masses. People are getting divorced, fired from jobs, etc., because of Android malware. The problem is nearly non-existent on iPhones for the most part, because they lock that down (but now, thanks to the "my freedom" type of freedom abusers, that is changing too).
Apple already does this. You can't publish a driver for Windows without verifying your identity and buying an expensive code-signing cert. Google isn't doing anything new; as a matter of fact, they're not doing enough! This still permits things like LineageOS and other Android builds to be installed -- that's your freedom. But since the prevailing sentiment is to resist a more secure way of doing things, the outcome will be that all smartphones will only load signed kernels/firmware in the future, and all signers will be required to ID themselves; this will kill a lot of Android builds.
This is why compromise is important. Your liberties are important to you, but you can't just dismiss the harm to the masses like that and refuse to find a compromise or a solution, that's how you lose what little freedom you have.
This is why things like "chat control" keep creeping up, and they will succeed down the road.
I shouldn't need an internet connection just to make an app for a device I own.
But having done it, I'm actually pretty impressed with the existing security. At least on my S24, you have to both enable sideloading at the system level, and enable each specific app to be allowed to "Install other apps" (e.g. when I first tried to launch the APK that I had downloaded from Firefox, I received a notification that I would need to whitelist Firefox to be allowed to install apps. I decided no, and instead whitelisted my File Manager app and then opened the APK through that).
I then installed F-Droid, allowed it to install other apps, installed NewPipe, and then toggled back off the system-level sideloading setting. NewPipe still works, and I don't think anything else can install. This satisfies my security paranoia that once the door to sideloading is opened that apps can install other apps willy-nilly. Not so.
So I really don't see what this new initiative by Google solves, other than, as others have said, control. The idea that somehow all user security woes come from sideloading apps and they would somehow be safe if they simply stuck strictly to the Play Store is patently untrue, given the number of malware-laden apps currently lurking in the Play Store.
> Based on this feedback and our ongoing conversations with the community, we are building a new advanced flow that allows experienced users to accept the risks of installing software that isn't verified. We are designing this flow specifically to resist coercion, ensuring that users aren't tricked into bypassing these safety checks while under pressure from a scammer. It will also include clear warnings to ensure users fully understand the risks involved, but ultimately, it puts the choice in their hands. We are gathering early feedback on the design of this feature now and will share more details in the coming months.
I'm cautiously optimistic though. I'm generally okay with nanny features as long as there's a way to turn them off and it sounds like that's what this "advanced flow" does.
Absolutely not. This is for the user side. But if you're a developer who is planning to publish the app in an alternative app store or from your website, you have to go through the verification flow. Please read the full text.
Still, it seems like good news, so I'll take it.
Anyway, I am already planning for a future in which Google does not feature as prominently as it did until now. Small steps so far (GrapheneOS), but to me the writing on the wall is unmistakable. Google got cold feet over the feedback, and for now they can allow things.
When the negative publicity ends, they will start working towards locking it down further again. I am personally done with passively accepting it. It might be annoying, but degoogling is a simple necessity.
This. Currently I am still a paying Google customer for a few things running my freelance side business. I am in the process of migrating my data out of Google Drive and migrating my photos out as well.
Next step is taking back control over my email infrastructure. Especially as google nowadays sorts quite a relevant number of important mail to spam, while allowing more and more crap to pass into my inbox.
Also, they one-sidedly raised the price because they now have AI included. Fuck them - I am not using their shitty AI and I did not buy that. I am using AI daily - just not the crap product Google shoved down my throat.
GrapheneOS/postmarketOS are next on my list. As I have a tertiary device around, I will set this up during the dark months ahead and see if it fits my needs.
With Arch now my daily driver (except for the main job), I plan to use far less US tech vendor crap. There are so many beautiful and not-too-difficult-to-use OS solutions out there, easily hostable on servers inside a more sensible jurisdiction.
Also, I'm currently working on a solution to get around the enshittified YouTube experience, without it becoming an unreasonable effort to still watch the interesting things on my big screen in the living room. The automated AI audio translations did it in for me: I already found the automated title translations abhorrent, and the absolute shit experience of starting a video and having it dubbed by an awful AI voice was just a bit too much.