Let's say OP takes a very different turn with their software that I am not comfortable with - say reporting my usage data to a different country. I should be able to say "fuck that upgrade, I'm going to run the software that was on my phone when I originally bought it"
This change blocks that action, and from my understanding if I try to do it, it bricks my phone.
It's not at all obvious that this is what happens. To begin with, do you regard the average phone thief as someone who even knows what expected value is?
They want drugs so they steal phones until they get enough money to buy drugs. If half the phones can't be resold then they need to steal twice as many phones to get enough money to buy drugs; does that make phone thefts go down or up?
On top of that, the premise is ridiculous. You don't need to lock the boot loader or prevent people from installing third party software to prevent stolen phones from being used. Just establish a registry for the IMEI of stolen phones so that carriers can consult the registry and refuse to provide service to stolen phones.
It's entirely unrelated to whether or not you can install a custom ROM and is merely being used as an excuse because "prevent theft somehow" sounds vaguely like a legitimate reason when the actual reason of "prevent competition" does not.
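For illustration, a minimal sketch of that registry idea; the class and function names are hypothetical, and a real shared blocklist (the GSMA runs something along these lines) involves far more than this:

    # Minimal sketch of a shared stolen-IMEI registry; purely illustrative.
    class StolenPhoneRegistry:
        def __init__(self):
            self._stolen_imeis = set()

        def report_stolen(self, imei: str) -> None:
            # Called by the carrier/owner when a phone is reported stolen.
            self._stolen_imeis.add(imei)

        def is_stolen(self, imei: str) -> bool:
            return imei in self._stolen_imeis

    def should_provide_service(registry: StolenPhoneRegistry, imei: str) -> bool:
        # A carrier consults the shared registry before attaching a device.
        return not registry.is_stolen(imei)

    registry = StolenPhoneRegistry()
    registry.report_stolen("356938035643809")
    print(should_provide_service(registry, "356938035643809"))  # False: refuse service
    print(should_provide_service(registry, "490154203237518"))  # True: attach normally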
They know if their fence went from offering them $20/phone to offering $5/phone, it's not worth their time to steal phones any more.
> Just establish a registry for the IMEI of stolen phones so that carriers can consult the registry and refuse to provide service to stolen phones.
This seems like something that the average HNer is going to get equally riled up about as a surveillance and user freedom issue.
Regulations have made it pretty hard to sell catalytic converters, and while there are still thefts because some thieves are really out of the loop, I think it's been reduced by a lot. There are still a few people who want to fill up their stolen trailer with cats before they go to the scrap yard, though.
A strong lock system that prevents stolen phones from being used is better than a global IMEI denylist: phones that can't be connected to a cell network but are otherwise usable still have value, some networks won't participate in a global list, and some phones can have their IMEI changed if you can run arbitrary software on them (which is maybe a bigger issue, but the steal phone -> wipe -> change IMEI -> resell path is still stopped if you can't wipe the stolen phone).
This is what we've empirically seen as Apple went from devices that could trivially be reflashed and resold without much impediment to most iPhones now being locked, with their hardware parts cryptographically tied together.
https://techcrunch.com/2015/02/11/apples-activation-lock-lea...
The rates of phone theft have gone down radically since phone makers made it harder to reflash phones and part them out.
I don't understand what business incentives they would have to make "reduce global demand for stolen phones" a goal they want to invest in.
We can't have nice things because bad people abused them :(
Realistically, we're moving to a model where you'll have to have a locked down iPhone or Android device to act as a trusted device to access anything that needs security (like banking), and then a second device if you want to play.
The really evil part is things that don't need security (like say, reading a website without a log in - just establishing a TLS session) might go away for untrusted devices as well.
You've fallen for their propaganda. It's a bit off topic from the OnePlus headline, but as far as bootloaders go, we can't have nice things because the vendors and app developers want control over end users. The Android security model is explicit that the user, vendor, and app developer are each party to the process and can veto anything. That's fundamentally incompatible with my worldview and I explicitly think it should be legislated out of existence.
The user is the only legitimate party to what happens on a privately owned device. App developers are to be viewed as potential adversaries that might attempt to take advantage of you. To the extent that you are forced to trust the vendor they have the equivalent of a fiduciary duty to you - they are ethically bound to see your best interests carried out to the best of their ability.
The model that makes sense to me personally is that private companies should be legislated to be absolutely clear about what they are selling you. If a company wants to make a locked down device, that should be their right. If you don't want to buy it, that's your absolute right too.
As a consumer, you should be given the information you need to make the choices that are aligned with your values.
If a company says "I'm selling you a device you can root", and people buy the device because it has that advertised, they should be on the hook to uphold that promise. The nasty thing on this thread is the potential rug pull by Oneplus, especially as they have kind of marketed themselves as the alternative to companies that lock their devices down.
I think it would be far simpler and more effective to outlaw vendor controlled devices. Note that wouldn't prevent the existence of some sort of opt-in key escrow service where users voluntarily turn over control of the root of trust to a third party (possibly the vendor themselves).
You can already basically do this on Google Pixel devices today. Flash a custom ROM, relock the bootloader, and disable bootloader unlocking in settings. Control of the device is then held by whoever controls the keys at the root of the flashed ROM with the caveat that if you can log in to the phone you can re-enable bootloader unlocking.
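For what it's worth, a rough sketch of that flow, driving fastboot from Python; the avb_custom_key partition and the flashing lock subcommand match the documented Pixel/GrapheneOS install flow, while the key file name and the ROM-flashing step itself are placeholders to be taken from your ROM's instructions:

    # Sketch of "flash a custom ROM, install its verified-boot key, relock" on a Pixel.
    # Assumes the fastboot tool is on PATH and the device is in bootloader mode.
    import subprocess

    def fastboot(*args: str) -> None:
        subprocess.run(["fastboot", *args], check=True)

    def install_custom_rom_and_relock(avb_public_key: str) -> None:
        # 1. Erase any previously installed custom verified-boot key.
        fastboot("erase", "avb_custom_key")
        # 2. Install the ROM's AVB public key so the locked bootloader can verify its images.
        fastboot("flash", "avb_custom_key", avb_public_key)
        # 3. (Flash the ROM images here, following the ROM's own instructions.)
        # 4. Relock the bootloader; it will now only boot images signed with that key.
        fastboot("flashing", "lock")

    install_custom_rom_and_relock("avb_pkmd.bin")  # file name is a placeholder

After that, turning off "OEM unlocking" in developer settings closes the loop described above.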
With virtualization this could be done with the same device. The play VM can be properly isolated from the secure one.
It's funny, GP framed it as "work" vs "play" but for me it's "untrusted software that spies on me that I'm forced to use" vs "software stack that I mostly trust (except the firmware) but BigCorp doesn't approve of".
Well I don't entirely, but in that case there's even less of a choice and also (it seems to me) less risk. The OEM software stack on the phone is expected to phone home. On the other hand there is a strong expectation that a CPU or southbridge or whatever other chip will not do that on its own. Not only would it be much more technically complex to pull off, it should also be easy to confirm once suspected by going around and auditing other identical hardware.
As you progress down the stack from userspace to OS to firmware to hardware there is progressively less opportunity to interact directly with the network in a non-surreptitious manner, more expectation of isolation, and it becomes increasingly difficult to hide something after the fact. On the extreme end a hardware backdoor is permanently built into the chip as a sort of physical artifact. It's literally impossible to cover it up after the fact. That's incredibly high risk for the manufacturer.
The above is why the Intel ME and AMD PSP solutions are so nefarious. They normalize the expectation that the hardware vendor maintains unauditable, network capable, remotely patchable black box software that sits at the bottom of the stack at the root of trust. It's literally something out of a dystopian sci-fi flick.
A lot of my phones stopped receiving firmware updates long ago; the manufacturer simply stopped providing them. The only way to safely use them is to install custom firmware that still addresses the problems, and this eFuse thing can be used to prevent custom firmware.
This eFuse is part of the plot to prevent users from accessing open source firmware, plain and simple. Your "user safety" jargon can't confuse people anymore, not after everything people (at least the smart few) have learned over the years.
This is not what's happening here, though.
Once they have hardware access, who cares? They either access my data or throw it in a lake. Either way the phone is gone, and I'd better have had a good data backup and a level of encryption I'm comfortable with.
This not only makes it impossible to install your own ROMs, but permanently bricks the phone if you try. That is not a choice my hardware provider will ever get to make for me.
It's just another nail in the coffin of general computing, one more defeat of what phones could have been, and one more piece of personal control that consumers will be all too happy to give up because of convenience.
Can you explain it in simpler terms such that an idiot like me can understand? Like what would an alternative OS have to do to be compatible with the "current eFuse states"?
The linked page seems to indicate that the EDL image is also vendor signed. Wouldn't that mean they're official?
Unless I've misunderstood, the EDL image is tied to the same set of fuses as the XBL image, so it's only useful for recovery if the fuses haven't been updated. That seems like an outlandish design choice to me: flashing a new XBL leaves you without the fallback tooling (hence the reports of people forced to replace the motherboard), and if anything is wrong with the new XBL that doesn't manifest until after the stage where it blows the fuses, the vendor will have irreversibly bricked their own devices via an only slightly broken update.
> The anti-rollback mechanism uses Qfprom (Qualcomm Fuse Programmable Read-Only Memory), a region on Qualcomm processors containing one-time programmable electronic fuses.
What nice, thoughtful people, to build such a feature.
That's why you sanction the hell out of the Chinese Loongson or Russian Baikal, pitiful as those CPUs are: they're harder to disable than programmatically "blowing a fuse".
You may not want trusted computing and root/jailbreak everything as a consumer, but building one is not inherently evil.
Because in the case of smartphones, there is realistically no other option.
> For example if they don't trust it, they may avoid logging in to their bank on it.
Except when the bank trusts the system that I don't (smartphone with Google Services or equivalent Apple junk installed), and doesn't trust the system that I do (desktop computer or degoogled smartphone), which is a very common scenario.
I recently moved to Apple devices because they use trusted computing differently; namely, to protect against platform abuse, but mostly not to protect corporate interests. They also publish detailed first-party documentation on how their platforms work and how certain features are implemented.
Apple jailbreaking has historically also had a better UX than Android rooting, because Apple platforms are more trusted than Android platforms, meaning that DRM protection, banking apps and such will often still work with a jailbroken iOS device, unlike most rooted Android devices. With that said though, I don't particularly expect to ever have a jailbroken iOS device again, unfortunately.
Apple implements many more protections than Android at the OS level to prevent abuse of trusted computing by third-party apps, and give the user control. (Though some Androids like, say, GrapheneOS, implement lots that Apple does not.)
But of course all this only matters if you trust Apple. I trust them less than I did, but to me they are still the most trustworthy.
What do you mean by this? On both Android and iOS app developers can have a backend that checks the status of app attestation.
Also, "checking the status of app attestation" is the wrong approach. If you want to use app attestation that way, then you should sign/encrypt communications (requests and responses) with hardware-backed keys; that way, you can't replay or proxy an attestation result to authorize modified requests.
(I believe Apple attestation doesn't directly support encryption itself, only signing, but that is enough to use it as part of a key exchange with hardware-backed keys: you sign a public key you send to the server; the server verifies your signature and uses your public key to encrypt a server-side public key; you then decrypt that and use it to encrypt your future communications to the server, and the server can encrypt its responses with your public key, and so on.)
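As a toy illustration of binding traffic to a hardware-backed key rather than replaying an attestation verdict (using Python's cryptography package): the "device key" below is an in-memory EC key standing in for a Secure Enclave/StrongBox key, and the attestation step is reduced to "the server already trusts this public key".

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Client side: key pair that would live in the secure element.
    device_key = ec.generate_private_key(ec.SECP256R1())
    device_pub_bytes = device_key.public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint
    )
    # In a real flow, this public key is what the attestation statement vouches for.

    # Server side: ephemeral key for this session.
    server_key = ec.generate_private_key(ec.SECP256R1())

    # Both sides derive the same session key via ECDH + HKDF.
    client_shared = device_key.exchange(ec.ECDH(), server_key.public_key())
    server_shared = server_key.exchange(
        ec.ECDH(),
        ec.EllipticCurvePublicKey.from_encoded_point(ec.SECP256R1(), device_pub_bytes),
    )
    assert client_shared == server_shared

    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"attested-session"
    ).derive(client_shared)

    # Requests are now encrypted with a key rooted in the attested device key, so a
    # proxied or replayed attestation result is useless on its own.
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b'{"action": "transfer"}', None)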
Users don't have a choice, and they don't care. Bitlocker is cracked by the feds, iOS and Android devices can get unlocked or hacked with commercially-available grey-market exploits. Push Notifications are bugged, apparently. Your logic hinges on an idyllic philosophy that doesn't even exist in security focused communities.
https://arstechnica.com/information-technology/2024/10/phone...
https://peabee.substack.com/p/everyone-knows-what-apps-you-u...
As for Apple, I just don't know enough because I haven't seriously used their devices in years.
The carriers in the US were caught selling e911 location data to pretty much whoever was willing to pay. Did that hurt them? Not as far as I can tell, largely because there is no alternative and (bizarrely) such behavior isn't considered by our current legislation to be a criminal act. Consumers are forced to accept that they are simply along for the ride.
People would stop taking photos with their camera that they didn't want to be public.
If Google did something egregious enough legislation might actually get passed because realistically, if public outcry doesn't convince them to change direction, what other option is available? At present it's that or switch to the only other major player in town.
...and not because, in truth, they don't care?
How would we even know if people distrusted a company like Microsoft or Meta? Both companies are so deeply-entrenched that you can't avoid them no matter how you feel about their privacy stance. The same goes for Apple and Google, there is no "greener grass" alternative to protest the surveillance of Push Notifications or vulnerability to Pegasus malware.
Would they? Nobody that I know would.
Persistent bootkits trivial to install
No verified boot chain
Firmware implants survived OS reinstalls
No hardware-backed key storage
Encryption keys extractable via JTAG/flash dump
Modern Secure Boot + hardware-backed keystore + eFuse anti-rollback eliminated entire attack classes. The median user's security posture improved by orders of magnitude.

It's not that trusted computing is inherently bad. I actually think it's a very good thing. The problem is that the manufacturer maintains control of the keys when they sell you a device.
Imagine selling someone a house that had smart locks but not turning over control of the locks to the new "owner". And every time the "owner" wants to add a new guest to the lock you insist on "reviewing" the guest before agreeing to add him. You insist that this is important for "security" because otherwise the "owner" might throw a party or invite a drug dealer over or something else you don't approve of. But don't worry, you are protecting the "owner" from malicious third parties hiding in plain sight. You run thorough background checks on all applicants after all!
See also:
https://github.com/zenfyrdev/bootloader-unlock-wall-of-shame
We just had the Google side loading article here.
All I'm saying is that we have to acknowledge that both are true. And, if both are true, we need to have a serious conversation about who gets to choose the core used in our front door locks.
The fact that it's locked down and remotely killable is a feature that people pay for and regulators enforce from their side too.
At the very best, the supplier plays nice and allows you to run your own applications, remove whatever crap they preinstalled, and change the font face. If you are really lucky, you can choose to run a practically useless Linux distribution instead of a practically useful Linux distribution, with their blessing. That blessing is a transient thing that can be revoked at any time.
Why not?
Obviously we don't have that. But what stops an open firmware (or even open hardware) GSM modem being built?
The same thing that stops you from living on a sea platform as a sovereign citizen or paying for your groceries with bitcoin. Technically you can, but practically you don't.
If you want to sell it commercially, you can open-source all you want, but the debug interface and bootloader integrity would have to be closed shut for the production batch.
At best, you can do what the other comment refers to -- instead of using the baseband as the source of the root of trust, make it work like wifi modules do. This of course comes at the cost of having a separate SoC. Early Motorola smartphones (the EZX series) did that -- the Linux part talked to the GSM part literally over USB. It came with all kinds of fun, including sound being, khmm... complicated. I don't remember whether they shared the RAM though. You don't want to share your RAM with a funny blob without reading the fine print about who sets up the mappings, right?
Figuring out all of that costs money, and money has to come from somewhere, which means you also have to resist the pressure to not become part of the problem. And then the product that comes out is 5 years too late for the spec and 1.5 times too expensive, for the vague promise of "trust me bro, I will only blow the eFuse to fix actual CVEs".
https://hackaday.com/2022/07/12/open-firmware-for-pinephone-...
Is it? I remember the MotoMing of the EZX years being actually separate, and maybe the latest failed attempts at a Linux phone had one, but I'm under the impression the most common way to do it is a SoC where one core is doing baseband and the other(s) are doing Linux, and they also share the physical RAM that is part of the same SoC. I don't follow the happenings close enough to say it's 100% of all phones, and people call me out saying MediaTek is totally halal in this department. It's not like I'm going to touch anything with MTK ever to check.
Governments can ban this feature and ban companies from selling devices that have it.
I'm sure the CIA was not founded after COVID :-)
> So that’s how in an event of war US adversaries will be relieved of their devices
Any kind of device-unique key is likely rooted in OTP (via a seed or PUF activation).
The root of all certificate chains is likely hashed in fuses to prevent swapping out cert chains with a flash programmer.
It's commonly used for anti-rollback as well - the biggest news here is that they didn't have this already.
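A tiny sketch of the "root of the cert chain hashed in fuses" point above; the fused value and names are made up, and real boot ROMs do this in mask ROM with SoC-specific formats:

    import hashlib

    # Stand-in for the hash burned into OTP fuses at manufacturing time.
    FUSED_ROOT_KEY_HASH = hashlib.sha256(b"vendor-root-public-key").digest()

    def root_key_is_trusted(root_public_key: bytes) -> bool:
        # Accept a certificate chain only if its root key hashes to the fused value,
        # so swapping the chain out with a flash programmer doesn't help.
        return hashlib.sha256(root_public_key).digest() == FUSED_ROOT_KEY_HASH

    print(root_key_is_trusted(b"vendor-root-public-key"))    # True
    print(root_key_is_trusted(b"attacker-root-public-key"))  # False: chain rejected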
If there's some horrible security bug found in an old version of their software, they have no way to stop an attacker from loading up the broken firmware to exploit your device? That is not aligned with modern best practices for security.
You mean an attacker with physical access to the device, plugging in some USB or UART, or a hacker who downgraded the firmware so they could use the exploit in the older version... to downgrade the firmware to the version with the exploit?
The evil of this type of attack is that the firmware with an exploit would be properly signed, so the firmware update systems on the chip would install it (and encrypt it with the PUF-based key) unless you have anti-rollback.
Of course, with a skilled enough attacker, anything is possible.
... which describes US border controls or police in general. Once "law enforcement" becomes part of one's threat model, a lot of trade-offs suddenly have the entire balance changed.
Most SoCs of even moderate complexity have lots of redundancy built in for yield management (e.g. anything with RAM expects some % of the RAM cells to be dead on any given chip), and fuses are used to keep track of that. If you had to have a strap per RAM block, it would not scale.
As for eFuses, they are present essentially everywhere, in any SoC or microcontroller. They are usually used to store secrets (keys) and for chip configuration.
The linked wiki article is written in a way that might lead the reader to assume that OnePlus did something wrong, unique, anti-consumer, or something along those lines. Quite the contrary: OnePlus issued updated official firmware which burned the anti-rollback bit to prevent older, vulnerable official firmware from being installed. Either a new bootloader-level vulnerability has been found, or some kind of bootloader-level secret has leaked from OnePlus, with which an attacker could gain access to smartphone data they should not have. With this update, OnePlus secured the data of smartphone owners again.
You can still unlock the bootloader and install custom firmware (with a bumped anti-rollback version in the firmware metadata, I guess; that would require newer custom firmware, or a recompilation/header modification for older ones). A device with custom firmware installed won't receive the official firmware update to begin with, so it could not be bricked this way.
I assume that's also why China is investing so heavily into open source RISC-V.
When you really need it, like to download maps into the satnav, you can connect it to your home WiFi, or tether via Bluetooth.
OnePlus and other Chinese brands were modder-friendly until they suddenly weren't; I wouldn't rely on your car not getting more hostile at some point.
Shhh. Nobody tell him where his phone, computer, and vast majority of everything else in his house was made.
Which sadist decided that that is a good number for an emergency call?
It's from The IT Crowd: https://www.youtube.com/watch?v=HWc3WY3fuZU
My ownership is proved by my receipt from the store I bought it from.
This vandalization at scale is a CFAA violation. I'd also argue it is a fraudulent sale, since not all rights were transferred at sale and it was misrepresented as a sale instead of an indefinite rental.
And it's likely a RICO Act violation, since the C-levels and board of directors likely knew and/or ordered it.
And damn near everything's wire fraud.
But if anybody does manage to take them to court and win, what would we see? A $10 voucher for the next Oneplus phone? Like we'd buy another.
Fabricated or fake consent, or worse, forced automated updates, indicates that the company is the owner and exerting ownership-level control. Thus the sale was fraudulently conducted as a sale but is really an indefinite rental.
If I buy a used vehicle for example, I have exactly zero relationship with the manufacturer. I never agree to anything at all with them. I turn the car on and it goes. They do not have any authorization to touch anything.
We shouldn't confuse what's happening here. The engineers working on these systems that access people's computers without authorization should absolutely be in prison right alongside the executives that allowed or pushed for it. They know exactly what they're doing.
Generally speaking and most of the time, yes; however, there are a few caveats. The following uses common law – to narrow the scope of the discussion down.
As a matter of property, the second-hand purchaser owns the chattel. The manufacturer has no general residual right(s) to «touch» the car merely because it made it. Common law sets a high bar against unauthorised interference.
The manufacturer still owes duties to foreseeable users – a law-imposed duty relationship in tort (and often statute) concerning safety, defects, warnings, and misrepresentations. This is a unidirectional relationship – from the manufacturer to the car owner – and covers product safety, recalls, negligence (on the manufacturer's part) and the like, irrespective of whether it was a first- or second-hand purchase.
One caveat is that if the purchased second-hand car has the residual warranty period left, and the second-hand buyer desires that the warranty be transferred to them, a time-limited, owner-to-manufacturer relationship will exist. The buyer, of course, has no obligation to accept the warranty transfer, and they may choose to forgo the remaining warranty.
The second caveat is that manufacturers have tried (successfully or not – depends on the jurisdiction) to assert that the buyer (first- or second-hand) owns the hardware (the rust bucket), and users (the owners) receive a licence to use the software – and not infrequently with strings attached (conditions, restrictions, updates and account terms).
Under common law, however, even if a software licence exists, the manufacturer does not automatically get a free-standing right to remotely alter the vehicle whenever they wish. Any such right has to come from a valid contractual arrangement, a statutory power, or consent (privity still applies and requires consent) – all of which weakens the manufacturer's legal standing.
Lastly, depending on the jurisdiction, the manufacturer can even be sued for installing an OTA update, on the basis of the car being a computer on wheels and the OTA update being an event of unauthorised access to the computer and its data, which is oftentimes a criminal offence. This hinges on the fact that the second-hand buyer has not entered into a consensual relationship with the manufacturer after the purchase.
A bit of a lengthy write-up, but legal stuff is always a fuster cluck and a rabbit hole of nitpicking and nuances.
> the manufacturer can even be sued [...] This hinges on the fact that the second-hand buyer has not entered into a consentual relationship with the manufacturer after the purchase.
Wait, but the first owner (presumably, for the sake of argument) agreed to this. Why isn't it the first owner's fault for not disclosing it to the second owner? Shouldn't they be sued instead? How is a manufacturer held responsible for an agreement between parties that they could not possibly be expected to have knowledge of?
For example, if the first owner actively misrepresented the position (for example, they said «no remote access, no subscriptions, no tracking» when they knew the opposite), the second owner might have a misrepresentation claim against the first owner. But that is pretty much where the buck stops.
> «How can a manufacturer be liable for an agreement it cannot know about?».
That is not the right framing. The manufacturer is not being held liable for «an agreement between the first owner and the second owner». The manufacturer is being held liable for its own conduct (access/modification by virtue of an OTA update) without authorisation from the _current_ rights-holder because liability follows the actor.
It happens because, under common law, 1) the first owner’s consent does not automatically bind the second owner, 2) consent does not normally run with the asset, and 3) a «new contract with the second owner» does not arise automatically on resale. It arises only if the second owner consciously assents to manufacturer terms (or if a statute creates obligations regardless of assent).
So the manufacturer is responsible because it is the party _acting_. If the manufacturer accesses/modifies without a valid basis extending to the current owner or user, it owns that risk.
I am not saying that «every unwanted OTA update is a crime». All I am saying is that the legal system has a concept of «unauthorised modification/access», and the contention is over whether the access or modification was authorised or not.
For example suppose I ask someone to come demolish my fence next week when nobody is home. And then I sell the house in between. So is the company supposed to run a title check the moment they arrive, because the owner may no longer have the authority they once had prior to that moment?
Or say I click Accept on an agreement, sleep/hibernate the device right as installation is about to start, and then transfer the rights to the device. Now the vendor is responsible for not running a title check or asking for confirmation a second time before the first confirmation? And I'm in the clear because I never claimed there's no installation pending?
I can't imagine the law really works this way... these sound absurd. Surely there's gotta be much more to it than what you're describing?
In fact, the separation of concerns actually makes things simpler as the property rights do transfer with the property sale (a car, a house, a computer, etc.), and the contractual obligations do not travel with the asset (unless the law or a properly formed new agreement makes it travel – jurisdiction dependent). It is also important to note that the contract between the former owner and the manufacturer does not automatically lapse with the property sale.
Let's pick the two examples apart.
> […] I ask someone to come demolish my fence next week when nobody is home. And then I sell the house in between. So is the company supposed to run a title check the moment they arrive, because the owner may no longer have the authority they once had prior to that moment?
They are not required to, but it is very prudent of them to ascertain that the person who signed the contract happens to be the current owner of the house before they commence the demolition works – unless dealing with a litany of lawsuits is their core business. By doing so, they save time and money.
Now, imagine that, as the previous owner of the house, you also instructed the company to demolish the fence and demolish the entire house after. It is hard to imagine that the new owner would be delighted or feel ecstatic about finding their newly acquired house to have been wiped out of existence.
From the legal perspective, the demolishing company would be trespassing on the property that now belongs to somebody else, and they are in no position to proceed as the contractual rights stay with the previous owner and not with the property [0]. So in this situation, it creates a dispute (and – not unfathomably – a legal action) between the previous owner and the demolishing company, which the new owner is not privy to. Again, such a separation appears logical to me. Otherwise, the new owner would inherit a barrage of clandestine or dodgy contracts that the first owner might have signed in the past.
> Or say I click Accept on an agreement […]
Same separation still applies:
1. The vendor’s contract with the first owner can remain on foot.
2. That does not automatically authorise a post-sale access/modification of the second owner’s device.
In real litigation, what happens next turns on how «authorisation» is evidenced and managed. If the system is designed so that the physical device is still cryptographically tied to the old account, a court may treat that as strong evidence of practical authorisation, but it is not the same as legal authorisation by the current owner if the current owner never agreed. Practically, however, the new owner simply wipes the device or resets it, and I do not think that it is commonplace for new owners to sue the manufacturer for merely applying an update, although the possibility is there.

All of the above segues into… the practical implications of separating property and contractual rights. Especially in the case of computing hardware (and EVs as well!), these have become particularly important in today's world, where vendors have been increasingly trying to move towards a rent-seeking model, in which they want the device sale to be seen as a lease or a licence to use but not the right to own the device.
Common law insists on the separation between property rights in the physical asset and contractual or statutory rights governing any assented or connected services (including the software). Vendors/manufacturers may market modern computing hardware as an inseparable «hardware–software package» and frame the transaction as a licence to use rather than ownership, but that characterisation does not, by itself, displace the purchaser’s ownership of the tangible chattel (e.g. a car or a laptop). The line common law draws is therefore real, but the contemporary contest is about how far licensing and service dependency can be used to diminish the practical incidents of ownership.
[0] Unless the new owner has acknowledged and agreed to the demolishing works in a separate contract.
Google Pixel would like to have a word. Though they regressed since they stopped shipping device trees in AOSP.
Basically breaking any kind of FOSS or repairability, creating dead HW bricks if the vendor ceases to maintain or exist.
I still sometimes wonder whether the OnePlus green line fiasco was a failed hardware-fuse type of thing that got accidentally triggered during a software update. (Insert "I can't prove it" meme here.)
I have, however, experienced an ISP writing to you because you have a faulty modem (some Huawei device) and asking you to not use it anymore.
Not surprisingly, stolen phones tend to end up in those locations.
The effect on the custom OS community has me worried (I am still rocking my OnePlus 7T with crDroid, and OnePlus used to be the most geek-friendly). Now I am wondering if there are other ways they could have achieved the same thing without blowing a fuse, or at least been more transparent about this.
Google pushed a non-downgradable final update to the Pixel 6a.
I was able to install Graphene on such a device. Lineage was advertised as completely incompatible, but some hinted it would work.
You absolutely do not; this is an extremely healthy starting position for evaluating a corporation's behavior. Any benefit you receive is incidental; if they made more money by worsening your experience, they would.
I don't believe for a second that this benefits phone owners in any way. A thief is not going to sit there and do research on your phone model before he steals it. He's going to steal whatever he can and then figure out what to do with it.
Thieves don't do that research on specific models. Manufacturers don't like it if their competitors' models are easy to hawk on grey markets, because that means their own phones get stolen too.
Thieves these days seem to really be struggling to even use them for parts, since these are also largely Apple DRMed, and are often resorting to threatening the previous owner to remove the activation lock remotely.
Of course theft often isn't preceded by a diligent cost-benefit analysis, but once there's a critical mass of unusable – even for parts – stolen phones, I believe it can make a difference.
This makes sense and is much less dystopian than some of the other commenters are suggesting.
Android's normal bootloader unlock procedure allows for doing so, but ensures that the data partition (or the encryption keys for it) is wiped, so that a border guard at the airport can't just Cellebrite the phone open.
Without downgrade protection, the low-level recovery protocol built into Qualcomm chips would permit the attacker to load an old, vulnerable version of the software, which has been properly signed and everything, and still exploit it. By preventing downgrades through eFuses, this avenue of attack can be prevented.
This does not actually prevent running custom ROMs, necessarily. This does prevent older custom ROMs. Custom ROMs developed with the new bootloader/firmware/etc should still boot fine.
This is why the linked article states:
> The community recommendation is that users who have updated should not flash any custom ROM until developers explicitly announce support for fused devices with the new firmware base.
Once ROM developers update their ROMs, the custom ROM situation should be fine again.
This feature doesn't allow unlocking the bootloader (as in, executing a custom ROM); it's designed to install factory-signed code. However, using it to "restore" old, vulnerable factory code would obviously cause issues.
Sophisticated actors (think state-level actors like a border agent who insists on taking your phone to a back room for "inspection" while you wait at customs) can and will develop specialized tooling to help them do this very quickly.
Which includes old, vulnerable versions and all patched, newer versions. By burning in the minimum version, the old code now refuses to boot before it can be exploited.
This is standard practice for low-level bootloader attacks against things like consoles and some other phone brands.
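A rough sketch of the minimum-version check described in the last few comments, modeling the fuse bank as a monotonic counter; the names, and exactly when the minimum gets bumped, are assumptions that vary by vendor:

    class FuseBank:
        # One-time-programmable bits: they can be set but never cleared, so the
        # minimum acceptable firmware version can only ever increase.
        def __init__(self, bits: int = 16):
            self._bits = [False] * bits

        def blow_up_to(self, version: int) -> None:
            for i in range(min(version, len(self._bits))):
                self._bits[i] = True

        def min_version(self) -> int:
            return sum(self._bits)

    def boot_firmware(fuses: FuseBank, image_version: int, signature_ok: bool) -> bool:
        # Boot only if the image is genuine AND not older than the fused minimum.
        if not signature_ok:
            return False
        if image_version < fuses.min_version():
            return False  # properly signed, but a known-vulnerable downgrade: refuse
        fuses.blow_up_to(image_version)  # here the minimum is bumped on every boot
        return True

    fuses = FuseBank()
    print(boot_firmware(fuses, image_version=7, signature_ok=True))  # True; minimum is now 7
    print(boot_firmware(fuses, image_version=5, signature_ok=True))  # False: rollback blocked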
They don't want the hardware to be under your control. In the mind of tech executives, selling hardware does not make enough money, the user must stay captive to the stock OS where "software as a service" can be sold, and data about the user can be extracted.
Give ROM developers a few weeks and you can boot your favourite custom ROMs again.
To be fair, they are right: the vast majority of users don't give a damn. Unfortunately I do.
Specifically, GrapheneOS on Pixels signs its releases with its own keys, and gets rollback protection without blowing out any fuses.
I know that all these restrictions might make sense for the average user who wants a secure phone.. but I want an insecure-but-fully-hackable one.
OnePlus just chose the hardware way, versus Apple the signature way
Whether for OnePlus or Apple, there should definitively be a way to let users sign and run the operating system of their choice, like any other software.
(Still hating this iOS 26, and the fact that even after losing all my data and downgrading back to iOS 18, it refused to re-sync my Apple Watch until iOS 26 was installed again. Shitty company policy.)
There is a good reason to prevent downgrades -- older versions have CVEs and some are actually exploitable.
What exactly is it comparing? What is the “firmware embedded version number”? With an unlocked bootloader you can flash boot and super (system, vendor, etc) partitions, but I must be missing something because it seems like this would be bypassable.
It does say
> Custom ROMs package firmware components from the stock firmware they were built against. If a user's device has been updated to a fused firmware version & they flash a custom ROM built against older firmware, the anti-rollback mechanism triggers immediately.
and I know custom ROMs will often say “make sure you flash stock version x.y beforehand” to ensure you’re on the right firmware, but I’m not sure what partitions that actually refers to (and it’s not the same as vendor blobs), or how much work it is to either build a custom ROM against a newer firmware or patch the (hundreds of) vendor blobs.
The abl firmware contains an anti-rollback version that is checked against the eFuse version.
The super partition is a bunch of LVM-style logical partitions on top of a single physical partition. Of these, system is the main root filesystem, which is mounted read-only and protected with dm-verity device mapping. The root hash of this verity rootfs is also stored in the signed vbmeta.
Android Verified Boot also has an anti-rollback feature. The vbmeta partition is versioned, and the minimum version value is stored cryptographically in a special flash partition called the Replay Protected Memory Block (RPMB). This prevents rollback of boot and super, as vbmeta itself cannot be rolled back.
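A small sketch of that vbmeta rollback check, with the RPMB modeled as forward-only storage; the structures are illustrative, not AVB's actual formats:

    from dataclasses import dataclass

    @dataclass
    class VbmetaImage:
        rollback_index: int
        signature_valid: bool  # stands in for real signature verification

    class Rpmb:
        # Replay-protected storage: the stored index only ever moves forward.
        def __init__(self):
            self.stored_index = 0

        def update(self, index: int) -> None:
            self.stored_index = max(self.stored_index, index)

    def verify_vbmeta(vbmeta: VbmetaImage, rpmb: Rpmb) -> bool:
        if not vbmeta.signature_valid:
            return False
        if vbmeta.rollback_index < rpmb.stored_index:
            return False  # an older vbmeta (and thus older boot/super) is rejected
        rpmb.update(vbmeta.rollback_index)
        return True

    rpmb = Rpmb()
    print(verify_vbmeta(VbmetaImage(rollback_index=3, signature_valid=True), rpmb))  # True
    print(verify_vbmeta(VbmetaImage(rollback_index=1, signature_valid=True), rpmb))  # False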
This doesn't make sense unless the secondary boot stage is signed and there is a version somewhere in the signed metadata. The primary boot stage checks the signature, reads the version of the secondary boot stage, and loads it only if that version is not lower than what the write-once memory (fuses) requires.
If you can self-sign or disable signature checking, then you can boot whatever you want, as long as its metadata satisfies the version requirement.