Posted by todsacerdoti 10/13/2025

Modern iOS Security Features – A Deep Dive into SPTM, TXM, and Exclaves (arxiv.org)
237 points | 32 comments
Moral_ 10/13/2025|
SEAR and the Apple team do an excellent job on iOS security, and should be greatly commended for it.

Not only are they willing to develop hardware features and plumb them throughout the entire stack, they're also willing to look at in-the-wild (ITW) exploits and work on ways to mitigate them. PPL was super interesting: they decided it wasn't 100% effective, so they ditched it and came up with other things.

Apple's vertical integration makes it 'easy' to do this compared to Android, where they have to convince the CPU folks at Qualcomm or MediaTek to build a feature, convince the Linux kernel maintainers to take it, get it into AOSP, get it into upstream LLVM, etc.

Pointer authentication codes (PAC) are a good example: Apple said f-it, we'll do it ourselves. They maintained a downstream fork of LLVM, built full support, studied in-the-wild bypasses, and fixed those up.
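
To make the mechanism concrete, here's a rough sketch using the ptrauth intrinsics Apple's clang exposes for arm64e (guarded by a feature check, since they don't exist on other targets). In normal builds the compiler emits the equivalent sign/auth instructions for return addresses, function pointers, vtables, etc. on its own; manual signing like this is the exception, not the rule:

    // Sketch only: Apple clang's <ptrauth.h> intrinsics, arm64e targets.
    #include <stdint.h>
    #include <stdio.h>

    #ifndef __has_feature
    #define __has_feature(x) 0
    #endif

    #if __has_feature(ptrauth_intrinsics)
    #include <ptrauth.h>

    int main(void) {
        int value = 42;
        int *p = &value;

        // Sign a data pointer with the DA key, using the pointer's own
        // storage address as the discriminator (a common pattern).
        void *signed_p = ptrauth_sign_unauthenticated(
            p, ptrauth_key_asda, (uintptr_t)&p);

        // Authenticating with the same key/discriminator recovers the
        // original pointer; a forged or corrupted signed_p fails the check
        // here (trapping, or yielding an unusable pointer, depending on
        // the hardware generation).
        int *checked = ptrauth_auth_data(signed_p, ptrauth_key_asda,
                                         (uintptr_t)&p);
        printf("%d\n", *checked);
        return 0;
    }
    #else
    int main(void) {
        puts("ptrauth intrinsics not available on this target");
        return 0;
    }
    #endif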

dagmx 10/13/2025||
One of the knock-on benefits of this is increased security across all platforms, as long as someone exercises that code path on one of Apple's new processors with a hardened runtime.

In theory it makes it easier to catch issues that you can’t simply catch with static analysis, and it gives you some level of insight beyond simply crashing.

chatmasta 10/14/2025|||
I buy Apple products not just because they do a great job with security and privacy, but because they do this without needing to do it. They could make plenty of money without going so deep into these features. Maybe eventually it’d catch up with them but it’s not like they even have competition forcing them to care about your privacy.

Their commitment to privacy goes beyond marketing. They actually mean it. They staffed their security team with top hackers from the Jailbreak community… they innovated with Private Relay, private mailboxes, trusted compute, multi-party inference…

I’ve got plenty of problems with Apple hypocrisy, like their embrace of VPNs (except for traffic to Apple Servers) or privacy-preserving defaults (except for Wi-Fi calling or “journaling suggestions”). You could argue their commitment to privacy includes a qualifier like “you’re protected from everyone except for Apple and select telecom partners by default.”

But that’s still leagues ahead of Google whose mantra is more like “you’re protected from everyone except Google and anyone who buys an ad from Google.”

OptionOfT 10/14/2025||
What is non-private about Wi-Fi calling?
chatmasta 10/14/2025||
If you have it enabled, then every thirty seconds (regardless of whether you’re actively on a call), your phone will make a request to a signaling server owned by your mobile ISP. So if you’re on T-Mobile and traveling in some other country with no cell service, but you’re connected to Wi-Fi, then T-Mobile will see your public IP address. (IIRC, this also bypasses any VPN profile you have enabled on your device, because the signaling system is based on a derivative of IPsec that could have problems communicating over an active VPN tunnel.)

I found out about this when I was wiresharking all outbound traffic from my router and saw my phone making these weird requests.

Apple actually does warn you about this in the fine print (“About WiFi calling and privacy…”) next to the toggle in Settings. But I didn’t realize just how intrusive it was.

I know my mobile ISP can triangulate my location already, but I don’t want to offer them even more data about every public IP of every WiFi network I connect to, even if I’m not roaming at the time.
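
If you want to see this for yourself, a quick way is to capture on the router or the Wi-Fi interface and look for IKEv2/IPsec traffic. A minimal libpcap sketch, assuming the standard IKE/NAT-T ports 500/4500 and ESP (the actual ePDG endpoints and timing are carrier-specific, and the interface name here is just a placeholder):

    // Build: cc wfc_watch.c -lpcap   (needs capture privileges)
    #include <sys/types.h>
    #include <pcap/pcap.h>
    #include <stdio.h>

    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes) {
        (void)user; (void)bytes;
        printf("IKE/ESP packet: %u bytes at t=%ld\n", h->len, (long)h->ts.tv_sec);
    }

    int main(int argc, char **argv) {
        char errbuf[PCAP_ERRBUF_SIZE];
        const char *dev = argc > 1 ? argv[1] : "en0";  // placeholder interface

        pcap_t *pc = pcap_open_live(dev, 256, 0, 1000, errbuf);
        if (!pc) { fprintf(stderr, "%s\n", errbuf); return 1; }

        // IKEv2 negotiates on UDP 500, NAT traversal moves it to UDP 4500,
        // and the tunnel itself is ESP (IP protocol 50).
        struct bpf_program prog;
        if (pcap_compile(pc, &prog,
                         "udp port 500 or udp port 4500 or ip proto 50",
                         1, PCAP_NETMASK_UNKNOWN) != 0 ||
            pcap_setfilter(pc, &prog) != 0) {
            fprintf(stderr, "filter: %s\n", pcap_geterr(pc));
            return 1;
        }

        return pcap_loop(pc, -1, on_packet, NULL) < 0 ? 1 : 0;
    }

If the behavior above holds for your carrier, the keepalives show up as small, regularly spaced packets even when no call is active.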

devttyeu 10/14/2025|||
And after all that hardcore engineering work is done, iMessage still has code paths leading to dubious code running in the kernel, enabling 0-click exploits to still be a thing.
aprotyas 10/14/2025|||
That's one way to look at it, but if perfection is the only goal post then no one would ever get anywhere.
wat10000 10/14/2025||||
What's the dubious code?

Running something in the kernel is unavoidable if you want to actually show stuff to the user.

michaelt 10/14/2025||
In ~2020, it was:

Attacker sends an iMessage containing a PDF

iMessage, like most modern messaging apps, displays a preview - which means running the PDF loader.

The PDF loader has support for the obsolete-but-part-of-the-pdf-standard image codec 'JBIG2'

Apple's JBIG2 codec has an exploitable bug (an integer overflow, sketched below), giving the attacker remote code execution on the device.

This exploit was purchased by NSO, who sold it to a bunch of middle eastern dictatorships who promptly used it on journalists.

https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
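
To make that step concrete, here is a deliberately simplified, hypothetical sketch of the bug class (an integer overflow in a decoder's size calculation). It is not Apple's actual JBIG2 code, just the shape of the problem:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        uint32_t symbol_count;   // read straight from the untrusted file
        uint32_t symbol_size;    // ditto
        const uint8_t *payload;
    } segment_t;

    // BUG: the 32-bit multiply can wrap, so `total` may be tiny even though
    // symbol_count * symbol_size is enormous; the copy loop then writes far
    // past the end of `buf` -- the raw material for an exploit.
    uint8_t *decode_symbols_buggy(const segment_t *seg) {
        uint32_t total = seg->symbol_count * seg->symbol_size;
        uint8_t *buf = malloc(total);
        if (!buf) return NULL;
        for (uint32_t i = 0; i < seg->symbol_count; i++)
            memcpy(buf + (size_t)i * seg->symbol_size,
                   seg->payload + (size_t)i * seg->symbol_size,
                   seg->symbol_size);
        return buf;
    }

    // Checked version: reject anything whose total would overflow.
    uint8_t *decode_symbols_fixed(const segment_t *seg) {
        if (seg->symbol_size != 0 &&
            seg->symbol_count > SIZE_MAX / seg->symbol_size)
            return NULL;
        size_t total = (size_t)seg->symbol_count * seg->symbol_size;
        uint8_t *buf = malloc(total);
        if (!buf) return NULL;
        memcpy(buf, seg->payload, total);
        return buf;
    }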

wat10000 10/14/2025||
None of that ran in the kernel. Everything happens within a single process up until the sandbox escape, which isn't even covered in your article. The article's sequel* goes into detail about that part, which involves subverting a more privileged process by exploiting logic errors to get it to execute code. The only involvement by the kernel is passing IPC messages back and forth.

* https://googleprojectzero.blogspot.com/2022/03/forcedentry-s...

walterbell 10/14/2025||||
Disable iMessage via Apple Configurator MDM policy and enable Lockdown Mode.
Citizen8396 10/14/2025||
I imagine the latter is sufficient.

PS: make sure you remove that pesky "USB accessories while locked allowed" profile that Configurator likes to sneak in.

walterbell 10/15/2025||
Need an open-source MDM profile policy linter.
kmeisthax 10/14/2025|||
Why would a nation-state actor need access to your kernel when all the juicy stuff[0] is in the iMessage process it's already loaded into?

[0] https://xkcd.com/1200/

pjmlp 10/14/2025|||
Google could have added MTE a couple of years ago, but apparently doesn't want to force it on OEMs as part of its Android certification program. It's the same story as with OS updates.
palata 10/14/2025|||
Don't the Pixels have MTE? Definitely GrapheneOS does, at least to some extent.
pjmlp 10/14/2025||
Kind of: you need to enable it in developer options, and Pixels are Google's own devices, not the other OEMs'.

https://developer.android.com/ndk/guides/arm-mte

https://source.android.com/docs/security/test/memory-safety/...

kangs 10/14/2025|||
To be fair, most of MTE's benefit is realized by having enough users running your apps with MTE enabled, rather than having it everywhere.

This is because MTE facilitates finding and fixing memory bugs - but it also consumes (physical!) die area and power. If enough folks run, say, Chrome with it enabled, you get to find and fix most of its memory bugs, and that benefits everyone else (minus the drawbacks, since everyone else has MTE off or not present).

Trade-offs, basically. At least on a Pixel you can decide for yourself.
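
For anyone who hasn't played with it, this is the kind of thing MTE turns from silent corruption into an immediate, diagnosable crash. A deliberately buggy sketch, assuming MTE is actually on for the process (e.g. android:memtagMode="sync" in the manifest, or the Pixel developer option mentioned above; detection is probabilistic because of tag collisions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *buf = malloc(32);     // allocator hands out a tagged pointer
        if (!buf) return 1;
        strcpy(buf, "hello");
        free(buf);                  // the memory is retagged on free

        // Use-after-free: without MTE this usually "works" or corrupts the
        // heap silently; with MTE in sync mode the load below raises a
        // tag-check fault with a report pointing at this exact line.
        printf("%c\n", buf[0]);
        return 0;
    }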

alerighi 10/14/2025||
They do that not because they care about your security, but to make it difficult to modify (jailbreak) your own device and run software that isn't approved by Apple.

What they do is against your interests; it's about keeping their monopoly on the App Store.

EasyMark 10/14/2025||
It can be both things, security and user lock-in; those are orthogonal goals.
darkamaul 10/14/2025||
Loosely related, but they also announced increases to their bug bounty program during Ivan Krstić's keynote at Hexacon. [0]

[0] https://security.apple.com/blog/apple-security-bounty-evolve...

bfirsh 10/14/2025||
Whenever I read about it, I am surprised at the complexity of iOS security. At the hardware level, kernel level, all the various types of sandboxing.

Is this duct tape over historical architectural decisions that assumed trust? Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?

KerrAvon 10/14/2025||
> Is this duct tape over historical architectural decisions that assumed trust?

Yes, it's all making up for flaws in the original Unix security model and the hardware design that C-based system programming encourages.

> Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?

Yes, capability architectures, and yes, they exist, but only as academic/hobby exercises so far as I've seen. The big problem is that POSIX requires the Unix model, so if you want to have a fundamentally different model, you lose a lot of software immediately without a POSIX compatibility shim layer -- within which you would still have said problems. It's not that it can't be done, it's just really hard for everyone to walk away from pretty much every existing Unix program.
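
For a taste of what capability-style code looks like on something people actually run, FreeBSD's Capsicum bolts a capability mode onto an otherwise conventional Unix kernel (a retrofit, not a from-scratch design). Rough sketch, FreeBSD-only:

    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        // Acquire the descriptor *before* giving up ambient authority.
        int fd = open("/etc/passwd", O_RDONLY);
        if (fd < 0) return 1;

        // Attach an explicit rights mask to the descriptor: read-only.
        cap_rights_t rights;
        cap_rights_limit(fd, cap_rights_init(&rights, CAP_READ));

        // Enter capability mode: global namespaces (paths, PIDs, ...) are
        // gone; only held descriptors and their rights remain.
        if (cap_enter() != 0) return 1;

        char buf[128];
        ssize_t n = read(fd, buf, sizeof buf);        // allowed: CAP_READ
        if (n > 0) fwrite(buf, 1, (size_t)n, stdout);

        // Denied: open() needs the global filesystem namespace, so this
        // fails with ECAPMODE once we're in capability mode.
        if (open("/etc/hosts", O_RDONLY) < 0)
            perror("open after cap_enter");
        return 0;
    }

The point isn't Capsicum specifically; it's that in a capability model "what can this code touch" is the set of handles it holds, not whatever the ambient user account happens to own.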

fragmede 10/14/2025|||
> seL4 is a fast, secure and formally verified microkernel with fine-grained access control and support for virtual machines.

https://medium.com/@tunacici7/sel4-microkernel-architecture-...

It's missing "the rest of the owl", so to speak, so it's a bit of a stretch to call it an operating system for anything more than research.

Citizen8396 10/14/2025|||
Vulnerabilities are inevitable, especially if you want to support broad use cases on a platform. Defense-in-depth is how you respond to this.
MBCook 10/14/2025|||
iOS is based on macOS, which is based on NeXTSTEP, which is a Unix.

It’s been designed with lower user trust since day one, unlike other OSes of the era (consumer Windows, Mac’s classic OS).

Just how much you can trust the user has changed over time. And of course the device has picked up a lot of capabilities and new threats, such as always-on networking in various forms and the fun of a post-Spectre world.

kangs 10/14/2025|||
why not do both :)

I think there's also inherent trust in "hardware security", but as we all know it's all just hardcoded software at the end of the day, and complexity will bring bugs more frequently.

kmeisthax 10/14/2025|||
Yes, but they're architectural decisions made at Bell Labs in the 70s. iOS was always designed with the assumption that no one is trustworthy[0], not even the owner of the device. So there is a huge mismatch between "70s timesharing OS" and "phone that doesn't believe you when you say 'please run this code'". That being said, most of these security features are not duct-tape over UNIXisms that don't fit Apple's walled garden nonsense. To be clear, iOS has the duct-tape, too, but all that lives in XNU (the normal OS kernel).

SPTM exists to fix a more fundamental problem with OS security: who watches the watchers? Regular processes have their memory accesses constrained by the kernel, but what keeps the kernel from unconstraining itself? The answer is to take the part of the kernel responsible for memory management out of the kernel and put it in some other, higher layer of privilege.
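
For intuition, here's a conceptual sketch of that split (very much not Apple's actual SPTM interface, types, or invariants; just the shape of the idea): the kernel can no longer write page tables itself, it has to ask a more privileged monitor, and the monitor keeps its own record of what each physical frame is for and refuses requests that would break its invariants.

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { FRAME_UNUSED, FRAME_USER_DATA, FRAME_PAGE_TABLE,
                   FRAME_KERNEL_CODE } frame_type_t;

    #define NFRAMES 1024
    static frame_type_t frame_table[NFRAMES];   // owned by the monitor only

    // Entry point the kernel traps into (think of it as a gate into a more
    // privileged execution level). Returns false to refuse the request.
    bool monitor_map_page(uint32_t frame, uint64_t va, bool writable,
                          bool executable) {
        if (frame >= NFRAMES) return false;

        // Kernel code frames may never be remapped writable: this is the
        // invariant that keeps kernel text immutable even to the kernel.
        if (frame_table[frame] == FRAME_KERNEL_CODE && writable)
            return false;

        // Frames holding page tables can't also be handed out writable, or
        // the kernel could edit translations behind the monitor's back.
        if (frame_table[frame] == FRAME_PAGE_TABLE && writable)
            return false;

        // W^X for everything the monitor maps on the kernel's behalf.
        if (writable && executable)
            return false;

        // (Here the monitor itself would write the real PTE for `va`.)
        (void)va;
        return true;
    }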

SPRR and GLs are hardware features that exist solely to support SPTM. If you didn't have those, you'd probably need to use ARM EL2 (hypervisor) or EL3 (TrustZone secure monitor / firmware), and also put code signing in the same privilege ring as memory access. You might recognize that as the design of the Xbox 360 hypervisor, which used PowerPC's virtualization capability to get a higher level of privilege than kernel-mode code.

If you want a relatively modern OS that is built to lock out the user from the ground-up, I'd point you to the Nintendo 3DS[1], whose OS (if not the whole system) was codenamed "Horizon". Horizon had a microkernel design where a good chunk of the system was moved to (semi-privileged) user-mode daemons (aka "services"). The Horizon kernel only does three things: time slicing, page table management, and IPC. Even security sensitive stuff like process creation and code signing is handled by services, not the kernel. System permissions are determined by what services you can communicate with, as enforced by an IPC broker that decides whether or not you get certain service ports.
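
A toy sketch of that last idea (illustrative only; not Nintendo's actual srv interfaces, and the service names are just for flavor): each process carries an allow-list, and the broker is the only thing that can hand out a connection to a service.

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        const char *title;
        const char *allowed_services[8];   // baked into the title's metadata
    } process_t;

    typedef int service_handle_t;          // stand-in for a real IPC handle

    // The broker is the only component that can mint service handles, so
    // "what can this process do" reduces to "what is on its allow-list".
    service_handle_t broker_get_service(const process_t *proc, const char *name) {
        for (int i = 0; i < 8 && proc->allowed_services[i]; i++)
            if (strcmp(proc->allowed_services[i], name) == 0)
                return 42;   // pretend: connect and return a session handle
        return -1;           // not listed: the process simply can't reach it
    }

    int main(void) {
        const process_t game = {
            .title = "some-game",
            .allowed_services = { "fs:USER", "gsp::Gpu", NULL },
        };
        printf("fs:USER -> %d\n", broker_get_service(&game, "fs:USER")); // granted
        printf("pm:app  -> %d\n", broker_get_service(&game, "pm:app"));  // refused
        return 0;
    }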

The design of Horizon would have been difficult to crack, if it wasn't for Nintendo making some really bad implementation decisions that made it harder for them to patch bugs. Notably, you could GPU DMA onto the Home Menu's text section and run code that way, and it took Nintendo years to actually move the Home Menu out of the way of GPU DMA. They also attempted to resecure the system with a new bootloader that actually compromised boot chain security and let us run custom FIRMs (e.g. GodMode9) instead of just attacking the application processor kernel. But the underlying idea - separate out the security-relevant stuff from the rest of the system - is really solid, which is why Nintendo is still using the Horizon design (though probably not the implementation) all the way up to the Switch 2.

[0] In practice, Apple has to be trustworthy. Because if you can't trust the person writing the code, why run it?

[1] https://www.reddit.com/r/3dshacks/comments/6iclr8/a_technica...

encom 10/14/2025||
Security in this context means the intruder is you, and Apple is securing their device so you can't run code on it without asking Apple for permission first.
astrange 10/14/2025|||
That makes no sense for a phone because you go outside with it in your pocket, leave it places, connect to a zillion kinds of networks with it, etc. It's not a PC in an airgapped room. It is very easy for the user of the device to be someone who isn't you.
thewebguyd 10/14/2025|||
It can be both.

Any sufficiently secure system is, by design, also secure against its primary user. In the business world this takes the form of protecting the business from its own employees in addition to outside threats.

fsflover 10/14/2025||
Have they fixed the regular, unencrypted connections on updates and app launches yet?

https://sneak.berlin/20231005/apple-operating-system-surveil...