Posted by todsacerdoti 1 day ago
Not only are they willing to develop hardware features and plumb them through the entire stack, they're willing to look at in-the-wild (ITW) exploits and work on ways to mitigate them. PPL was super interesting: they decided it wasn't 100% effective, so they ditched it and came up with other things.
Apple's vertical integration makes it 'easy' to do this compared to Android, where they have to convince the CPU guys at Qualcomm or MediaTek to build a feature, convince the Linux kernel to take it, get it into AOSP, get it into upstream LLVM, etc. etc.
Pointer authentication codes (PAC) are a good example: Apple said f-it, we'll do it ourselves. They maintained a downstream fork of LLVM, built full support, studied in-the-wild bypasses, and fixed those up.
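For a flavor of what that buys you, here's roughly what return-address signing looks like on arm64e (illustrative sketch only; exact codegen and key selection are up to the compiler):

    /* pac_sketch.c -- illustrative only; something like `clang -arch arm64e -O2 -S pac_sketch.c`
     * on an Apple toolchain will show the real thing. */
    int call_and_add(int (*callback)(int), int x) {
        /* The arm64e compiler roughly emits in the prologue:
         *     pacibsp                  ; sign LR (x30), using SP as the diversifier
         *     stp x29, x30, [sp, ...]  ; the *signed* return address is what gets spilled
         * and in the epilogue:
         *     ldp x29, x30, [sp, ...]
         *     retab                    ; authenticate LR and return; a forged/corrupted LR traps
         * Indirect calls like `callback(x)` go through signed function pointers (blraa/blrab),
         * so overwriting the pointer without the key yields a PAC failure, not code execution. */
        return callback(x) + 1;
    }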
Their commitment to privacy goes beyond marketing. They actually mean it. They staffed their security team with top hackers from the Jailbreak community… they innovated with Private Relay, private mailboxes, trusted compute, multi-party inference…
I’ve got plenty of problems with Apple hypocrisy, like their embrace of VPNs (except for traffic to Apple Servers) or privacy-preserving defaults (except for Wi-Fi calling or “journaling suggestions”). You could argue their commitment to privacy includes a qualifier like “you’re protected from everyone except for Apple and select telecom partners by default.”
But that’s still leagues ahead of Google whose mantra is more like “you’re protected from everyone except Google and anyone who buys an ad from Google.”
PS: make sure you remove that pesky "USB accessories while locked allowed" profile that Configurator likes to sneak in.
Running something in the kernel is unavoidable if you want to actually show stuff to the user.
Attacker sends an iMessage containing a PDF.
iMessage, like most modern messaging apps, displays a preview - which means running the PDF loader.
The PDF loader has support for the obsolete-but-still-part-of-the-PDF-standard image codec 'JBIG2'.
Apple's JBIG2 codec has an exploitable bug, giving the attacker remote code execution on the device.
This exploit was purchased by NSO, who sold it to a bunch of Middle Eastern dictatorships who promptly used it on journalists.
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
https://googleprojectzero.blogspot.com/2022/03/forcedentry-s...
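To give a flavor of the bug class (an illustrative sketch, not the actual FORCEDENTRY vulnerability; the Project Zero write-ups above have the real details), the classic pattern is an unchecked multiplication when sizing a decoder buffer:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical image-codec symbol table -- NOT Apple's JBIG2 code. */
    typedef struct { uint8_t *bits; uint32_t count; } symbol_table;

    /* `count` and `symbol_size` come straight from attacker-controlled file data. */
    int load_symbols(symbol_table *t, const uint8_t *data, uint32_t count, uint32_t symbol_size) {
        uint32_t total = count * symbol_size;   /* BUG: wraps around 32 bits */
        t->bits = malloc(total);                /* tiny allocation... */
        if (t->bits == NULL) return -1;
        t->count = count;
        for (uint32_t i = 0; i < count; i++)    /* ...but we copy `count` full symbols */
            memcpy(t->bits + (size_t)i * symbol_size,
                   data + (size_t)i * symbol_size, symbol_size);
        return 0;                               /* heap is now attacker-controlled garbage */
    }
    /* The narrow fix is an overflow-checked multiply (e.g. __builtin_mul_overflow) before
     * malloc; the broader fix is hardware backstops like PAC and MTE for when a check is missed. */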
In theory it makes it easier to catch bugs that you can't simply catch with static analysis, and it gives you some level of insight beyond simply crashing.
https://developer.android.com/ndk/guides/arm-mte
https://source.android.com/docs/security/test/memory-safety/...
This is because MTE facilitates finding memory bugs and fixing them - but it also consumes (physical!) space and power. If enough folks run it with, say, Chrome, you get to find and fix most of Chrome's memory bugs, and that benefits everyone else (minus the drawbacks, since everyone else has MTE off or not present).
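A minimal sketch of what that looks like from the app side on a Pixel-class device (the constant names are the ones from the NDK MTE guide linked above; treat exact spelling and behavior as release-dependent):

    #include <malloc.h>
    #include <stdlib.h>

    int main(void) {
    #if defined(__BIONIC__)
        /* Ask bionic/scudo for synchronous heap tagging, so a bad access faults at the
         * exact instruction. Apps can also opt in with android:memtagMode="sync" in the
         * manifest instead of calling this at runtime. */
        mallopt(M_BIONIC_SET_HEAP_TAGGING_LEVEL, M_HEAP_TAGGING_LEVEL_SYNC);
    #endif
        char *p = malloc(32);   /* allocation is tagged; the tag rides in the pointer's top byte */
        p[0] = 'a';
        free(p);                /* free retags the memory */
        return p[0];            /* use-after-free: tag mismatch -> SIGSEGV with MTE, silent without it */
    }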
Trade-offs, basically. At least on a Pixel you can decide on your own.
What they do is against your interests; it's for them to keep their monopoly on the App Store.
Is this duct tape over historical architectural decisions that assumed trust? Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?
It’s been designed with lower user trust since day one, unlike other OSes of the era (consumer Windows, Mac’s classic OS).
Just how much you can trust the user has changed over time. And of course the device has picked up a lot of capabilities and new threats, such as always-on networking in various forms and the fun of a post-Spectre world.
Yes, it's all making up for flaws in the original Unix security model and the hardware design that C-based system programming encourages.
> Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?
Yes, capability architecture, and yes, they exist, but only as academic/hobby exercises so far as I've seen. The big problem is that POSIX requires the Unix model, so if you want to have a fundamentally different model, you lose a lot of software immediately without a POSIX compatibility shim layer -- within which you would still have said problems. It's not that it can't be done, it's just really hard for everyone to walk away from pretty much every existing Unix program.
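For a taste of what the capability style looks like when grafted onto a Unix-ish API, here's a rough sketch using FreeBSD's Capsicum (exactly the kind of compatibility-shim compromise described above; details approximate):

    #include <sys/capsicum.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <err.h>

    int main(void) {
        /* Acquire the one file descriptor we need *before* giving up ambient authority. */
        int fd = open("input.txt", O_RDONLY);
        if (fd < 0) err(1, "open");

        /* Shrink the descriptor's rights to read-only -- it is now effectively a capability. */
        cap_rights_t rights;
        cap_rights_init(&rights, CAP_READ);
        if (cap_rights_limit(fd, &rights) < 0) err(1, "cap_rights_limit");

        /* Enter capability mode: global namespaces (paths, PIDs, sockets) disappear.
         * Only the capabilities we already hold keep working. */
        if (cap_enter() < 0) err(1, "cap_enter");

        char buf[128];
        (void)read(fd, buf, sizeof(buf));      /* allowed: we hold CAP_READ on fd */
        /* open("/etc/passwd", O_RDONLY) here would fail with ECAPMODE. */
        return 0;
    }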
https://medium.com/@tunacici7/sel4-microkernel-architecture-...
It's missing "the rest of the owl", so to speak, so it's a bit of a stretch to call it an operating system for anything more than research.
I think there's also inherent trust in "hardware security", but as we all know it's all just hardcoded software at the end of the day, and complexity brings bugs more frequently.
Any sufficiently secure system is, by design, also secure against its primary user. In the business world this takes the form of protecting the business from its own employees in addition to outside threats.
https://security.apple.com/blog/apple-security-bounty-evolve...
https://sneak.berlin/20231005/apple-operating-system-surveil...