Moral_ 2 days ago

SEAR and the Apple team do an excellent job of security on iOS, and should be commended for it.

Not only are they willing to develop hardware features and plumb them through the entire stack, they're willing to look at in-the-wild (ITW) exploits and work on ways to mitigate them. PPL was super interesting: they decided it wasn't 100% effective, so they ditched it and came up with other things.

Apple's vertical integration makes it 'easy' to do this compared to Android, where they have to convince the CPU teams at Qualcomm or MediaTek to build a feature, convince the Linux kernel to take it, get it into AOSP, get it into upstream LLVM, etc. etc.

Pointer authentication codes (PAC) are a good example. Apple said f-it, we'll do it ourselves: they maintained a downstream fork of LLVM, built full support, and leveraged in-the-wild bypasses to fix things up.
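For anyone unfamiliar with PAC: the core idea is that the CPU computes a short MAC over a pointer (plus a context value) with a secret key, stuffs it into the pointer's unused high bits, and re-verifies it before the pointer is used. A toy Python simulation of the concept (the key handling, tag width, and bit layout here are purely illustrative, not arm64e's actual scheme):

```python
import hashlib
import hmac

# Toy parameters: 48-bit addresses, 16-bit tag in the high bits.
# Real arm64e uses dedicated key registers and the QARMA cipher.
ADDR_MASK = (1 << 48) - 1
KEY = b"per-boot secret key"

def pac_sign(ptr: int, context: int) -> int:
    """Embed a truncated MAC over (ptr, context) in the pointer's high bits."""
    msg = (ptr & ADDR_MASK).to_bytes(8, "little") + context.to_bytes(8, "little")
    mac = int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2], "little")
    return (mac << 48) | (ptr & ADDR_MASK)

def pac_auth(signed: int, context: int) -> int:
    """Strip and verify the MAC; raise on mismatch (real HW poisons the pointer)."""
    if signed != pac_sign(signed & ADDR_MASK, context):
        raise ValueError("PAC authentication failed")
    return signed & ADDR_MASK

# Round trip: sign a return address, authenticate it before branching.
ptr = 0x0000_7FFF_DEAD_BEE0
assert pac_auth(pac_sign(ptr, context=42), context=42) == ptr
```

An attacker who corrupts the signed pointer (or replays it under a different context) fails authentication with overwhelming probability, which is why in-the-wild bypasses went after signing gadgets rather than the MAC itself.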

  • chatmasta 17 hours ago

    I buy Apple products not just because they do a great job with security and privacy, but because they do this without needing to do it. They could make plenty of money without going so deep into these features. Maybe eventually it’d catch up with them but it’s not like they even have competition forcing them to care about your privacy.

    Their commitment to privacy goes beyond marketing. They actually mean it. They staffed their security team with top hackers from the Jailbreak community… they innovated with Private Relay, private mailboxes, trusted compute, multi-party inference…

    I’ve got plenty of problems with Apple hypocrisy, like their embrace of VPNs (except for traffic to Apple Servers) or privacy-preserving defaults (except for Wi-Fi calling or “journaling suggestions”). You could argue their commitment to privacy includes a qualifier like “you’re protected from everyone except for Apple and select telecom partners by default.”

    But that’s still leagues ahead of Google whose mantra is more like “you’re protected from everyone except Google and anyone who buys an ad from Google.”

    • OptionOfT 14 hours ago

      What is non-private about Wi-Fi calling?

      • chatmasta 13 hours ago

        If you have it enabled, then every thirty seconds (regardless of whether you’re actively on a call), your phone makes a request to a signaling server owned by your mobile ISP. So if you’re on T-Mobile and traveling in some other country with no cell service, but you’re connected to WiFi, then T-Mobile will see your public IP address. (IIRC, this also bypasses any VPN profile you have enabled on your device, because the signaling system is based on a derivative of IPsec that can have problems communicating over an active VPN tunnel.)

        I found out about this when I was wiresharking all outbound traffic from my router and saw my phone making these weird requests.

        Apple actually does warn you about this in the fine print (“About WiFi calling and privacy…”) next to the toggle in Settings. But I didn’t realize just how intrusive it was.

        I know my mobile ISP can triangulate my location already, but I don’t want to offer them even more data about every public IP of every WiFi network I connect to, even if I’m not roaming at the time.
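The 30-second cadence described above is easy to spot once you export packet timestamps from a capture. A small illustrative helper (hypothetical function name; assumes timestamps in seconds, e.g. Wireshark's frame.time_epoch column filtered to one destination):

```python
def periodic_interval(timestamps, tolerance=2.0):
    """Return the median inter-packet gap if the traffic looks periodic
    (every gap within `tolerance` seconds of the median), else None."""
    gaps = sorted(b - a for a, b in zip(timestamps, timestamps[1:]))
    if not gaps:
        return None
    median = gaps[len(gaps) // 2]
    return median if all(abs(g - median) <= tolerance for g in gaps) else None

# Keepalives to a carrier gateway every ~30s stand out clearly:
assert periodic_interval([0.0, 30.1, 60.0, 90.2]) == 30.1
```

Ordinary browsing traffic is bursty and returns None here; a steady beacon to a single carrier-owned endpoint is exactly the signature the parent comment describes.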

  • dagmx 2 days ago

    One of the knock-on benefits of this is increased security across all platforms, as long as someone exercises that code path on one of Apple's new processors with a hardened runtime.

    In theory it makes it easier to catch stuff you can’t simply catch with static analysis, and it gives you some level of insight beyond simply crashing.

  • devttyeu a day ago

    And after all that hardcore engineering work is done, iMessage still has code paths leading to dubious code running in the kernel, enabling 0-click exploits to still be a thing.

    • aprotyas a day ago

      That's one way to look at it, but if perfection is the only goalpost then no one would ever get anywhere.

    • walterbell a day ago

      Disable iMessage via Apple Configurator MDM policy and enable Lockdown Mode.

      • Citizen8396 21 hours ago

        I imagine the latter is sufficient.

        PS: make sure you remove that pesky "USB accessories while locked allowed" profile that Configurator likes to sneak in.

        • walterbell 7 hours ago

          Need an open-source MDM profile policy linter.

    • wat10000 21 hours ago

      What's the dubious code?

      Running something in the kernel is unavoidable if you want to actually show stuff to the user.

      • michaelt 20 hours ago

        In ~2020, it was:

        Attacker sends an iMessage containing a PDF.

        iMessage, like most modern messaging apps, displays a preview - which means running the PDF loader.

        The PDF loader supports the obsolete-but-part-of-the-PDF-standard image codec 'JBIG2'.

        Apple's JBIG2 codec had an exploitable bug, giving the attacker remote code execution on the device.

        This exploit was purchased by NSO, who sold it to a bunch of Middle Eastern dictatorships, who promptly used it on journalists.

        https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
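The underlying flaw was in the integer-overflow-into-undersized-allocation class: size arithmetic done in a fixed-width type silently wraps, the decoder allocates a tiny buffer, and attacker-controlled image data is then written far past its end. A Python sketch of the pattern (illustrative of the bug class only, not Apple's actual JBIG2 code):

```python
U32_MAX = 0xFFFFFFFF

def alloc_size_unchecked(width, height, bpp=1):
    # Vulnerable pattern: size computed in 32-bit arithmetic silently wraps,
    # so attacker-chosen dimensions yield a buffer far smaller than the data.
    return (width * height * bpp) & U32_MAX

def alloc_size_checked(width, height, bpp=1):
    # Fixed pattern: detect the overflow before allocating anything.
    size = width * height * bpp
    if size > U32_MAX:
        raise OverflowError("image dimensions overflow the 32-bit size type")
    return size

# 0x10000 * 0x10000 == 2**32, which wraps to 0 in 32-bit arithmetic:
assert alloc_size_unchecked(0x10000, 0x10000) == 0
```

In C the wrap happens implicitly in the multiplication itself, which is exactly why MTE, PAC, and the hardened allocators discussed elsewhere in this thread aim to catch the out-of-bounds write even when the size check is missing.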

        • wat10000 18 hours ago

          None of that ran in the kernel. Everything happens within a single process up until the sandbox escape, which isn't even covered in your article. The article's sequel* goes into detail about that part, which involves subverting a more privileged process by exploiting logic errors to get it to execute code. The only involvement by the kernel is passing IPC messages back and forth.

          * https://googleprojectzero.blogspot.com/2022/03/forcedentry-s...

    • kmeisthax 16 hours ago

      Why would a nation-state actor need access to your kernel when all the juicy stuff[0] is in the iMessage process it's already loaded into?

      [0] https://xkcd.com/1200/

  • pjmlp a day ago

    Google could have required MTE for a couple of years now, but apparently doesn't want to force it on OEMs as part of its Android certification program. It's the same story as with OS updates.

    • kangs 21 hours ago

      To be fair, most of MTE's benefit is realized by having enough users running your apps with MTE enabled, rather than by having it everywhere.

      This is because MTE facilitates finding memory bugs and fixing them - but it also consumes (physical!) space and power. If enough folks run it with, say, Chrome, you get to find and fix most of its memory bugs, and that benefits everyone else (minus the drawbacks, since everyone else has MTE off or not present).

      Trade-offs, basically. At least on Pixel you can decide for yourself.
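For context, MTE (Arm's Memory Tagging Extension) gives each 16-byte granule of memory a 4-bit tag, mirrors that tag in the pointer's unused top byte, and faults on any load or store whose pointer tag doesn't match the memory tag. A toy byte-granularity Python model of the idea (illustrative only; real MTE works on granules and the check is done by the CPU, not software):

```python
import random

class TaggedHeap:
    """Toy model of memory tagging: each allocation gets a random 4-bit tag,
    pointers carry the tag in their top byte, and every access re-checks it."""

    def __init__(self):
        self.mem_tags = {}       # address -> 4-bit tag
        self.next_addr = 0x1000

    def malloc(self, size):
        addr, tag = self.next_addr, random.randrange(16)
        for off in range(size):
            self.mem_tags[addr + off] = tag
        self.next_addr += size + 16          # guard gap between allocations
        return (tag << 56) | addr            # "tagged" pointer

    def free(self, tagged_ptr, size):
        addr = tagged_ptr & ((1 << 56) - 1)
        for off in range(size):              # retag so stale pointers fault
            self.mem_tags[addr + off] = (self.mem_tags[addr + off] + 1) % 16

    def load(self, tagged_ptr, offset=0):
        tag = tagged_ptr >> 56
        addr = (tagged_ptr & ((1 << 56) - 1)) + offset
        if self.mem_tags.get(addr) != tag:
            raise MemoryError("tag check fault")   # SIGSEGV-alike on real HW
        return True
```

Both linear overflows (walking into a differently-tagged granule) and use-after-free (the freed granule gets retagged) turn into deterministic faults, which is what makes MTE such a good bug-finding tool for whoever opts in.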

  • alerighi a day ago

    They don't do that because they care about your security, but to make it difficult to modify (jailbreak) your own devices to run your own software that isn't approved by Apple.

    What they do is against your interests: it lets them keep their monopoly on the App Store.

    • EasyMark a day ago

      It can be both things; security and user lock-in are orthogonal goals.

bfirsh a day ago

Whenever I read about it, I am surprised at the complexity of iOS security. At the hardware level, kernel level, all the various types of sandboxing.

Is this duct tape over historical architectural decisions that assumed trust? Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?

  • Citizen8396 21 hours ago

    Vulnerabilities are inevitable, especially if you want to support broad use cases on a platform. Defense-in-depth is how you respond to this.

  • MBCook 21 hours ago

    iOS is based on macOS is based on NeXT is a Unix.

    It’s been designed with lower user trust since day one, unlike other OSes of the era (consumer Windows, Mac’s classic OS).

    Just how much you can trust the user has changed over time. And of course the device has picked up a lot of capabilities and new threats, such as always-on networking in various forms and the fun of a post-Spectre world.

  • KerrAvon 20 hours ago

    >Is this duct tape over historical architectural decisions that assumed trust?

    Yes, it's all making up for flaws in the original Unix security model and the hardware design that C-based system programming encourages.

    > Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?

    Yes, capability architecture, and yes, they exist, but only as academic/hobby exercises so far as I've seen. The big problem is that POSIX requires the Unix model, so if you want to have a fundamentally different model, you lose a lot of software immediately without a POSIX compatibility shim layer -- within which you would still have said problems. It's not that it can't be done, it's just really hard for everyone to walk away from pretty much every existing Unix program.
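A sketch of the object-capability idea in Python (hypothetical names; real capability systems such as seL4 enforce this in the kernel): authority comes only from holding a handle, and a handle can be attenuated to fewer rights but never amplified.

```python
class Capability:
    """Unforgeable handle: holding it *is* the authority.
    There is no ambient global namespace to name the resource by."""

    def __init__(self, resource, rights):
        self._resource = resource
        self._rights = frozenset(rights)

    def attenuate(self, rights):
        # Derive a weaker capability: rights can only shrink, never grow.
        return Capability(self._resource, self._rights & frozenset(rights))

    def invoke(self, op, *args):
        if op not in self._rights:
            raise PermissionError(f"capability lacks right: {op!r}")
        return getattr(self._resource, op)(*args)

class Log:
    def __init__(self):
        self.lines = []
    def write(self, line):
        self.lines.append(line)
    def read(self):
        return list(self.lines)

full = Capability(Log(), {"read", "write"})
full.invoke("write", "hello")
read_only = full.attenuate({"read"})   # hand this to less-trusted code
assert read_only.invoke("read") == ["hello"]
```

Contrast with the Unix model, where any process running as you can open any of your files by name; here the less-trusted code simply has no way to reach the resource except through the handles it was given.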

  • kangs 21 hours ago

    why not do both :)

    I think there's also inherent trust in "hardware security", but as we all know it's all just hardcoded software at the end of the day, and complexity brings bugs more frequently.

  • encom a day ago

    Security in this context means the intruder is you, and Apple is securing their device so you can't run code on it without asking Apple for permission first.

    • astrange 17 hours ago

      That makes no sense for a phone because you go outside with it in your pocket, leave it places, connect to a zillion kinds of networks with it, etc. It's not a PC in an airgapped room. It is very easy for the user of the device to be someone who isn't you.

    • thewebguyd 20 hours ago

      It can be both.

      Any sufficiently secure system is, by design, also secure against its primary user. In the business world this takes the form of protecting the business from its own employees in addition to outside threats.