02. March 2016 · Categories: Apple, Politics

The iPhone currently cannot be protected against backdoors that Apple is forced to build, and in general it is impossible to defend against that. There is only one intermediate step Apple can still take to make breaking into an iPhone more difficult: ensure that the user must approve any update before it is applied. Combine this with the ability to check a cryptographic hash of the update, and it becomes incredibly difficult to target individual iPhones with a backdoor: you can no longer push backdoors surreptitiously, because they would go to all phones, greatly increasing the risk of discovery and collateral damage.
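
As a rough illustration (not Apple's actual code), the gating logic could be as small as the following C sketch. `update_may_be_applied`, the approved hash, and the approve flag are all hypothetical names standing in for "the user saw and confirmed exactly this image":

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define HASH_LEN 64  /* SHA-512 */

/* Provided elsewhere: the hash of the pending update and the hash the user
 * approved on-device (e.g. after comparing a short fingerprint on screen). */
bool update_may_be_applied(const uint8_t update_hash[HASH_LEN],
                           const uint8_t approved_hash[HASH_LEN],
                           bool user_pressed_approve)
{
    if (!user_pressed_approve)
        return false;                     /* no silent pushes */

    uint8_t diff = 0;                     /* constant-time comparison */
    for (size_t i = 0; i < HASH_LEN; i++)
        diff |= update_hash[i] ^ approved_hash[i];
    return diff == 0;
}
```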

Apple will need to change the processor to make this happen. The current architecture has no place to store the user's consent securely: only the UID key is secret, and all other data lives on externally accessible flash memory. An attacker could therefore save the flash contents, use the backdoor OS to generate the approval key, and then restore the backed-up user data: on the next boot, the backdoored OS would have access to the data. So we need the ability to store this consent safely within the processor itself, which means adding a small amount of embedded flash. Embedded flash is relatively easy to read if you are willing to destroy the processor, so the stored data should be encrypted with the UID to make that attack harder. And since the iOS Security Guide is not clear about it: any RAM used by the Secure Enclave needs to be either encrypted or on-chip to prevent side-channel attacks. There are now chips with real-time RAM encryption baked in, which would be very helpful for the enclave as well.
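
A minimal sketch of what the on-chip side might look like, assuming hypothetical hardware primitives (`aes_uid_encrypt`, `trng_fill`, `embedded_flash_write`) that stand in for the UID-keyed AES engine, the on-chip random number generator, and the embedded flash:

```c
#include <stddef.h>
#include <stdint.h>

#define KEY_LEN 32

/* Hardware primitives assumed to exist on such a processor: */
void aes_uid_encrypt(const uint8_t *in, uint8_t *out, size_t len); /* AES engine keyed with the UID */
void trng_fill(uint8_t *buf, size_t len);                          /* dedicated random number generator */
void embedded_flash_write(uint32_t slot, const uint8_t *data, size_t len);
void secure_memzero(uint8_t *buf, size_t len);

/* Generate a fresh key and persist only its UID-wrapped form, so reading the
 * embedded flash off a decapped die still yields nothing but ciphertext. */
void store_new_consent_key(uint32_t slot)
{
    uint8_t key[KEY_LEN], wrapped[KEY_LEN];
    trng_fill(key, sizeof key);
    aes_uid_encrypt(key, wrapped, sizeof wrapped);
    embedded_flash_write(slot, wrapped, sizeof wrapped);
    secure_memzero(key, sizeof key);   /* the plain key never leaves the chip */
}
```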

It is important to keep in mind that nothing can protect us from an insider attack. We can only work hard to ensure that security cannot be reduced after the phone has left the factory¹. This is why Apple needs to fight so hard to keep the trust of its users: a government-mandated backdoor would completely and permanently destroy trust in that software. It would be the end for closed-source operating systems and applications as well; the risk that they would be used to backstab us would simply be too great. This is also the best reason why those backdoors will not be granted in the end: the risk from terrorism is simply not great enough to justify losing billions in annual tax revenue alone, especially since strong encryption is now widely and publicly available. And I believe several countries with strong constitutions would be more than happy to lay out the welcome mat for Apple, should the US decide otherwise.

The following discussion assumes that you have read the iOS Security Guide. The goal of the changes is that when iOS loads, it can only access user data if that image was previously authorized by the user. For this, we add extra flash storage to the processor with a dedicated interface that has exactly two functions: create a new key, and load the key into the AES unit. Unfortunately this will be relatively expensive: an entire flash unit with error correction and random number generation implemented in dedicated hardware to rule out software backdoors. But transistor counts are now so high that this is perfectly doable. This key takes over the role of the file system key (FSK), and it is also used to encrypt the class keys. The boot loader is then changed so that it checks the OS not only for a valid Apple signature, but also for a SHA-512 hash encrypted with the FSK. Should the hash not match, the boot loader destroys the FSK and creates a new one, effectively erasing all user data. Depending on the available space in the boot loader, we can add two steps to make accidentally losing your data less likely: it can ask for confirmation with a specific key combination, and it can allow the user to still provide the passcode via USB as a special recovery mode. Ideally the boot code would allow you to enter the passcode on the device itself, but that is probably far too much code to be practical.
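
To make the boot-time check concrete, here is a hedged C sketch. The function names (`verify_apple_signature`, `fsk_decrypt`, `fsk_destroy_and_create_new`, `user_confirms_wipe`) are placeholders for the primitives described above, and the confirmation step is the optional safeguard mentioned at the end of the paragraph, not part of Apple's existing boot chain:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HASH_LEN 64  /* SHA-512 */

/* Assumed primitives, backed by the two-function key interface above: */
bool verify_apple_signature(const uint8_t *image, size_t len);
void sha512(const uint8_t *data, size_t len, uint8_t out[HASH_LEN]);
void fsk_decrypt(const uint8_t *in, uint8_t *out, size_t len); /* AES unit with the FSK loaded */
void fsk_destroy_and_create_new(void);  /* losing the old FSK erases all user data */
bool user_confirms_wipe(void);          /* e.g. a specific key combination */

bool boot_check(const uint8_t *image, size_t len,
                const uint8_t sealed_hash[HASH_LEN])
{
    uint8_t expected[HASH_LEN], actual[HASH_LEN];

    if (!verify_apple_signature(image, len))
        return false;                   /* the Apple signature is still required */

    fsk_decrypt(sealed_hash, expected, HASH_LEN);  /* hash was stored encrypted with the FSK */
    sha512(image, len, actual);

    if (memcmp(expected, actual, HASH_LEN) != 0) {
        if (user_confirms_wipe())
            fsk_destroy_and_create_new();  /* unauthorized image: user data becomes unrecoverable */
        return false;
    }
    return true;                        /* user-authorized image: boot with data intact */
}
```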

Should the iOS image become corrupted, we would use iTunes to restore it, and also use iTunes to ask for the device passcode in order to sign the image. This adds a new vulnerability in that the computer running iTunes could be hacked, but since it would only be used when recovering a broken image, that would be a rare occurrence. We could work around this by creating a known-good passcode recovery image whose hash is fixed in the boot loader, allowing passcode entry directly on the device. With the hash pinned, its content would be fixed, preempting later attempts at introducing a backdoor.
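
A sketch of how such a pinned recovery image could be checked, with its hash compiled into the boot loader (all names illustrative, the actual hash value baked in at build time):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HASH_LEN 64  /* SHA-512 */

/* Fixed at build time, so any later "recovery image" with different
 * contents simply will not match and will not run. */
static const uint8_t RECOVERY_IMAGE_HASH[HASH_LEN] = { 0 /* real hash baked in */ };

void sha512(const uint8_t *data, size_t len, uint8_t out[HASH_LEN]);

bool is_trusted_recovery_image(const uint8_t *image, size_t len)
{
    uint8_t h[HASH_LEN];
    sha512(image, len, h);
    return memcmp(h, RECOVERY_IMAGE_HASH, HASH_LEN) == 0;
}
```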

Addendum: To prevent replay attacks, where the attacker keeps the current OS and replaces the flash memory after every try, the replay counter also needs to be on-chip and accessible only to the Secure Enclave. No further protection is needed, because before any untrusted code could run, the boot loader would already have destroyed the FSK. The replay counter is updated before every attempt and after every successful login. At a hundred logins per day, the counter needs a life expectancy of a few million writes, which is very doable with flash using partial word writes.
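
As a sketch of the intended counter discipline (the names and the exact storage layout are assumptions, not the Secure Enclave's real interface):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed primitives: the counter lives in on-chip flash reachable only from
 * the Secure Enclave; verify_passcode() unwraps the class keys as today. */
uint64_t sep_counter_read(void);
void     sep_counter_increment(void);   /* partial-word write; millions of cycles of endurance */
bool     verify_passcode(const char *passcode);

/* external_state_counter is the counter value recorded in the replaceable
 * external flash when it was last written; a restored backup will lag behind. */
bool attempt_login(const char *passcode, uint64_t external_state_counter)
{
    if (external_state_counter != sep_counter_read())
        return false;                   /* external flash was rolled back: refuse */

    sep_counter_increment();            /* charge the attempt before checking */

    if (!verify_passcode(passcode))
        return false;                   /* external state is then rewritten with the new counter */

    sep_counter_increment();            /* and record the successful login */
    return true;
}
```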


  1. There are two places that are especially vulnerable to sabotage: the masks for the processor could be subtly altered to weaken the keys or create a back channel, or the UID could be recorded during production. The UID, though, will likely be generated internally by the random number generator, which prevents any recording.