What are the options with current technology to provide encryption backdoors? And which policy goals should we have?

As policy, we want backdoors that provide inherent checks on abuse. When the police search a house, it is obvious to the neighbors, it requires the expense of manpower to do the search, and the subject of the search knows about it. Similarly, we want any backdoor mechanism to be expensive enough to deter casual use, obvious enough that it cannot happen behind our backs, and safe enough that criminals cannot exploit it.

On these counts, a master key is awful policy. It would have to be kept secure in at most a handful of offline locations. There would be a huge temptation to make more copies to provide better access, and most governments would want their own master key for their subjects. In the end there would be so many copies that at least one would be stolen. The cost of each decryption would be so low, and the act so invisible, that there would be no deterrent against overuse. Worse, if the master key were stolen, the theft would be very hard to detect, and probably impossible to prove. And as Jonathan Zdziarski observes, the legal system would essentially force the tool into the open for validation, making such a theft very plausible.

A much better alternative would be to include a random key on each device1 that escrows enough bits of the password that the remainder could be brute-forced on a ten-million-dollar computer in a week or so. This parameter would need to be updated regularly to account for the increasing processing power a cracking device can have over time. One would retrieve the escrowed key by destroying the processor and carefully examining its nonvolatile storage, not an easy task given the small size of such structures. As policy, this would be much better, since breaking each device is expensive: there is no economy of scale where cracking one device makes the next ones much cheaper. You need the device in your physical possession, which is both an additional protection and a guarantee that the work cannot go undetected. And the special equipment needed to analyze the chips would be difficult for criminals to acquire unnoticed.
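To get a feel for the numbers, here is a back-of-the-envelope sketch of how many password bits must stay outside the escrowed device key. The hardware price and guess rate are illustrative assumptions, not measured figures:

```python
import math

# Illustrative assumptions -- not measured figures.
BUDGET_USD = 10_000_000          # the ten-million-dollar cracking machine
COST_PER_UNIT_USD = 100          # assumed price of one cracking unit
GUESSES_PER_SEC_PER_UNIT = 1e9   # assumed guess rate of one unit
TARGET_SECONDS = 7 * 24 * 3600   # about a week

units = BUDGET_USD / COST_PER_UNIT_USD
guesses_in_a_week = units * GUESSES_PER_SEC_PER_UNIT * TARGET_SECONDS

# On average the key is found after searching half the space,
# so the space should be about twice the achievable guesses.
secret_bits = math.log2(2 * guesses_in_a_week)

print(f"~{secret_bits:.0f} bits must remain outside the escrowed key")
```

With these made-up figures, roughly 66 to 67 bits would have to stay secret; the real parameters would fall out of the policy discussion about how expensive cracking should be.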

This would require a discussion about how difficult we want breaking into phones to be. What price should such cracking carry? The really big problem is that anyone worth spending millions on to prosecute would find it worthwhile to use proper encryption software that is effectively unbreakable, while we cannot make backdoors so weak that people become subject to persecution by repressive regimes. In the end, backdoors are only effective against criminals doing awful things while being stupid enough not to employ proper encryption, or for recovering information stored by crime victims. Such a backdoor could also be offered as an opt-in feature people consciously activate, so that in case of death there is still a way to access their information. But in general, we leave such a thick digital trace in all our interactions nowadays, and clandestine surveillance is now so powerful, that law enforcement has more than enough other avenues to fight crime.
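The pricing question maps directly onto key bits: each additional secret bit doubles the average cracking cost, while hardware progress removes roughly one bit per halving period of the price per guess, which is why the escrow parameter must be revisited regularly. A small sketch, where the guesses-per-dollar figure and the two-year halving period are assumptions in the spirit of Moore's law:

```python
import math

# Illustrative assumptions, not measured trends.
GUESSES_PER_DOLLAR = 1e16  # assumed brute-force guesses one dollar buys today
HALVING_YEARS = 2.0        # assumed period in which the price per guess halves

def secret_bits_for_cost(target_usd: float, years_from_now: float = 0.0) -> float:
    """Secret bits needed so an average crack costs target_usd."""
    guesses_per_dollar = GUESSES_PER_DOLLAR * 2 ** (years_from_now / HALVING_YEARS)
    return math.log2(2 * target_usd * guesses_per_dollar)

# Doubling the target price adds one secret bit:
delta = secret_bits_for_cost(2e6) - secret_bits_for_cost(1e6)
# And one halving period of hardware progress cancels one bit:
drift = secret_bits_for_cost(1e6, years_from_now=HALVING_YEARS) - secret_bits_for_cost(1e6)
```

Both differences come out to about one bit, independent of the assumed constants; only the absolute bit count depends on them.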


  1. The classical way to store secrets on a processor has been eFuses, which are relatively large, in the μm range, and can be read quite effectively. More modern approaches use antifuses, for example from Sidense and Kilopass. These are much more difficult to read, and I suspect they are the technology Apple currently uses to store its per-device keys. To read out such keys, companies like Chipworks examine chips very carefully, and it is interesting to read about the technology they use.