02. March 2024 · Categories: Apple, Politics

Apple has now confirmed that it will continue supporting Progressive Web Apps in the EU; iOS will serve them using WebKit. Over at gamesfray, Florian Müller explains why he believes this to be illegal. I believe this interpretation is wrong, that Apple overreacted, and that PWAs never fell under the interoperability clauses of the Digital Markets Act. The important parts are

  • Article 5(7):

    The gatekeeper shall not require end users to use, or business users to use, to offer, or to interoperate with, an identification service, a web browser engine or a payment service, or technical services that support the provision of payment services, such as payment systems for in-app purchases, of that gatekeeper in the context of services provided by the business users using that gatekeeper’s core platform services.

  • Article 6(7):

    The gatekeeper shall allow providers of services and providers of hardware, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same hardware and software features accessed or controlled via the operating system or virtual assistant listed in the designation decision pursuant to Article 3(9) as are available to services or hardware provided by the gatekeeper. Furthermore, the gatekeeper shall allow business users and alternative providers of services provided together with, or in support of, core platform services, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same operating system, hardware or software features, regardless of whether those features are part of the operating system, as are available to, or used by, that gatekeeper when providing such services.

    The gatekeeper shall not be prevented from taking strictly necessary and proportionate measures to ensure that interoperability does not compromise the integrity of the operating system, virtual assistant, hardware or software features provided by the gatekeeper, provided that such measures are duly justified by the gatekeeper.

PWAs are, in my opinion, not covered. In 5(7), only business users using the core platform service (iOS) get the right to choose their web browser engine. This means iOS apps, not web pages1. And 6(7) only demands that you get access to the same features Apple uses; it does not include the right to replace the underlying implementation. So it only means that rival web browsers will be able to install PWAs, not replace the underlying browser engine for PWAs. The Safari app does not run PWAs; that is done by iOS with the help of WebKit, so it is not an ability rival web browsers could demand access to.


  1. Were we to regard the browser as a core platform service, then a literal reading could require browsers to allow web sites to select an engine of their choice. This is technically absurd and legally dubious, as the clause is obviously aimed at forced bundling, as clearly laid out in note 43. And I doubt the DMA wants to force end users to tolerate crypto-mining engines destroying their batteries. 
26. January 2024 · Categories: Apple, Politics

Apple has now published its proposal to comply with the Digital Markets Act. Apple will now allow you to avoid the App Store, while charging €0.50 annually for every app install above one million. This is in clear violation of Article 6, §7 of the DMA, which says:

The gatekeeper shall allow providers of services and providers of hardware, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same hardware and software features accessed or controlled via the operating system or virtual assistant listed in the designation decision pursuant to Article 3(9) as are available to services or hardware provided by the gatekeeper. Furthermore, the gatekeeper shall allow business users and alternative providers of services provided together with, or in support of, core platform services, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same operating system, hardware or software features, regardless of whether those features are part of the operating system, as are available to, or used by, that gatekeeper when providing such services.

The gatekeeper shall not be prevented from taking strictly necessary and proportionate measures to ensure that interoperability does not compromise the integrity of the operating system, virtual assistant, hardware or software features provided by the gatekeeper, provided that such measures are duly justified by the gatekeeper.

It is important to understand that, according to the DMA, the App Store is separate from iOS. This means Apple has to provide third parties access to iOS on the same terms it uses for the App Store and its other built-in apps. It cannot charge developers for that access, but it could charge users. Maybe €0.50 per app per year, with the first 20 free. But then it would need to count the built-in apps against that quota, and removing a built-in app would free a slot for an external app, so that users could substitute Google Maps for Apple Maps. And it would need to charge the same fee regardless of where the app came from: App Store apps would incur the same costs as apps from other sources. This is clearly undesirable for Apple, as it creates an incentive to throw out built-in apps to save money, and to stop sampling apps from the App Store. It also economically advantages large corporations, and incentivizes them to create a single mega app in order to save on the fees. This is not good from a policy point of view.
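The two fee models can be contrasted in a short sketch. The €0.50 rates, the one-million-install allowance, and the 20 free slots are the figures from the post; the function names and the three-million-install example are my own illustration:

```python
def core_technology_fee(annual_installs: int, fee: float = 0.50,
                        free_installs: int = 1_000_000) -> float:
    """Apple's proposed developer-side fee: EUR 0.50 per annual
    install beyond the first million (figures from the post)."""
    return max(0, annual_installs - free_installs) * fee

def user_side_fee(installed_apps: int, fee: float = 0.50,
                  free_slots: int = 20) -> float:
    """The post's hypothetical user-side alternative: EUR 0.50 per
    app per year, first 20 apps (including built-in ones) free."""
    return max(0, installed_apps - free_slots) * fee

# A developer with 3 million yearly installs would owe EUR 1,000,000:
print(core_technology_fee(3_000_000))  # 1000000.0
# A user with 35 apps would pay EUR 7.50 per year:
print(user_side_fee(35))  # 7.5
```

The comparison makes the asymmetry plain: the developer-side fee scales with popularity and so punishes free apps with many installs, while the user-side model only scales with how many apps a single person keeps installed.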

I recently came across a paper on the energy costs of video streaming, written in German by the Borderstep Institute. It repeats the myth of insanely high streaming energy costs, which seems to have started with a fatally flawed analysis by the Shift Project, now reasonably refuted by the IEA.

The Borderstep paper takes the average data transfer volume and energy consumption of data centers, as well as that of the broadband network, calculates an average efficiency of 30 kbit/Ws for broadband and 70 kbit/Ws for the data centers, and uses that to extrapolate a 1000W power consumption for streaming a 25 Mbit/s 4K stream. This result is wrong because it neglects the actual engineering:

  • Streaming uses very little processing power on the server side

    All popular videos are already stored fully encoded on the servers, in multiple bit rates, to improve responsiveness. Even a typical home NAS drawing 40W can serve 40 parallel 4K streams over a 1 Gbit/s connection. In the data centers, we see units drawing 1000W while serving 20 Gbit/s. Even with a generous allowance for overheads and reserve capacity, I would be surprised if a 4K stream draws more than 5W, for an efficiency of 5 Mbit/Ws. Given that a typical HDD delivers 50 to 100 Mbit/Ws, this feels reasonable.

  • Streaming is mostly via CDNs, and the backbone is now fiber

    With a Content Delivery Network, the data only has to travel a few hops and at most a few hundred kilometres, and this can be done efficiently with fiber. DWDM-type connections manage 160km at 1 Gbit/s with the transceiver drawing 4W per side, that is 125 Mbit/Ws. Assuming that each switch draws the same power, that we need at most 5 hops, and adding a 5x safety margin, we end up at 2.5 Mbit/Ws. This is 10W for the backbone, likely even lower.

  • The last mile for broadband has basically a fixed power consumption

    While the backbone and data centers aggregate demand, and so can ensure a pretty high load and efficiency, the last mile is essentially always on. So its energy consumption stays the same whether you use it sparingly for surfing or intensely for streaming. You draw 10W if you have fiber, 20W if you have DSL. Your WiFi router at home will also draw a fixed 5 to 10 watts.

A 4K stream with fiber to the home and an efficient router will draw up to 5W + 10W + 10W + 5W = 30W; a low bit rate stream over DSL with an old router will be 0.5W + 1W + 20W + 10W = 31.5W. This is similar to the 18W for data transmission the IEA assumes, which does not take your home WiFi into account. It also shows that with current technology, 4K is roughly the tipping point where the backbone starts to consume more power than the last mile, and we need to start monitoring our consumption. Still, a continuous 25 Mbit/s stream is roughly 270 GB per day.
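These figures are easy to reproduce. The per-component wattages below are the post's rough upper bounds, not measurements:

```python
# Back-of-the-envelope power budget for one video stream, summing the
# post's per-component estimates (server, backbone, last mile, router),
# all in watts.
def stream_power(server_w, backbone_w, last_mile_w, router_w):
    return server_w + backbone_w + last_mile_w + router_w

fiber_4k = stream_power(5, 10, 10, 5)    # 4K over fiber, efficient router
dsl_low  = stream_power(0.5, 1, 20, 10)  # low bit rate over DSL, old router
print(fiber_4k, dsl_low)  # 30 31.5

# Daily data volume of a continuous 25 Mbit/s 4K stream, in GB:
gb_per_day = 25 / 8 * 86_400 / 1000
print(round(gb_per_day))  # 270
```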

If you are worried about the climate impact of streaming, look at the TVs instead. A 65″ HD display draws 200W, six times the power needed to move the data.

31. August 2023 · Categories: Politics

An interesting article on the distribution of car distances travelled shows that a plug-in hybrid with 75km of electric range would be able to cover 75% to 80% of all driving on battery alone, and that does not even include the ability to recharge your car at work.

This makes them a really good policy choice for reducing carbon emissions. The worry with e-cars is that they will all come equipped with 80kWh batteries just to allow the rare road trip, a battery that even at just 1000 cycles is good for 400,000km, and will likely deteriorate more from aging than from actual use. A 75km plug-in would instead need just a 12kWh battery, much closer to the actual life span of a car, and also much lighter: 100kg instead of over 400kg.
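A quick sanity check of these figures. The ~120 Wh/kg pack-level energy density is my assumption, chosen to match the post's 100kg figure, and 5 km/kWh is an assumed average consumption:

```python
# Rough battery sizing from the post's figures: 12 kWh for a 75 km
# electric range, versus an 80 kWh pack sized for rare road trips.
WH_PER_KG = 120  # assumed pack-level energy density (Wh/kg)

def pack_mass_kg(capacity_kwh):
    return capacity_kwh * 1000 / WH_PER_KG

def lifetime_km(capacity_kwh, km_per_kwh=5, cycles=1000):
    # e.g. 80 kWh * 5 km/kWh = 400 km per cycle; 1000 cycles -> 400,000 km
    return capacity_kwh * km_per_kwh * cycles

print(round(pack_mass_kg(12)))   # 100 kg for the plug-in pack
print(round(pack_mass_kg(80)))   # 667 kg for the road-trip pack
print(lifetime_km(80))           # 400000 km at a mere 1000 cycles
```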

Currently there is not a single electric car on the market that is both affordable and able to cover longer distances. A €25,000 petrol car can easily travel 800km in 6.5 hours while starting with just 100km of range remaining, but electric cars with sufficient charging speed for this are only available with huge batteries, raising the cost into the €80,000-plus range, and requiring you to travel at more than 160 kph to reach that average speed.

For a fifth of the battery resources, the plug-in enables an 80% reduction in fuel consumption, reduces city pollution by having enough range to run fully electric there, produces half the road damage, is less dangerous to others in a crash, and does not require huge upfront costs for buying a car. As such, it is a much better policy choice than forcing everyone into super heavy electric cars.

The alternative would be sufficient cooling on the batteries to allow a 5 minute fill up. Then people would be willing to use a 250km range car, knowing that extra range is a fast top up, not a half hour wait. A few such experiments are now being conducted, and I sure hope this becomes standard soon. It would also require a massive extension of fast charging stations on highways, half of which would only be used during a handful of peak holiday travel days. I suspect the cars will be ready much earlier than the charging infrastructure.

25. July 2023 · Categories: Politics

The EU is currently considering a new battery directive1, and as it stands, it will effectively ban AirPods, and probably even smartwatches.

The problem is article 11, which basically forces everything using batteries weighing less than 5 kg to use user replaceable batteries, with only one exception for devices “specifically designed to operate primarily in an environment that is regularly subject to splashing water, water streams or water immersion, and that are intended to be washable or rinseable”. That makes it applicable to earbuds, smartwatches, phones, and tablets. All of these will become seriously worse products because of this misguided law.

To understand why, consider that a user replaceable battery needs to be more robust. It will have a plastic housing instead of a directly fitted battery pack, the housing will have significantly less flexibility to fit a battery tightly, and you need to properly secure the pack. User replaceable also means that you can no longer use glue to seal your device water tight. Instead you will use some kind of rubber seal, as used in tough compact cameras like the Olympus TG-6. From experience, you need to be careful: the surfaces of the seal must be clean for it to be any good, and in particular there must never be any sand on them. Then you need to fasten the seals and batteries. You can either use a cap as in cameras, which uses up a lot of space, or screws, which are a potential ingress point for water and carry the risk that users will fasten them wrongly; you really want a torque sensor to tighten them properly. In short, a user replaceable battery increases weight and volume, and makes environmental sealing harder. And for batteries that last at least as long as the product, it adds avoidable waste.

Looking at the Fairphone and the iPhone 14, you see the impact a replaceable battery can have: 225g instead of 172g, IP54 rated instead of IP68. Apart from a few idealists, people just prefer the much better phone, and with Apple offering battery replacement for €119, this is a tradeoff many are willing to make.

For smartwatches, this is a disaster. I want my watch on during a swim, so it needs really good protection, and I want it to be smallish2. These are design goals that currently cannot be achieved with replaceable batteries. But a watch is primarily used outside the water, even if swimming is a critical use case, so the exception in the law does not apply.

And AirPods are worn inside our ears, so from an ergonomic perspective they need to be as light as possible. One AirPod Pro is just 5.3g, so even adding 1g to support a replaceable battery is bad. Apple uses a 25mAh battery, which I estimate to weigh around 500mg, comparing it with a 210mAh, 3.2g CR2032. And the environment calls for good sealing, as you do not want your sweat to destroy them.
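The coin-cell scaling can be done explicitly. The chemistries differ (Li-ion versus the CR2032's Li-MnO2), so this is only an order-of-magnitude estimate:

```python
# Scale the AirPods Pro battery mass from a CR2032 coin cell
# (210 mAh, 3.2 g), as the post does. Linear scaling by capacity
# ignores the different chemistry and packaging, so treat the
# result as a ballpark figure only.
cr2032_mah, cr2032_g = 210, 3.2
airpod_mah = 25
estimated_g = airpod_mah * cr2032_g / cr2032_mah
print(round(estimated_g * 1000))  # 381 mg, same ballpark as the post's ~500 mg
```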

These are real design tradeoffs, and the law needs to recognize this. This means it should be possible to offer a battery replacement service as an alternative when other design goals would be compromised. It is difficult to put this into legal requirements, but let us try.

Amendment to Article 11:

  1. Devices are conditionally exempt from the user replaceable requirement when one or more important design goals recognized in this article would be negatively impacted

  2. Exempt devices need to conform to all of the points below

    1. be designed so that their batteries allow at least 1500 charge cycles, or so that they last for at least 3 years for at least 95% of users before needing replacement.

    2. provide a battery replacement service that costs at most €50 (in 2023 euros) more than a typical battery of roughly the same capacity, volume and endurance.

    3. once the battery replacement service is stopped, the necessary documentation must be provided free of charge for third parties to provide this service instead. Any necessary patents and other rights need to be licensed on a FRAND basis. This obligation includes tools, processes and replacement parts.

    4. use at least 20% of their volume or weight for batteries, or weigh less than 100g.

  3. Important design goals are for

    1. Asset trackers:
      1. Robustness against manipulation

      2. Environmental sealing

      3. Weight

    2. Devices that are designed to be worn on the body, taken with you all day, or to be used handheld for at least 15 minutes continuously:

      1. Weight

      2. Size

      3. Environmental sealing

      4. Robustness against rough handling

    3. Devices intended to be portable and used at multiple locations. These can be exempt if the smallest dimension of their rectangular bounding box is less than 20mm.

      1. Weight

      2. Size

    4. Devices intended to be used regularly in a harsh environment. These devices do not need to meet the volume/weight threshold from 2.4.

      1. Environmental sealing

      2. Robustness against rough handling

  4. This conditional exemption is valid until two years after a comparable device with a user replaceable battery is available on the market that provides for those design goals at least as well, and with a volume and weight penalty of at most 5% of the entire device, with comparable life time hardware costs.


  1. The law text, as adopted in first reading by the parliament, is found here 

  2. The Apple Watch Series 8, 45mm, is 39g with a 310mAh battery. That battery would weigh an estimated 5g to 6g. 
17. November 2020 · Categories: Politics

It is highly disconcerting to see the corona numbers rising again across almost the entire Western world, even though our understanding of the virus has much improved. I see a few areas where the successes of the Pacific Rim are not being copied, and so a lot of unneeded harm is done:

Aerosol transmission is still denied as important, even by some leading public figures. The details can be found in this FAQ. With aerosols, not only is social distancing important, but we need to put additional emphasis on ventilation and on mask wearing. Indoor spaces with high incidence rates or longer stay times should be avoided. Avoid meeting people indoors. At home, it means that wearing masks and keeping distance when you have guests will not compensate for sharing a badly ventilated room for several hours. And if you want to have a larger gathering, you need to realize that it can only be safe if everyone is virus free. So guests need to isolate for two weeks beforehand, and should for a week afterwards as well, to prevent a possible infection from spreading wide.

Contact tracing is failing badly. We need to accept that we cannot do contact tracing without keeping detailed records, that contact tracing is privacy invasive, and that the solution is to punish people who abuse that information and to effectively control access to it, not to anonymize the data so early that it becomes ineffective. We need to keep records of where and when people are in indoor spaces or other gatherings. This can be done with QR codes, keeping a history on your phone that is only shared when you are infected. We also need to accept that transmission happens largely via superspreading, and that there are quite a lot of asymptomatic spreaders. Thus we will be much more successful if we not only trace the people potentially infected by our case, but also figure out where our case got infected. A superspreader might be asymptomatic, but at least one of the infected should show symptoms, and so allow us to find them. And lastly we need to be aware of our tracing capacity, and keep cases low enough to prevent it from being overwhelmed. This includes having around 200 spare tests per identified case to quickly test identified contacts.

Testing and tracing turnaround times must be kept below 2 days. We need to test our case, contact the contacts, and have them tested as well within this time, to be able to isolate them effectively before they become infectious.

Elimination would have been overall a lot cheaper than the half measures taken in Europe and the US. Compare three weeks with no output against a year with a 15% output reduction and permanent stress. It would have been best to announce an October lockdown already in June, lasting for three weeks, with time to prepare. And the lockdown then stringent enough to eliminate all cases.

Compliance is the big issue with lockdowns; there are always some who believe they can bend the rules. But I believe it is much more an issue with an unending, year long, lockdown lite that keeps numbers just in check. For a mere three weeks, with enough time to prepare, compliance is a much easier sell.

07. May 2020 · Categories: Politics

With everyone looking at Bluetooth to enable a privacy preserving manner of contact tracing, it is time to acknowledge that it will not work. There are multiple ways in which Bluetooth contact tracing is insufficient:

Accuracy of contact distance is low with Bluetooth alone. On the one hand, the distance can be a lot shorter than estimated when something blocks our phones but not our heads. On the other hand, Bluetooth easily penetrates walls and glass, so people separated by such barriers can be seen as contacts even if there is no path for the virus.

No context is provided, so we do not know anything about ventilation or interaction between people.

Large adoption is necessary, so we likely cannot avoid forcing people to use the tracing.

To solve these problems, we need to acknowledge that contact tracing is inherently invasive, include rough geo tracking data to help people remember what they were doing, and use a lot of human contact tracers to provide judgement. Acknowledging this allows us to concentrate on the different set of problems we need to solve for tracing anyway:

Is there enough testing capacity? Without it, we run the risk of isolating too many people, and we will not be able to learn which kinds of interactions carry a high transmission risk.

Is the data secure? When we allow contact tracers access to invasive information about ourselves, we need to make sure that this will not be abused. We need infrastructure security against hacking, we must have monitoring in place to prevent tracers from abusing the often personal information they are learning, and we must add laws1 to ensure that nobody will try to use this data for anything other than contact tracing and understanding transmission pathways. It is also important that the historical data created from tracing is thrown away once we have found all the contacts, and that only statistics about transmission modes are kept.

How do we select and train tracers? We need a human factor to also reach the few people who are not using smartphones, and to help with organizing the necessary quarantines. And we need a selection process to filter out people who would violate confidentiality.

Are there safeguards against mission creep? With such an invasive setup, it is much less likely that we will tolerate it for anything but contact tracing. Also, ensuring that all data collection is opt-in, even when using the system is mandatory, creates a kill switch that will stop the infrastructure once the crisis is over.

All of these problems are still difficult enough to solve, and the earlier we stop wasting resources chasing after a pipe dream, the earlier we can have a proper discussion about how to implement actually effective contact tracing.


  1. Basically stiff penalties plus jail time, and a mandate to throw away the data rather than permit additional uses. A consequence is that we would accept letting murderers go free instead of using the tracing data. 
29. October 2016 · Categories: Politics

With the grand coalition's losses in the polls, overhang mandates in the German Bundestag are becoming ever more likely. I therefore think it is time to take a fundamentally new approach to Bundestag elections, to remove the need for such mandates. The goal should remain to combine directly elected representatives with proportional representation. In addition, it is desirable that more votes for a party always result in more representatives. I therefore propose the following:

  • State lists (Landeslisten) are kept

  • We reduce the number of constituencies to 199; each constituency provides one directly elected representative and two via the state lists

  • For each directly elected representative, their party loses the corresponding number of votes for the state lists. If no votes remain, the party is not considered for the state lists

In detail, this would work as follows:

Distribution of constituencies The constituencies are distributed among the federal states according to their population. In a first step we compute this purely proportionally; each state is then entitled to x.y constituencies, x whole ones and a fractional share of 0.y. We then distribute the remaining constituencies by forming, for each state, the fraction 0.y/min(1, x.y) and assigning the remaining constituencies to the states in descending order of that fraction. This leads to a slight over-representation of small states, which should be unproblematic given their lower political weight.
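The allocation can be sketched as follows. The populations are hypothetical; only the 0.y/min(1, x.y) ranking is from the proposal:

```python
# Largest-remainder seat allocation with the proposal's twist:
# remaining seats go to the states with the largest fraction
# 0.y / min(1, x.y), which boosts states entitled to less than
# one whole seat.
def allocate(populations: dict, seats: int) -> dict:
    total = sum(populations.values())
    quota = {s: p * seats / total for s, p in populations.items()}
    result = {s: int(q) for s, q in quota.items()}       # whole seats
    remaining = seats - sum(result.values())

    def rank(state):
        q = quota[state]
        frac = q - int(q)
        return frac / min(1, q) if q > 0 else 0

    for state in sorted(populations, key=rank, reverse=True)[:remaining]:
        result[state] += 1
    return result

# Hypothetical three-state example: C is entitled to only 0.5 seats,
# so its ranking fraction is 0.5/0.5 = 1.0 and it beats B's 0.5/1.
print(allocate({"A": 6_000_000, "B": 2_500_000, "C": 500_000}, 9))
# {'A': 6, 'B': 2, 'C': 1}
```

Note how the smallest state wins the leftover seat even though its fractional remainder equals B's, illustrating the intended over-representation of small states.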

5% threshold The threshold is reformed by setting the bar at 5% of the list votes in federal states that together make up at least 25% of the total population. This allows more regional diversity, and ensures that the CSU can remain a separate party. At the same time, it remains a sufficient barrier against excessive fragmentation of the parties.

Direct mandates Representatives are elected directly with a plurality of the votes cast. Since direct candidates now provide only a third of the representatives, it is acceptable to no longer compensate for them, as compensation would only become necessary with less than 33% of the list votes combined with winning all direct mandates. With such a split result, a certain advantage for large parties would be desirable in the interest of stable government formation.

List mandates To determine the distribution of list mandates, we subtract votes from the parties according to the direct mandates they won. First, the votes for parties that missed the threshold are removed, leaving n votes. With k constituencies, each party then has n/3k votes subtracted for each direct mandate it won. Parties with a negative vote count are then eliminated, and the list mandates are distributed in the same way as the constituencies, according to the fractions 0.y/x.y. This favors small parties and those with many direct mandates. I think this yields a good compromise between diversity of opinion and stable majorities.
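The vote deduction can be sketched the same way; all numbers are hypothetical:

```python
# With n list votes remaining after dropping below-threshold parties
# and k constituencies, each direct mandate costs its party n/(3k)
# list votes; parties driven negative are eliminated.
def adjusted_list_votes(votes: dict, direct: dict, k: int) -> dict:
    n = sum(votes.values())
    cost = n / (3 * k)
    adjusted = {p: v - direct.get(p, 0) * cost for p, v in votes.items()}
    return {p: v for p, v in adjusted.items() if v >= 0}

# Hypothetical: 120,000 valid list votes, 2 constituencies,
# so each direct mandate costs 120,000 / 6 = 20,000 votes.
votes = {"X": 60_000, "Y": 40_000, "Z": 20_000}
print(adjusted_list_votes(votes, {"X": 2, "Y": 1}, k=2))
# {'X': 20000.0, 'Y': 20000.0, 'Z': 20000.0}
```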

Alternative votes To give voters who would actually prefer a failed candidate a better say, we can let voters name alternatives that come into play if their first choice fails. For the lists: one alternative; all parties below the threshold are eliminated and their votes are distributed to the alternatives. For the candidates: one alternative, which can receive between 1 and 4 votes; the first choice always receives 4 votes. Candidates are eliminated in order of increasing first-choice votes, after which their alternative votes are distributed. Should any candidate reach at least 50% approval from first-choice and alternative votes, all others are immediately eliminated.

Now, one could of course try to split one's party in two, a direct one and one for the list, so that the direct candidates would no longer affect the list votes. For one thing, such a route would also be open to the competition; for another, the same could already be achieved today with a series of “independent” candidates. I think the credibility problems this would create are deterrent enough. And if you also have to run with a state list whenever you want to nominate more than one direct candidate, I do not think such tricks would catch on.

14. April 2016 · Categories: Politics

The outcome of the Dutch Ukraine referendum shows how problematic referendums are: it was a consultative one, so people felt free to use it as a protest vote. Even the organizers have openly said that they do not care about Ukraine; they openly hijacked the process to organize a vague protest vote, and in the event just 32% of voters bothered to vote at all.

There is a reason that we vote for representatives: our modern world is complicated, there are a lot of competing interests, and creating good policy, an acceptable compromise to deal with the issues, is hard. It takes time to properly understand things, time which most people do not have. In addition, people enthusiastic about a policy will skew the turnout, so we can get results that most people do not really want. So we should reform the referendum process to ensure that decisions are made by informed, representative people. This leads me to the following process:

  • for consultations, use sampling: choose a hundred people randomly, put them in a hotel for a week, brief them on the issue at hand, and collect their concerns and priorities.

  • for referendums, treat them as a corrective mechanism against the risk that politicians ignore an issue.

A consultation mechanism provides much more nuance, and a chance to figure out which parts of specific legislation are problematic. It might be that a policy is good, but that not enough help was provided to those affected by the change. It might be that the concern is about something related, and that we need to address that as well. It is a perfect companion to our democracy, significantly cheaper to implement than a referendum, and would underscore the consultative nature of such a people’s senate.

Making referendums a corrective mechanism means they should have high hurdles, to justify overruling parliament. So I suggest that the acceptance threshold should be 60% of votes cast and 40% of eligible voters1. This is a high hurdle, but it ensures that any act passed this way has the broad support of the people, and enough standing that parliament cannot simply overrule it2. One needs to engage almost the entire population to get one passed, which puts pressure on ensuring that proposals are well thought out. Also, such a mechanism should allow the introduction of new bills, to increase the chance that the result actually addresses the basic concerns. The Swiss allow parliament to add its own counterproposal to a referendum, and I believe this has greatly improved the quality of passed referendums.
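The interaction of the two thresholds is easy to check; the function below is just an illustration of the proposed rule:

```python
# A referendum passes if at least 60% of votes cast are in favor
# AND the yes votes amount to at least 40% of all eligible voters.
def passes(turnout: float, yes_share: float) -> bool:
    return yes_share >= 0.60 and turnout * yes_share >= 0.40

print(passes(0.40, 1.00))  # True: 40% turnout, everyone votes yes
print(passes(0.66, 0.60))  # False: just under the 66.7% turnout bar
print(passes(0.67, 0.60))  # True
```

This reproduces the range in footnote 1: the minimum required turnout runs from 40% (unanimous yes) up to 66.7% (exactly 60% yes).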


  1. One consequence is that the required turnout is between 40% when everyone votes for it, and 66.7% when 60% do. It also removes the incentive for people to stay away to keep participation below the required threshold. 
  2. I believe a constitution changing majority should be required for an overrule, or a new referendum. 
02. March 2016 · Categories: Apple, Politics

The iPhone currently cannot be protected against backdoors that Apple is forced to make, and in general it is impossible to defend against that. There is only one intermediate step that Apple can still take to make breaking into an iPhone more difficult: ensure that the user must approve any update before it is applied. Combine this with the ability to check a cryptographic hash of the update, and you make it incredibly difficult to target individual iPhones with backdoors: you can no longer surreptitiously push backdoors; they would go to all phones, greatly increasing the risk of discovery and collateral damage.

Apple will need to change the processor to make this happen. The current architecture has no place to store the user’s consent securely: only the UID key is secret, and any data stored is on externally accessible flash memory. So an attacker could save the flash content, use the backdoored OS to generate the approval key, and then restore the backed-up user data: on the next boot, the backdoor has access to the memory. So we need the ability to store this consent safely within the processor itself, which means adding a small amount of embedded flash. Embedded flash is relatively easy to read if you are willing to destroy the processor, so it should be encrypted with the UID to make this task more difficult. Since the guide is not clear about it: any RAM used by the Secure Enclave needs to be either encrypted or on-chip to prevent side channel attacks. There are now chips with real time RAM encryption baked in; this would be very helpful for the enclave as well.

It is important to keep in mind that nothing can protect us from an insider attack. We can only work hard to ensure that security cannot be reduced after the phone has left the factory1. This is why Apple needs to fight so hard to keep the trust of its users: a government mandated backdoor would completely and permanently destroy the trust in its software. It would be the end for closed source operating systems and applications; the risk that they are used to backstab us would simply be too great. This is also the best reason why those backdoors will not be granted in the end: the risk from terrorism is simply not great enough to justify losing billions in annual tax revenue alone, especially since strong encryption is now widely and publicly available. And I believe several countries with strong constitutions would be more than happy to lay out the welcome mat for Apple, should the US decide otherwise.

The following discussion assumes that you have read the iOS Security Guide. The goal of the changes is that when we load iOS, it will only be able to access user data when the update was previously authorized by the user. For this, we add extra flash storage to the processor, with a dedicated interface that has exactly two functions: create a new key, and load the key into the AES unit. Unfortunately this will be relatively expensive: an entire flash unit with error correction and random number generation implemented in dedicated hardware, to prevent any software backdoors. But transistor counts are now so high that this is perfectly doable. This key takes over the role of the file system key (FSK), and it is also used to encrypt the class keys. The boot loader is then changed so that it checks the OS not only for a valid Apple signature, but also for a SHA512 hash encrypted with the FSK. Should the hash not match, the boot loader destroys the FSK and creates a new one, effectively erasing all user data. Depending on the available space in the boot loader, we can add two additional steps to make accidentally losing your data less likely: it can ask for confirmation with a specific key combination, and it can allow the user to still provide his passcode via USB as a special recovery mode. Ideally the boot code would allow you to enter the passcode directly, but this is probably way too much code to be practical.
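The boot-time check could be sketched as follows. This is my illustration, not Apple's code; an HMAC keyed with the FSK stands in for the described AES encryption of the hash, and the image contents are invented:

```python
# Sketch of the proposed approval check: the boot loader accepts an
# OS image only if its SHA-512 hash, bound to the on-chip file
# system key (FSK), matches the stored approval token.
import hashlib
import hmac
import os

fsk = os.urandom(32)  # stands in for the on-chip key; destroyed on mismatch

def approve(os_image: bytes) -> bytes:
    """Run at update time, after the user has consented."""
    digest = hashlib.sha512(os_image).digest()
    return hmac.new(fsk, digest, hashlib.sha512).digest()

def boot_check(os_image: bytes, approval: bytes) -> bool:
    """Run by the boot loader; False would trigger FSK destruction."""
    digest = hashlib.sha512(os_image).digest()
    expected = hmac.new(fsk, digest, hashlib.sha512).digest()
    return hmac.compare_digest(expected, approval)

image = b"iOS build approved by the user"
token = approve(image)
print(boot_check(image, token))                # True
print(boot_check(b"backdoored build", token))  # False
```

An unapproved image fails the check even with a valid Apple signature, which is exactly the property the paragraph above is after: a pushed backdoor cannot reach user data without the user's prior consent.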

Should the iOS image become corrupted, we would use iTunes to restore the image, and also use it to ask for the device passcode to sign the image. This adds a new vulnerability, in that the computer running iTunes could be hacked; but given that it would only be used when recovering a broken image, that would be a rare occurrence. We could work around this by creating a known good passcode recovery image, whose hash would be fixed in the bootloader, allowing passcode entry directly on the device. With the hash, its content would be fixed, preempting later attempts at introducing a backdoor.

Addendum: In order to prevent replay attacks, where you use the current OS and replace the flash memory after every try, the replay counter also needs to be on chip, and only accessible to the Secure Enclave. We can avoid extra security measures because, before any untrusted code could run, the boot loader would already have destroyed the FSK. The replay counter is updated before every attempt and after every successful login. With a hundred logins per day, the life expectancy of the counter will have to be a few million writes, which is very doable with flash using partial word writes.
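A minimal sketch of the counter logic, with Python variables standing in for on-chip flash; the passcode and attempt sequence are invented:

```python
# Monotonic on-chip replay counter (sketch of the scheme above, not
# Apple's design): bumped before every attempt and after every
# success, so swapping external flash cannot hide failed tries.
class ReplayCounter:
    def __init__(self):
        self.started = 0    # incremented before every attempt
        self.succeeded = 0  # incremented after every successful login

    def begin_attempt(self):
        self.started += 1   # must be persisted BEFORE checking the passcode

    def record_success(self):
        self.succeeded += 1

    def outstanding_failures(self):
        return self.started - self.succeeded

rc = ReplayCounter()
for pin in ("0000", "1111", "1234"):
    rc.begin_attempt()
    if pin == "1234":       # hypothetical correct passcode
        rc.record_success()
print(rc.outstanding_failures())  # 2
```

Because the count is written before the check, an attacker who resets external storage after each guess still leaves a growing failure count on chip.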


  1. There are two places especially vulnerable to sabotage: the masks for the processor could be subtly altered to weaken the keys or create a back channel, or the UID could be recorded during production. The UID, though, will likely be generated internally by the random number generator, preventing any recording.