09. September 2014 · Categories: Apple

An interesting presentation from Apple today. On the iPhone side, I was surprised that we no longer have a small 4" model to choose from. One-handed operation has become more cumbersome than it used to be, and there are enough people out there with hands small enough to find 4.7" already on the large side. But this is something we can only properly judge with the phone in our hands. It could well mean that 5S sales will stay stronger than sales of a previous model usually do.

Even though Apple was doing its best to hide it by basing the comparison on the first model, this was by far the smallest advance in computing power ever for the iPhone. A 25% increase in speed means that with desktop-class performance now comes only desktop-class performance improvement. Should this stay this way, and it is likely that it will, given all the problems Intel is having getting its 14nm process under control, we will see the same lengthening of the replacement cycle move from PCs to phones: at 25% annually instead of 80%, we now need 5 years instead of 2 for the same improvement.

Payments could well be a much larger deal for the US than for Europe, given the relatively archaic state of the US payments infrastructure. Given the high cost of an iPhone, Apple Pay could not become the sole or cheapest way for a customer to pay in Europe (by law there must be a common, surcharge-free way to pay, and iPhones are simply not common enough to be that option), but together with Continuity it could become a great way to pay on the web.

The watch is nice, using fitness as the angle to get people to start using it. What its true value will be, we will see. Notifications and a universal access control token seem two promising options; I would have loved to see Touch ID on the device as well. As long as battery life is sufficient, it seems to me that it will have long replacement cycles, with the option of deferring processing to the iPhone if necessary.

07. September 2014 · Categories: Politics

With the referendum on the future of Scotland coming soon, I am surprised by the extent to which the discussion still fails to grasp what independence really means. Based on the comments from the SNP, you get the impression that it means “Life will continue as usual, but Scotland will keep its oil revenue.”

In reality, it is much more useful to look at Ireland to see how an independent Scotland would fare: it would become a reasonably prosperous member of the EU with close to no power to shape EU policy.

Since every member of the EU must vote in favor of Scotland joining the EU, with Spain especially not keen to see a secession, it is highly unlikely that it will be able to negotiate special terms. Given that the Euro must eventually be adopted by new members, Scotland will likely have the choice between:

  • Keeping the pound by following UK monetary policy, and forgoing EU membership, opting to become a member of the EEA instead.
  • Adopting the Euro and becoming a full EU member.

In addition, should it become an EU member, it will definitely lose the British membership rebate, and should it decide to continue its policy of free university education, it will no longer be able to charge fees to rUK students.

I believe the core problem for Scots lies in the fact that England constitutes 85% of the population of the UK, and so they feel marginalized in their ability to shape policy. As such, it might be better policy to break up England into more equal pieces, and maybe more importantly, to create a British football team to forge a common national identity (Maybe, with the ability to call upon Dalglish, Giggs, and Bale at different times, a British team would have even managed to win more trophies).

Scotland has certainly increased my appreciation of the extent the breakup of Prussia has helped in forming a stable German state, with no Bavaria or Saxony vying for independence.

07. July 2014 · Categories: Software

One of the problems with C++ is that it has a very brittle syntax, meaning that it is easy to trip over subtle differences:

In this code, what is c? Given the definition of b, it sure looks like a class instantiation, but it actually is a function declaration that returns a class D. This is a problem that could easily be solved by requiring a parameterless function declaration to use void, as in:

This is more verbose, but it is also more difficult to get wrong. I believe this is what will kill C/C++ in the future: the unwillingness to sacrifice some backward compatibility for a safer syntax, even if one could do a perfect automatic translation.

02. June 2014 · Categories: Apple

Apple has made some impressive improvements in ease of use with Swift, their new programming language. It breaks with the past, and they have followed a laudable design goal in making the language safe to use. Compared to C#, it still lacks a good deal of power: no exceptions, no dynamic type system, and no equivalent of LINQ. On the other hand, the new Playground is a good prototyping tool, and their rigorous commitment to keeping it a statically compiled language yields impressive performance, which can only be good for power consumption on mobile devices. Personally, I have the impression that it has removed a lot of syntactic cruft; the biggest improvement over Objective-C is generics. It still lacks some of the power of C#, but it has captured most of the important stuff.

With iCloud Drive, they now have a way to allow apps to work on the same document, but it feels like a bolt-on solution. Firstly, it seems to work via the cloud only, which is pretty crazy for important documents, where you cannot trust Apple thanks to National Security Letters. And there is no solution in sight for grouping and organizing documents from diverse applications.

Continuity is the answer to Microsoft's all-in-one Windows 8, providing great interoperability between multiple devices, each optimized for its own job. I am curious how this is implemented and to what extent they will protect any data that needs to travel via the cloud.

08. May 2014 · Categories: Software

I have previously railed about the lack of support for bit fields in embedded libraries. It turns out there is a good reason for this: they are not portable, and very ill defined in the standard.

In order to provide reasonably portable implementations, we should provide a well-defined alternative to bit fields, one that captures the essence of what we are now doing with bit manipulation:

  • Each bit field must be based on a basic integer type. This makes the mapping to storage explicit, and reduces the chance for counting errors.
  • You can explicitly choose whether the fields start with the highest or the lowest bit.
  • You can specify whether the access to the field must be atomic or not.
  • You can define a force value for each field. This value is written to a volatile bit field whenever you update any field, unless you specifically have changed the value in the update statement, and it also serves as the default during initialization.
  • We force the compiler to collect consecutive write statements to a field, and update the field with exactly one write, when you use the comma operator to separate the statements.

We could define bit fields as follows:
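A sketch of what such a definition could look like, in a hypothetical syntax for this proposal (not valid C++ today; all names, including the clear_a/clear_b flags used below, are illustrative):

```
// based on uint32_t, lowest bit first, atomic access required
bitfield ClearFlags : uint32_t, lowfirst, atomic {
    clear_a  : 1 = 0;    // "= 0" is the force value for the field
    clear_b  : 1 = 0;
    reserved : 30 = 0;
};

volatile ClearFlags *pClear = /* device register address */;
```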

Then we can clear either one or both flags easily:
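In the proposal's hypothetical syntax (pClear, clear_a, and clear_b are illustrative names, with force values assumed to be 0), that might read:

```
pClear->clear_a = 1;       // one field set; clear_b receives its force value 0

pClear->clear_a = 1, pClear->clear_b = 1;   // comma operator: the compiler
                                            // must merge this into one write
```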

Things are a bit more complicated when we mix constants and run-time values in one update:
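One plausible sketch of such a mixed update (hypothetical syntax, illustrative names):

```
// mask is computed at run time; the constant, the run-time value, and the
// force values for any untouched fields must still end up in a single write
pClear->clear_a = mask & 1, pClear->clear_b = 1;
```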

And I wonder whether we should add new operators ->{ and .{. This would allow us to write pClear->{clear_a = 1, clear_b = 1}; to simplify updating multiple fields in one go.

27. April 2014 · Categories: Apple

With iPad sales essentially flat year over year, some people worry that the iPad has plateaued and is no longer taking the world by storm. The basic worry behind this is that the iPad has become good enough, and so people will start to lengthen their replacement cycles, as they are doing now for PCs. But to what extent is this true?

As far as computing power is concerned, the A7 chip is already very close in performance per cycle and core to the latest Intel chips. This suggests that any further increases will have to come from using faster clocks, where we only have quite limited room to grow left. This leaves other attributes to provide desirable improvements.

The iPad would still benefit greatly from further weight reductions, as well as from getting Touch ID, while the camera on the iPhone could do with better low-light ability. Both would also benefit from becoming more resistant to the elements, but otherwise I am at a loss as to what I would want to see improved. Then again, I did not see Touch ID coming, so there could still be positive surprises out there.

But I believe that people are already well served, and this will make them less likely to upgrade. For the carriers, the iPhone is already very good at generating photo and video traffic, and getting the customer a new model would no longer significantly increase their data usage. That leaves increased spectrum efficiency as a good reason for subsidizing a new phone, but carriers are still busy expanding LTE, and have not started deploying a successor.

Apple depends mainly on two things to achieve huge margins in the iOS business:

  • carrier subsidies allow them to price the iPhone $200 higher than otherwise possible, and effectively hide the iPhone's huge margins from the end user

  • pricing each doubling of flash capacity at $100 provides strong price discrimination

Both are only possible as long as iOS provides superior value compared to Android. But as can be seen with the Mac, Apple is perfectly capable of maintaining such a value differentiation. Also, Apple will rather sell you a quality product at a healthy margin that you will then replace less often, as it is an easy win-win: a 35% markup every three years is better than a 10% markup every two years, and it is better for the customer too, who pays 270% instead of 330% of the build cost over six years. Of course, this setup will not support 50% iPhone margins, which correspond to a 100% profit markup.

Europe will be the first market where carrier subsidies come under pressure, since its single standard lets you keep your old phone when switching carriers, and so it will be the first to show when iOS has become good enough.

25. April 2014 · Categories: Politics

With the fresh FCC proposals, the discussion of what an open internet should be is starting anew. The problem is an underlying tension between providing open access as a matter of public policy, in order to strengthen free discussion and online innovation, and giving the carriers incentives to improve network infrastructure.

It arises from the fact that internet infrastructure is a natural monopoly, especially the wired kind, thanks to the serious capital investment required. There are now two ways to deal with it:

  • recognize it as a monopoly, and regulate it accordingly.

    In this case, the states or counties will have tenders for the provision, and they will select one provider for the service, just as it is now done with electricity distribution and water.

  • keep the current setup, and enforce neutrality on only a basic chunk.

    In this model, the first, say, 4 MBit/s would fall under neutrality, and the rest could be used as the carrier pleases, in order to give them an incentive to improve their networks. This basic chunk would be increased from time to time, as average provisioning improves, and to ensure that carriers continue to improve their offerings so that they can sell the more lucrative advanced service.

12. April 2014 · Categories: Software

On the Cortex-M processors, you typically use critical sections to isolate the accesses of interrupt handlers from the main program. The assembler needed for this is pretty straightforward; the question is how we best implement it in C++. First, a version for GNU C++:
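The original listing has not survived here; this is a reconstruction of what such a RAII lock typically looks like under GNU C++, based on the notes that follow (the exact register names and statement order are an assumption, and it naturally only assembles for Cortex-M targets):

```cpp
#include <cstdint>

class InterruptLock {
public:
    InterruptLock() {
        // read the current interrupt status directly into a register
        __asm volatile ("mrs %0, primask" : "=r" (status));
        // disable interrupts; the fake input dependency on status keeps
        // this instruction ordered after the read above
        __asm volatile ("cpsid i" : : "r" (status));
        // barrier: no memory accesses may move up before the lock
        __asm volatile ("" : : : "memory");
    }
    ~InterruptLock() {
        // barrier: no memory accesses may move down past the unlock
        __asm volatile ("" : : : "memory");
        // restore the saved interrupt status
        __asm volatile ("msr primask, %0" : : "r" (status));
    }
private:
    uint32_t status;
};
```

Used as a named object, InterruptLock lock;, inside a scope; a bare temporary InterruptLock(); would be destroyed again immediately and release the lock at once.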

Here we do a few things to ensure the compiler generates correct and optimal code for us:

  • The saved interrupt status is accessed directly as a register. This allows the compiler to keep the status in a register if beneficial, and since it needs to be moved into a register for access anyway, no performance is lost.
  • The fake dependency between reading the status and disabling interrupts ensures the two instructions are never swapped. We could do this in one __asm statement, but splitting it into two allows the compiler to insert a store between them if needed, which minimizes the time spent blocking interrupts.
  • The memory barriers at the start and the end of the lock prevent the compiler from moving any memory accesses outside it.
  • The volatile keyword is needed to ensure that the compiler does not optimize these statements away, as to it they appear to have no effect.
  • Be careful when instantiating the lock: it must be a named object, InterruptLock var; a temporary created with InterruptLock(); would be destroyed again immediately.

If you are using the Keil compiler, the lock would look pretty much the same. Here the intrinsic __schedule_barrier() is used to ensure that the compiler does not reorder code across the lock.

05. April 2014 · Categories: Software

One of the instructions cut from the M0 relative to the Cortex-M3 core is SMULL. This instruction is extremely helpful when you want to do fixed-point arithmetic with more than 16 bits. Compilers typically emulate this instruction, so that you can write:
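The snippet itself is absent from this archive; presumably it was ordinary C++ along these lines (the function name mul64 is mine), which the compiler lowers to a single SMULL on the M3 and to a runtime routine on the M0:

```cpp
#include <cstdint>

// 32x32 -> 64-bit signed multiply: one SMULL instruction on a
// Cortex-M3, a library call on a Cortex-M0
int64_t mul64(int32_t x, int32_t y) {
    return static_cast<int64_t>(x) * y;
}
```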

To implement its function, int64_t z = x * y, we need to add the individual partial products. Let ~ denote a sign-extended halfword and 0 a zeroed halfword; then, writing x as [a,b] and y as [c,d], the multiplication x × y can be calculated as 00[b*d] + ~[~a*d]0 + ~[~c*b]0 + [~a*~c]00.
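The decomposition can be checked in portable C++; this is a sketch with illustrative names, using only 32-bit multiplies just as the M0 hardware must:

```cpp
#include <cstdint>

// Emulate SMULL with 32-bit multiplies only, following the halfword
// decomposition [a,b] * [c,d] described above.
int64_t smull_emulated(int32_t x, int32_t y) {
    uint32_t b = static_cast<uint32_t>(x) & 0xFFFFu;  // low halfword, zero extended
    int32_t  a = x >> 16;                             // high halfword, sign extended
    uint32_t d = static_cast<uint32_t>(y) & 0xFFFFu;
    int32_t  c = y >> 16;

    uint64_t bd = static_cast<uint64_t>(b) * d;                             // 00[b*d]
    int64_t  ad = static_cast<int64_t>(a * static_cast<int32_t>(d)) << 16;  // ~[~a*d]0
    int64_t  cb = static_cast<int64_t>(c * static_cast<int32_t>(b)) << 16;  // ~[~c*b]0
    int64_t  ac = static_cast<int64_t>(a * c) << 32;                        // [~a*~c]00

    return static_cast<int64_t>(bd) + ad + cb + ac;
}
```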

To do this efficiently in assembly, we need to take into account that we only have 8 registers to work with, and that a carry must be transported between the lower and the upper word. So we start by calculating the middle terms, and add the remaining ones at the end. The following code takes its parameters in r0 and r1, and returns the product in r1:r0.

24. March 2014 · Categories: Software

I am currently busy adapting to the Cortex processor family, and I am amazed at how ancient some of the coding practices have remained. A very typical case in point is the device header files used. Let us take as an example part of the definition of the interface to a DMA controller.

The relevant definitions for the destination width field look like this. First, there is a plain struct definition without any inner structure:
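The header excerpt is missing from this archive; a typical vendor definition in this style looks like the following (the register names are illustrative, not the actual vendor header):

```cpp
#include <cstdint>

// Typical CMSIS-style register block: one flat struct of raw words,
// with no inner structure for the individual fields.
typedef struct {
    volatile uint32_t SRCADDR;
    volatile uint32_t DESTADDR;
    volatile uint32_t LLI;
    volatile uint32_t CONTROL;   // sizes, bursts, widths... all in one raw word
    volatile uint32_t CONFIG;
} DMA_CHANNEL_TypeDef;
```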

and the definition of the width field is done entirely with macros, for manual assembly of the register value:
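That excerpt is gone as well; the macro style in question looks like this (names and the bit position are illustrative):

```cpp
// manual shift-and-mask assembly of the destination width field
#define DMA_CONTROL_DWIDTH_Pos      21u
#define DMA_CONTROL_DWIDTH_Msk      (0x7u << DMA_CONTROL_DWIDTH_Pos)
#define DMA_CONTROL_DWIDTH_BYTE     (0x0u << DMA_CONTROL_DWIDTH_Pos)
#define DMA_CONTROL_DWIDTH_HALFWORD (0x1u << DMA_CONTROL_DWIDTH_Pos)
#define DMA_CONTROL_DWIDTH_WORD     (0x2u << DMA_CONTROL_DWIDTH_Pos)

// usage: read-modify-write by hand, e.g.
// chan->CONTROL = (chan->CONTROL & ~DMA_CONTROL_DWIDTH_Msk) | DMA_CONTROL_DWIDTH_WORD;
```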

Normal people would define this in proper C++, using something like
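For example, a sketch using an enum inside a bit-field union (field names and layout are illustrative, and bit-field layout is of course compiler-specific, which is exactly the portability problem discussed elsewhere on this page):

```cpp
#include <cstdint>

enum class DWidth : uint32_t { Byte = 0, HalfWord = 1, Word = 2 };

union DmaControl {
    uint32_t raw;                    // the whole register as one word
    struct {
        uint32_t transferSize : 12;
        uint32_t sbSize       : 3;
        uint32_t dbSize       : 3;
        uint32_t sWidth       : 3;
        DWidth   dWidth       : 3;   // an enum instead of a bare shifted constant
        uint32_t rest         : 8;
    } bits;
};
```

The editor can now complete DWidth values, and assigning the wrong kind of constant to dWidth becomes a compile-time error.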

Unfortunately, you are basically forced to use these ancient methods, because the compilers, even the highly regarded one from Keil, do not properly optimize when you are changing multiple fields in such a struct. If one updates several fields with constants, the compiler strangely enough does not merge them. For example, if you update a byte:
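A standalone sketch of the kind of multi-field constant update in question, whether the fields span a byte or a word (layout and names are illustrative):

```cpp
#include <cstdint>

struct Control {                 // plain int bit fields, no volatile anywhere
    uint32_t transferSize : 12;
    uint32_t sbSize       : 3;
    uint32_t dbSize       : 3;
    uint32_t sWidth       : 3;
    uint32_t dWidth       : 3;
    uint32_t rest         : 8;
};

Control makeControl() {
    Control c{};
    c.transferSize = 64;   // three constant field updates
    c.sWidth       = 2;    // on a zero-initialized struct
    c.dWidth       = 2;
    return c;
}
```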

then the compiler will, even at the highest optimization level, add the constants individually, even though there is no volatile forcing it to do so. And to top it off, the debugger is not even able to read enumerations in the bit-field union, even though it can handle standard enums and plain int bit fields just fine.

This is really too bad, because using bitfields properly would allow a good editor to make completion suggestions for the relevant enum, and it would automatically catch any assignment errors.