24. March 2014 · Categories: Software

I am currently busy adapting to the Cortex processor family, and I am amazed at how ancient some of the coding practices have remained. A very typical case in point is the device header files used. Let us take as an example part of the definition of the interface to a DMA controller.

The relevant definitions for the destination width field look like this. First there is a plain struct of raw registers, without any internal structure:
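Roughly, and with illustrative register names rather than the vendor's own:

    #include <stdint.h>

    /* illustrative CMSIS-style register block for one DMA channel */
    typedef struct {
        volatile uint32_t SRCADDR;   /* source address                     */
        volatile uint32_t DESTADDR;  /* destination address                */
        volatile uint32_t LLI;       /* linked list item                   */
        volatile uint32_t CONTROL;   /* transfer size, bursts, widths, ... */
        volatile uint32_t CONFIG;    /* enable, flow control, ...          */
    } DMA_CHANNEL_TypeDef;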

and the definition of the width field is done entirely with macros, so the register value has to be assembled by hand:
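Again roughly, with made-up bit positions standing in for the vendor's:

    /* illustrative position/mask/value macros for the destination width */
    #define DMA_CONTROL_DWIDTH_POS   21u
    #define DMA_CONTROL_DWIDTH_MASK  (0x7u << DMA_CONTROL_DWIDTH_POS)

    #define DMA_WIDTH_BYTE      0x0u
    #define DMA_WIDTH_HALFWORD  0x1u
    #define DMA_WIDTH_WORD      0x2u

    /* manual read-modify-write to select word-wide destination transfers */
    void set_dest_width_word(DMA_CHANNEL_TypeDef *channel)
    {
        channel->CONTROL = (channel->CONTROL & ~DMA_CONTROL_DWIDTH_MASK)
                         | (DMA_WIDTH_WORD << DMA_CONTROL_DWIDTH_POS);
    }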

Normal people would define this in proper C++, using something like:
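A sketch of what that could look like; the field widths and names are made up, but the idea is the same:

    #include <cstdint>

    // scoped enum for the legal width values: the editor can complete them,
    // and the compiler rejects nonsense assignments
    enum class DmaWidth : std::uint32_t { Byte = 0, HalfWord = 1, Word = 2 };

    // overlay of the CONTROL register; field widths and order are illustrative
    union DmaControl {
        std::uint32_t raw;
        struct {
            std::uint32_t transferSize : 12;
            std::uint32_t srcBurst     : 3;
            std::uint32_t dstBurst     : 3;
            DmaWidth      srcWidth     : 3;
            DmaWidth      dstWidth     : 3;
            std::uint32_t reserved     : 8;
        } bits;
    };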

Unfortunately, you are basically forced to use these ancient methods, because the compilers, even the highly regarded one from Keil, do not optimize properly when you change multiple fields in such a struct. If one updates several fields with constants, the compiler strangely enough does not combine them. For example, consider an update like the following.
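A hypothetical reconstruction of such an update, reusing the DmaControl sketch from above:

    // every assigned value is a compile-time constant, so in principle a
    // single store of the merged constant would suffice
    void configure_transfer(volatile std::uint32_t &control)
    {
        DmaControl ctl{};
        ctl.bits.transferSize = 64;
        ctl.bits.srcWidth     = DmaWidth::Word;
        ctl.bits.dstWidth     = DmaWidth::Word;
        control = ctl.raw;
    }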

Even at the highest optimization level, the compiler writes the constants individually, even though there is no volatile forcing it to do so. And to top it off, the debugger is not even able to read enumerations in the bitfield union, even though it handles standard enums and plain int bitfields just fine.

This is really too bad, because using bitfields properly would allow a good editor to make completion suggestions for the relevant enum, and the compiler would catch assignment errors automatically.

07. January 2014 · Categories: Apple, Software

Apple has announced more than 10 billion dollars of App Store sales in 2013. An impressive number, meaning that the average iOS user spent between 10 and 20 dollars this year on app purchases. What they have not released is a breakdown of sales into the different categories. We can get an approximation by simply counting the top grossing list, for example the Dutch one:

Category    iPhone    iPad
Games          114     132
Apps            57      30
Social          21       3
News             7      30
Apple            1       5

A few days later, I found that the first place on the iPad top grossing list occupied by an app that is neither a game nor a periodical was number 42, taken by Skype, with Evernote following at 58.

There are quite a few more such apps on the iPhone list, and they are either for navigation or for mobile data recording. We can see from the distribution that iPads are the champs of consumption, while iPhones are used to stay in touch. What I find surprising is that so few creative apps are bought on the iPad, quite a difference from the image projected by Apple in their advertising. Is this because these apps are priced too low? Or do people need more time with the iPad before they start using it for more than entertainment? Or is it just the holidays in full effect?

30. November 2013 · Categories: Apple, Software

There are quite a few people who cling to the file system as a necessary component for getting any complicated work done on a computer, and who therefore believe iOS to be inherently unable to handle complex productivity tasks.

This is pretty wrong-headed. They think they need the file system to organize documents according to the projects they are working on, but this can be done without exposing the file system to the user. And given how primitive the file system is as an organizational tool, it should not be too difficult to create something better.

The file system's other advantage is that it allows multiple programs to cooperate on the same document. Again, this is difficult to set up, and a more structured way to provide this power, with clear boundaries between the individual tools, should make it easier to use and also safer, as you could better control the content at these defined interface points.

In the end the file system enables some nice collaboration scenarios, but these are essentially hacks, and for most people it is far too easy to screw up with files. A more structured approach, where you share documents from apps with a project manager that then orchestrates everything, would be a much better way. It would enable advanced workflows without burdening users who have simpler needs.

23. November 2013 · Categories: Apple, Software

Computers, be they desktops, notebooks, tablets or smartphones, are amazing devices that come to life thanks to the software supplied for them. As such, we buy them for the apps we use on them.

Unless our needs are basic and can be satisfied with the built-in apps, this means we want specialized software to buy and use. And because apps are the driver, we want a device that has long-term app support. This is why Apple's strategy of reduced fragmentation and premium features is so useful in supporting the app market: fewer variations to think about and to test against, software frameworks updated even across the installed base, leading to faster pickup of new features, and the assurance that everyone has hardware that supports great software.

And since the premium one needs to pay to get the best app platform has come down a lot over the last decades, from thousands of dollars to a couple hundred at most, many more people who value apps can now easily afford the optimal platform as well.

This is the huge advantage Apple has in the mobile world, and it still gives them a considerable lead. But there are problems; Ben Thompson voices concerns about App Store sustainability, and they are the big issue facing Apple today: with competing tablets and phones slowly becoming competent enough as app platforms, the quality of the apps we want is bound to become the deciding factor in which platform we choose.

This would suggest an opening for the competition: offer better terms for developers to build sustainable app businesses, and get users to follow the apps. But it is not really a threat, as Apple can easily counter such terms, they are not hard to implement, and it has the cash to buy out anyone who becomes truly threatening.

On the other hand, AirDrop shows that Apple is willing to put in hardware that only gets used a year later. This was a brilliant move on Apple’s part: since the feature requires two devices to work together, they unveiled it only when there was already enough of an installed base for it to be useful. It also means that new features already have enough hardware supporting them to make it worthwhile for developers to code for them.

The threat would come from online services. But all that shared data still needs apps to process and use it, and making a good app is an expensive effort that you can only afford on multiple platforms for the most popular kinds of data. Performance will take a while to improve to the point where browser apps are good enough, and only once everything you want to do can easily be done in a browser app will app platforms become irrelevant. We should not forget that one of the important characteristics of computers is their multi-function character: integrated platforms remain viable as long as they perform one important function significantly better than the competition. And relatively open services like Dropbox are actually a help for platforms, because they let you pick and choose the best native apps for each device and data type you are using.

The question of when browser apps will be fast enough is difficult to answer. The A7 chip already provides, when normalized to the same frequency and core count, almost the same performance as the latest Intel Haswell chips, which means that Apple's performance improvements will slow down. And since touch-based input demands fast response times even more than a classical PC does, the performance bar will actually be higher in mobile, especially taken together with the limited power budgets of these devices. So I’d guess that enough cases needing native performance will remain for the next decade.

21. October 2013 · Categories: Photos, Software

Thom Hogan sees the great future for cameras in a modular system. I am not so sure that this is the way forward, given that sensor technology is getting fast enough to cover all uses in one module: a 40 MPixel sensor could do 15 fps at the full image size, 60 fps at 4K video, and 240 fps at 2K video. The Nikon D800 shows that we are close, missing less than a factor of 4 in performance. The only missing piece is for a smart person to figure out how to use the sensor backside for parallel pixel-binning processing, so that the main processor is not overloaded while the entire sensor still collects the light for video.
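A rough sanity check of those numbers, assuming 4K frames of about 8.3 MPixel, 2K frames of about 2.2 MPixel, and the D800 at roughly 36 MPixel and 4 fps at full resolution:

    \[ 40 \times 15 \approx 600\ \mathrm{MPixel/s}, \quad
       8.3 \times 60 \approx 500\ \mathrm{MPixel/s}, \quad
       2.2 \times 240 \approx 530\ \mathrm{MPixel/s} \]
    \[ \text{D800: } 36 \times 4 \approx 145\ \mathrm{MPixel/s}, \qquad
       600 / 145 \approx 4 \]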

And 40 MPixel is about the best you can get in optical resolution from a full-frame sensor, so if you need even more, you will either need to stitch or move to larger formats. We see this when comparing lenses tested on the D600 and D800: the benefit of the D800 is typically less than 20% extra resolution, so we are already firmly at the limit of what the lens is capable of.

The real problem will be for the camera makers to figure out how to integrate their cameras with computer control. The basics are extremely simple: use Bluetooth LE and WiFi Direct to provide seamless integration with a tablet for control, and provide an SDK that allows third parties to use these channels for complete remote control. The extremely low power consumption of Bluetooth LE would allow the interface to remain always on without draining the battery, with lower-latency connections activated on demand. Of course this requires some coordination with tablet makers, but getting it done is essential: the goal must be that you open your camera app and snap a photo, because any extra steps will cause friction and frustration.

09. October 2013 · Categories: Software

The latest PC shipment data shows a steady decline since the iPad appeared. Essentially the PC is losing the home part of the market almost completely, and even in its old business domain, mobile tasks are moving away from it. Given that the iPad is a much better match for these tasks, the decline is not surprising. But it is interesting to see how Microsoft is accelerating it:

  • Windows XP is still widely used. But instead of making it easy to switch to Windows 8, Microsoft requires you to install Vista and 7 along the way if you want to upgrade without reinstalling your programs. Oh, and forget about moving to the 64-bit version. All this unnecessary hassle only keeps people from upgrading.

  • Windows 8 is a dog when running on high-resolution displays. The future of PC computing is in the one niche tablets cannot fill: providing ample space to work with a lot of information at once. Think 36" Retina displays as the great goal. How do you intend to compete when you still ignore that future?

  • Windows is priced quite high, and individual licenses are much more expensive than what manufacturers pay. So you will be upgrading with your new computer, maybe once every 6 years now that computers are fast enough to last that long. Microsoft gets maybe $72 for the license, which makes $12 per year. Compare that to Apple, which prices each update at $19 but also brings out one every year. The price is suddenly cheap enough that apps can safely demand the latest revision. Who has the happier users? And I would not be surprised if Apple actually makes more money from the OS per user than Microsoft does.

Having played with iOS 7 for a while now, I like the overall direction very much; there are just a few small issues that irritate me:

  • Home screen backgrounds

    This is probably the most annoying part, and a clear sign that Ive is unable to imagine bad taste among customers. Because of the very light text you absolutely need a low-contrast background to be able to read it easily. Most backgrounds that worked fine in iOS 6 break on iOS 7. Here Apple really should offer a bold text option for the home screen only.

  • FaceTime

    FaceTime with the dark background looks annoying; it is probably used to differentiate it from the Phone app, which actually offers the same functionality, but with a much friendlier face.

    Update: FaceTime actually uses a darkened version of your home screen background; a bug causes it to become black.

  • Notes

    Unlike a lot of other people, I actually liked Marker Felt as the font. It gave notes that casual feeling, showing that they are not finished, but mostly just thoughts jotted down to be remembered later. Helvetica Neue Light, on the other hand, has a very finished, elegant feeling to it, more suitable for beautifully crafted words than for incomplete notes.

  • Long animation times

    The animations to open the home screen could be a bit faster; they make me feel like I am waiting. The app switcher animations, on the other hand, work quite well.

In the end these are minor issues once you have changed your home screen background. I am quite positively surprised by how fluid the new design language already feels.

14. July 2013 · Categories: Software

Drew Crawford wrote an interesting analysis of the speed of mobile web apps. Basically, the JITs are roughly 5 times slower than native code, and the use of garbage collection means that the effectively available memory is 4 times lower, because running the heap any fuller makes performance far too slow.

The performance impact of JavaScript is actually worse for data and image manipulation, since 1) every access carries a range check and 2) you cannot reach that great parallel coprocessor, the GPU, and that is by design. As for memory, the only way out would be to move the large objects over to manual allocation to reduce GC overhead. This means using small handle objects under GC that point to a large native buffer with explicit deallocation.

The big question raised is: when will web apps become good enough? Hardware-wise, performance per watt for single-threaded tasks improves roughly linearly with process size reduction, while parallel performance would ideally rise with the cube of the reduction. That suggests single-thread performance should rise enough within the next 4 years. But we have now reached a plateau in raw speed, and the feature width has come down to only 100 silicon atoms, which means we will see an end to Moore’s Law in the not too distant future. On the one hand, we are nearing the size where quantum effects and simple thermal degradation become too much; on the other hand, the PC crisis shows that many people see performance as good enough, and this, together with the cloud now available as a backup for the odd difficult job, will reduce demand and thus the capital available to fund fast progress. I’d guess that 25 atoms (5.5 nm) would pose a limit, meaning only a 16-fold increase in transistor count and a 4-fold increase in speed are still available, so we will have to rely on software improvements as well to achieve sufficient performance.
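A sketch of the arithmetic behind those last numbers, assuming feature width scales directly with the number of atoms across it:

    \[ s = \frac{100\ \text{atoms}}{25\ \text{atoms}} = 4, \qquad
       \text{transistor count} \propto s^2 = 16, \qquad
       \text{clock speed} \propto s = 4 \]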

There is a limit to how fast we can make JavaScript without impacting safety: we need to guarantee that even a malicious program cannot cause problems, and especially that it cannot access memory it does not own. Combine this with a very dynamic language model that makes inlining a pain, and we will need to see some progress in automatic proof techniques to reduce the performance gap to native code. And with that reliance on software, we should keep in mind that every platform vendor has an incentive to maintain a performance gap to native apps: they want easy access to the web and simple web front ends, but the apps doing the real work should stay native.

30. June 2013 · Categories: Apple, Software

We have basically three categories of apps: entertainment, social, and productivity. In general, only productivity apps are able to generate platform lock-in: entertainment is consumed, and rare is the game that holds our imagination for more than a few months, while social depends on providing the largest audience and so has a very strong incentive to be cross-platform.

This makes productivity apps extremely important for platform owners, yet on the Apple App Store they are playing second fiddle to games. The problem is that for such apps to thrive they need recurring revenues, so that they can be improved and stay relevant, while the store has grown out of selling songs and so lacks upgrade pricing, and especially mechanisms to check out apps before committing to them.

I wonder what Apple’s thinking here is. The complaints have been known for years, and not much has improved. Apple continues to dominate the productivity category with excellent apps sold at relatively low prices, and one wonders whether Apple is afraid of dominant apps emerging on the store, having been burnt when Adobe pretty much abandoned them for Windows.

This means that you need to work around a relatively hostile environment when you want to sell productivity apps into the mass market. The only category already working well is tax preparation packages, as you need a new version every year to follow tax changes, so it makes perfect sense to sell a new package every year. For all other software a subscription model would work best, but it is only tolerated as long as there is a server component behind it; there is also the issue that customers do not want to pay a permanent rent just to be able to access their own files.

What to do now

The only way to get a trial version into the store is to provide a full-featured version that can only handle one document (or, for social apps, is limited to 10 contacts). If this does not fit your app, your only chance is to provide a good video demonstration on the app’s website.

Upgrades are not supported, so you will need to put a new version into the store and inform your old customers with a notification that the new version is available, and that for the first weekend it is on sale to ease the upgrade. This should be the only sale you run, so that people do not start waiting for the price to drop again.

What Apple can do

Everyone would love to see upgrade pricing, but I doubt that Apple will provide this: they want software to move to the newest iOS for "free", so that users can easily upgrade to new hardware.

A great help would be the ability to provide updates to apps no longer on sale, as long as the same app is also made available for sale under a different sales ID. This would enable a slightly longer support cycle: for every major iOS release you put a new version on sale and remove the older ones. You provide the current version as an update to your old customers for two or three additional iOS revisions before stopping updates. Once you have provided your last update, say for 6.1.3, and 7.0 is out, you could then put the old version back on sale for old devices, with a clear statement that it will not see any more updates.

There is also quite some room to improve app discovery on the store, enough to warrant its own post later.

05. June 2013 · Categories: Apple, Software

If we compare Android and iOS security, we see that Google has copied Apple’s approach quite extensively, so that only two important differences remain:

  • iOS uses the 256-bit version of AES, while Android uses the 128-bit version

    In theory this is not such a large difference, because the key schedule of the 256-bit version is not especially good, but the known shortcut still requires 2^65 bits, about 4 million terabytes, of working memory.

  • iOS has a unique device AES key fused into the hardware.

    This prevents people from easily using a specialized computer for cracking, and you can only check one key every 80 ms on the device. You could extract the device key by examining the chip with an electron microscope, but this is very expensive, error-prone, and destroys the chip. Or you can still brute-force it, but then you are searching for a random 256-bit key. Even 2^128 operations would need about 10^19 years on a parallel cracker doing 1000 GigaOps per second.

This means that in practice iOS is much more secure than Android, because almost all passwords have far less entropy than 128 bits; a password consisting of random letters and numbers would need to be about 25 characters long to reach it (see the rough arithmetic below).
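The back-of-the-envelope arithmetic behind those two numbers, assuming the cracker manages 10^12 key checks per second and the password draws from a single-case alphabet of 26 letters plus 10 digits:

    \[ \frac{2^{128}}{10^{12}\ \mathrm{keys/s}} \approx \frac{3.4\times10^{38}}{10^{12}}
       \approx 3.4\times10^{26}\ \mathrm{s} \approx 10^{19}\ \text{years} \]
    \[ \log_2 36 \approx 5.2\ \text{bits per character}, \qquad
       \frac{128}{5.2} \approx 25\ \text{characters} \]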