29. October 2023 · Categories: General

I was surprised to be reminded how much power RAM can consume in a laptop or NAS. NAS devices typically use slightly older technology; for example, a Synology NAS can be expanded with dual-rank DDR4 laptop modules. I found a Micron technical note giving a deep dive. From page 24 there is an example for a 4GB configuration, where they calculate 1.6W typical draw, of which 350mW is background power. The other state is self refresh, which on a 1GB module is 25mA + 5mA @ 1.2V (page 324). This gives the following estimates for DDR4 power consumption, scaled to 16GB:

State Power
Used 6.4W
Unused 1.4W
Standby 0.6W

These are rough estimates, given that multiple modules on the same bus add additional parasitic draws.
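The scaling above is simple arithmetic; as a sketch, treating the per-module figures quoted from the technical note as inputs:

```python
# Scale the quoted Micron figures to a 16 GB configuration.
# Inputs (from the technical note cited above):
#   4 GB active: 1.6 W typical, of which 0.35 W is background power
#   1 GB self-refresh: (25 mA + 5 mA) at 1.2 V

active_4gb_w = 1.6                          # W, typical draw, 4 GB configuration
background_4gb_w = 0.35                     # W, background share of the above
selfrefresh_1gb_w = (0.025 + 0.005) * 1.2   # W per GB in self refresh

scale = 16 / 4
used_w = active_4gb_w * scale        # full 16 GB in active use
unused_w = background_4gb_w * scale  # background power of idle 16 GB
standby_w = selfrefresh_1gb_w * 16   # all 16 GB in self refresh

print(f"Used    {used_w:.1f} W")
print(f"Unused  {unused_w:.1f} W")
print(f"Standby {standby_w:.1f} W")
```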

As such, you should aim for roughly 50% utilization under load. With less, you pay quite a bit in power for unused capacity; with more, your risk of needing paging increases, which consumes more power than unused RAM would and drastically reduces performance.

29. March 2020 · Categories: General

Remember that the virus can already be spread by people not yet showing symptoms, or whose symptoms are so mild that they are not correctly identified. So you always need to be careful not to spread it, even without symptoms.

What everyone needs to do

Keep at least 1.5m distance to other people, preferably 3m. Since the transmission is mostly via droplets from breathing, especially coughs and sneezes, they will fall down within this distance. There was a case where a passenger in a bus infected other people sitting 4.5m away, and also another person entering the bus 30min later, while some people closer by got lucky and were not infected. This clearly demonstrates that you should be taking extra care in enclosed spaces, where longer stay durations will saturate the air more with virus particles.

Regularly wash your hands with soap, or with alcohol-based disinfectant if soap is not available. The virus can stay viable for up to 72 hours on surfaces, especially steel and plastics, and you can pick it up with your hands and then spread it to your face. Wash them before leaving the house, after coming back home, and often while outside. Remember to open any doors on your way to the hand basin before leaving the house; you don’t want to contaminate them when coming back home. If you are worried about a package being contaminated, you can let it rest somewhere for three days, after which the virus will be deactivated.
Regular washing will also irritate your skin, so do not forget to care for your hands. An alternative could be water with ozone mixed in.

Do not touch your face. Easier said than done. But make sure to avoid it when you are outside your home. It helps to clean your face before leaving the house, as this reduces irritations that might tempt you to touch your face.

Wear a simple face mask. Masks do not protect you very much, but even a mask made of cotton will reduce the droplets you are spreading; they are most helpful in reducing community spread. Cotton masks do not work for long: they start to lose effectiveness after half an hour and should be replaced after one hour. For some background, see this Medium post by Sui Hang.

Avoid close contact with many different people. You might need to see some people to get through a longer lockdown, but ideally we want to be split into many small, separate groups to prevent the virus from spreading far. Putting together a small group of friends, where everyone agrees to see only people within this group, is pretty reasonable. But keep in mind that this needs to include everyone sharing a single home, so two friends with families would already make a group of 8, and making groups larger is not really helpful.

What we need to do as a society

Organize widespread testing. We probably need to test around 100 people for every fresh case to find everyone infected and isolate them. This will mean 10 million or more tests daily for the entire world to control it. Without that we can never be sure who is free of the virus, and cannot lift the lockdown.

Much improved contact tracing. We need to use the tracing abilities a modern phone gives us to quickly locate contacts, without needing a lot of manpower. We can use Bluetooth for automated contact lists, GPS for time/location matching, or both.

Closing down public transport, because it causes many people to be in close contact. We should be able to provide fast ebikes as an alternative.

Support people that cannot do their jobs because of the shutdown. This is a temporary demand shock, which does not change the underlying viability of a business once the lockdown is no longer needed. We have enough manpower to keep everyone fed, so we can afford to keep them fed well. This also means that we should freeze payments for capital that temporarily cannot be utilized because of lockdowns. Not only the businesses currently without demand, but also the capital owners need to make sacrifices to get through this situation.

Why did it get so bad?

We lost many opportunities to limit the spread of the virus thanks to incompetence and also poisonous politics.

When the virus emerged last year in Wuhan, local doctors sounded the alarm, but the communist party forcefully suppressed them, so that by the time it finally admitted to the outbreak, there were probably already thousands infected.

The WHO was way too late in acknowledging that there is human-to-human transmission. Taiwan informed the WHO about it as early as 31.12.2019, while on 14.1.2020 the WHO was still repeating China’s insistence that there was none. And the poisonous way China treats Taiwan also meant that Taiwan could not warn the world.

Then all the countries that had not experienced SARS failed to understand the severity of the situation, even with the example of exponential growth and the severe lockdown in place in China. For way too long they considered it just a bad flu, didn’t grasp that a doubling time of 3 days means 1000 times more cases within a month, and didn’t take China seriously, seeing the lockdown as a crude tool deployed by a backwater instead of an indication of how serious COVID-19 is.

They also did not (and still do not) grasp that a disease quite similar to influenza can spread undetected for quite a while unless you are testing very aggressively. So all travel warnings came way too late.

The endgame

Mortality seems to rise with age, leaving people below 50 with a worst case mortality of 1%, and maybe 5% with permanent health damage. This is still awful, but it is low enough that societies will not collapse.

Also there is the paradox of Japan: even though the country has no lockdown and only limited testing and contact tracing, corona is not tearing through the country. About the only precaution seems to be the widespread wearing of face masks.

28. September 2019 · Categories: General

Since the introduction of the iPhone 4, high resolution screens have become the norm in mobile, and now, after the retirement of the old MacBook Air, every mobile computer Apple makes has a Retina screen. The transition is moving on to external displays, with intermediate steps of 5K (iMac, 5120 by 2880) and 6K (Pro Display XDR, 6016 by 3384), but we are not yet there with affordable 8K displays.

As can be seen in the following table, data rates increase quite quickly with larger displays. This is with 8 bits per color and 60 Hz.

Resolution Data
3840 x 2048 11.3 Gbit/s
5120 x 2880 21.2 Gbit/s
6016 x 3384 29.3 Gbit/s
6016 x 3384 (10bpc) 36.6 Gbit/s
7680 x 3200 35.4 Gbit/s
7680 x 4096 45.3 Gbit/s
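These rates follow directly from width × height × bits per pixel × refresh rate; a quick sketch:

```python
# Raw (uncompressed) video data rate for an RGB signal.
def data_rate_gbit(width, height, bpc=8, hz=60):
    bits_per_pixel = 3 * bpc  # three color channels
    return width * height * bits_per_pixel * hz / 1e9

print(f"{data_rate_gbit(5120, 2880):.1f} Gbit/s")          # 5K iMac: 21.2
print(f"{data_rate_gbit(6016, 3384):.1f} Gbit/s")          # Pro Display XDR: 29.3
print(f"{data_rate_gbit(6016, 3384, bpc=10):.1f} Gbit/s")  # same at 10 bpc: 36.6
```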

DisplayPort is the standard technology for connecting monitors; it has supported 26 Gbit/s since version 1.3, and version 2.0 from June 2019 extends that to up to 77 Gbit/s, with intermediate options of 37 Gbit/s and 50 Gbit/s.

We can see that each new monitor option from Apple makes full use of a step change in the available bandwidth, providing the most pixels that can be sent over a given monitor cable. Apple has avoided compression, as it could introduce artefacts. The alternative would be DSC (Display Stream Compression), which has been developed with video and text display in mind and should only produce small, imperceptible losses.

So my expectation is that any computer that can support the 6K 10bpc resolution will also allow a Retina 38″ display, while for 8K displays we will likely see compression as an intermediate step until 50 Gbit/s bandwidth has matured.

Large Retina-class displays are used at a working distance where higher resolution is only marginally appreciated. A 1.5x increase would probably be sufficient, but could lead to scaling artefacts with old software. Many people might see more benefit from switching to a higher color depth instead, and it could well be that 8K 10bpc will become the new standard, combining two nice improvements into one compelling upgrade story.

The last improvement vector is refresh rate. While 60 Hz is enough for a fluent display, multiple components in the display chain can add lag: buffering, where we calculate the image with information from the start of the frame but have the entire image ready only at the end; the encoding delay for transferring the screen buffer from the computer to the display; and a possible quantisation effect when the refresh cycles of the computer and the display are not synchronised. The proper way to reduce this would be proper synchronisation between both sides, but this requires full-stack optimisation:

  • Sync the frame starts between computer and monitor

  • Improve your graphics pipeline to do deadline scheduling: start calculating the next frame just late enough that it becomes available just before you switch over to that output buffer.

Even though it would conserve power, the difficulty of knowing the computational complexity of a scene beforehand leads most games to simply calculate frames as fast as possible and throw away the extra ones. This is also the reason some use triple buffering to smooth the output, allowing a small overrun on one frame by taking time from the next one without the user noticing.

A two-frame delay is about the optimum to expect: one for the buffer on the computer, one for the frame transfer to the screen, giving us 33ms at 60 Hz. For optimal reactions you would want to halve that, giving another doubling of the transfer rate. On the other hand, a delay of 50ms compares well to human reaction times of around 200ms, but it is at the margin if you are trying to lock onto moving objects.
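The delay arithmetic is a one-liner; a sketch, using the two-frame assumption from above:

```python
def pipeline_latency_ms(hz, frames=2):
    # One frame of buffering on the computer plus one frame of transfer
    # to the screen, each lasting 1000/hz milliseconds.
    return frames * 1000 / hz

print(f"{pipeline_latency_ms(60):.1f} ms")   # 33.3 ms at 60 Hz
print(f"{pipeline_latency_ms(120):.1f} ms")  # 16.7 ms: halving the delay needs double the rate
```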

24. February 2019 · Categories: General

Now that we are seeing the first folding smartphones coming out, I feel fundamentally pessimistic about them. Unfolding will always take an extra second or two, and that is problematic for quick checks.

As long as the phone remains our primary communication device, we will value quick access a lot. So either another device must replace the phone, or the folding phone must work very well both folded and unfolded. The watch and AR are the best candidates for a replacement for now, but both are still far from being capable devices for responding. Working well in both states is a challenge, given the extra weight and space required, and with the screen being the largest battery drain. I am afraid that folding phones will need to add too much bulk to stay competitive with normal phones, especially now that the standard size has become large enough to comfortably read even longish texts. The declining sales of the iPad mini are a strong indication that the expanded state would not be valued enough to justify the costs.

I am much more optimistic about folding tablets. These are already devices used for longer sessions, so the unfolding delay will be much more tolerable, and folding provides a real benefit in making tablets more portable. We will have to wait a bit though until the technology has caught up to allow large, durable, folding screens with good enough power consumption.

07. April 2016 · Categories: General

It is interesting to see the demand for the just-announced Tesla 3. The demand confirms that the Tesla is seen as the gold standard for electric cars, and the promised performance makes it one of the best cars to own as long as you do not need to travel more than 150km one way (or already have a charger available at your endpoints). This range is good enough to make it a very viable second car for a family, where you have a different car to fall back on for long journeys. It also illustrates that choosing a range of 340km is a very smart decision: unlike the 100km ranges of commuter cars, it covers enough that you will only rarely want more, and this makes all the difference in justifying the purchase: no backup is needed.

How long the range will be under practical circumstances we do not know: we lack info on battery aging to estimate the loss over time. Also, I guess demand for it as the only car of a household will be limited, as traveling longer distances where you need to recharge during the journey does not work well. It starts with the relatively low density of the charger network, which Tesla is working hard to improve. The stations are roughly 200km apart and can typically charge 6 cars at once. A fast charger needs about 35 to 40 minutes to add another 200km of range, so it can serve only about 10 cars per hour. This is completely inadequate when a few hundred owners want to go on holiday on the same day1. The other problem is that the long charge times reduce your average speed significantly: when you normally drive 130 km/h, adding charge pauses can slow your average down to just 94 km/h.
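The effective-speed figure can be sketched from the numbers above (assuming the lower 35-minute charge time per 200km leg):

```python
# Effective long-distance speed including charging stops.
cruise_kmh = 130
leg_km = 200       # distance covered per charge
charge_min = 35    # time to add another ~200 km of range

drive_h = leg_km / cruise_kmh
total_h = drive_h + charge_min / 60
effective_kmh = leg_km / total_h
print(f"{effective_kmh:.0f} km/h")  # ~94 km/h
```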

The new Tesla will be a success, but it is only a small part of the way forward to a more ecological car fleet. I believe the biggest leap will come when automatic driving will finally allow a complete reconfiguration of travel.


  1. Building a charger network is a problematic proposition: you need roughly 100x the stations to cover the same number of cars doing long-distance travel (40min for 200km vs 2min for 800km fill-ups), and because you will normally charge your car at home overnight, charger utilization will be highly seasonal, making it harder to spread the costs. 
16. November 2015 · Categories: General

Because car batteries are unlikely to get much better energy density, we will probably have to accept that electric cars will typically have a range of only about an hour, since batteries would be too expensive otherwise. I believe people would actually accept this, provided there is a wide net of charging stations available that allow you to charge the battery within five minutes. There are however two problems that would need to be overcome:

  • Along the motorways, we would need either charging while driving, or enough stations to handle the 10000 cars/h we can have during the holidays. Assuming 120km/h speed and 5 minutes of recharging per hour of driving, that would amount to stations with 70 fast chargers every 10km. 
  • To drive for an hour at a reasonable speed, you need around 25kWh. Assuming a loss of 5% during a five-minute charge, this amounts to 15kW of heat that must be removed.
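Both estimates above can be sketched from the stated assumptions:

```python
# Chargers needed per 10 km of motorway at holiday peak traffic.
cars_per_hour = 10_000
speed_kmh = 120
charge_min_per_drive_hour = 5   # five minutes of charging per hour of driving
segment_km = 10

# Charger-minutes demanded per hour on a 10 km segment, converted to chargers.
charger_min = cars_per_hour * charge_min_per_drive_hour * (segment_km / speed_kmh)
chargers = charger_min / 60
print(f"{chargers:.0f} chargers per {segment_km} km")  # ~69, i.e. roughly 70

# Waste heat while fast-charging 25 kWh in five minutes at 5% loss.
energy_kwh = 25
loss = 0.05
heat_kw = energy_kwh * loss / (5 / 60)
print(f"{heat_kw:.0f} kW of heat")  # 15 kW
```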

It shows that for electric cars to take over, a lot of infrastructure will need to be adjusted.

05. June 2015 · Categories: General

Spending a lot of time in front of a computer, I’d really love a larger monitor than my current 27″. The options for this are not very good. There is the Philips BDM4065UC, which is essentially a 40″ 4K TV panel with a DisplayPort input, but it is far from good as a monitor. For one thing, the screen is too high and not wide enough, as I find it easier to have multiple full-height windows side by side. And the display is not optimized for monitor use: it does not have square pixels, and colors are not reasonably accurate out of the box. 

What I’d love to see instead is the technology used in today’s 27″ monitors applied to create a large monitor. It should fit within the 8 million pixels of 4K, so a good size would be 4200 * 1800 pixels at 120 dpi. That covers 35″ by 15″ (89cm by 38cm), with a diagonal of 38″. 120 dpi is about the highest resolution we can have without 100% scaling becoming unreadable, and the width allows three or four windows side by side. 

If we want to stay with 110 dpi, a 24:9 format would be much more suitable, for example 4640 by 1740, that is 42″ by 16″ (107cm by 40cm). 
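The sizing arithmetic for both candidates can be sketched as:

```python
from math import hypot

def panel(width_px, height_px, dpi):
    # Physical dimensions and pixel count for a given resolution and density.
    w_in = width_px / dpi
    h_in = height_px / dpi
    diag_in = hypot(w_in, h_in)
    megapixels = width_px * height_px / 1e6
    return w_in, h_in, diag_in, megapixels

for w, h, dpi in [(4200, 1800, 120), (4640, 1740, 110)]:
    w_in, h_in, d_in, mp = panel(w, h, dpi)
    print(f'{w} x {h} @ {dpi} dpi: {w_in:.0f}" x {h_in:.0f}", {d_in:.0f}" diagonal, {mp:.1f} MP')
```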

15. February 2015 · Categories: General

One of the ways to make cars more efficient is to keep the combustion engine running at optimal efficiency, and use an energy storage system to provide the actual, variable power needed for driving. This works because engine efficiency ranges from 10% to 35% depending on the actual load. (The theoretical maximum is around 50%, leaving not much room for further efficiency gains from the motor itself.)

When we use an electrical drivetrain for this, we have a couple of losses:

Efficiency Caused by
92% Mechanical to electrical
60%–95% Battery storage
92% Electrical to mechanical
50%–80% Total
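The total row is just the product of the stages; as a quick check:

```python
# Multiplying the per-stage efficiencies gives the total range in the table.
mech_to_elec = 0.92
battery_worst, battery_best = 0.60, 0.95
elec_to_mech = 0.92

total_worst = mech_to_elec * battery_worst * elec_to_mech  # ~0.51
total_best = mech_to_elec * battery_best * elec_to_mech    # ~0.80
print(f"{total_worst:.0%} to {total_best:.0%}")
```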

The biggest problem is the battery1, which gets significantly less efficient as we increase the current drawn from it.

We have a strong design conflict here: the kinetic energy of 2 tons moving at 226 km/h is about 1 kWh, so a small battery is enough to smooth the load, but to efficiently retrieve the 25kW we want for smooth acceleration, we need either a large battery or expensive capacitors. Such a battery would need to support a lot of cycles: 1kWh is roughly 15 accelerations to city speed, so a thousand battery cycles per year seems a reasonable lower bound.
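A quick sketch of the kinetic energy figures (the city speed here is an assumed 50 km/h, which lands in the same ballpark as the accelerations-per-kWh number quoted above):

```python
# Kinetic energy E = 1/2 * m * v^2, converted from joules to kWh.
mass_kg = 2000
v_top = 226 / 3.6             # 226 km/h in m/s
ke_top_kwh = 0.5 * mass_kg * v_top**2 / 3.6e6
print(f"{ke_top_kwh:.2f} kWh at 226 km/h")   # ~1 kWh

v_city = 50 / 3.6             # assumed city speed in m/s
ke_city_kwh = 0.5 * mass_kg * v_city**2 / 3.6e6
print(f"{1 / ke_city_kwh:.0f} accelerations to city speed per kWh")
```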

In essence, we have some expensive tech to improve city mileage into the 60 mpg (4l/100km) range, but otherwise we have come so close to the ideal that another 50% in efficiency gains can only be had by changing the car to have radically less weight and drag.


  1. some background on batteries can be found here 
18. February 2013 · Categories: General

In the saga about the NYT Tesla drive, one of the problems was the advice given to maximize fuel efficiency: do not engage cruise control, but travel with short bursts of acceleration followed by unpowered coasting.

This advice is actually correct for petrol-powered cars. It is based on the fact that petrol engines reach peak efficiency at roughly 90% torque and are quite a bit less efficient under partial load (especially compared to diesels). This means that the improved efficiency under acceleration more than compensates for the increased air friction caused by driving at uneven speeds. It is actually optimal to have the gears in neutral during coasting, and you need to keep the bursts very short, ideally keeping your speed within a 5 to 10 km/h band. It is quite obvious that this is a very stressful way to travel, and modern fuel injection techniques have narrowed the efficiency gap.

But for an electric car this is counterproductive, since electric motors have a much more even efficiency, and the uneven load is poison for the battery: the higher the current you draw from a battery, the lower the total amount of energy you get. So it is best to draw current as evenly as possible from the battery, and avoid strong acceleration.

06. September 2012 · Categories: General

Nokia has recently presented two new phones in its Lumia series, and one wonders if it still has a chance to fight back against the rising tide of Android and iOS. The software is now finally competitive with the other two; it covers the basics well, but offers nothing outstanding. So they need something else to get people to switch, and I believe they have it in their PureView camera technology. Assuming Apple does not catch up next week, it is miles better than the competition for indoor photos. Since photos and videos are now one of the main uses of a phone, and low-light performance is currently clearly not good enough, it is probably the best chance they have of wooing people into their camp.

Update: It did not help that they did not use the phone’s camera for the image stabilization footage in their spots, since it sows doubts about the validity of the quite amazing actual photos they posted. And while they never claimed the demonstration footage was shot on the smartphone, it was clearly implied. Shame on them.