I recently came across a paper on the energy costs of video streaming, written in German by the Borderstep Institute. It repeats the myth of insanely high streaming energy costs, which seems to have started with a fatally flawed analysis by the Shift Project and has since been reasonably well refuted by the IEA.

The Borderstep paper takes the average data transfer volume and energy consumption of data centers, as well as that of the broadband network, calculates average efficiencies of 30 kbit/Ws for broadband and 70 kbit/Ws for the data centers, and uses that to extrapolate a power consumption of about 1000W for streaming a 25 Mbit/s 4K stream. This result is wrong because it neglects the actual engineering:

  • Streaming uses very little processing power on the server side

    All popular videos are stored on the servers already fully encoded, in multiple bit rates, to improve responsiveness. Even a typical home NAS, drawing 40W, can serve 40 parallel 4K streams over a 1 Gbit/s connection. In the data centers, we see units drawing 1000W while serving 20 Gbit/s. Even with a generous allowance for overhead and reserve capacity I would be surprised if a 4K stream draws more than 5W, for an efficiency of 5 Mbit/Ws. Given that a typical HDD delivers 50 to 100 Mbit/Ws, this feels reasonable.

  • Streaming is mostly via CDNs, and the backbone is now fiber

    With a Content Delivery Network, the data only has to travel a few hops and at most a few hundred kilometers, and this can be done efficiently over fiber. DWDM-type connections manage 160km at 1 Gbit/s with the transceivers drawing 4W per side, which is 125 Mbit/Ws. Assuming that the switch draws the same power, that we need at most 5 hops, and adding a 5x safety margin, we end up at 2.5 Mbit/Ws. For a 25 Mbit/s stream this is 10W for the backbone, likely even lower; the arithmetic behind these efficiency estimates is spelled out in the sketch after this list.

  • The last mile for broadband has basically a fixed power consumption

    While in the backbone and data centers you aggregate demand, and so can ensure a pretty high load and efficiency, the last mile is essentially always on. So the energy consumption stays the same, no matter whether you use it sparingly for surfing, or intensely for streaming. You draw 10W if you have fiber, 20W if you have DSL. Your WiFi router at home will also draw a fixed 5 to 10 Watts.
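
To make these estimates easier to check, here is a small back-of-the-envelope sketch in Python. All inputs are the rough figures quoted in the bullets above, not measurements, and the 5 Mbit/Ws server budget is the deliberately pessimistic allowance from the first bullet.

```python
# Back-of-the-envelope check of the efficiency estimates above.
# All inputs are the rough figures from the text, not measurements.
STREAM_MBITS = 25  # assumed bit rate of a 4K stream, Mbit/s

# Server side: a 40W home NAS filling 1 Gbit/s, a 1000W unit serving 20 Gbit/s.
nas_eff    = 1_000 / 40        # 25 Mbit/Ws
server_eff = 20_000 / 1_000    # 20 Mbit/Ws
# With a generous allowance for overhead and reserve capacity, budget 5 Mbit/Ws:
server_w   = STREAM_MBITS / 5  # 5W per 4K stream

# Backbone: DWDM transceivers at 4W per side move 1 Gbit/s for 8W (125 Mbit/Ws).
# Assume a switch drawing the same power per hop, at most 5 hops, 5x safety margin.
per_hop_w    = (2 * 4) + (2 * 4)             # transceiver pair plus switch, W
backbone_eff = 1_000 / (5 * per_hop_w * 5)   # 2.5 Mbit/Ws
backbone_w   = STREAM_MBITS / backbone_eff   # 10W per 4K stream

print(f"NAS {nas_eff:.0f}, server {server_eff:.0f}, backbone {backbone_eff:.1f} Mbit/Ws")
print(f"per 4K stream: server ~{server_w:.0f}W, backbone ~{backbone_w:.0f}W")
```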

A 4K stream with fiber to the home and an efficient router will draw up to 5W + 10W + 10W + 5W = 30W (server + backbone + last mile + router); a low bit rate stream over DSL with an old router will be 0.5W + 1W + 20W + 10W = 31.5W. This is similar to the 18W for data transmission the IEA assumes, a figure that does not take your home WiFi into account. It also shows that with current technology 4K is roughly the tipping point where the backbone starts to consume more power than the last mile, and where we need to start monitoring our consumption. Still, streaming 4K around the clock amounts to roughly 300 GB per day.
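
The same arithmetic as a short sketch, with the Borderstep-style extrapolation added for contrast; all figures are the estimates from this post, so the output is an illustration rather than a measurement.

```python
# The power budget from the paragraph above, next to a Borderstep-style
# extrapolation from average efficiencies.
STREAM_MBITS = 25  # 4K stream bit rate, Mbit/s

# server + backbone + last mile + home router, all in W
fiber_4k = 5   + 10 + 10 + 5    # fiber to the home, efficient router
dsl_low  = 0.5 + 1  + 20 + 10   # low bit rate stream, DSL, old router
print(f"4K over fiber: {fiber_4k}W, low bit rate over DSL: {dsl_low}W")

# Extrapolating from average efficiencies of 30 kbit/Ws (broadband) and
# 70 kbit/Ws (data centers) instead yields on the order of 1000W:
extrapolated = STREAM_MBITS * 1_000 / 30 + STREAM_MBITS * 1_000 / 70
print(f"extrapolated from averages: ~{extrapolated:.0f}W")

# Data volume of streaming 4K around the clock:
gb_per_day = STREAM_MBITS / 8 * 86_400 / 1_000   # ~270 GB, roughly the 300 GB/day above
print(f"continuous 4K: ~{gb_per_day:.0f} GB per day")
```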

If you are worried about the climate impact of streaming, look at the TVs instead. A 65″ HD display draws 200W, six times the power needed to move the data.

15. November 2023 · Categories: Hardware

I recently decided to replace my old Synology NAS, a DS416, with a more powerful version, a DS423+. This turned out to take much longer than expected, mainly because I wanted to take the opportunity to also switch to btrfs as a more robust filesystem, and so needed to migrate my data as well.

For the migration, Synology recommends using their backup solution, Hyper Backup, but their steps are not the most efficient ones if you can live with a service interruption of a few days. Their suggestion is to run a full backup of the old system, restore it to the new system, stop using the old system, transfer the remaining changes with an incremental backup, switch over to the new system, and start service again. That works fine with powerful hardware and enough space for all the backups, and it makes sure that all permissions are copied correctly.

This description leads you astray if your old NAS is slow, you lack space, or you can tolerate a longer interruption. With a slow NAS, encryption can become a bottleneck: in my case, a backup of an encrypted folder to an encrypted target over an encrypted connection ran at less than 15 MB/s. Instead, do the following:

  • Move the disks directly to your new NAS. Use Synology Assistant on your computer to verify that you can install DSM without losing your data.

  • Prepare your old NAS for backup duty: put in empty disks, create a redundant storage pool on them to receive your data, add a volume with a shared folder, and install Hyper Backup Vault. If you needed to grab some SMR disks, trim them with fstrim via SSH first.

  • Back up the data to the old NAS, including all package info. I used Hyper Backup for this, with client-side encryption for the backup, so that there is only one encryption step and it is done by the powerful new NAS. Still, I only got around 45 MB/s backup and 40 MB/s restore speeds with my 5400rpm HDDs.

  • Delete the volumes on the new NAS, and create btrfs volumes instead. Make a note of the packages that get uninstalled in the process, so you can put them back later. Recreate the shared folders before you run the restore, so that you have control over which folders should have integrity checks enabled.

  • Restore the backup to the new NAS.

Since Hyper Backup is quite slow as noted, you might want to use direct copies instead. This can cause problems with permissions, and means a slightly higher risk of data loss. On my old NAS, where I wanted to keep encryption, it was even slower, at 30 MB/s, because encrypting twice (in transit and at rest) slowed it down.
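
To get a feel for why this takes days, here is a small sketch that turns the speeds mentioned above into wall-clock time; the data volume is a placeholder, so put in your own share sizes.

```python
# What the transfer speeds above mean in wall-clock time. The data volume is
# a placeholder; put in the size of your own shared folders.
volume_tb = 8  # hypothetical amount of data to migrate

scenarios = [
    ("encrypted backup on the slow old NAS", 15),   # MB/s
    ("direct copy, encrypted twice",         30),
    ("Hyper Backup restore",                 40),
    ("Hyper Backup, client-side encryption", 45),
]
for label, mb_per_s in scenarios:
    hours = volume_tb * 1_000_000 / mb_per_s / 3600
    print(f"{label:40s} {hours:6.0f} h ({hours / 24:.1f} days) for {volume_tb} TB")
```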

08. November 2023 · Categories: Hardware

Shingled magnetic recording is a hard disk recording method where multiple tracks are written in a slightly overlapping manner¹. Because the write head is larger than the read head, mainly because it needs to generate a strong enough magnetic field to affect the disk, you can increase density by letting each new track partially overwrite the previous one. Looking at the half-track positions across the platter, with a 50% overlap and four tracks A to D (D2 is the extra half track the last write spills into), you get:

Action     A   B   C   D   D2
Write A    A   A   ?   ?   ?
Write B    A   B   B   ?   ?
Write C    A   B   C   C   ?
Write D    A   B   C   D   D

You see that you can increase density nicely, but in order to update any track but D you would also need to rewrite the tracks written after it to prevent data loss. Assuming that you group your tracks into bands of 8, you would need roughly 10 half tracks in height (one half track is needed for synchronization info), instead of 16 with conventional recording, but worst case your write speed is reduced by a factor of 8 when updating the first track of a band.
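
As a quick sanity check of that geometry, here is a tiny sketch assuming the 50% overlap and bands of 8 tracks used above; real drives use much larger bands and different overlap ratios.

```python
# Sanity check of the band geometry above, assuming a 50% track overlap and
# bands of 8 tracks (real drives use larger bands and other overlap ratios).
def half_tracks_shingled(tracks_per_band: int) -> int:
    # first track occupies 2 half tracks, each further track adds 1,
    # plus 1 half track of synchronization info between bands
    return 2 + (tracks_per_band - 1) + 1

def half_tracks_conventional(tracks: int) -> int:
    return 2 * tracks  # every track keeps its full width

band = 8
print(half_tracks_shingled(band), "vs", half_tracks_conventional(band))  # 10 vs 16
print("density gain:", half_tracks_conventional(band) / half_tracks_shingled(band))  # 1.6
print("worst-case rewrite when updating the first track:", band, "tracks")
```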

This is awful performance for random writes, but can be fast enough if you only store large files. The OS can then tell the drive which sectors have been deallocated, so the disk no longer needs to restore those tracks; this is done with the TRIM command. In addition, you want a small percentage of the disk set aside for conventional recording, so that you can update your file metadata quickly. This seems to be rarely implemented.
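
Here is a toy model of why TRIM matters, under the simplifying assumption that the firmware tracks live data per track within a band; only later tracks that still hold live data need to be restored when an earlier track is rewritten.

```python
# Toy model of the effect of TRIM on an SMR band: when track `updated` is
# rewritten, only the later tracks in the band that still hold live data have
# to be read back and restored. The per-track bookkeeping is a simplification.
def tracks_to_rewrite(band_size: int, updated: int, trimmed: set) -> list:
    return [t for t in range(updated + 1, band_size) if t not in trimmed]

# Without TRIM information the firmware has to restore all 7 later tracks:
print(tracks_to_rewrite(8, updated=0, trimmed=set()))               # [1, 2, ..., 7]
# After the OS has trimmed most of the band (say, a large file was deleted),
# almost nothing needs to be rewritten:
print(tracks_to_rewrite(8, updated=0, trimmed={2, 3, 4, 5, 6, 7}))  # [1]
```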

Under Linux, you can use hdparm -I /dev/sda to get detailed HDD info. fstrim can be used to trim unused space on a partition, if the filesystem does not do this automatically. This is especially useful for a freshly reformatted disk to ensure the disk firmware treats the data as unused.


  1. There is a more technical overview at zonedstorage.io