28. September 2019 · Categories: General

Since the introduction of the iPhone 4, high resolution screens have become the norm in mobile, and now that the old MacBook Air has been retired, every mobile computer Apple makes has a Retina screen. The transition is now moving on to external displays, with intermediate steps of 5K (iMac, 5120 by 2880) and 6K (Pro Display XDR, 6016 by 3384), but we are not yet there with affordable 8K displays.

As the following table shows, data rates increase quite quickly with larger displays. The figures assume 8 bits per color and 60 Hz unless noted otherwise.

Resolution      Data rate
3840 x 2048     11.3 Gbit/s
5120 x 2880     21.2 Gbit/s
6016 x 3384     29.3 Gbit/s
ditto, 10bpc    36.6 Gbit/s
7680 x 3200     35.4 Gbit/s
7680 x 4096     45.3 Gbit/s
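
These numbers follow directly from pixel count × 3 color channels × bits per channel × refresh rate, ignoring blanking intervals and link overhead. A minimal Python sketch to reproduce the table:

```python
# Raw video data rate: pixels * 3 color channels * bits per channel * refresh rate.
# Blanking intervals and link overhead are ignored, matching the table above.
def data_rate_gbit(width, height, bpc=8, hz=60):
    return width * height * 3 * bpc * hz / 1e9

for w, h, bpc in [(3840, 2048, 8), (5120, 2880, 8), (6016, 3384, 8),
                  (6016, 3384, 10), (7680, 3200, 8), (7680, 4096, 8)]:
    print(f"{w} x {h} @ {bpc} bpc: {data_rate_gbit(w, h, bpc):.1f} Gbit/s")
```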

DisplayPort is the standard technology for connecting monitors. It has supported about 26 Gbit/s of payload since version 1.3, and version 2.0 from June 2019 extends that to up to 77 Gbit/s, with intermediate options of 37 Gbit/s and 50 Gbit/s.

We can see from this that all the new monitor options from Apple make full use of step changes in the available bandwidth, providing the most pixels that can be sent over a given monitor cable. Apple does not like compression, as it could introduce artefacts. The alternative would be DSC (Display Stream Compression), which was developed with video and text display in mind and should only produce small, imperceptible losses.

So my expectation is that any computer that can support the 6K 10bpc resolution will also allow a Retina 38″ display (at 35.4 Gbit/s it needs slightly less bandwidth than 6K at 10bpc), while for 8K displays we will likely see compression as an intermediate step until 50 Gbit/s bandwidth has matured.
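
To make that concrete, here is a quick cross-check of the raw rates from the table against the DisplayPort payload figures quoted above (uncompressed, no DSC); the numbers are the ones from this post, not measured values:

```python
# Which DisplayPort payload option (Gbit/s) each uncompressed display needs, using the
# figures quoted above: 26 (DP 1.3/1.4) and the 37 / 50 / 77 options of DP 2.0.
dp_payloads = [26, 37, 50, 77]

displays = {
    "5K (5120 x 2880)":          21.2,
    "Retina 38″ (7680 x 3200)":  35.4,
    "6K 10bpc (6016 x 3384)":    36.6,
    "8K (7680 x 4096)":          45.3,
}

for name, rate in displays.items():
    needed = next(dp for dp in dp_payloads if dp >= rate)
    print(f"{name}: {rate} Gbit/s -> fits in a {needed} Gbit/s link")
```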

Retina class large displays are used at a working distance at which higher resolution is only marginally appreciated. A 1.5x increase would probably be sufficient, but could lead to issues with scaling artefacts in old software. Many people might see more benefit from switching over to a higher color depth instead, and it could well be that 8K 10bpc will become the new standard, combining two nice improvements into one compelling upgrade story.

The last improvement vector is refresh rate. While 60 Hz is enough for fluid display, there are multiple components in the display chain that can add lag. These include buffering, where we calculate the image using the information available at the start of the frame but only have the entire image ready at the end; the encoding delay for transferring the screen buffer from the computer to the display; and a possible quantisation effect when the refresh cycles of the computer and display are not synchronised. The proper way to reduce this lag is synchronisation between both sides, but that requires full-stack optimisation:

  • Sync the frame starts between computer and monitor

  • Improve your graphics pipeline to be able to do deadline scheduling, and start calculating the next frame just late enough that it becomes available just before you switch over to that output buffer (see the sketch after this list).
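
A minimal sketch of such a deadline-scheduled loop, in Python for readability; render_frame and estimate_render_time are hypothetical stand-ins, and a real implementation would hook into the platform's vsync and GPU timing APIs:

```python
import time

FRAME = 1 / 60  # frame period at 60 Hz, in seconds

def estimate_render_time():
    # Stand-in: a real pipeline would estimate this from profiling recent frames.
    return 0.004

def render_frame():
    # Stand-in for the actual draw calls; samples input as late as possible.
    time.sleep(0.004)

def run(frames=60):
    next_vsync = time.monotonic() + FRAME
    for _ in range(frames):
        # Start just late enough that the frame is ready right before the buffer flip.
        start_at = next_vsync - estimate_render_time() - 0.001  # 1 ms safety margin
        idle = start_at - time.monotonic()
        if idle > 0:
            time.sleep(idle)           # idle here instead of rendering frames to discard
        render_frame()
        remaining = next_vsync - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)      # wait for the (simulated) buffer switch
        next_vsync += FRAME

if __name__ == "__main__":
    run()
```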

Even though deadline scheduling would conserve power, the difficulty of knowing the computational complexity of a scene beforehand leads most games to simply calculate frames as fast as possible and throw the extra frames away. This is also the reason some games use triple buffering to smooth the output: it allows a small overrun on one frame by taking time from the next one without the user noticing.

A two-frame delay is about the optimum to expect, one for the buffer on the computer and one for the frame transfer to the screen, giving us 33 ms at 60 Hz. For optimal reactions you would want to halve that, which means another doubling of the refresh rate and thus of the transfer rate. On the other hand, a delay of 50 ms compares well with human reaction times of around 200 ms, but it is at the margin if you are trying to lock onto moving objects.
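
Expressed as arithmetic, using the two-frame assumption from above (with a hypothetical 120 Hz value added for comparison):

```python
# End-to-end latency of a two-frame pipeline (compute buffer + transfer to the screen).
for hz in (60, 120):
    print(f"{hz} Hz: {2 / hz * 1000:.0f} ms for a two-frame delay")
# 60 Hz -> 33 ms; doubling the refresh (and transfer) rate to 120 Hz halves it to 17 ms.
```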