Why DisplayPort 1.4’s Compression Codec Can’t Be Truly 100% Visually Lossless

March 21, 2016

VESA has claimed that DisplayPort 1.4 will be capable of 8K 60 Hz HDR video by using video transport compression (VESA’s Display Stream Compression, DSC). Without compression, DP 1.4’s limited throughput caps it at 4K 120 Hz SDR or 5K 60 Hz SDR. By compressing the signal, higher resolutions can be achieved because less information needs to be sent over the same link.
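Some back-of-the-envelope math shows why compression is required at 8K. The figures below are my own illustrative assumptions (10 bits per color channel for HDR, and DP 1.4’s HBR3 payload rate of 25.92 Gbit/s after 8b/10b encoding), not numbers from the DP 1.4 specification itself:

```python
# Rough bandwidth estimate for uncompressed 8K 60 Hz HDR video.
# Assumptions: 8K UHD resolution, 10-bit R/G/B (30 bits per pixel).
width, height = 7680, 4320
fps = 60
bits_per_pixel = 30

raw_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"Uncompressed: {raw_gbps:.1f} Gbit/s")        # ~59.7 Gbit/s

# DP 1.4 HBR3 payload capacity after 8b/10b line coding.
dp14_effective_gbps = 25.92
print(f"Compression needed: {raw_gbps / dp14_effective_gbps:.2f}:1")
```

Even ignoring blanking intervals and protocol overhead, the raw stream needs more than twice what the cable can carry, so some form of compression is unavoidable.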


Understanding How Lossy and Lossless Video Compression Work

Note: This is a highly simplified explanation of compression. There is more to compression than the details provided.

In computing, compression is the act of taking data and reducing its size. There are two types of compression. The first is lossy, which reduces quality in order to achieve a smaller size; the second is lossless, which reduces size without compromising the quality of the content. Nearly all video content consumed today through streaming services or downloads uses lossy compression. Sending video encoded with a lossless codec across the internet in real time is simply infeasible with today’s infrastructure.

Compression works by recognizing redundancy in information, and rewriting it in a way that takes up less space:

For example, take this string of information:

aaaaaaaaaaaaaaaaaaaab

This contains 20 a’s, followed by a b. Run-length encoding can rewrite it as:

20a1b
Which takes substantially less storage space. Lossless video compression does something similar, for example recognizing portions of the screen that are exactly the same color across subsequent frames, or regions within a frame that share a single color and can be represented as one solid field. Lossless codecs only make changes that rewrite completely redundant information, so quality is never compromised.
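The a’s-and-b example above can be sketched as a tiny run-length encoder, one of the simplest forms of lossless compression (a toy sketch only; real video codecs are far more sophisticated):

```python
def rle_encode(s: str) -> str:
    """Run-length encode: collapse each run of identical characters into count + char."""
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append(f"{j - i}{s[i]}")
        i = j
    return "".join(out)

def rle_decode(s: str) -> str:
    """Invert rle_encode: expand each count + char pair back into a run."""
    out, count = [], ""
    for ch in s:
        if ch.isdigit():
            count += ch
        else:
            out.append(ch * int(count))
            count = ""
    return "".join(out)

original = "a" * 20 + "b"
encoded = rle_encode(original)
print(encoded)                            # 20a1b -- 5 characters instead of 21
assert rle_decode(encoded) == original    # lossless: decoding restores every character
```

Note that the scheme is only a win when the data actually contains runs; on data with no redundancy, the “compressed” output can be larger than the input.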

In “lossy” compression, a target bitrate is generally set, and each second of video is budgeted that amount of information. Unlike lossless compression, lossy compression looks for semi-redundant, similar information and reduces its accuracy by blending it. For example, if a leaf in an image of a tree spans multiple shades of green across different pixels, the codec will set portions of the leaf to a single shade of green and treat them as redundant information. This is why video viewed on streaming services such as YouTube can seem “blocky” at times. In particularly static scenes where the camera barely moves, the codec recognizes that information between frames is redundant and reuses pixel regions across several frames. This is also why videos of games seem to drop in quality and look very “blocky” when the player whips the camera around: groups of pixels are transitioning very quickly, so there is little redundancy between frames, and the codec must compromise quality to stay within the target bitrate.

The higher the bitrate, the higher the quality. With a higher bitrate, the codec can make fewer compromises; therefore, you get a sharper image.

Without lossy compression, digital video content as we know it wouldn’t be possible. Streaming services such as Netflix could not supply the enormous bandwidth needed to deliver truly lossless video. DVDs would not be able to hold movies. Mobile phones wouldn’t have the storage space for more than a few minutes of video. Lossy compression is a necessary evil for the storage and delivery of video content.


Is DisplayPort 1.4’s Compression Actually Lossless?

The compression is described by VESA as “visually lossless”. This implies that some information is lost, but not information that viewers can easily perceive. However, as the theory above suggests, truly lossless compression cannot always be guaranteed when you are limited in throughput.

Since the images sent to a monitor need to arrive in real time, DP 1.4 cannot use keyframe-based compression to reduce the bandwidth needed to deliver a frame: that would introduce a massive amount of input lag, making it feasible only for video playback. Hypothetically, the monitor could store past frames and the video card could send only the delta from the previous frame, but this would require a lot of RAM on both the GPU and the monitor, which also makes it borderline infeasible. It is therefore reasonable to expect that DP 1.4 compresses each frame independently.
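The hypothetical delta scheme could look like the following sketch. To be clear, this is my own illustration of the idea, not anything DP 1.4 actually specifies, and it shows why the monitor would need to hold a full previous frame in memory:

```python
# Two simplified 8-pixel "frames" as lists of RGB tuples.
# Assumed toy data: the left half stays green, the right half changes.
prev = [(10, 200, 10)] * 4 + [(0, 0, 0)] * 4
curr = [(10, 200, 10)] * 4 + [(255, 255, 255)] * 4

# The "GPU" sends only the pixels that differ from the previous frame.
delta = [(i, px) for i, (px, old) in enumerate(zip(curr, prev)) if px != old]
print(len(delta))        # 4 -- only half the pixels need to be resent

# The "monitor" reconstructs the current frame from its stored copy of prev.
recon = list(prev)
for i, px in delta:
    recon[i] = px
assert recon == curr     # lossless reconstruction, at the cost of a stored frame
```

The bandwidth savings depend entirely on how much changes between frames; in the worst case (every pixel changes) the delta is as large as the frame itself, plus the index overhead.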

So is Displayport 1.4 actually lossless? It depends heavily on how redundant the video content is. For example, in a video in which each pixel is substantially different from each adjacent pixel, and the image is completely different each frame, lossless compression would have very little, if any, effect on its file size. Of course, a video such as that isn’t a “real world” example.

So here’s a real world example. A 2D animated cartoon generally has a lot of redundant color. Whole areas of the graphics used in its creation are often exactly the same color, which can be compressed with little to no loss. A video of the real world, on the other hand, generally has slightly varying colors at every pixel. What really matters is how strongly the colors in each frame contrast with one another. The less “semi-redundant” the colors in an image are, the more noticeable the compression will be.

The user is unlikely to notice if a small group of pixels with RGB values ranging from [10,200,10] to [10,203,10] is rewritten as [10,201,10] to reduce size, as this only slightly affects the shade of green within the area. It is still visually lossy, but the change is so difficult to perceive that most users wouldn’t notice the difference.
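The rewrite described above can be sketched as a simple per-channel average over the patch (an illustrative sketch only; a real codec’s quantization is far more involved):

```python
# Hypothetical patch of "nearly the same green" pixels, as in the example above.
patch = [(10, 200, 10), (10, 201, 10), (10, 202, 10), (10, 203, 10)]

# Replace the whole patch with one integer per-channel average -- a lossy rewrite.
avg = tuple(sum(c) // len(patch) for c in zip(*patch))
print(avg)       # (10, 201, 10)

# Worst-case per-channel error across the patch is tiny, out of a 0-255 range.
max_err = max(abs(a - b) for px in patch for a, b in zip(px, avg))
print(max_err)   # 2
```

A maximum error of 2 out of 255 on one channel is well below what most viewers can distinguish, which is the whole premise behind “visually lossless”.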


Considering the amount of bandwidth DisplayPort 1.4 allows for, the compression will probably be hardly noticeable in most cases. Compression in low-bitrate streams can be unbearable, but DisplayPort 1.4 can deliver far more information per second than the usual bitrate of most video content. We’ll have to wait and see to know for sure, but it’s likely that the compression used by DisplayPort 1.4 will be barely noticeable, though this will ultimately vary with the type of video content being viewed.

This Sounds Familiar

We’ve seen similar debates about compression before, when discussing audio codecs. People will argue relentlessly about whether you can tell the difference between MP3 and FLAC. Some claim that FLAC makes a huge difference because it’s lossless; others claim the difference is unnoticeable. This is a very similar situation.

Could the need for decompression cause input lag?

If the video is compressed before it is sent through the cable, the monitor needs to process the data in real time and convert it back into raw pixel values in order to display the image. This processing could take some time, which raises questions about whether 8K monitors using DisplayPort 1.4 will be feasible for gaming, even once graphics cards become powerful enough to drive them. There’s no way to tell until we begin to see monitors utilizing DisplayPort 1.4 and video transport compression, so we’ll have to wait and see.