The Future of Television - Part II - The quest for real life definition

Vision depends on the optical physics of our eyes and on the processing done by our brain. The perceived definition depends on the cones and rods in the retina; the feeling of being part of the image depends on our brain.

Facebook's 360° spherical filming camera. With a price tag of around $60,000 it is not for the average film-maker. It is supported by an open software environment that should stimulate immersive content production in the next decade. Credit: The Verge - Facebook

The quality we perceive in an image depends on several factors. We are the ultimate judges of quality, and our perception of quality is not the same as that of an instrument used to evaluate specific characteristics. Curiously, we have seen the quality of televisions being affected by the thickness (or rather the thinness) of the television set. The market (that is, us) showed that people valued thinner sets over the objective quality of the image. That led to the shift from CRT to LCD (and plasma) technologies. The investment in research driven by that market pull then increased the objective and subjective quality of the "thin" sets to the point that they became better than the CRT-based sets.

The number of pixels in a television, combined with the quality of the image it receives (resolution and compression), is a first factor in the perceived quality.

The pixels have the goal of creating an image that looks good to our eyes. We do not have a technology that can directly create a vast gamut of colours. What we can do is mix three basic colours, blue, green and red, each finely controlled in luminosity, to create the impression of many hues. Hence we need three individual (sub)pixels (blue, green and red) to create a single colour element. In an HD transmission the received signal contains information distributed over 1920 units across and 1080 down, a total of 2,073,600 information units. Each of these units has to be mapped onto the television screen. A one-to-one mapping requires 6,220,800 subpixels, and television sets having this number of subpixels are defined as HD.

For a few years, in the last decade, we had "HD ready" sets with a lower pixel count, supporting 921,600 units (2,764,800 subpixels). That meant the HD signal had to be downscaled, so that roughly 2.25 information units had to be squeezed into a single display unit (3 subpixels) on the television screen. Conversely, if we have a top-of-the-line UHD/4K television today, we have a screen capable of displaying 8,294,400 information units (using 24,883,200 subpixels), and the 2,073,600 information units received through an HD transmission need to be upscaled in such a way that each unit is spread over 4 display units (12 subpixels).
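
As a quick sanity check of the arithmetic above, here is a minimal Python sketch (the format table and function names are purely illustrative) that computes the information units, the subpixels needed for a one-to-one mapping, and the scaling factor when one format is shown on a screen built for another.

```python
# Pixel arithmetic behind HD-ready, HD and UHD/4K screens.
FORMATS = {
    "HD ready": (1280, 720),
    "HD":       (1920, 1080),
    "UHD/4K":   (3840, 2160),
}

def units(width, height):
    """Information units (pixels) in one frame."""
    return width * height

def subpixels(width, height):
    """Subpixels needed for a one-to-one mapping (3 per pixel: R, G, B)."""
    return 3 * units(width, height)

def scaling(source, screen):
    """Source units that each screen unit must represent (>1 means downscaling)."""
    return units(*FORMATS[source]) / units(*FORMATS[screen])

for name, (w, h) in FORMATS.items():
    print(f"{name:8s}: {units(w, h):>10,} units, {subpixels(w, h):>10,} subpixels")

print("HD signal on an HD-ready screen:", scaling("HD", "HD ready"))  # 2.25 units per display unit
print("HD signal on a 4K screen:", 1 / scaling("HD", "UHD/4K"))       # 4 display units per HD unit
```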

These upscaling (and downscaling) operations are quite complex, and each manufacturer has its own way of doing them, resulting in better or worse quality.

We are now starting to receive 4K transmissions (via the Internet, so far), and these can be mapped one-to-one on 4K screens (but need to be downscaled on HD screens).

Our perception of resolution is tied to the angular resolution of our eyes and to the resolution of our retina. The human retina has roughly 100 million rods and 7 million cones (the latter detect colours, the former are better at detecting low-light signals and basically provide a black-and-white image, like the old televisions). The true resolution of the retina has been a matter of debate, with some claiming that it compares to a 574 Mpixel camera. This is not really the case, because it assumes a resolving capacity that is only present in the fovea (where there is a much greater density of cones). In practical terms we can say that the retina, per se, has the capacity to appreciate about 8 million light points, detecting their luminosity and colour. That is, if we are looking at a 4K screen, and that screen is filling our field of vision, we have enough information to fill the detection capacity of the retina. In those conditions, even if we were to increase the density of information on the screen (increasing the number of pixels), we would not be able to tell the difference.
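
As a back-of-the-envelope check on this claim, the short sketch below compares the pixel counts of HD, 4K and 8K frames with the roughly 8 million light points the retina can appreciate (the figures are the ones used in this article, not precise physiological data).

```python
# Compare screen pixel counts with the ~8 million light points the retina can use
# when the screen fills the field of vision (figure taken from the text above).
RETINA_CAPACITY = 8_000_000

screens = {"HD": (1920, 1080), "UHD/4K": (3840, 2160), "8K": (7680, 4320)}

for name, (w, h) in screens.items():
    pixels = w * h
    print(f"{name:7s}: {pixels:>11,} pixels ({pixels / RETINA_CAPACITY:.2f}x the retina's capacity)")

# HD offers about a quarter of what the retina could use, 4K roughly matches it,
# and 8K delivers pixels we cannot tell apart when looking at the screen as a whole.
```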

Interestingly, with a 4K screen our brain perceives it more like a window than like a screen. You might have felt the sensation, when looking at a 4K screen, that the image is beyond the screen, not "on" the screen. When you look at a screen you lean back and enjoy; when you look at a window you move closer to see what is behind it. It is a completely different perceptual sensation.

There is also the perception of quality commanded by our brain. The brain has evolved to sort out the semantics of what our eyes "see", that is, to identify objects. The retina itself (the retina is part of the brain!) starts some of this processing, detecting lines and isolating edges, which is the first step in identifying an object. As contrast increases, the identification of edges becomes easier, even though the image loses detail. Since identification becomes easier, our brain is happier and gives us the sensation of a better, more defined image, whereas an instrument would tell us that the increase in contrast has decreased the definition of the image. This is why new technologies based on per-pixel illumination (as in OLED displays) provide a "better" image by increasing the contrast.
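
To make the edge-and-contrast argument concrete, here is a small, purely illustrative NumPy sketch (the signal and the enhancement factor are invented for the example): boosting the contrast of a soft edge increases the response of a naive edge detector, while clipping crushes the intermediate shades that carry detail.

```python
import numpy as np

def edge_strength(signal):
    """Strength of the strongest edge: maximum difference between neighbouring samples."""
    return np.max(np.abs(np.diff(signal)))

# A soft edge: brightness ramps gently from about 0.4 to 0.6.
x = np.linspace(-1, 1, 21)
soft_edge = 0.5 + 0.1 * np.tanh(4 * x)

# "Contrast enhancement": stretch values around mid grey, then clip to the valid range.
enhanced = np.clip(0.5 + 6 * (soft_edge - 0.5), 0.0, 1.0)

print("edge strength, original:", round(float(edge_strength(soft_edge)), 3))
print("edge strength, enhanced:", round(float(edge_strength(enhanced)), 3))
# The enhanced version produces a much stronger edge response (easier for the brain),
# while the clipping has flattened the shades near black and white (less actual detail).
```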

The other parameter is angular resolution. Our eye has an angular resolution of about 0.02°, which means we can distinguish two objects separated by about 30 cm at a distance of one kilometre. If they were closer together we could not perceive them as two separate objects. This means that the ideal distance to watch a screen is the one that lets the eye pick up the tiniest piece of information (a pixel) without actually "seeing" the image fragmented into pixels. This is why the usual advice was to watch a CRT television at a distance of 5 times its diagonal, whereas now an HD screen can be watched at a distance of 2 times the diagonal, and a 4K screen at a distance of 1 times the diagonal.
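
The rule of thumb can be checked with a bit of trigonometry. The sketch below (the screen size and the 0.02° figure are just the ones used for the example) computes the distance beyond which a single pixel subtends less than the eye's angular resolution; the exact multiples of the diagonal depend on which angular resolution figure one adopts.

```python
import math

ANGULAR_RESOLUTION_DEG = 0.02  # approximate angular resolution of the eye

def pixel_limit_distance(diagonal_m, width_px, height_px):
    """Distance at which one pixel subtends exactly ANGULAR_RESOLUTION_DEG."""
    pixels_on_diagonal = math.hypot(width_px, height_px)
    pixel_pitch = diagonal_m / pixels_on_diagonal  # physical size of one pixel
    return pixel_pitch / math.tan(math.radians(ANGULAR_RESOLUTION_DEG))

diagonal = 50 * 0.0254  # a 50-inch screen, in metres
for name, (w, h) in {"HD": (1920, 1080), "UHD/4K": (3840, 2160)}.items():
    d = pixel_limit_distance(diagonal, w, h)
    print(f"{name:7s}: pixels blend together beyond ~{d:.2f} m (~{d / diagonal:.1f}x the diagonal)")
# Doubling the resolution halves this distance, which is why the advised viewing
# distance keeps shrinking from CRT to HD to 4K.
```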

As you get closer to the screen, it takes up more and more of your field of vision. Once your field of vision is "inside" the screen, something magic happens: you feel you are becoming part of the scene, immersed in what you see. This happens when the screen spans more than 120° of your field of view from where you are watching it. With a 4K screen we are almost there when watching a still image, and we are there when watching a video (because the continuous changes in the image decrease our capability to capture details).
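
A similar bit of geometry shows why immersion needs very large screens: the sketch below (the screen widths are just examples) computes how close you would have to sit for a flat screen to span 120° of your horizontal field of view.

```python
import math

def immersion_distance(width_m, field_of_view_deg=120):
    """Viewing distance at which a flat screen of the given width spans field_of_view_deg."""
    return (width_m / 2) / math.tan(math.radians(field_of_view_deg / 2))

for label, width in {"65-inch 16:9 TV (1.44 m wide)": 1.44, "3 m wall screen": 3.0}.items():
    print(f"{label}: sit within ~{immersion_distance(width):.2f} m for a 120-degree view")
# For a 65-inch set that is roughly 40 cm from the panel, far closer than anyone normally
# sits, which is why true immersion points to wall-sized screens (or head-mounted displays).
```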

Hence we can say that 4K definition is already the best we can appreciate. Any further increase (coming with 8K) will not be appreciated, unless you have a wall-sized screen and you look at different parts of that screen rather than at the screen as a whole. In that situation you are going to experience real immersion, as if you were in a room that has no wall but extends seamlessly into the screen.

This is something for the next decade. It requires bandwidth well beyond 100 Mbps and display technologies, such as NED (Nano Emissive Display), that can provide 10 times more pixel density.
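
For a rough sense of the bandwidth involved, here is a small sketch; the frame rate, bit depth and compression ratio are assumptions made for the example, not figures from the article or from any specific codec.

```python
def compressed_bitrate_mbps(width, height, fps=60, bits_per_subpixel=10,
                            subpixels_per_pixel=3, compression_ratio=500):
    """Raw bitrate divided by an assumed end-to-end compression ratio, in Mbit/s."""
    raw_bps = width * height * subpixels_per_pixel * bits_per_subpixel * fps
    return raw_bps / compression_ratio / 1e6

print(f"8K at 60 fps: ~{compressed_bitrate_mbps(7680, 4320):.0f} Mbps")
print(f"4K at 60 fps: ~{compressed_bitrate_mbps(3840, 2160):.0f} Mbps")
# Even with an aggressive (assumed) 500:1 compression, 8K at 60 frames per second
# lands above the 100 Mbps mark, while 4K stays comfortably below it.
```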

Devices like the Oculus Rift can provide full immersion with today's technologies. There are now a number of devices competing for the game market, and some are also used in professional activities. They address both virtual and augmented reality.

They are not used for television because television is not delivering immersive content. Content production is clearly a crucial component for creating immersive shows.

Interestingly, Facebook has recently released a new camera that provides 360° spherical filming capability (at a target price around $60,000) and that allows the creation of immersive content. Even more interestingly, they have released APIs and opened up the camera architecture, allowing other companies to come up with more affordable solutions.

I can clearly see a trend in the coming years towards the production of immersive content. We have, and will have even more, processing power to manage this kind of content. Production will extend to the mass market, and in the next decade I foresee a significant amount of immersive content. That is much better than 3D content, since it opens up opportunities for more service offerings.

Holography has made progress, but it is like the rainbow: as you approach it, you realise it has moved farther away. There are now several holography-like technologies, including fog screens, but they serve only very specific niches. I do not foresee any change in the near future.

 

Author - Roberto Saracco

© 2010-2018 EIT Digital IVZW. All rights reserved.