Engineers at Nvidia have found a way to increase the perceived resolution of an image or a video clip by overlaying two LCD masks on the same screen.
Virtual reality goggles, like the Oculus Rift, have a problem: the LCD displays they use sit so close to the eyes that the wearer can perceive the individual pixels. Normally you won't see the pixels on a screen, but if you get close enough you do, and this breaks the illusion of seeing "reality". When you wear VR goggles your eye is less than 10 cm from the screen, and its pixels become visible.
What the Nvidia engineers did was use a trick: every LCD display has an array of tiny shutters overlaid on it, each of which blocks or opens the visibility of a pixel. They overlaid a second shutter array on top of the first, slightly displaced, so that the combination of the two arrays splits each pixel into four, giving the perception of higher resolution and making the individual pixels invisible.
Masking and splitting alone are not enough to make this work. You also need to recalculate the distribution of brightness and colour within each pixel, to take into account that, from the viewer's viewpoint, it has been split into four.
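As a rough illustration of what that recalculation involves, you can frame it as a factorization problem: find two low-resolution panel images whose stacked, half-pixel-offset combination best reproduces a high-resolution target. The sketch below is my own toy version, not Nvidia's published algorithm; it assumes the two layers combine multiplicatively (each shutter attenuates the light passing through it), uses wrap-around borders to keep the indexing simple, and fits the two layers with a plain alternating least-squares loop. The names `factor_cascade` and `perceived` are made up for this example.

```python
import numpy as np

def upsample(P):
    """Replicate each panel pixel over a 2x2 block of subpixels."""
    return np.kron(P, np.ones((2, 2)))

def block_sum(X):
    """Sum a high-res image over the 2x2 block of each panel pixel."""
    H, W = X.shape[0] // 2, X.shape[1] // 2
    return X.reshape(H, 2, W, 2).sum(axis=(1, 3))

def factor_cascade(T, iters=20):
    """Fit two stacked LCD layers to a high-res target T in [0, 1].

    T has shape (2H, 2W).  Returns panel images A and B (each H x W);
    B is offset by one subpixel (half a panel pixel) in both axes,
    with wrap-around borders for simplicity.  Each layer update is a
    per-pixel least-squares solve, clipped to the valid
    transmittance range [0, 1].
    """
    H, W = T.shape[0] // 2, T.shape[1] // 2
    A = np.full((H, W), np.sqrt(T.mean()))
    B = np.full((H, W), np.sqrt(T.mean()))
    for _ in range(iters):
        # Fix B, solve for the best A in each 2x2 block.
        UB = np.roll(upsample(B), (1, 1), axis=(0, 1))
        A = block_sum(T * UB) / np.maximum(block_sum(UB ** 2), 1e-9)
        A = np.clip(A, 0.0, 1.0)
        # Fix A, solve for the best B (roll back to B's own grid).
        UA = upsample(A)
        num = np.roll(T * UA, (-1, -1), axis=(0, 1))
        den = np.roll(UA ** 2, (-1, -1), axis=(0, 1))
        B = block_sum(num) / np.maximum(block_sum(den), 1e-9)
        B = np.clip(B, 0.0, 1.0)
    return A, B

def perceived(A, B):
    """Multiplicative combination of the two offset layers."""
    return upsample(A) * np.roll(upsample(B), (1, 1), axis=(0, 1))
```

Because the second layer is displaced by half a panel pixel, the four subpixels under any one A pixel each see a different B pixel, so their perceived brightness can differ: that is the splitting-into-four, and the fitting loop is the brightness recalculation.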
The drawback of this solution is that overlaying the two arrays decreases the brightness of the display. However, this is not a problem here, since the screen is just a few cm from the eye and is viewed in a dark enclosure.
In VR goggles a key aspect is the viewing angle of the screen(s). Our eyes scan the environment covering close to 180°. Many VR goggles support only up to 100°, and this creates a sort of visual funnel that diminishes the perception of reality. By having denser pixels one can spread them over a broader space, thus increasing the viewing angle.
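To put rough numbers on that trade-off: the angular size of a pixel is simply the field of view divided by the number of pixels spanning it, and the eye resolves detail down to roughly one arcminute. Doubling the pixel count per axis (four times the pixels overall) either halves the angular pixel size or lets the same pixel pitch cover roughly twice the field of view. The figures below are illustrative, not the specifications of any particular headset.

```python
def arcmin_per_pixel(fov_deg, pixels):
    """Angular size of one pixel in arcminutes, assuming the pixels
    are spread evenly across the field of view (1 degree = 60')."""
    return fov_deg * 60.0 / pixels

# Illustrative numbers: 1280 pixels across a 100-degree field of view.
coarse = arcmin_per_pixel(100, 1280)       # ~4.7' per pixel: clearly visible
fine = arcmin_per_pixel(100, 2 * 1280)     # ~2.3': pixels split in two per axis
wider = arcmin_per_pixel(180, 2 * 1280)    # ~4.2': similar pitch, wider view
```

With the same doubled pixel density you can either shrink each pixel toward the eye's ~1 arcminute limit, or stretch the display to a near-180° field of view at roughly the original pixel pitch, which is the choice the article alludes to.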
Clearly the future will take advantage of new display technologies, like the nano-pixel displays I reported on last week, but those are still some way down the road. The trick proposed by Nvidia is available today.