From MegaPixels to MegaRays

Illustration from Lytro founder Ren Ng's dissertation on consumer light field cameras showing how a microlens array can redirect light before hitting the sensor. Credit: Lytro

Two years ago I wrote a post on Lytro, a digital camera based on a new approach to light capture that allows the recording of several focus planes in the same file.

The first camera Lytro produced was interesting for this new approach, but its image quality was considerably inferior to what even a low-cost consumer camera can capture.

Now Lytro has released Illum, a vastly improved version of its camera, still based on the same paradigm: capturing ray information rather than just the chrominance and luminance of the light ending up in a pixel on the sensor. The street price is $1,599, not exactly cheap but on a par with advanced amateur digital cameras.

The new camera can record 40 million rays, along with spatial luminance and chrominance information on 5 million pixels (which would be the resolution of the camera if it were a normal digital camera).

As shown in the figure, the lens focuses onto the sensor the rays coming from the optical focus plane (as in a normal camera). In addition, however, the camera is also able to intercept rays coming from other, virtual, focal planes. These two information sets are captured by different photo sites on the sensor (which, as said, has a total of 40M photo sites, of which 5M are dedicated to spatial chrominance and luminance).
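The split between spatial and angular samples can be illustrated with a back-of-the-envelope calculation. The exact microlens layout of the Illum is not described here, so the figures below are only indicative, derived from the numbers quoted above:

```python
# Indicative sketch of the spatial/angular trade-off in a plenoptic
# sensor: the same pool of photo sites is divided between spatial
# samples (output pixels) and angular samples (ray directions).

TOTAL_PHOTO_SITES = 40_000_000   # the "40 million rays" quoted for the Illum
SPATIAL_PIXELS = 5_000_000       # effective spatial resolution

# Each output pixel sits behind a microlens that spreads the incoming
# light over several photo sites, roughly one per captured ray direction.
rays_per_pixel = TOTAL_PHOTO_SITES // SPATIAL_PIXELS
print(rays_per_pixel)  # 8 angular samples for every spatial sample
```

In other words, the sensor trades an eightfold loss in spatial resolution for the ability to tell ray directions apart, which is exactly the trade-off discussed below.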

This information is relayed to a software rendering program that creates a multi-layered photo, each layer with a different equivalent optical focus plane (present technology supports about 12 virtual focus planes, which is quite a lot in normal photography, though not in macro photography, where the planes needed may run to a hundred or more). One can package all this information into a single photo displayed on a computer screen, letting the viewer interact and change the optical focus plane at will. Alternatively, all virtual focus planes can be merged using focus stacking, as we do with a normal digital camera by shooting several pictures of the same subject and changing the focus plane in each shot.
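The focus-stacking merge mentioned above can be sketched in a few lines. This is not Lytro's rendering software, just a minimal illustration of the principle: for every pixel, pick the frame in which that pixel is sharpest. The sharpness measure (squared gradient magnitude, spread with a small box filter so edges vote for their neighbours) is an assumption of this sketch, and `np.roll` wrapping at the borders is a shortcut a real implementation would avoid:

```python
import numpy as np

def focus_stack(stack):
    """Merge images focused on different planes by picking, per pixel,
    the frame with the highest local sharpness."""
    stack = np.asarray(stack, dtype=float)        # shape (n_planes, H, W)
    gy, gx = np.gradient(stack, axis=(1, 2))      # per-frame gradients
    sharp = gy ** 2 + gx ** 2                     # local contrast measure
    for axis in (1, 2):                           # crude 3-tap box blur so
        sharp = sharp + np.roll(sharp, 1, axis) + np.roll(sharp, -1, axis)
    best = np.argmax(sharp, axis=0)               # sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Toy stack: one frame "in focus" on the left (a bright line at column 2),
# the other on the right (a bright line at column 6).
near = np.zeros((4, 8)); near[:, 2] = 9.0
far = np.zeros((4, 8)); far[:, 6] = 9.0
merged = focus_stack([near, far])   # keeps both lines in the result
```

A phone or camera doing focus stacking must shoot the frames sequentially, while a light field camera captures the equivalent information in a single exposure, which is the practical difference at stake here.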

Now, this latter point may cast a shadow on the future of this technology. If I can get the same depth of field through focus stacking with a normal (cheap) digital camera or with a smartphone (some of them already offer this focus stacking capability), why should I bother with a different technology delivering the same result at a higher price and a lower spatial resolution?

Well, the point is that Lytro provides a trade-off between spatial and angular resolution, and so far we have been very good at exploiting spatial resolution whilst we have not yet started to investigate and exploit angular resolution. It may take a few more years, but I guess we will start to see new software allowing us to do just that, and I bet that the future of ambient awareness and of our interaction with the environment will find a place for this new approach.

Don't forget that our eyes, as sensors, and our brain, as software, make very good use of angular resolution; in fact, they rely more on angular resolution than on spatial resolution to extract the semantics of a scene! The difficulty a computer has in discriminating among objects in an image comes from relying on spatial resolution alone. With angular resolution it is a snap to distinguish among objects. So the game is not over; actually, it is about to start!

Take a look at the clip and get a sense of the new dimension of photography brought about by Lytro. And, as I suggested above, I see many more opportunities coming up in the direction of image and scene semantics.

Author - Roberto Saracco

© 2010-2018 EIT Digital IVZW. All rights reserved.