How tiny can a digital camera get? Well, there are physical constraints on the lens that simply do not let us squeeze a camera below a certain size. Not that cameras have to be big, as we have come to appreciate from the digital cameras in our smartphones (which occupy just a fraction of the phone's real estate...).
But if you were thinking of embedding cameras in everything that can be connected to the internet, you might find quite a few "things" where a digital camera will not fit (and quite a few more where current digital camera technology would be too expensive).
As is often the case, what seems to be an unbreakable physical barrier can be circumvented by a different approach. And this is exactly what researchers at Rambus have just done.
They presented at MWC last month a lensless digital camera that is just 200µm by 200µm in size. The idea is to replace the lens with a diffraction grating (in practice, a surface with very precise, microscopic grooves) that splits the light in ways that depend on the grating and on the origin of the light rays. From the resulting pattern, a processor can compute the pattern of light reflected by the objects in the scene, that is, the image.
Notice that software (processing) is involved in reconstructing the image in our ordinary digital cameras too! Here you need a different sort of software, but the principle is the same: reconstructing the scene that reflected the light rays, taking into account how they were redirected onto the CMOS sensor, whether through a lens (in the case of our cameras) or through the grating.
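Rambus has not published its reconstruction algorithm, so the following is only an illustrative sketch of the general principle: if we know how the grating smears a single point of light across the sensor (its point-spread function), we can undo that smearing computationally, for instance with a regularized inverse filter in the frequency domain. All names, sizes and parameters below are made up for the demo.

```python
import numpy as np

def reconstruct(sensor_reading, psf, reg=1e-4):
    """Estimate the scene from a lensless sensor reading.

    Illustrative assumption: the grating acts as a known,
    shift-invariant blur (psf), so sensor = scene (*) psf
    (circular convolution). A Tikhonov-regularized inverse
    filter in the frequency domain undoes the blur.
    """
    H = np.fft.fft2(psf, s=sensor_reading.shape)
    Y = np.fft.fft2(sensor_reading)
    # The regularization term keeps the division stable at
    # frequencies where the grating response |H| is tiny.
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))

# Toy demo: blur a synthetic "scene" with a known psf, then recover it.
scene = np.zeros((32, 32))
scene[10:14, 10:14] = 1.0                 # a bright square
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0                   # stand-in for the grating response
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
recovered = reconstruct(blurred, psf)
```

The same division-by-the-transfer-function idea underlies many deconvolution methods; a real lensless camera would use a measured grating response and a more sophisticated solver, but the computational flavour is the same.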
Although the resolution (that is, the number of details captured) is nowhere near as good as what is possible with a lens, the diffraction grating can provide raw data that are more sensitive than those of a conventional digital sensor when it comes to detecting movement.
The resolution is, however, good enough for localising objects and identifying their categories, and this is usually what matters for "things" that need to inhabit, and be aware of, their environment. Notice that the software required for processing the data can reside on the sensor itself or in the network, depending on how complex, and how costly, the "thing" can be.
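One reason movement detection needs so little resolution is that you do not even have to reconstruct an image first: any motion in the scene perturbs the raw diffraction pattern, so comparing successive sensor readings is enough. A minimal sketch (the threshold here is arbitrary, not anything Rambus has specified):

```python
import numpy as np

def motion_score(prev_frame, frame):
    """Mean absolute change between two sensor readings.

    Works on raw diffraction patterns as well as ordinary images:
    movement anywhere in the scene changes the reading, so a simple
    difference flags it without reconstructing an image at all.
    """
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def moved(prev_frame, frame, threshold=0.05):
    # The threshold is illustrative; a real device would calibrate
    # it against its own sensor noise floor.
    return motion_score(prev_frame, frame) > threshold
```

A scheme like this could run on the sensor itself, waking up the more expensive reconstruction or network-side classification only when something actually changes.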
We are really moving forward towards the Internet of Things and ambient awareness. This will revolutionise our perception of the world, transforming it into a reactive environment, a place where we can talk to objects as today we talk to people and dogs and ... smartphones!
This revolution is already happening, even though we are not noticing it. I bet we will never notice the change: we will simply find ourselves in a different world, and the difference will be perceived only when we stop for a moment to look back and wonder how we could ever have lived in the past. In the same way, today we don't give it a thought, but how was life even possible without a cell phone? And that was just 20 years ago!
Another tidbit. A few days ago news surfaced that Apple has patented a digital camera system based on two separate sensors, one capturing luminance and the other colour. It will be up to software to combine the two streams of information into an image. It is another example of how digital photography differs from film photography: although we tend to compare the image sensor to film, this comparison is like comparing apples and oranges. They are two completely different things. It is the software that actually creates the image.
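The patent does not disclose how the two streams would be merged, but the general idea is well known from "pan-sharpening" in satellite imaging: keep the fine detail from the high-resolution brightness sensor and paint it with colour from the (possibly lower-resolution) colour sensor. A purely illustrative sketch, with made-up shapes and a nearest-neighbour upsample standing in for whatever Apple would actually do:

```python
import numpy as np

def combine(luma, chroma_lowres):
    """Merge a full-resolution luminance image with a lower-resolution
    colour image, pan-sharpening style (illustrative only; this is not
    Apple's actual method).

    luma:          (H, W) array of brightness detail
    chroma_lowres: (h, w, 3) array of colour at reduced resolution
    """
    H, W = luma.shape
    h, w, _ = chroma_lowres.shape
    # Nearest-neighbour upsample of the colour plane to full resolution.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    chroma = chroma_lowres[rows][:, cols]
    # Rescale each pixel's colour so its brightness matches the
    # luminance sensor, preserving the fine detail from luma.
    brightness = chroma.mean(axis=2, keepdims=True)
    return chroma * (luma[..., None] / np.maximum(brightness, 1e-6))
```

Whatever the real pipeline looks like, the point stands: the final picture only exists after software has fused the two sensors' data, which is something film never did.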