Exploiting sensors

The variety of sensors keeps growing and most of them can be used in many different ways. In the figure: a variety of sensors. Credit: Waquar.

Just two weeks ago I gave a lecture at the Turin Polytechnic on smart cities and I spent some time discussing sensors. I made the point that in the next decade we might see sensors becoming a global platform that can be used by a variety of applications. 
Today, most sensors perform a single task: harvesting data for the specific purpose foreseen by the designer of the device embedding them.
Take the camera sensor that has been embedded in a cellphone to take photos.
Phone manufacturers developed applications to take photos using the sensor and emphasise its quality in terms of megapixels, speed and so on. All with photos in mind.
Then they opened up access to the sensor so that third parties could create photo apps with added functionality (such as accessing the RAW data from the sensor and letting users manipulate it as they like). What happened is that while many photo apps were (and are) indeed developed, others exploited the sensor in ways that were not foreseen by the designer.
Take as an example the apps developed by banks to authenticate a user by having them hover their phone over the bank webpage shown on their computer screen (something I use every day), or the apps that measure your pulse and breathing rate by pointing your phone camera at your face. Or think of using your cellphone camera for augmented reality applications...
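The pulse-measurement trick rests on photoplethysmography: the skin's colour and brightness oscillate very slightly with each heartbeat, and a camera pointed at the face can pick that oscillation up. As a minimal sketch (not any particular app's method), assuming we have already extracted the average frame brightness from a video feed, the heart rate can be read off the dominant frequency of that signal. The function name and the synthetic signal below are my own illustration:

```python
import numpy as np

def estimate_pulse_bpm(brightness, fps):
    """Estimate heart rate from a series of per-frame brightness values.

    The heartbeat shows up as a small periodic oscillation; we find its
    frequency with an FFT, restricted to plausible heart rates.
    """
    x = brightness - np.mean(brightness)          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # Hz per FFT bin
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)  # 40-180 beats/min
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic stand-in for a camera feed: 30 fps, 10 s, a 72 bpm pulse + noise
rng = np.random.default_rng(0)
fps, seconds, bpm = 30, 10, 72
t = np.arange(fps * seconds) / fps
signal = 100 + 0.5 * np.sin(2 * np.pi * (bpm / 60) * t) + 0.1 * rng.standard_normal(t.size)
print(round(estimate_pulse_bpm(signal, fps)))  # recovers roughly 72
```

Real apps have to cope with motion, lighting changes and compression noise, but the core idea is exactly this repurposing of a photo sensor as a physiological one.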
Think about it and you will find several other examples of apps exploiting the digital sensor in your phone for tasks that were not envisaged when the sensor was designed.
A sensor that I think will be used to support applications in unexpected areas is the microphone.
Signal processing has made amazing progress over the last ten years, and that progress fosters the extraction of meaning from sound. Your cellphone microphone is always on, so that it can hear you when you say "Hey Siri" (or whatever you say to grab its attention, depending on the type of smartphone you have). An app could potentially be designed to analyse the sound being picked up by the mike. It could, as an example, detect the sound of cars; it could be smart enough to recognise the type of car, tell whether the car is approaching or moving away and at what speed, detect several cars in the surroundings, or spot something that looks like a traffic jam...
Clearly this is just an example. The point is that a sensor like a phone mike, of which there are tens, often hundreds, in any given urban area, can provide data and information about what is going on in that area. Notice that by combining the data generated by microphones in different phones we can derive even more information.
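As a toy illustration of what combining several phones buys you: a single microphone hears that something happened, but several microphones at known positions can roughly tell you where. The sketch below (my own hypothetical setup, not a real localisation system, which would use time-difference-of-arrival) weights each phone's position by the loudness it measured, on the assumption that louder means closer:

```python
import numpy as np

def locate_source(positions, levels):
    """Rough sound-source localisation from several phones: take the
    centroid of the phones' positions, weighted by measured loudness."""
    w = np.asarray(levels, dtype=float)
    p = np.asarray(positions, dtype=float)
    return (w[:, None] * p).sum(axis=0) / w.sum()

# Hypothetical phones at four street corners (coordinates in metres),
# with loudness falling off with distance from a source at (30, 40)
phones = [(0, 0), (100, 0), (0, 100), (100, 100)]
source = np.array([30.0, 40.0])
levels = [1.0 / (np.linalg.norm(np.array(p) - source) + 1) for p in phones]
print(locate_source(phones, levels))  # a rough estimate near (30, 40)
```

The estimate is coarse, but it is information no single phone could provide, which is exactly the extra value of treating scattered sensors as one platform.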
The point I am making is that we already have plenty of sensors in our cities, as well as in buildings, homes and offices, that could be leveraged as data-generation points and integrated into applications that can make sense of them.
This is what ABI Research is pointing out in its recent report "The future of sensors in the smart home". They forecast 4.5 billion sensors in smart homes by 2022 that will provide data in an open form, that is, enabling analyses by third-party applications.
We are really moving towards a digital world with data forming the fabric of the future.

Author - Roberto Saracco

© 2010-2018 EIT Digital IVZW. All rights reserved.