It is not that difficult to find patterns in a person's behaviour. After a few days you can tell that a person goes to the office 8 to 5, or to school, then goes to buy some groceries, then it's home cooking and television, whilst on Friday night there is no cooking but a pub with friends.
What is trickier is for a computer to spot those patterns just by observing that person's environment.
Researchers at Georgia Tech have demonstrated that this is possible. They developed an application using deep-learning methods that analyses thousands of images, taken at a rate of one every 30 seconds by a cell-phone camera hanging from a person's neck over a period of several weeks.
The application categorises the images into 19 types of activities and organises them into a timeline. From this it builds a profile of that person, which has been shown to predict that person's behaviour with an accuracy of 83%. Not bad, although that 83% is probably the easy part, like predicting that I will be going to work this Monday morning...
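The timeline-plus-profile idea can be illustrated with a toy frequency model. Everything below is made up for illustration: the activity labels, the day/hour slots, and the prediction rule are my own simplifications, not the Georgia Tech system, which classifies raw images with deep networks rather than working from hand-labelled slots.

```python
from collections import Counter, defaultdict

# Hypothetical observations: (day, hour, activity) — invented labels,
# not the paper's 19 activity categories.
timeline = [
    ("Mon", 8, "commuting"), ("Mon", 9, "working"),
    ("Tue", 8, "commuting"), ("Tue", 9, "working"),
    ("Mon", 8, "commuting"), ("Fri", 20, "socialising"),
]

def build_profile(timeline):
    """Count how often each activity occurs in each (day, hour) slot."""
    profile = defaultdict(Counter)
    for day, hour, activity in timeline:
        profile[(day, hour)][activity] += 1
    return profile

def predict(profile, day, hour):
    """Predict the most frequent activity for a slot, or None if unseen."""
    slot = profile.get((day, hour))
    return slot.most_common(1)[0][0] if slot else None

profile = build_profile(timeline)
print(predict(profile, "Mon", 8))   # the habitual Monday-morning activity
```

Even a crude counter like this captures the "easy" regularities (Monday morning means commuting); the hard part, which the researchers tackle with deep learning, is getting reliable activity labels out of raw images in the first place.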
Still, it shows that computers are getting better and better at "understanding" images, something that was very difficult just ten years ago.
The researchers give the person being tracked the possibility to delete images they are not willing to share and to adjust the categorisation.
They claim that this kind of application can be useful in prompting alternatives (like "use a different road this morning to go to the office, since the usual one is congested"). Personally, I am a bit sceptical and I see more privacy issues than benefits, but alas, I am an old guy. Maybe this kind of technology will become bread and butter in ten years' time, to the point of being invisible to our perception, and people will just take for granted having a prompt in the morning on the best route to the office.