We have a number of very accurate 3D scanning products in everyday use, for example by museums to make 3D copies of statues. These scanners share two characteristics: they are slow and they are expensive. That is acceptable for this sort of application, but it rules out uses where price and speed matter.
Kinect, on the other hand, provides a 3D scanning capability that is both quick and cheap, and it has found a place in many homes as an interaction device for a variety of games. The problem with Kinect, though, is that to assess the depth of an object it projects a light pattern, whose reflection is captured and processed by the computer. This works fine when using the Kinect indoors, but it fails outdoors, where the emitted light is overwhelmed by sunlight. High-quality 3D scanners use a laser beam instead, but since the laser has to sweep over each point on the object to assess its depth, the process is slow.
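The depth measurement behind Kinect-style structured-light scanners boils down to triangulation: a projected pattern feature shifts sideways in the camera image by an amount that depends on how far away the surface is. A minimal sketch of that relation, with purely illustrative parameter values (not Kinect's actual calibration):

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth from a structured-light setup: z = f * b / d.

    disparity_px     -- pixel shift of a projected pattern feature between
                        its expected and observed position in the camera image
    focal_length_px  -- camera focal length expressed in pixels
    baseline_m       -- projector-to-camera distance in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: a dot shifted by 20 px, f = 580 px, 7.5 cm baseline.
z = depth_from_disparity(20, 580, 0.075)
print(round(z, 3))  # -> 2.175 (metres)
```

Sunlight breaks this scheme by washing out the projected pattern, so the disparity of each dot can no longer be measured reliably.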
Researchers at Northwestern University have taken a few shortcuts to produce a 3D scanner that is both fast and cheap. It uses mass-market components (which keeps it cheap) and a laser (lasers are also fairly cheap unless you require very specific characteristics). To make the scanning fast, a computer analyses the overall image, works out what has changed, and then directs the laser to scan only those points that have changed. This dramatically cuts the scanning time without hampering the desired outcome.
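The change-driven idea described above can be sketched in a few lines: compare consecutive camera frames, keep only the pixels whose brightness changed beyond a threshold, and point the laser at just those. This is a hedged illustration of the principle, not the Northwestern implementation; all names and the `measure_depth` callback are hypothetical.

```python
import numpy as np

def changed_points(prev_frame, curr_frame, threshold=10):
    """Return (row, col) coordinates whose intensity changed noticeably."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    rows, cols = np.nonzero(diff > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

def rescan(depth_map, points, measure_depth):
    """Direct the (simulated) laser only at the changed points."""
    for r, c in points:
        depth_map[r, c] = measure_depth(r, c)
    return depth_map

# Tiny demo: a 4x4 scene in which a single pixel changes between frames.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[2, 3] = 200                      # something moved here
pts = changed_points(prev, curr)
print(pts)                            # -> [(2, 3)]: 1 of 16 points re-scanned
```

The speed-up comes from the ratio of changed to total pixels: in a mostly static scene the laser revisits only a tiny fraction of the points a full sweep would cover.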
The idea is to use this kind of 3D scanner in applications like autonomous vehicles, where you need a fast 3D representation of the vehicle's surroundings and the ability to detect whether something is moving, and how. Google is particularly interested in this for its driverless cars, as an alternative to costly radar systems.
Other interesting applications are in robotics (including autonomous wheelchairs) and augmented reality.