So today we have reached transistor sizes that make possible high-speed computation, huge storage on a chip and low power consumption. However, Moore’s law no longer holds as a whole. The two sides of the Moore’s law coin, greater integration density and lower manufacturing cost per transistor, no longer go together: since 2015, increasing the density of transistors on a chip no longer leads to a lower cost per transistor. Actually, we have seen the cost per transistor increase as we increase their density on a chip. The trend towards lower power consumption, on the other hand, still holds true, and this is good news for the IoT, the Internet of Things, where the issue of power consumption far outweighs the issue of processing power. And the increasing smartness of our cities will leverage the IoT quite a bit.
Looking ahead, beyond Moore’s law we see several paths that will take us to increased processing power and decreasing cost, although no longer tied to better silicon manufacturing processes.
A few paths, possibly the ones closest on the horizon, are through:
- novel chip architectures, like 3D microchips, of which we already have some examples in memory stacking (up to 48 layers so far),
- the fusion of processing and storage, with memristors supporting neuromorphic computing,
- adiabatic reversible computing, able to slash power consumption, and
- massive distributed processing involving myriads of objects in “the fog”.
The latter is possibly the one that can have the greatest impact in a city environment. Notice that it is going to co-exist with the Cloud: the Cloud basically shifts computation (and storage) from the periphery to a fuzzy center where processing and storage resources can be shared among users, whilst massive distributed processing in the fog leverages the computation and storage capabilities of the objects themselves.
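To make the contrast concrete, here is a minimal Python sketch of the two placement models. All names and figures in it are hypothetical illustrations, not an actual fog protocol: in the cloud model the whole task is shipped to a shared central pool, while in the fog model it is split across whatever nearby objects volunteer spare cycles, falling back to the cloud when the fog is too thin.

```python
# Hypothetical sketch: cloud offload vs fog distribution of a task.
from dataclasses import dataclass
from typing import List

@dataclass
class Device:
    name: str
    spare_mips: int  # spare processing capacity volunteered by the device

def run_in_cloud(task_mips: int) -> str:
    # Cloud model: the whole task moves to a shared central pool.
    return f"task ({task_mips} MIPS) shipped to the central cloud pool"

def run_in_fog(task_mips: int, nearby: List[Device]) -> str:
    # Fog model: the task is split across nearby objects in proportion
    # to the spare capacity each one volunteers.
    total = sum(d.spare_mips for d in nearby)
    if total < task_mips:
        return run_in_cloud(task_mips)  # fall back when the fog is too thin
    shares = {d.name: task_mips * d.spare_mips // total for d in nearby}
    return f"task split across the fog: {shares}"

devices = [Device("phone-A", 2000), Device("phone-B", 1500), Device("car-C", 4000)]
print(run_in_cloud(5000))
print(run_in_fog(5000, devices))
```

The design choice being illustrated is purely architectural: in the fog the resources already sit at the periphery, so nothing needs to move to a distant center as long as enough objects are within reach.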
My feeling is that we are going to see significant progress in this area early in the next decade, as 5G starts to become widely adopted.
A different approach involves non-Von Neumann computers, like quantum computers, and/or single-atom-layer materials (2D materials) like graphene and molybdenum disulfide. These are far away in time and are unlikely to affect the path towards smarter cities, although the industrial availability of 2D materials is likely to enable new types of sensors, at much lower cost, that can become part of any object. This ubiquitous presence of sensors, and their intrinsic capability to perform local processing, will be one of the enablers of the awareness infrastructure, and this will have a crucial impact on the path towards smarter cities.
One question is, of course, whether we need this increase in processing (and storage) capabilities in the context of Smart Cities, considering that we now have access to almost unlimited processing (and storage) capacity in the Cloud(s) and most citizens carry in the palm of their hands more processing capability than a Cray 1 supercomputer.
As a matter of fact, what we are missing today is not peak processing/storage capacity but the usability of that capacity. The barriers to this usability are mostly the communication barrier and the energy barrier.
In both cases the “beyond Moore’s” evolution can help. First, the evolution towards 5G and networking at the edges promises the possibility of harvesting both computational and storage power from smartphones, something that future smart cities should leverage. Second, lower power requirements (and better recharging possibilities) should enable the use of spare processing and storage capacity in smartphones, creating a “Cloud in the Fog”, that is, harvesting the capabilities of citizens’ smartphones to serve the citizens themselves as a community.

Assume a city of 100,000 citizens, each with a smartphone, each smartphone having 1TB of storage capacity and a processing power of 20,000 MIPS (this is a conservative assumption: an iPhone 5s already has 20,100 MIPS of processing power, so in the next decade we can safely assume an average smartphone in that range). We can expect these smartphones to be in use some 10% of the time (again a conservative assumption, meaning 2h and 24’ per day) and to have a spare storage capacity of 100GB (10% of the storage capacity going unused). This leaves a “shareable” capacity of 10 petabytes and a processing capacity of 2 billion MIPS, that is, roughly speaking, the capacity of a supercomputer or, again roughly speaking, the processing power of a human brain.

“Roughly” means the figures are intended to give an idea of scale. Massive distributed computing, such as the one that could, and will, be implemented in the “fog”, is quite a different beast from the processing going on in a supercomputer (even though a supercomputer is itself a cluster of millions of processors), and likewise from the processing power of a human brain (even though here too we see a massively distributed processing structure).
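Since these are back-of-the-envelope figures, a few lines of Python make them easy to check; the constants below simply restate the assumptions from the text.

```python
# Back-of-the-envelope check of the fog-capacity figures above.

PHONES = 100_000           # citizens, each with a smartphone
SPARE_STORAGE_GB = 100     # 10% of each phone's 1TB left unused
MIPS_PER_PHONE = 20_000    # conservative per-phone processing power

# Aggregate spare storage across the city (GB -> PB).
total_storage_pb = PHONES * SPARE_STORAGE_GB / 1_000_000

# Aggregate processing power; phones are in use only ~10% of the
# time (about 2h 24' per day), so most of this capacity sits idle.
total_mips = PHONES * MIPS_PER_PHONE

print(f"shareable storage: {total_storage_pb:.0f} PB")  # -> 10 PB
print(f"aggregate power:   {total_mips:,} MIPS")        # -> 2,000,000,000 MIPS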
The point I am making is that a smart city, by leveraging the distributed power of hundreds of thousands of devices, like smartphones, cars, tablets…, can harvest a processing and storage capacity that is on a par with the one provided by a supercomputer, and it can do that basically for free, as long as it provides a connectivity infrastructure like the one envisaged for 5G (and beyond).
The other point I am making is that this tremendous processing power, which will keep growing year after year, can provide some form of “intelligence”, similar to what processing accomplishes in a human brain. And I will go even further, claiming that this is actually a first layer of emerging intelligence that can become the basis for further layers of intelligence and awareness, as I will detail later on.
Going back to the evolution towards lower power requirements: this should enable the powering of a broad class of IoT devices, mostly sensors, and their use in the city environment at an affordable cost.