Let’s take a look, then, at how we could decrease energy usage while still maintaining the level of services we are used to. One has to consider that over the last 50 years electronics has made amazing progress, not just in performance but also in becoming less energy hungry. So, how could we go beyond what has been achieved so far, improving efficiency not by another 10% but by 1,000%?
We live every day in a world that we experience across six orders of magnitude, from the millimetre to the kilometre. If you try to imagine something smaller or bigger you enter a fuzzy area. 2 m is clearly quite different from 1 m, but 100 km versus 50 km does not feel all that different. Similarly, if I tell you that a hair is 0.1 mm or 0.05 mm thick, the two feel about the same, and yet the ratio is the same as between 2 m and 1 m. It is not just about perception: the laws of Nature do not scale graciously, and this may open up new opportunities.

Electronics manufacturing has undergone an evolution spanning six orders of magnitude of its own, all below the scales we perceive. The first transistors I played with could be seen with the naked eye; they were not much smaller than 1 mm. Today they are in the order of ten nanometres, five orders of magnitude smaller. And the very first transistors, which I was too young to play with, were in the order of 1 cm, hence six orders of magnitude. The real shrinking of the transistor began with the invention of the integrated circuit and has proceeded relentlessly. By the 1980s a transistor was smaller than a human cell, by 2000 it was about the size of a virus, and now it is about the size of a protein.
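The six orders of magnitude can be made concrete with a quick back-of-the-envelope calculation (the sizes below are the round figures from the story above, not precise historical data):

```python
import math

# Illustrative feature sizes (in metres); round numbers, not precise history.
sizes = {
    "very first transistors (~1 cm)": 1e-2,
    "early discrete transistors (~1 mm)": 1e-3,
    "1980s transistor (smaller than a human cell)": 1e-5,
    "2000s transistor (about a virus)": 1e-7,
    "today's transistor (about a protein)": 1e-8,
}

start = sizes["very first transistors (~1 cm)"]
for label, size in sizes.items():
    decades = math.log10(start / size)
    print(f"{label}: {decades:.0f} orders of magnitude below 1 cm")
```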
We are reaching the point where further shrinking has to take into account new rules of the game. New forces have to be confronted, and new, strange phenomena get in the way. It is the world of atoms and subatomic forces. Controlling electrons is no longer possible in the usual sense: you can no longer say “there is a current” or “there is no current”. A “maybe” would be more appropriate, but of course you don’t want to hear your computer answering your question with a “maybe”.
We are confronted with a new six orders of magnitude, from the femto to the nano. And the femto is for sure the realm of “maybes”.
And it is precisely to avoid the “maybes” that we pump energy into our chips.
So an interesting question would be: how much energy do I need per bit? The question is pretty straightforward, but the answer is not, and there is still quite a debate going on among scientists.
An easy answer could be derived by looking at the minimum dissipation needed to interact with a bit (the second law, here we go…), but even this is not easy. Some claim that there is no way to interact with a bit without generating heat; others say this is true only if you are erasing the information, not if you are copying it. More on this in a little while.
Whatever the answer, as Feynman stated, there is plenty of room at the bottom, which means that we could interact with bits while dissipating much less heat.
What we know for sure is that Nature has found ways of manipulating bits (of sorts) with a much lower level of heat dissipation, that is, using much less energy.
An example is the conversion of light into information, a feat our eyes perform constantly. If you want the details, here they are (if you can live without them, just skip the next paragraph, taken from Wikipedia):
Information flow during rhodopsin activation. Upon absorption of a photon of light, the 11-cis-retinylidene chromophore is isomerized to its all-trans-state, driving all subsequent activation steps. Deprotonation of the Schiff base linkage follows photoisomerization, and through small-scale changes within the transmembrane region, the activation signal is propagated to the D(E)RY (Glu134, Arg135, and Tyr136) region, resulting in disruption of the "ionic lock" and uptake of a proton from the cytoplasm (most likely onto Glu181, which protrudes toward the chromophore from one of the β-strands of the plug domain), leading to fully activated meta II rhodopsin. Meta II catalyzes nucleotide exchange upon the G protein α-subunit of transducin heterotrimers, propagating the activation signal inside the cell. Three regions important in activation and other GPCR functions are highlighted within the transmembrane region: the D(E)RY motif, the NPxxYx(5,6)F motif, and the chromophore binding site. The three insets detail the interactions present within these conserved motifs. For ease of interpretation, helices are depicted in the following colors: H-I, red; H-II, orange; H-III, yellow; H-IV, lime green; H-V, dark green; H-VI, teal; H-VII, blue; and H-8, purple.
If we were to use one of our chips to do the job of the retina, our eyes would start boiling after reading a single page.
Another example is the conversion of photons into chemical energy in chloroplasts. Here again we see a usage of energy that is at least 1,000 times smaller than the one in our chips (photovoltaic cells).
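To get a feel for the energy scales involved, here is a rough sketch (round-number constants and an assumed 550 nm green photon, not figures from the text) comparing the energy carried by a single visible photon with the thermodynamic minimum per bit, kBT·ln(2), at room temperature:

```python
import math

# Physical constants (rounded CODATA values)
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
k_B = 1.381e-23  # Boltzmann constant, J/K

wavelength = 550e-9  # a green photon, m (assumed for illustration)
T = 300.0            # room temperature, K

photon_energy = h * c / wavelength       # energy delivered by one photon
landauer_limit = k_B * T * math.log(2)   # minimum dissipation per bit erased
ratio = photon_energy / landauer_limit

print(f"one 550 nm photon:   {photon_energy:.2e} J")
print(f"kB*T*ln(2) at 300 K: {landauer_limit:.2e} J")
print(f"ratio: ~{ratio:.0f}x")
```

A single visible photon thus carries roughly a hundred times the theoretical minimum per bit, which hints at how much headroom a quantum-level process has compared with a chip dissipating millions of electrons’ worth of energy per operation.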
How could this be? Well, in both rhodopsin activation and chloroplast activation the information is exchanged at the quantum level, in the femto region of “maybes”. And in this region information can be manipulated through different processes at very different orders of magnitude of energy usage.
This is the region where we no longer see information associated with charge, but rather with spin (which is what we deal with in magnetism…). Based on recent studies we can say that, using charge as the information medium, the minimum dissipation is:

N · kBT·ln(2)

whereas using spin the minimum dissipation is:

kBT·ln(2)

the latter being known as the Landauer principle. N is the number of charges (electrons) involved in the interaction, at least 10^4. The gain, in terms of energy, in moving from charge-based information to spin-based (or more generally non-charge-based) information is quite clear.
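The two formulas can be put into numbers with a minimal sketch (room temperature assumed; N = 10^4 is the lower bound quoted above):

```python
import math

k_B = 1.381e-23  # Boltzmann constant, J/K
T = 300.0        # room temperature, K (assumed)
N = 10_000       # minimum number of electrons per charge-based bit operation

E_spin = k_B * T * math.log(2)  # Landauer limit: minimum dissipation per bit
E_charge = N * E_spin           # charge-based minimum: N times larger

print(f"spin-based minimum:   {E_spin:.2e} J/bit")
print(f"charge-based minimum: {E_charge:.2e} J/bit")
print(f"saving factor: {E_charge / E_spin:.0f}x")
```

At 300 K the Landauer limit is on the order of 3×10⁻²¹ J per bit, so a spin-based scheme could in principle be four orders of magnitude more frugal than a charge-based one.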
In the figure we see a new chip based on a spin-wave bus on a multiferroic substrate, created by researchers at UCLA, which has achieved a three-order-of-magnitude saving in energy, and a photonic chip mimicking a brain circuit, created by researchers at Ghent University.
This dramatic energy saving can also occur when information is no longer the result of an interaction but rather a “state”. This seems to explain the amazing capability of (relatively) simple neuronal networks such as the one in a fruit fly’s flight-control system. Apparently only about 5,000 neurons (out of the 100,000 in the fruit fly brain) are involved at take-off. These process the images coming from the compound eyes (which are switched at take-off into a particular state needed to create a 3D map of the flying field) and provide the appropriate signals for wing flapping, integrating gyroscope-like information arriving from sensors below the wings. By comparison, an aircraft flight-management system requires tens of thousands of times more energy to do basically the same thing (notice that this is not related to the different sizes of the insect and the plane, since I am comparing the flight-control systems, not the actuators…).
Out of state-based information we also see the emergence of information that basically happens “for free”. It is what we humans call thinking, and what lower species would probably simply call living.
Technology is exploring these new avenues, one example being IBM’s SyNAPSE chip, which aims at mimicking brain circuitry from a functional viewpoint.
As we move into the nano scale and below, we will be using new approaches to deal with information that in turn will make for a leap in energy efficiency.