So far we have seen, in our gedankenexperiment, the creation of a brand-new communications fabric at the local and wide area level using only devices and no classic network equipment. But we are still far from a global network. How can that be achieved without network equipment?
A lot of our daily interaction is now with data/information (data is factual; information is what matters to me – my definition). Apps are often converters from data to information, and they will be even more so in the future. Sometimes this conversion does not happen on the device but in the Cloud (somewhere else, accessible via the Internet).
How can data be brought to the user's device? Already today several devices have significant storage capacity. By the end of this decade we can reasonably assume that some devices will have at least 1 TB of storage space, and that will become the minimum in the next decade (1 TB on a single SSD chip is already an industrial capability today).
So let’s assume we have smartphones (and cars, media centers) equipped with 1 TB of storage, and let’s reserve 1% of this capacity for the local network. An urban environment with 100,000 devices would make 1,000 TB available as network storage. If we assume that only 10% of this capacity is online at any given instant, that still means 100 TB of network storage. Notice that this storage is extremely resilient, since under the hypotheses just made the information is replicated ten times over.
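The arithmetic behind these figures can be checked in a few lines. All the numbers below are the assumptions stated above, not measurements:

```python
# Back-of-the-envelope check of the storage figures in this gedankenexperiment.
DEVICE_STORAGE_TB = 1.0    # assumed storage per device
SHARE_FOR_NETWORK = 0.01   # 1% of each device reserved for the local network
DEVICES = 100_000          # devices in the urban area
ONLINE_FRACTION = 0.10     # fraction of devices online at any given instant

total_reserved_tb = DEVICES * DEVICE_STORAGE_TB * SHARE_FOR_NETWORK
online_tb = total_reserved_tb * ONLINE_FRACTION
replication_factor = total_reserved_tb / online_tb

print(total_reserved_tb)   # 1000.0 TB reserved across all devices
print(online_tb)           # 100.0 TB usable at any given instant
print(replication_factor)  # 10.0x replication of the stored information
```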
Interestingly, one could multiply this capacity by segmenting storage use by prevalence and including other potential storage containers, like appliances, toys, music players and so on. Devices that today have limited storage can easily increase their capacity once the world shifts to the concept of any object becoming part of a communications fabric. These product categories are very cost sensitive, so even adding a 1€ chip may be an issue, but in the next decade the embedding of storage on microprocessors and the push towards economies of scale are likely to make this kind of storage available by default. So, for our gedankenexperiment, we can take them on board.
That is the sort of storage that can create a web subset in the “fog” (the Cloud at the edges). There have been studies and experiments in the past on capitalizing on massively distributed storage, like the OceanStore project.
In the next decade massively distributed storage at the edges will become commonplace, and artificial intelligence will contribute significantly to its optimization. It is also easy to imagine a storage hub for each local/wide area network acting as a repository and storage orchestrator.
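To make the orchestrator idea concrete, here is a minimal sketch of how such a hub might decide which nearby devices hold the replicas of a data chunk. Consistent hashing is one plausible placement technique (widely used in distributed hash tables of the kind OceanStore built on); the function names and device labels are purely illustrative assumptions, not a real API:

```python
import hashlib

def _ring_position(key: str) -> int:
    # Map any string to a fixed position on a hash ring.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

def place_replicas(chunk_id: str, devices: list[str], replicas: int = 3) -> list[str]:
    """Pick `replicas` devices for a chunk by walking the hash ring clockwise."""
    ring = sorted(devices, key=_ring_position)
    start = _ring_position(chunk_id)
    # First device whose ring position follows the chunk's, wrapping around.
    idx = next((i for i, d in enumerate(ring) if _ring_position(d) >= start), 0)
    return [ring[(idx + k) % len(ring)] for k in range(replicas)]

devices = [f"phone-{n}" for n in range(20)]
print(place_replicas("video:clip-42", devices))
```

The appeal of this placement rule for a fog of devices is that when a device joins or leaves (devices go online and offline constantly in our scenario), only the chunks adjacent to it on the ring need to move.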
Traffic patterns and the recurrent use of the same data in a given geographical location show that over 90% of the data “consumed” per user fits into this kind of storage.
Of course the remaining 10% is as important as the 90%. For that (but also to refill the 90%) we need to access remote locations.
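This 90/10 split is essentially the classic cache-aside pattern, with the fog storage as the cache and the long-distance network as the origin. A minimal sketch, in which `fog` and `origin` are illustrative stand-ins and not a real API:

```python
fog = {}                                # local distributed storage (the ~90%)
origin = {"news/today": b"headlines"}   # remote repository reached over the long-distance network

def fetch(key: str) -> bytes:
    if key in fog:          # the common case: served locally, no long-distance traffic
        return fog[key]
    data = origin[key]      # the ~10% case: fetch from a remote location
    fog[key] = data         # refill the local storage for the next user nearby
    return data

fetch("news/today")          # first access: pulled from the remote origin
print("news/today" in fog)   # True: subsequent accesses are served locally
```

The refill step is why even the remote 10% keeps shrinking for a given location: each remote fetch seeds the local fog for everyone nearby.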
In this view, relying on massively distributed storage, the requirements on the long-distance network change dramatically. Above all, this network should support a sort of data broadcasting, something Akamai is already doing every day.
Hence, building a network starting from the edges, using devices as network nodes, leads to a different architecture for data distribution and to a different architecture for the long-distance network.