Future Telecommunications: Gedankenexperiment - Part 3

A detail of a crossbar switch showing the switching matrix. The network architecture designed to accommodate these switches is the one we basically still have today. An unlikely fit. Credit: Harvard

Let’s now turn to the technology scenario.

Clearly, today’s technology has very little in common with the technology used 140 years ago; of course, network technology changed significantly over those 140 years. However, the basic tenets that governed the creation of the first networks remained the same as the network became automated, digitalized, optical and wireless: scarce resources and capital-intensive investment dictated the network architecture, yesterday as today.

The flexibility later provided by Signaling System No. 7 (SS7), and then by the Intelligent Network, to make better use of network resources still co-existed with the rigidity of the physical architecture and with the rigidity of the organization of the telecom companies. Often, the rigidity of the organization was a greater impediment than the rigidity of the wires themselves.

The tremendous advances in processing power, the growth in storage (rarely capitalized on by operators), the huge capacity of optical links and the pervasiveness of the wireless spectrum did little to change the architecture. Whatever was new had to coexist with what was already there: layers of new technology were overlaid on the old.

The first “switches”, from 140 years ago until about 60 years ago, had a switching time measured in seconds: the time it took an operator to plug a cord into a socket to establish the connection. In 1892 the first automatic exchange for local traffic was inaugurated in La Porte, Indiana, using the invention of Almon Strowger (an undertaker); it slowly replaced human operators, cutting the switching time to a few tenths of a second. Later the crossbar switch (first conceived at Western Electric in 1915) further cut the switching time to a tenth of a second, and by the 1950s to hundredths of a second. With the advent of electronics the switching time dropped to thousandths of a second, and nowadays switches operate in billionths of a second (we now measure the throughput of a switch in millions or billions of frames/packets per second). All in all, switching time has shrunk by a factor of several billion, most of it during the last 50 years.

Storage capacity was initially just one bit per line (connected/not connected); then, with the registers used by the switch, it grew to the equivalent of several tens of thousands of bytes. With electronic stored-program-control switches it jumped first to hundreds of thousands of bytes and then to millions. Now, if you consider data centers as part of the network (as I would consider clouds to be), you are measuring storage in thousands of terabytes. An increase of a hundred billion fold, most of it in the last 30 years.

Communications capacity (a better gauge than speed since, in a way, speed has been constant, being tied to the propagation of the electromagnetic field) has grown from 1 call per line to some 30 calls per line (with PCM), then to thousands and now billions of calls per line. Again, a billion-fold increase, and again most of it in the last 30 years, driven by the convergence of optical and electronic evolution.
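The three growth figures above (switching time, storage, calls per line) can be checked with a back-of-envelope sketch. The start and end values below are illustrative assumptions drawn loosely from the text (a one-second manual connection versus a nanosecond-class switch, tens of kilobytes versus thousands of terabytes, one call versus a billion calls per line), not measurements:

```python
import math

# (then, now) in the same unit; values are rough illustrative assumptions
growth = {
    "switching time (s)": (1.0, 1e-9),     # manual plugboard -> ns-class switch
    "storage (bytes)":    (1e4, 1e3 * 1e12),  # register era -> thousands of TB
    "calls per line":     (1.0, 1e9),      # one call -> ~a billion
}

for metric, (then, now) in growth.items():
    # fold change, expressed as an order of magnitude
    fold = max(then, now) / min(then, now)
    print(f"{metric}: ~10^{round(math.log10(fold))}-fold change")
```

Run as-is, this prints roughly 10^9 for switching time, 10^11 for storage and 10^9 for calls per line, matching the “several billion”, “hundred billion” and “billion” fold figures in the text.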

Yet the philosophy of the “twisted pair” dictated the organization and the interaction with users/clients (their name changed, but little changed in substance). The advent of wireless should have led to a clean-slate approach, but that was not the case: the organization governing the fixed network dictated the management of the wireless network, and the twisted pair was simply replaced by the SIM card. The backbone remained the fixed network, and the wireless part was shaped on the fixed-network model: termination points and SIMs.

Author - Roberto Saracco

© 2010-2020 EIT Digital IVZW. All rights reserved.