Since the 1990s, following the deployment of the Intelligent Network in the 1980s, Telecom Operators have looked for ways to boost the flexibility of their resources, both for rapid service creation and deployment and for minimizing CAPEX through resource reuse. As the networks morphed into interconnected computers dominated by software, it just made sense. The TINA initiative, Telecommunications Information Networking Architecture, was one example of a worldwide effort to exploit the softwarization of network equipment.
The times were not sufficiently mature, partly because of technology (most of the network was not “software based”) and partly because the market was still dominated by agreements between Telecom Manufacturers and Telecom Operators.
Besides, at that time Operators (bound by contracts with Telecom Manufacturers) were clinging to their long-standing architecture, defending it from the IP-ization of signaling and transport. Rather than embracing the IP architecture, which would have resulted in a flattening of their networks, they preferred to keep the hierarchical structure that ensured better control and transport quality assurance (IP was, and is, a best-effort approach…). They deployed ATM, Asynchronous Transfer Mode, a protocol (and related architecture) that could ensure Quality of Service (QoS), something that best-effort IP could not.
Nowadays the battle between ATM and IP is no more. IP won, and it won because the reason that would have sustained ATM, guaranteed Quality of Service, was superseded by the tremendous growth in network capacity, which made IP communications, for most applications, “good enough”.
Over the same twenty-year period, 1990-2010, the network became even more softwarized, down to the add-drop multiplexers and the bridges, and it started to be populated by data centers. Mobile networks work thanks to data centers spread all over and interconnected: the HLR (Home Location Register) and the VLR (Visitor Location Register) basically record who “owns” your phone (HLR) and on which network the phone is at any particular time (VLR).
More than that: data centers, service centers and the terminals themselves created an ecosystem where software is in charge. Software can adapt the interconnection to the specificity of a gateway; a terminal does not need to be fabricated according to a Network Provider's specs. It can run any software, including the one that will make it compatible with that network.
This does not just apply to smartphones. It can apply to any device that needs to connect to the network, including vehicles. It requires processing power to run the software and a few MB to spare to host the software applications. Both are generally a non-issue, but in those cases where they are too expensive to be economically viable, the devices (mostly sensors or tiny actuators, where power constraints usually limit the processing power) can hook onto a low-cost (from a performance and energy point of view) interconnection, leaving to a controller the task of interconnecting with the network.
The flatter structure of the network, resulting from the integration of bridges and switches into the routers, lends itself to much more dynamic management.
Say hello to NFV, Network Function Virtualization, and to SDN, Software Defined Networking. The idea is that each piece of network equipment can be stripped down to a minimal subset of functionalities, just enough to interact with its hardware periphery, migrating the more complex management functionalities to the Cloud (NFV). In parallel, the orchestration of network elements can also occur in the Cloud, outside of the network (SDN). As I have indicated in the previous posts in this series, terminals are becoming network equipment; they just fall under a separate (private and basically unregulated) jurisdiction.
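To make this split concrete, here is a toy sketch (all names are mine, invented for illustration; this is not the API of any real SDN framework): the device keeps only a bare match-action table, while a centralized controller holds the global view, computes the path and pushes forwarding rules down.

```python
# Illustrative sketch of the SDN control/data-plane split.
# All class and method names are hypothetical.

class ForwardingElement:
    """Minimal in-network device: just a match-action table."""
    def __init__(self, name):
        self.name = name
        self.table = {}                 # destination -> next hop

    def install_rule(self, dst, next_hop):
        self.table[dst] = next_hop      # rule is pushed from outside

    def forward(self, dst):
        return self.table.get(dst, "drop")


class Controller:
    """Centralized control plane: global view, computes and programs paths."""
    def __init__(self):
        self.devices = {}

    def register(self, device):
        self.devices[device.name] = device

    def program_path(self, path, dst):
        # path is an ordered list of device names leading to dst
        for here, nxt in zip(path, path[1:] + [dst]):
            self.devices[here].install_rule(dst, nxt)


ctrl = Controller()
a, b = ForwardingElement("A"), ForwardingElement("B")
ctrl.register(a)
ctrl.register(b)
ctrl.program_path(["A", "B"], dst="host-1")

print(a.forward("host-1"))   # A sends toward B
print(b.forward("host-1"))   # B delivers to host-1
```

The point of the sketch is the asymmetry: the devices contain no routing logic at all, only the rules the controller decided for them, which is exactly what lets the “intelligence” migrate out of the box.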
A crucial point, of course, is who defines which functionalities to migrate and who can play the orchestration role. In the Internet architecture the “orchestration” is highly distributed, with no central command post. Could this be the case for the SDN orchestrator?
Probably, in the first steps, the orchestrators will be the turf of a few Network Operators or Service Providers, each one with a responsibility domain limited to the network resources it owns.
However, in a second phase, which in a way has already started, such orchestration may become distributed. At the Network Operator level it makes sense to have one's own orchestrator reach out and negotiate the use of network resources belonging to other owners. That would make it possible to ensure end-to-end QoS, a holy grail for Operators ever since they lost control of the end-to-end network (with the advent of the Internet and independent service providers). This is what Software Defined Networking is: the possibility to harvest the resources needed to ensure the best (paid-for) QoS. At the same time, at the edges of the network, third parties will start to offer services, partly embedded in smartphones and other devices, that help applications running on those devices make the most (in terms of performance and cost) of the network as seen from the edges. Clearly this implies the selection of the access gateway (5G, here we come) and the selection of the network resources that are made visible to third parties. The latter may come as a second step, but my bet is that it will come, since in the end it will provide a way for an Operator to better monetize its resources.
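As a purely illustrative sketch (all names and numbers invented), federated orchestration can be pictured like this: each domain orchestrator grants or refuses a capacity reservation on the resources it owns, and an end-to-end request succeeds only if every domain along the path agrees, with partial grants rolled back otherwise.

```python
# Hypothetical sketch of cross-domain negotiation for end-to-end QoS.
# Each orchestrator controls only its own domain's capacity.

class DomainOrchestrator:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity = capacity_mbps

    def offer(self, needed_mbps):
        """Reserve capacity if this domain can carry the flow."""
        if self.capacity >= needed_mbps:
            self.capacity -= needed_mbps
            return True
        return False


def negotiate_e2e(domains, needed_mbps):
    """Reserve capacity in every domain on the path, or roll back."""
    reserved = []
    for d in domains:
        if d.offer(needed_mbps):
            reserved.append(d)
        else:
            for r in reserved:           # release partial grants
                r.capacity += needed_mbps
            return False
    return True


path = [DomainOrchestrator("access", 50),
        DomainOrchestrator("transit", 200),
        DomainOrchestrator("peer-core", 40)]

print(negotiate_e2e(path, 30))   # True: every domain can spare 30 Mb/s
print(negotiate_e2e(path, 30))   # False: "access" has only 20 Mb/s left
```

The rollback step is what makes the scheme federated rather than centralized: no orchestrator ever commits another owner's resources; it only asks, and an end-to-end guarantee exists exactly when every domain has said yes.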
The orchestrator, be it confined to a specific network domain with a reach limited to the network's own devices, or reaching out to several “federated” networks, or external to the network and focused on a specific application, will basically create a “customized” network, something never heard of before.