The data economy: the whys V - Managing bits is quite challenging to some

The "production" of bits require a broad range of equipment, with a broad range of cost. This segment the kind of companies in the data economy.

Data centres are a big CAPEX and OPEX drain, affordable to just a few companies, which in many instances resell their spare capacity. Photo credit: Google data centre - Google

While it is absolutely true that a bit is a bit, managing bits presents a broad range of requirements that can translate into very high Capital Expenditure (CAPEX) and Operating Expenditure (OPEX).

Basically, managing bits involves processing, storage and transport. In addition, as I already pointed out, you may need to work on transforming atoms into bits and bits into atoms. But let's stick to processing, storage and transport, and look at them from the point of view of their impact on the shaping of the data economy ecosystem.

We do all these operations in our computers, as well as in our cell phones and even in the remote controls of our televisions. So, no big deal... But as the number of bits to be processed, stored and transported grows beyond certain thresholds, new challenges, and costs, come to the fore, and our homely way of dealing with bits no longer works.

Of course the evolution of technology keeps changing the thresholds! In the 1980s the MPEG group, working on a standard way to digitise music, adopted an asymmetric coding/decoding scheme because of processing cost. The idea was that only a few companies would be in the business of digitising music, whilst millions of people (at that time very few could have imagined a market of billions of people listening to digital music) would need to decode the bits back into music. Hence they worked to find a way that would be very cheap in terms of decoding effort, with low processing requirements, to make it affordable to users, moving the complexity, and the processing requirements, into the coding phase. Let the big companies shell out the big bucks required to buy that amount of processing.
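A minimal sketch of that encode/decode asymmetry, using Python's zlib purely as a stand-in (a generic compressor, not an audio codec): the only point it illustrates is that compressing costs far more CPU than decompressing, just as MP3 encoding was designed to be heavy while decoding stayed cheap.

```python
# Analogy only: zlib is not MPEG audio, but it shows the same cost asymmetry,
# heavy work on the encoding side, light work on the decoding side.
import time
import zlib

# Some compressible "signal" standing in for audio samples (made-up data).
data = b"".join(f"sample {i % 97}: a bit is a bit; ".encode() for i in range(200_000))

t0 = time.perf_counter()
encoded = zlib.compress(data, level=9)   # heavy work: done once, by the producer
t1 = time.perf_counter()
decoded = zlib.decompress(encoded)       # light work: done millions of times, by listeners
t2 = time.perf_counter()

assert decoded == data
print(f"encode: {t1 - t0:.3f}s  decode: {t2 - t1:.3f}s  "
      f"encode/decode ratio: {(t1 - t0) / (t2 - t1):.0f}x")
```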

Nowadays every cell phone is in the business of digital coding: when you speak, take a picture or shoot a video clip, your cell phone's processor has to work as hard as the big computers at Sony and the other music producers did thirty years ago, but today that is no longer a problem.

Yet the bar, the threshold, is still there. The amount of data processed, stored and transported is in the order of petabytes (PB), exabytes (EB) and zettabytes (ZB). And while I can store a TB on a nice hard drive costing less than $100, moving into the PB range is a completely different story (for this decade at least). Likewise, processing at GFLOPS is a given with my iMac (as well as my MacBook, although I can feel the difference), but processing at PFLOPS is beyond reach.
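To make the storage threshold concrete, here is a back-of-the-envelope sketch based on the ~$100-per-TB figure above; the drive size and replication factor are assumptions for illustration only, and the result counts disks alone, before any of the OPEX.

```python
# Rough arithmetic on why PB-scale storage is a different business than TB-scale.
TB_PER_DRIVE = 4       # assumed consumer drive size
COST_PER_DRIVE = 100   # USD, roughly the per-TB-class figure quoted in the text
REPLICATION = 3        # assumed number of copies kept for redundancy

def raw_storage_cost(capacity_tb: int) -> tuple[int, int]:
    """Return (number of drives, cost in USD) to hold capacity_tb with replication."""
    drives = -(-capacity_tb * REPLICATION // TB_PER_DRIVE)  # ceiling division
    return drives, drives * COST_PER_DRIVE

for label, tb in [("1 TB", 1), ("1 PB", 1_000), ("1 EB", 1_000_000)]:
    drives, cost = raw_storage_cost(tb)
    print(f"{label:>5}: {drives:>9,} drives, ~${cost:,} in disks alone "
          f"(before power, cooling, servers and staff)")
```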

Transport is a different beast altogether. Transport requires a network, and laying a network was, and is, beyond our reach as individuals (although today we can, and do, create home networks with affordable and easy-to-manage equipment).

Hence, the data economy is layered by the infrastructure cost factor. Whether it is storage infrastructure, high processing capacity or network infrastructure, you are talking about high CAPEX and high OPEX.

At the top tier are the companies that can afford the big expenditures. Storage and processing are basically two sides of the same coin: they are provided through big data centres. Google, as an example, has some 13 of them spread all over the world, Amazon has over 40, and Apple has its share of data centres too.

A variety of companies that need to manage huge data sets piggy-back on these data centre owners, like Airbnb, Flipboard and Netflix using Amazon's. This is proof that you can manage huge data sets for your business by leveraging data management as a service (see the sketch below). Those same data management services are also available to smaller companies; actually, they are available to all of us. I am paying Amazon $11 a year to host all my digital photos (41,000+ as of May 2015).
Interestingly, Facebook started its business by leasing data centre space around the US and has progressively started to build its own data centres, although it still leases some from others. Through the Open Compute Project, launched in 2011, it has socialised its experience of building and operating data centres with the aim of improving data centre design, something it says is already paying off.
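Going back to data management as a service: a minimal sketch of what "piggy-backing" on someone else's data centre looks like in practice, assuming an AWS account with configured credentials and the boto3 SDK; the bucket name and file paths below are placeholders I invented for illustration, not resources mentioned in the article.

```python
# Minimal sketch: storing photos on someone else's infrastructure (Amazon S3).
# The provider worries about disks, replication and data-centre OPEX;
# the user only pays for the storage and requests actually used.
import boto3

s3 = boto3.client("s3")

# Upload one photo to a bucket you own (placeholder names).
s3.upload_file(
    Filename="holiday/IMG_0001.jpg",   # local file (placeholder path)
    Bucket="my-photo-archive",         # an existing bucket you own (placeholder)
    Key="photos/2015/IMG_0001.jpg",    # object name inside the bucket
)

# List what has been stored so far under the photos/ prefix.
response = s3.list_objects_v2(Bucket="my-photo-archive", Prefix="photos/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], "bytes")
```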

In the bit transport business we see similar arrangements, with Mobile Virtual Network Operators (MVNOs) running a transport business without any network of their own.

Notice that these "hard", atom-based infrastructures are the foundation stones of the data economy. Interestingly, these companies all set up their data centres to manage their own bits (and their core business), and then started to offer spare capacity to others to run whatever business they have, since, remember, a bit is a bit.

Several telecommunications companies have also started offering (since the last decade) storage and processing as a service. The problem is that these companies are building data centres basically to support that offer, whilst the companies mentioned before offer those services at marginal cost, since the reason for building their data centres was to support their core business. Hence, in general, the latter's offer has a lower price point, and they have such a huge infrastructure that, from the user's point of view, there is very little difference in "guaranteed quality". The telcos will need to work really hard to have their offering win the market, particularly beyond their geographical market boundaries, something that is vital, since scale is the winner in this business.

Author - Roberto Saracco
