Next week I am doing the inaugural Future of Broadband workshop, together with my colleagues from Predictable Network Solutions Ltd. I believe this is a ground-breaking piece of work, as for the first time it presents a robust model of how network profitability relates to the true underlying nature of statistical multiplexing.
I’d like to share with you the core argument we are advancing. It is an alternative thesis on what networks are. For the telecommunications industry to meet the communication needs of society over the next 10 years in an economically viable manner, we believe that there has to be a radical change in how we design, build and operate networks.
Fitness-for-purpose at an affordable cost
Our core belief is that profit is a function of how well networks serve customer needs, whilst growing costs more slowly than revenues. If you trash the customer experience in the short term, it will harm you in the long term. Value comes from enabling ‘effective task substitution’, such as ordering shopping online rather than going to a store. Over time, people re-orient their lives based on the assumption of broadband being present – such as using Zipcar for occasional outings, instead of owning an automobile that is no longer used for grocery trips. That makes people increasingly dependent on broadband delivering a reliable and consistent experience. Future uses such as M2M, smart grids and telemedicine intensify this trend.
To a first-order approximation, network operator revenue is a function of average network use. An empty network makes no money, and a full one is very profitable. This assumption is subject to a key constraint: that the users’ quality of experience (QoE) requirements are met. You can’t just push your network to saturation whilst ignoring the effect on user outcomes. The more people multiplex quality-sensitive applications together with bulk data, whilst demanding dependable outcomes, the more you need to over-provision your access network, and the lower your average utilisation (and profit) becomes. If you instead separate out your network into a single-service overlay for each application, that also drives down utilisation and raises implementation costs, by forgoing the benefits of multiplexing.
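The multiplexing trade-off can be illustrated with a toy model – entirely my own construction, not from the workshop material. Suppose independent on/off users share a link, and capacity must be provisioned so that the probability of demand exceeding supply stays below a small threshold. The user counts, activity probability and threshold below are arbitrary illustrative assumptions:

```python
from math import comb

def p_demand_exceeds(n_users, p_active, capacity_units):
    """P(more than `capacity_units` of n_users are active at once),
    assuming independent on/off users, each active with prob p_active."""
    return sum(comb(n_users, j) * p_active**j * (1 - p_active)**(n_users - j)
               for j in range(capacity_units + 1, n_users + 1))

def capacity_needed(n_users, p_active, eps=0.001):
    """Smallest capacity (in per-user units) keeping overload prob below eps."""
    for c in range(n_users + 1):
        if p_demand_exceeds(n_users, p_active, c) < eps:
            return c
    return n_users

p = 0.1  # each user transmits 10% of the time (illustrative)

# One shared pool of 100 users vs two separate 50-user overlays:
shared = capacity_needed(100, p)
split = 2 * capacity_needed(50, p)
print(shared, split)       # the shared pool needs less total capacity
print(100 * p / shared)    # average utilisation of the shared pool
```

The shared pool needs noticeably less total capacity than two separate overlays for the same overload risk – that is the multiplexing benefit the overlay approach forgoes. Note also that even the shared pool must provision well above mean demand, which is exactly the over-provisioning pressure described above.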
Yet those desirable benefits come at a price. QoE suffers when instantaneous demand exceeds what the instantaneous supply can deliver within acceptable bounds. Thus the peak-to-mean traffic ratio is critical to network profitability. To increase average use, you must time-shift peak demand to reduce this ratio. Since neither applications nor users are homogeneous, and willingness to pay is highly variable, there is an opportunity to trade between users to get better outcomes at lower cost.
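A back-of-envelope illustration of why the peak-to-mean ratio matters (the hourly demand profile below is invented for the example, not real traffic data): if capacity must be provisioned for the peak, average utilisation is simply the reciprocal of the peak-to-mean ratio, so shifting even a little demand out of the peak hour raises it.

```python
# Hypothetical hourly demand profile (arbitrary units) over one day.
demand = [2, 1, 1, 1, 1, 2, 4, 7, 9, 8, 7, 7,
          8, 8, 7, 7, 8, 10, 12, 11, 9, 6, 4, 3]

def peak_to_mean(profile):
    return max(profile) / (sum(profile) / len(profile))

def utilisation(profile):
    # Capacity is provisioned for the peak, so average utilisation
    # is the reciprocal of the peak-to-mean ratio.
    return 1 / peak_to_mean(profile)

# Time-shift 2 units of the 18:00 peak into the 03:00 trough,
# e.g. by pricing bulk transfers to run overnight.
shifted = demand.copy()
shifted[18] -= 2
shifted[3] += 2

print(round(utilisation(demand), 2), round(utilisation(shifted), 2))
```

Total demand (and hence first-order revenue) is unchanged, but the flatter profile needs less peak capacity, so utilisation rises – the trading opportunity the next section describes.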
Matching supply and demand in real time
That means networks are trading platforms. They enable supply to be traded between users at multiple timescales, from microseconds to months. At any moment in time, some users are effectively sellers, and others buyers, of present and future resource capacity. Operator profitability is a function of how well these trading opportunities can be identified and executed. It is a futures business, time-shifting demand around.
This is a vision of networks that we believe is congruent with the underlying mathematical reality. Broadband networks are not pipes, dumb or otherwise, and never will be.
Every broadband network is a rivalrous resource: your packets appear as pollution to my data flows. To make these trades between users, you need to be able to price the opportunity cost of your communication displacing timely delivery of my data. This pricing is only possible if you let go of a basic (wrong) assumption about telecommunications: that the fundamental resource is bandwidth, and the ideal way of delivering that is monoservice networks. Instead, we propose an alternative resource model – quality attenuation (a kind of statistical entropy) – which naturally leads to polyservice networks.
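To give a concrete flavour of what measuring quality attenuation might look like, here is a rough sketch – my own simplification, not the workshop's method. It treats the quality attenuation of a path, relative to an application's need, as the fraction of packets delivered within a deadline, with lost packets counted as infinitely delayed. The traffic model and the VoIP-style requirement are invented for illustration:

```python
import random

def quality_attenuation(delays, deadline_ms, n_sent):
    """Fraction of sent packets delivered within the deadline.
    `delays` holds one-way delays (ms) of packets that arrived;
    packets that never arrived count as pure loss."""
    on_time = sum(1 for d in delays if d <= deadline_ms)
    return on_time / n_sent

# Illustrative assumption: a VoIP-like app needs 99% of packets within 150 ms.
random.seed(1)
n_sent = 10_000
# A hypothetical path: 2% loss, delays near 40 ms with an exponential tail.
arrived = [random.gauss(40, 10) + random.expovariate(1 / 15)
           for _ in range(int(n_sent * 0.98))]
score = quality_attenuation(arrived, deadline_ms=150, n_sent=n_sent)
print(score, score >= 0.99)
```

The point of folding delay and loss into one measure is visible here: the delays alone look comfortable, but the 2% loss by itself makes the 99% requirement unattainable, so this path fails the application even though its "bandwidth" may be ample.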
Circuit thinking pervades a non-circuit world
The telecoms industry is heading towards a cliff because packet-based statistical multiplexing changes the nature of supply and demand, as compared to circuits. However, broadband continues to be packaged and sold as if it were a circuit that entitles users to communicate with any other user at line speed without any contention. This fatally misaligns with the reality of multiplexing. The workshop identifies several important ways in which this misalignment occurs, and its highly undesirable effects. Indeed, many of the current initiatives of the telecoms industry, such as LTE, are increasing peak-to-mean ratios whilst ignoring other critical emergent properties of networks that affect QoE.
Even worse, telcos have gone from being market-makers in this trading paradigm to being unwitting market participants. They are taking on supply risk – such as demand shocks from a new Apple device or OS launch – with no pricing mechanism to transfer that to users or to insure it. There is no ‘busy’ signal for broadband, and this means there is no ‘bust’ signal to investors – until it’s too late and you’re forced into a premature upgrade cycle to maintain QoE.
This behaviour is a function of telcos’ misbelief that bandwidth is the product, and that they can supply a fitness-for-purpose offer that ends with the customer need, rather than starting from it.
The only answer is to start from the customer
The alternative lens of quality allows us to reason about supply and demand for statistical multiplexing and how it meets that need. We need to be able to manage the relationship between user QoE aspiration, consequent network quality requirement, supply execution and (for any SLA) assurance that it happened. After all, value derives solely from delivering dependably good experiences to users that allow task substitution.
As a result, we propose a new model of network pricing, operation and service lifecycle management that can be incrementally adopted by current network operators. This is based on finding measurable proxies for QoE that, unlike bandwidth, enable rational pricing and investment decisions, even on monoservice networks. Uniquely, our approach to selecting KPIs is not merely descriptive of past reality, but is predictive of future QoE.
This first invitation-only workshop is sold out. You can read more about what you're missing in the flyer here [PDF]. However, don't panic (yet) – if you’d like to learn more about how to steer away from the bandwidth cliff towards the relative safety of quality attenuation, please do get in touch. Just hit ‘reply’.