QoS: Not Quality or Service
14 February 2011
In computer networking there is a nice term, QoS (Quality of Service), that quite often lends itself more to marketing speak than to technology. 'Quality' brings to mind images of how good something is, when in reality QoS is really about how bad it is. It is also more about dealing with the lack of Service than with its presence.
The Quality ideal is when one person's usage is not affected by anyone else's, which in turn implies a network capable of supplying all the capacity everyone wants. Of course the only network that can do this is an over-provisioned one, in which case quality control is a moot point. The opposite case, where user demand exceeds network capacity, is when Quality of Service comes into play, and 'controlling service degradation' is a much more descriptive name for the task. Coincidentally that name pretty much explains the motives of those who talk the loudest about it.
Choosing who to screw
Having read much research material on the subject during my PhD, I can say that a central plank of any QoS scheme is traffic classification. The stereotypical example is giving priority to latency-sensitive real-time traffic such as voice, and filling in the gaps with jitter-insensitive data traffic. The problem is that this only works well in marginally loaded systems (networks right at capacity) that happen to be Pareto inefficient. In pretty much all real-world cases the purpose of classification is knowing which class to screw first, and that assumes you can classify in the first place: imagine the company that believed it had a problem with Skype traffic at peak times, and blocked it completely. People ran their Skype over HTTP instead, and the company now had two problems: Skype traffic, and the inability to distinguish it. Basically the only foolproof way to classify capacity use is by which end user is generating it.
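As a rough illustration, here is a minimal sketch of that stereotypical two-class scheme: a strict-priority scheduler that always drains the voice queue before the data queue gets a look-in. The class names and the port-based classify() rule are hypothetical stand-ins, not any particular vendor's implementation:

```python
from collections import deque

class PriorityScheduler:
    """Strict-priority, two-class packet scheduler (illustrative only)."""

    def __init__(self):
        # Voice goes in the high-priority queue, everything else in the low one.
        self.queues = {"voice": deque(), "data": deque()}

    def classify(self, packet):
        # Naive port-based classification -- exactly the kind of rule that
        # Skype-over-HTTP defeats, since everything then looks like web traffic.
        return "voice" if packet.get("dst_port") == 5060 else "data"

    def enqueue(self, packet):
        self.queues[self.classify(packet)].append(packet)

    def dequeue(self):
        # Strict priority: data only moves when the voice queue is empty,
        # so under sustained load the low class is the one screwed first.
        for cls in ("voice", "data"):
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None
```

Under light load this looks benign; once the high class alone can saturate the link, 'priority' for one class is simply starvation for the other, which is the screwing order made explicit.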
Invariably simpler to just build a faster network
Attempts at introducing QoS into the core of the internet invariably fall flat because it is both more economical and technically easier to simply up the raw bandwidth. This is why most telecoms companies replaced their circuit-switched networks with virtual circuits built on top of statistically provisioned IP networks. Of course, due to the nature of the internet, many organisations realise that traffic classification is also a precursor to differential charging models, and basically no-one other than ISPs wants that.
And why is this relevant?
Most carriers have under-provisioned networks, and rather than go to the expense of actually provisioning for peak load, they note how a minority of heavy users generate a majority of the traffic: if they can push (often via fine print) the 20% of users who consume 80% of capacity off their network, the remaining light users fit in a fifth of the capacity, so they could serve five times as many of them. The problem is that this attracts only the people interested in the most dirt-cheap offers, as those willing to pay top dollar take one look at the carrier's reputation and run a mile.
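The back-of-the-envelope arithmetic behind that five-times figure, sketched with hypothetical numbers (only the 80/20 split comes from the paragraph above):

```python
# Back-of-the-envelope arithmetic for the 80/20 claim above.
# The 80/20 split is from the text; the absolute numbers are made up.
users = 1000            # hypothetical customer base
capacity = 100.0        # total network capacity, arbitrary units

light_users = 0.8 * users           # 800 light users...
light_share = 0.2 * capacity        # ...consume only 20% of capacity
per_light_user = light_share / light_users   # 0.025 units each

# Evict the heavy 20% and refill the whole pipe with light users:
max_light_users = capacity / per_light_user  # 4000 users

print(max_light_users / light_users)  # 5.0 -- five times the light-user base
                                      # (and four times the original total)
```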