Now that a court has struck down the FCC’s misbegotten network neutrality rules, hand-wringing forecasts of the end of the world are an everyday occurrence. And there’s nothing the preachers of doom and gloom like more than an example of something failing, as proof that ISPs are out to get them, now that it’s legal to do so. As if it ever weren’t—remember, the network neutrality rules were a political feint; they never really took effect.
The current poster child for paranoia can be found in the case of Verizon versus Netflix. Some Netflix streaming subscribers have noted that Netflix reception over the Internet is pretty bad at some hours of the day. This is taken as proof that Verizon must be actively trying to monkey-wrench Netflix, perhaps to encourage sales of its FiOS TV cable service, or something. Or to extract payments from Netflix and other streaming video providers.
Now I’m not one to go around defending Verizon. I have plenty of gripes with the company. But in this case, it’s quite clear that they’re being tarred with a false brush. Of course video streaming doesn’t always work perfectly on Verizon’s Internet access service. It’s not going to be perfect anywhere. Not on Verizon, not on Comcast, and not on your friendly local ISP, if you can still find one. It can’t be perfect. The whole idea of streaming movies goes counter to the entire design philosophy of the Internet itself. That it works at all is amazing.
Retail Internet service is all about oversubscription
Video has always been the potential elephant in the room for consumer Internet services; now it's finally in the room. Consumer Internet access services promote their high burst speeds, but large numbers of consumers' links are aggregated and then served by an upstream link that is a small fraction of the sum of the individual links' speeds. For instance, a thousand consumers could each have a 10 Mbps cable modem, and the cable system might serve them all through a 500 Mbps upstream link. That is an oversubscription ratio of 20:1, which is actually rather low. Ratios of up to 100:1 are not uncommon, especially at the higher burst speeds, because average usage doesn't rise at the same rate as burst speeds.
A DOCSIS 3 cable node itself might have 100 cable modem subscribers sharing 240 Mbps of total downstream capacity, yet many of the subscribers might have 25 or 50 Mbps services, or even 100 Mbps burst caps. And it works, because most people are using approximately zero bits per second most of the time. Data traffic is generally very bursty, not smooth. A user might request a Web page and it will download very rapidly over the next couple of seconds, but then he’ll spend a while reading it before clicking on the next one. Averaged over a large number of users, consumer ISP average usage, at the point of aggregation, has grown from about 3 kbps, in the 28.8 kbps dial-up era, to about 100 to 200 kbps today.
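The oversubscription arithmetic in the two examples above can be sketched directly. All of the figures below are illustrative numbers taken from the text, not measurements from any real ISP:

```python
def oversubscription_ratio(subscribers, burst_mbps, upstream_mbps):
    """Ratio of total sold burst capacity to the shared upstream link."""
    return subscribers * burst_mbps / upstream_mbps

# 1,000 subscribers with 10 Mbps cable modems behind a 500 Mbps upstream link:
print(oversubscription_ratio(1000, 10, 500))   # 20.0, i.e. 20:1

# A DOCSIS 3 node: 100 subscribers sold 50 Mbps bursts, sharing 240 Mbps:
print(round(oversubscription_ratio(100, 50, 240), 1))  # about 20.8:1

# Why it works: if average per-user demand is only ~150 kbps, the real load
# on the 500 Mbps link from those 1,000 subscribers is modest:
print(1000 * 0.150)  # 150.0 Mbps, comfortably under the link's capacity
```

The ratio is only a problem if average usage rises toward the burst speed, which is exactly what steady video streams do.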
Contrast this with business Internet services, wherein the monthly cost of a connection is much higher even if the peak speed is lower. That’s because a business service is much more heavily loaded—it is likely to have a LAN with many computers behind it, so its usage is already aggregated, not bursty. Pricing based on peak speed is thus a distraction. It’s not speed that costs the ISP; it’s the total number of bytes sent over time.
Video throws off the averages
Streaming video, especially high definition, throws these consumer averages off considerably. A video stream might send steady traffic at 1-4 Mbps, often during the busiest time of day (which for consumer services is the evening, what television calls prime time). So even if streaming TV attracts only a quarter of an ISP's subscribers, the average demand could be five or ten times as high as it would be otherwise. This trickles down into the access network, where smaller numbers of users mean less "statistical gain."
That’s the economic reason why streaming video is such a peril for consumer Internet services. Most consumer Internet pricing does not take usage volumes into account. But the almost-negligible cost of typical data usage will become much larger if video catches on. ISPs will either have to raise their prices to cover the higher average or resort to tiered pricing, so that they remain affordable and attractive to users who don’t “cut the cable.” Or the video companies themselves (I’m looking at you, Netflix) will have to chip in, because, frankly, they’re the ones throwing the cost model out of balance.
Internet services have never been subject to price regulation, whether directly, via the monthly fee, or indirectly, via rules about caps and "neutrality." This has worked well so long as there has been competition for the consumer's dollar. The players all find a way to make the deal. But video streaming services and their fans are trying to put the entire burden on the retail ISPs, as if they had some kind of constitutional obligation to support as much traffic as the users want, at no cost—and if they can slip in price regulation by calling it "neutrality," and avoid making direct contributions, all the better. It's a wonderfully parasitic model, and not very healthy for the host. It's all the more ironic that traditionally regulated monopoly telecommunications services, like home phones, are being deregulated, while the Internet services that depend on them are facing new threats of regulation.
The Internet doesn’t have traffic engineering
While the higher average usage of streaming video explains why it’s an economic threat, it doesn’t explain why users are seeing problems today in how it performs. This turns out to be a technical problem, not so much an economic one. The Internet was not designed to replace cable TV, nor even the telephone. Its fundamental design was optimized for bursty data, the key application that didn’t have a telephone network or cable network already in place to support it.
What the Internet doesn't have is traffic engineering. That's what the telephone network has in spades, and it's part of traditional telecommunications. The Internet instead depends on the "best efforts" idea, wherein the network provides whatever it provides, and users take advantage of it. On the telephone network, demand is predictable and supply is supposed to follow it. On the Internet, demand follows supply.
Traffic engineering isn’t all that hard when demand is predictable. A telephone call consumes precisely one voice channel at a time. That’s precisely 64000 bits per second on traditional TDM networks, somewhat more on VoIP because of the overhead of packet headers. A given individual cannot, in general, make more than one call at a time, and individual subscribers’ telephone usage patterns tend to be pretty predictable. So if, during the busiest time of day, a given route carried 80 telephone calls at a time last Tuesday, it will probably carry between 70 and 90 calls at a time next Tuesday. Telephone carriers are required to pay attention to this; they are generally under regulatory and/or contractual obligations to not block more than a small fraction of calls under ordinary conditions. So they put in enough circuits to handle the load, monitor for changes in demand and blockages, and that’s that. For good measure, telephone networks often have alternative paths for a call to follow, so if the most direct route is busy, it may take an alternative route.
TCP/IP is nothing like that. Applications don't even have a mechanism for requesting specific amounts of capacity, and the Internet has no way to reserve it. Everything in IP is "best efforts." (There are workarounds using MPLS or Carrier Ethernet for private networks, but that's not applicable to the public Internet.) This is one key reason why the Internet is so popular – with no reserved capacity, there's no billing, and that worldwide free-for-all is usually more attractive than the far-above-cost pay-per-call billing that the telephone companies prefer. Surely you don't really think that in this day of big fiber optic cables, it takes even a dime per minute's worth of network resources to make a wireline call across the US or even between the US and Europe, do you? The lower price of IP is mostly a billing artifact, not a result of lower real costs.
So when an ISP builds a network, it puts in capacity, but it doesn’t have a fixed demand level to design for. A GigE link here, a DS3 there, whatever’s available, and it works. Additional capacity then follows the Field of Dreams model: If you build it, they will come.
Only crude end-to-end rate controls
TCP itself, the Internet's transport and error-control protocol, is what performs the speed adjustment. It uses something called the slow-start algorithm. A computer sends a packet and waits for an acknowledgment from the recipient; if it arrives on time, it sends two packets, then four when those are acknowledged, roughly doubling its transmit window every round trip. The network in the middle isn't keeping track, or sending any messages to slow down. The sender's transmit window grows until a packet is lost by the network – which the sender only learns about when the acknowledgment doesn't arrive on time. And when that happens, the sender must assume that packet buffer capacity ran out somewhere along the path, so it drops the transmit window back down to one packet. Lather. Rinse. Repeat.
TCP connection speed, then, tends to follow a sawtooth pattern, rising until it drops off. The time period of the sawtooth is based on the round-trip delay of the network, so shorter connections rise, and thus fall, faster. It’s a crude mechanism, thrown together in 1987 in response to the congestion collapse of that era’s network. But it kept the data flowing—so long as everyone played by the rules. And in the late 1980s, when Uncle Sam owned the network, the rules could be enforced by simply threatening to throw a miscreant off.
Of course most TCP applications don’t keep up the flow for very long. A large file may take a while to transfer, but web browsing has many small transfers initiated when a page is loaded. Interactive gaming tends to be more about latency than volume – not that many packets are delivered but timing is critical.
UDP streaming has no rate control
Streaming is a different animal. It doesn’t always use TCP. It may use UDP instead, which has no rate control. So it doesn’t have to adjust its effective rate to match the network. Some applications do make some rate adjustments if they notice packet loss—it’s up to the application—but in general, they have a rate at which they operate, and that’s that. So a VoIP call might use about 90 kbps, regardless of congestion. A high-fidelity audio stream might use twice that. But a TV-quality video stream uses megabits per second.
So let’s say that the path between an ISP and the nearest streaming server is one gigabit wide. That supports a whole lot of web browsing and email, but it’s only a few hundred TV streams. Or in the case of a mega-ISP like Verizon FiOS, hundreds of thousands of users in a metropolitan area might be fed through what might be a 10 gigabit pipe to the content server’s network. If it gets busy, that’s that. Streams will lose packets. Even a 100 gigabit pipe can’t carry more than 50,000 2 Mbps TV streams at a time. Unlike the telephone network, the Internet doesn’t block requests, so when it’s congested, existing connections suffer too, not just new ones.
The technical consequence is that not only does streaming degrade, but non-streaming TCP applications degrade even more, because they slow down when they lose packets while the streams don't. This shrinks the share of capacity available to TCP, while making nobody happy.
Video often streams at high speed over TCP
Video, being one-way and tolerant of a few seconds of delay, doesn't need to use UDP the way VoIP, which must be delivered in real time, does. Video services generally send video through the web protocol HTTP, which runs above TCP. These applications buffer several seconds or more of content, which gives TCP's slow-start room to work. It's still a form of streaming, and it still requires adequate capacity averaged over the buffering interval; the picture will pause or degrade if enough capacity isn't available and the buffer goes empty.
Netflix servers frequently adjust their video quality to try to fit into the apparently available capacity. But that adjustment can interact unpredictably with other TCP applications, causing the player to ask for a lower rate than would actually work, thereby impacting the picture. The network can then look even more congested than it is. Remember, an IP network itself provides no information about available capacity; it's all guesswork based on observing end-to-end acknowledgments. Whether over TCP or UDP, the Internet wasn't designed for streaming.
And while video’s use of TCP and rate adjustment helps a little, it doesn’t deal with the longer-term cost of raising average consumer usage. Slow-start sawtooth or not, the average TV-quality video stream is still moving a lot of bits. An avid cable-cutting video fan with a big-screen set or two can easily use a couple of hundred gigabytes per month. No wonder monthly usage caps, or tiered rates, are likely to become more common.
A partial work-around is the content distribution network (CDN). Netflix, Hulu, and other large content providers put caches of their popular content in many locations. User requests are redirected to the cache on their ISP's network, or nearest to it. It's up to the ISP and the CDN to make the deal. Small rural ISPs, which have the highest cost of reaching the backbone, are too small for the CDNs to bother with. And while this helps ameliorate the backbone issue, it doesn't help the local distribution network.
Financial reality vs. technical alternatives
The problem with the Netflix business model, then, and with the whole “cut the cable” model of Internet video, is that it replaces relatively efficient broadcast delivery of video with inefficient unicasts. Instead of one stream (TV channel) being shown to many users, each user gets their own stream. Bandwidth utilization thus multiplies by orders of magnitude. This doesn’t look like a problem to the consumer, for whom the marginal cost of bandwidth is precisely zero. And zero times anything is still zero. The only problem the consumer sees is that it doesn’t always work. And given the paranoia about “neutrality,” it’s all too easy to assume intentional blockage, not congestion caused by the Internet snake’s trying to swallow the television elephant whole.
Are there technical solutions? The obvious one is to keep piling on capacity, but that would be expensive, and there's really no agreement as to who should pay. AT&T Mobility is offering a deal to content providers wherein they can subsidize usage—essentially like 800 numbers for data—but it too is drawing controversy. Netflix isn't offering to pay anyone. And such issues involving money are what killed previous plans for video-friendly networks. It's not as if big video-on-demand networks are a new idea. Imposing a traffic-engineering requirement on ISPs to support video wouldn't work very well, and it would break the whole voluntary model of the Internet, turning it into a new regulated phone network. And it's doubtful that any US government agency actually has that authority.
The network that got away
Let's set the wayback machine to the early 1990s, when the Internet was not yet widely known to the public. The buzzword of the day was the "Information Superhighway." The assumed future technical architecture for consumer access was Broadband ISDN, a fiber-to-the-premises network based on Asynchronous Transfer Mode (ATM) switching. In fact, many telephone companies made promises that they would roll out all-fiber networks over the following two decades, in exchange for "alternative form of regulation," which allowed them to raise their profit margins.
The most explicit of these promises was made by New Jersey Bell, in a plan called Opportunity New Jersey. They promised that 100 percent of their subscribers would be offered 45 megabit two-way service by 2010. A few small trial networks were even built, though later abandoned. Of course the promise was never kept: Much slower DSL was offered to a majority of subscribers, but not all, and FiOS was eventually made available to many but far from all subscribers. And those don't offer the same two-way capabilities that were promised, though FiOS arguably has a pretty decent bag of tricks of its own. In other words, it was a classic case of Kushnick's Law: A regulated company will always renege on promises to provide public benefits tomorrow in exchange for regulatory and financial benefits today.
What B-ISDN promised, and never delivered, was switched telecommunications at video speeds. It was designed to offer low-loss video transport with engineered capacity, alongside other services. The insurmountable problem the telephone companies faced was how to price it. Since they liked to overprice voice calls, how could they offer even larger connections for video without charging even more for them? Common carriage rules would have allowed users to send voice down the same connections, after all; what you do inside common carriage is your business, not your carrier's.
So the telephone companies gave up on video, and that whole superhighway thing, rather than disrupt the price of voice. Instead, the Internet came along and disrupted it for them—just without the tools to ensure quality. And that's the trick: By not doing anything to ensure quality, the Internet's cost is kept down. Best efforts: You take what you can get, but can't ask for more. (The term is used ironically; it really refers to the worst class of service, if there are others.) If a successor internet—say, a multi-service network with capabilities more akin to B-ISDN or traditional telecommunications—were to start doing traffic engineering in order to support vast amounts of video, then more detailed billing would be likely to follow, at least for anything other than "best efforts," even if it didn't have to protect legacy voice or cable revenues. And it would probably price "fetch a unicast stream" higher than "watch a shared broadcast stream."
The technology for delivering high-quality video exists. The hard part is getting people to pay for it. The Internet's biggest calling card is its simple pricing model, but the growth of video is putting it under extreme pressure. Perhaps streaming a separate copy of every show to every viewer isn't such a good idea after all. So when it doesn't work, it's probably not a conspiracy to protect other legacy services.
Principal, Interisle Consulting Group