What precisely the Federal Communications Commission ultimately will do about new network neutrality rules remains unclear, and in some ways the picture is more confused than it was a month ago.
Originally, the FCC's newly proposed net neutrality rules would have allowed content delivery networks that extend from the network edge all the way to the actual end user. Whether that will remain the case is now the issue.
Strictly speaking, net neutrality is about whether content delivery networks can extend to the end user. But it also is about which methods ISPs are permitted to use to alleviate latency.
App providers generally prefer that ISPs increase bandwidth to deal with latency. ISPs would prefer to do that, and also use CDN techniques that more directly address latency issues.
Can bandwidth alone fix latency issues? Many would say “no.” But, can other network architecture practices fix latency issues? Users of content delivery networks and edge caching would say “yes,” at least to a large extent.
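The tradeoff can be sketched with back-of-the-envelope arithmetic. In a simplified model (ignoring TCP behavior, queuing, and other real-world effects), the time to fetch an object is roughly the round-trip propagation delay plus the time to push the bits through the pipe. All numbers below are hypothetical illustrations, not measurements:

```python
def fetch_time(size_bytes, bandwidth_bps, rtt_seconds):
    """Rough time to fetch one object: propagation delay plus serialization time.
    A deliberate simplification that ignores TCP slow start, queuing, etc."""
    return rtt_seconds + size_bytes * 8 / bandwidth_bps

SIZE = 100_000  # a hypothetical 100 KB web object

# Distant origin server, 80 ms round trip, 10 Mbps access link:
far_slow = fetch_time(SIZE, 10e6, 0.080)    # 0.160 s

# Ten times the bandwidth, same distant server:
far_fast = fetch_time(SIZE, 100e6, 0.080)   # 0.088 s

# Original 10 Mbps link, but content cached near the user (5 ms round trip):
near_slow = fetch_time(SIZE, 10e6, 0.005)   # 0.085 s
```

In this toy example, moving the content close to the user does about as much for a small object as a tenfold bandwidth upgrade, because propagation delay does not shrink when bandwidth grows. That is the intuition behind edge caching and CDN techniques.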
That has implications for the positions taken by ecosystem participants regarding net neutrality policy, in particular the use of content delivery networks extended all the way to the end user location.
As always, there are private financial interests that correspond to the public policies that could resolve the network neutrality issue. Essentially, network neutrality boils down to barring anything other than best-effort Internet access for consumers.
That policy, some argue, prevents Internet service providers from gaining business advantage by offering the same sort of packet prioritization routinely used in the Internet backbone, even by opponents of extending content delivery mechanisms all the way to end user locations.
Opponents of such policies argue that content delivery features might be desirable for latency-dependent services, such as voice and video. That, in turn, might be important as app providers and ISPs continue to rely on video apps to drive both access revenue and end user value.
But the public policy benefits of an "open Internet," where users have access to all lawful content and apps, also have financial advantages for Internet ecosystem participants.
Essentially, by prohibiting content delivery all the way to the end user, ISPs are forced to deal with latency issues by measures such as increasing raw bandwidth and retooling the backbone architecture of their networks. That means the cost is borne directly by ISP customers.
Were it possible to use other techniques, such as content delivery networks stretching all the way to the end user, content and app providers might have to pay for some of the investment, just as app providers presently pay Akamai and other content delivery networks to assure packet performance across the Internet backbone.
In other words, net neutrality, whatever else it represents in terms of public policy, has direct revenue and cost implications for Internet ecosystem participants. In dealing with latency issues, net neutrality rules shift cost to ISPs.
Extending content delivery networks all the way to the end user (where today CDNs operate only to the network edge) might shift some costs to app providers.
Real money, business advantage, cost structures and revenue models, in other words, are at stake in the net neutrality debate.
So much hinges on what policies the Federal Communications Commission ultimately adopts. And those policies are in flux.