How J.C.R. Licklider Got It Wrong

By Special Guest
Carolyn Raab, VP of Product Management at Corsa Technology
March 16, 2017

As visionaries go, J.C.R. Licklider was pretty impressive.  In the early 1960s, while working at the Pentagon, he wrote a memo about what he called the “Intergalactic Computer Network” that foretold virtually everything we know today as the Internet, including cloud computing.  He even foresaw the need for cybersecurity, suggesting a completely distributed design in order to avoid a single point of failure.

To make such a distributed system navigable, early architects built the Domain Name System (DNS) so users could reach sites with easy-to-remember names like “google.com” without having to know the numeric IP addresses of Google’s servers.

Unfortunately, Licklider missed an important point.  In the ultimate irony, the DNS that distributed computing depends on turns out to be just the kind of single point of failure Licklider had hoped to eliminate.

Breaking the Internet

Last year an unknown attacker unleashed a distributed denial of service (DDoS) attack targeting U.S.-based Dyn, a company that provides DNS services for many Internet-scale Web properties.  By directing massive amounts of junk traffic at Dyn from a botnet of compromised devices, the attack ultimately brought down such well-known sites as Twitter, Reddit, Netflix and Airbnb.

Think about that for a minute.  A single DDoS attack targeting a single company resulted in a day’s worth of absolute chaos and brought down several of the world’s largest Web properties.  So much for avoiding single points of failure.

In one sense the attack wasn’t groundbreaking; DDoS attacks have been on the rise for years.  But two aspects of the attack bear mentioning. 

First, the attack utilized compromised Internet of Things (IoT) devices conscripted by the Mirai malware – as many as 10 million according to early reports, though Dyn later put the number of Mirai-infected sources closer to 100,000.  That’s important because the IoT device population is growing geometrically and is notoriously under-protected.  For example, what are the username and password on your webcam at home?  For far too many users the answer is “whatever it shipped with out of the box,” providing fertile ground from which hackers can recruit the bots they need for their DDoS attacks.

The second notable aspect was the sheer volume of traffic the attack generated.  The exact magnitude may never be precisely known, but estimates range as high as 1.2 terabits per second.  This is far beyond the capacity of most DDoS protection solutions.

If Netflix and Airbnb can fall to such DDoS attacks, what chance does the average site have?  Before you answer, consider that Deloitte is predicting that DDoS attacks will grow significantly in 2017, to more than 10 million discrete attacks, and that the size of the biggest attacks grew by 250 percent in the past year.

The reality is that DDoS is a problem that is going to get worse – a lot worse – before it gets better.  It is worth thinking about how best to mitigate such attacks.

Why High-Volume DDoS Attacks Are So Hard to Mitigate

Fighting DDoS attacks requires two very different capabilities: intelligence and mitigation.  Intelligence means spotting attacks in the first place.  Mitigation means doing something about them.

DDoS attacks can be stealthy, making common and seemingly innocuous requests of a company’s servers, but in such a high volume that the servers crumble under the load.  Security intelligence requires sophisticated software algorithms to analyze traffic in order to quickly and accurately identify attacks.
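Real intelligence engines use far more sophisticated statistical and behavioral analysis, but the simplest form of this detection – counting requests per source over an observation window and flagging outliers – can be sketched as follows (the addresses and threshold here are illustrative, not from any real deployment):

```python
from collections import Counter

def detect_heavy_hitters(source_ips, threshold):
    """Return the set of sources whose request count in this
    observation window exceeds the threshold."""
    counts = Counter(source_ips)
    return {src for src, n in counts.items() if n > threshold}

# A window of seemingly innocuous requests dominated by two chatty sources.
window = ["203.0.113.1"] * 500 + ["203.0.113.2"] * 450 + ["198.51.100.7"] * 3
print(sorted(detect_heavy_hitters(window, threshold=100)))
# → ['203.0.113.1', '203.0.113.2']
```

A production system would of course track many more signals than raw request counts, but the principle – measure, compare against a baseline, flag – is the same.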

Mitigation, on the other hand, is a fairly brute force operation.  It requires less intelligence, but very high throughput and fast switching times.  Mitigation in many ways is just a form of forwarding traffic.  Once security intelligence identifies the attack signature, mitigation is a matter of looking for traffic matching that signature and forwarding it away from the company’s servers.
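As a toy sketch of that matching step, suppose the intelligence layer hands down a hypothetical signature consisting of a source prefix and a destination port; the mitigation path then only has to classify each packet against it (the addresses and field names are illustrative assumptions):

```python
from ipaddress import ip_address, ip_network

# Hypothetical signature produced by the intelligence layer:
# junk traffic from one source prefix aimed at UDP port 53 (DNS).
SIGNATURE = {"src_net": ip_network("198.51.100.0/24"), "dst_port": 53}

def classify(packet, signature):
    """Return 'divert' for packets matching the attack signature,
    'forward' for everything else."""
    match = (ip_address(packet["src"]) in signature["src_net"]
             and packet["dst_port"] == signature["dst_port"])
    return "divert" if match else "forward"

print(classify({"src": "198.51.100.9", "dst_port": 53}, SIGNATURE))  # divert
print(classify({"src": "192.0.2.10", "dst_port": 53}, SIGNATURE))    # forward
```

The logic is trivial; the hard part is executing it in hardware at hundreds of millions of packets per second.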

That sounds simple enough, but as the Dyn attack shows, DDoS protection is increasingly failing to protect company resources, for several reasons:

  • Insufficient bandwidth.  The bad guys are winning the security arms race.  The most virulent DDoS attacks produce more traffic than current DDoS safeguards can process.  To protect against the kind of attacks leveled against Dyn last year, your DDoS solution needs to process 150 million packets per second for every 100 Gbps of throughput.  Few of today’s DDoS solutions can do that.
  • Inability to scale.  As noted above, mitigation is a form of traffic forwarding.  It is tempting to just add DDoS protection to your core router.  The problem is that your core routers are busy with core network routing chores.  When you add DDoS intelligence and mitigation to their workload you reduce their ability to do basic routing. 

Core routers simply cannot scale to do both basic routing and DDoS protection – especially at the scale of attacks being seen recently.

  • Lack of Location Flexibility.  Solutions that can keep up with today’s terabit-scale DDoS attacks are rare – and very expensive.  Their cost dictates that they sit at the very core of the network; it is simply too costly to place them at all your edge locations.  Yet this means rerouting all your traffic to the core for analysis and mitigation, a highly impractical solution. 
  • Lack of Agility.  DDoS attacks are morphing quickly.  Many DDoS solutions lack the agility to update their intelligence and mitigation logic quickly as the industry identifies new DDoS attacks.  This is especially true when DDoS mitigation is deployed as rules in the core routers.
  • Lack of Mitigation Flexibility.  Each DDoS attack is unique.  It follows that one’s mitigation strategy should be tailored to the specifics of that attack.  Yet that is often not the case.  Many large ISPs, for example, adopt a “cattle drive” mentality.  If one of their customers gets hit by a DDoS attack they move quickly to protect the rest of their customers by killing all traffic to and from that customer.  Essentially, they abandon the diseased cow by the roadside to save the rest of the herd.

A more flexible solution would tailor a mitigation strategy that protects both the infected and non-infected customers simultaneously.

  • Cost.  As mentioned above, DDoS solutions that can keep up with today’s terabit-scale attacks are astronomically expensive, causing many organizations to adopt a “duck and cover” approach (no protection and react to attacks manually).
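The packets-per-second requirement in the bandwidth point above follows directly from the minimum Ethernet frame size, as a quick back-of-the-envelope check shows:

```python
# Why 100 Gbps implies ~150 million packets per second: at the minimum
# Ethernet frame size, each packet occupies 64 bytes of frame plus
# 8 bytes of preamble and a 12-byte inter-frame gap on the wire.
LINK_BPS = 100e9                          # 100 Gbps link
BITS_PER_MIN_PACKET = (64 + 8 + 12) * 8   # 672 bits per packet

pps = LINK_BPS / BITS_PER_MIN_PACKET
print(f"{pps / 1e6:.1f} Mpps")            # 148.8 Mpps, roughly 150 million
```

Attackers deliberately use minimum-size packets precisely because per-packet processing, not raw bandwidth, is what exhausts a mitigation device first.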

The Problem in a Nutshell

Existing DDoS solutions fall short because of a fundamental architectural flaw: most combine security intelligence and mitigation in a single solution.  As noted above, security intelligence requires complex software that can analyze traffic and detect even the most sophisticated attacks.  Mitigation, on the other hand, requires extremely fast hardware that can enforce policy immediately, rerouting attack traffic at line rate without slowing the network down.

When you put intelligence and mitigation in the same box you end up with a complex architecture that tries to balance the CPU complex needed for security intelligence against the raw hardware forwarding power needed for mitigation.  That complexity drives very high costs, both at purchase time and operationally over the life of the solution.  Or, if you do find a more affordable solution, the architecture is compromised and you end up underpowered in both intelligence and mitigation capacity.

A better approach is to disaggregate security intelligence from mitigation, but how?

Disaggregated Network Security: A Better Way

When security intelligence is separated from mitigation and the two communicate through an out-of-band interface, better and radically simplified DDoS protection becomes possible.

These interfaces include BGP Flowspec and REST APIs.  BGP Flowspec is a multi-vendor standard that allows multiple devices on a network to coordinate traffic filtering, and it is already broadly deployed, as are REST APIs.  With such interfaces, a large site can place best-of-breed security intelligence solutions at strategic points in the network to watch for attacks.
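As an illustrative sketch – the field names and the idea of a specific REST endpoint are assumptions, not any vendor’s actual API – a mitigation rule carrying Flowspec-style match components (BGP Flowspec is defined in RFC 5575) might be serialized like this:

```python
import json

# Hypothetical FlowSpec-style mitigation rule, serialized for a REST
# push to a mitigation engine.  The match components mirror what BGP
# Flowspec can express: source prefix, IP protocol, destination port.
rule = {
    "match": {
        "source-prefix": "198.51.100.0/24",
        "protocol": "udp",
        "destination-port": 53,
    },
    "action": "discard",   # comparable to a Flowspec traffic-rate of 0
}

payload = json.dumps(rule)
print(payload)
```

An HTTP POST of this payload from the intelligence solution to the mitigation engine’s rule endpoint is the out-of-band handoff the architecture depends on.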

When the intelligence solution spots an attack it creates mitigation rules and sends them to the mitigation engine over these out-of-band interfaces.  This architecture provides several immediate and fundamental benefits:

  • Best-of-Breed DDoS Protection.  By disaggregating security intelligence from mitigation, network architects are free to choose the very best intelligence solution and combine it with the very best mitigation solution.  As noted, these two activities have very different requirements, and it is unlikely any single solution is optimized for both.
  • Terabit-class DDoS Mitigation.  As we saw above, mitigating high-volume DDoS attacks is demanding and requires extremely high-performance hardware.  That is easier to find in a pure-play mitigation solution than in a combined solution.
  • Improved Location Flexibility.  By splitting DDoS into intelligence and mitigation, network architects can distribute each to its proper location in the network.  Put security intelligence where it belongs and mitigation where it belongs.
  • Scalability.  Traffic volumes for DDoS attacks, currently topping out at 1.2 Tbps, are forecast to grow even higher.  A pure-play mitigation solution makes it easier to design deployments that scale both out and up as needed.
  • Agility and Mitigation Flexibility.  Disaggregated DDoS protection relies on pure-play solutions for both security intelligence and mitigation.  Because each focuses on a single aspect of DDoS protection, these solutions can react and adapt to new attacks faster than bloated aggregated solutions.  The result is a more agile defense – one that adapts quickly to new attack vectors.

It also enables more flexible mitigation strategies, such as filtering on sources of attack traffic instead of being limited to shutting down destinations.

  • Affordability.  One of the core benefits of disaggregation is lower cost.  Bloated all-in-one solutions come at a steep premium.  Pure-play solutions not only perform better, but are also less expensive.

DDoS Will Continue to Evolve.  You Should Too.

DDoS attacks have been around for nearly two decades.  But with the world’s first terabit-class attack, 2016 marked a turning point.  When even the largest Web scale sites are unable to withstand the largest DDoS attacks, it is time to evolve and adapt.

Disaggregating security intelligence from mitigation is a strategy that provides fundamental benefits that will allow organizations to stay ahead of the bad guys in the DDoS arms race.




Edited by Alicia Young

