Flow Intelligence Identifies Latency and Bandwidth, Enabling Performance Optimization for Today’s Sophisticated Cloud-based Applications
As cloud applications become more sophisticated, they require a dynamic infrastructure that can readily adapt and adjust to ensure optimal delivery. But the underlying network often isn't smart or flexible enough to isolate, differentiate and optimize traffic according to each application's needs. Specifically, unstable latency and bandwidth variability, caused by inappropriate sharing of the underlying physical infrastructure, are wreaking havoc on applications, frustrating app developers and leaving customers dissatisfied. But what can be done to solve these problems at scale and allow everyone to fully benefit from the promise of the cloud?
The answer lies in application-defined networking (ADN), which layers flow intelligence on top of today's dumb networks, decoupling the forwarding of packets from the orchestration of resource interconnections. ADN gives providers greater control over the infrastructure elements that affect application performance and ensures that unpredictable latency and uncontrolled bandwidth do not prevent them from delivering the level of service they have promised.
The Impact of a Dumb Network
As businesses and consumers alike become more reliant on the cloud, the development and use of rich, dynamic applications is becoming more widespread. But this growing use of infrastructure-as-a-service is causing serious performance issues for the applications that rely on it. The reason: neither the network infrastructure nor the server-centric approach to cloud provisioning was designed to handle the hectic demands of dynamically changing applications. For example, in a public cloud, a greedy scalability-test application pushing the infrastructure to its limit can cohabit with a mission-critical application in production. These two very different workloads then compete for the same bandwidth and other shared resources.
Among the problems cloud app developers and administrators face are:
- unpredictable latencies and uneven user experiences;
- disastrous cascading effects from bottlenecks, failures and cloud outages;
- poor performance and lack of isolation among large numbers of users;
- wasted capacity resulting from blindly over-provisioned infrastructure;
- unmanageable network complexity and spiraling costs from lack of visibility into resource interactions.
These problems quickly translate into end-user dissatisfaction. Users testing a new online product, such as an e-learning environment, get disappointed when the system slows to a crawl and makes them wait for the next step; after this experience, they will probably never try the product again. Consumers get frustrated, too, when the Netflix videos they're watching grind to a halt while buffering, or a video conversation is garbled by bandwidth fluctuations in the network.
Businesses are not immune either. For those running big data applications, latency can wreak havoc on the ability to convert millions of data sets into real, usable information. And platform-as-a-service offerings cannot afford any interruption in the operation of the mission-critical applications they support. All these troubles can cost companies money, credibility with their customers and valuable business opportunities.
Injecting Intelligence into Cloud Networking with ADN
Underlying cloud performance problems stem mainly from the fact that the end-to-end network cannot differentiate between types of traffic with conflicting interests. Time-sensitive voice and video traffic can be delayed when massive data sets traversing the network block their path by consuming significant bandwidth. Cloud providers need ways to prioritize this traffic in real time and to automate the detection and control of latency and bandwidth issues within the network. Performance transparency is key to user satisfaction.
This is where ADN comes into play. With ADN, the applications themselves can specify, control and adapt the networking environment to optimize delivery and performance across public and private cloud networks. One of the first building blocks of ADN is "flow intelligence": the ability of a network of resources to collect, model and analyze end-to-end measurements and monitoring data. This intelligence helps operators understand how each application actually uses the network, and helps solve the latency and bandwidth problems described above.
While various means are being employed to bring more intelligence to the cloud environment, a new approach leverages data already collected by the transport layer (Layer 4). Using network sensors that can be placed in any virtual equipment of a cloud infrastructure, this flow-mapping solution amplifies the work already done by the transport layer and its congestion-control protocol to provide an accurate, real-time view of the latency being experienced and the bandwidth being consumed by each application.
This Layer 4 amplification is a critical building block of any ADN solution and offers many beneficial properties. It is a straightforward extension of the operating system, it is very efficient in terms of data collection and processing, and it surfaces the key information needed to quickly determine how to remediate problems.
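As a concrete illustration of the kind of data a Layer 4 sensor can read "for free," the sketch below polls the Linux TCP_INFO socket option, which exposes the smoothed round-trip time and congestion window that the kernel's congestion control already maintains for every connection. This is a minimal sketch, not any vendor's implementation: it assumes a Linux host, and the struct tcp_info byte offsets used here are illustrative and should be checked against <linux/tcp.h> on the target system.

```python
import socket
import struct

# Assumed struct tcp_info layout (Linux; verify against <linux/tcp.h>):
# 8 bytes of u8 fields, then a run of u32 fields.
TCP_INFO_BUFLEN = 104        # enough bytes to reach the fields we read
OFF_SND_MSS = 16             # tcpi_snd_mss: sender maximum segment size
OFF_RTT = 68                 # tcpi_rtt: smoothed RTT, microseconds
OFF_SND_CWND = 80            # tcpi_snd_cwnd: congestion window, segments

def sample_flow(sock):
    """Read the latency and bandwidth hints the transport layer already tracks.

    Returns (rtt_us, estimated_bandwidth_bps) for a connected TCP socket.
    """
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, TCP_INFO_BUFLEN)
    (mss,) = struct.unpack_from("I", info, OFF_SND_MSS)
    (rtt_us,) = struct.unpack_from("I", info, OFF_RTT)
    (cwnd,) = struct.unpack_from("I", info, OFF_SND_CWND)
    # Rough throughput estimate from congestion-control state:
    # at most one congestion window of data per round trip.
    bw_bps = (cwnd * mss * 8) / (rtt_us / 1e6) if rtt_us else 0.0
    return rtt_us, bw_bps
```

A sensor built this way never touches the packets themselves; it reuses measurements the congestion-control machinery maintains anyway, which is what makes the approach cheap enough to run in every virtual machine.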
Flow intelligence enables cloud users, and potentially their providers, to automatically discover the footprint and communication patterns of their applications, and to correlate user-experience, system and communication metrics. By helping an application immediately locate or predict bottlenecks and latency issues, flow intelligence enables adaptive provisioning and networking that makes applications work smarter and helps cloud providers better meet the demanding needs of a wide variety of workloads.
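To make the idea of discovering an application's footprint concrete, here is a small sketch of how per-flow samples might be aggregated into a communication map with likely bottlenecks flagged. The flow-record fields (src, dst, bytes, rtt_ms) and the 50 ms threshold are hypothetical choices for illustration, not part of any real ADN product.

```python
from collections import defaultdict

def build_flow_map(flow_samples, rtt_threshold_ms=50.0):
    """Aggregate flow samples into an application footprint.

    flow_samples: iterable of dicts with hypothetical keys
    "src", "dst", "bytes" and "rtt_ms". Returns (edges, bottlenecks):
    edges maps each (src, dst) pair to its traffic totals, and
    bottlenecks lists pairs whose worst-case RTT exceeds the threshold.
    """
    edges = defaultdict(lambda: {"bytes": 0, "max_rtt_ms": 0.0})
    for s in flow_samples:
        edge = edges[(s["src"], s["dst"])]
        edge["bytes"] += s["bytes"]
        edge["max_rtt_ms"] = max(edge["max_rtt_ms"], s["rtt_ms"])
    bottlenecks = [pair for pair, e in edges.items()
                   if e["max_rtt_ms"] > rtt_threshold_ms]
    return dict(edges), bottlenecks
```

In this toy model, the aggregated edges are the application's communication pattern, and flagged pairs are candidates for adaptive provisioning, such as moving two chatty tiers closer together or reserving bandwidth between them.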
About the Author
Pascale Vicat-Blanc is founder and CEO of Lyatiss Inc., a SaaS provider of application-defined networking solutions for cloud performance automation. She has more than 20 years of R&D experience in network and cloud computing technologies, having served as research director at INRIA (the French National Institute for Research in Computer Science), CIO of a Grid 5000 data center, team leader at INRIA-Bell Labs and project manager at CERN (the European Organization for Nuclear Research). Vicat-Blanc earned an M.S. and a Ph.D. in computer science from the University of Lyon and has received multiple national awards in research and business (Legion of Honor, WomenCEO).