Analyzing IP traffic on mobile networks to manage service quality, user experience and security has become a real challenge for mobile operators. The reason is simple: the sheer volume of traffic now being generated by consumers on 4G/LTE networks.
It turns out that there's Big Data and then there's Too Big Data. Mobile operators are trying to figure out how to scale their analytics tools without losing clarity on, for example, how to improve data throughput in and around a ballpark during a baseball game, or why you can't maintain a voice call while driving through the mountains.
For all the myriad analytics tools on the market today, the original assumption was that they would examine all the network traffic, pick out the useful information they needed to create metrics, KPIs and reports, and with them make the changes needed to improve quality, performance and security.
This worked well when the network was generating 1Gbps or 10Gbps of traffic at each location, but today's networks generate petabytes of traffic daily. You might imagine that selling analytics tools would be a very lucrative business right now, but the reality is that mobile operators can ill afford to keep buying these tools ad nauseam, because the cost per bit of analyzing network traffic is not falling nearly as fast as the cost of bandwidth.
Worse, the tools operators already have are losing their real-time analytics capabilities: the back-end systems that process this data are being overwhelmed by the data tsunami, so it can take hours or even days to discover that an application or service is experiencing problems.
In other words, it's time to reexamine the network architectures that deliver IP packets to these analytics tools and figure out whether we can make them more efficient.
In most mobile networks today, operators place "taps" at key locations in the network. Each tap takes a copy of every packet on the optical fiber and sends it either directly to an analytics tool or to a specialized Ethernet switch that replicates the packet for as many analytics tools as need it.
Now, each analytics tool is rather specialized: one tool might look only at voice packets, another at video traffic, and another at traffic relevant to security. What's interesting is that in most cases these tools throw away 90% or more of the traffic they receive just to get to the bits they can actually process. Surely it would be far more efficient to feed each tool only the IP traffic it can actually process? That way, all of a tool's bandwidth and processing power would go toward its specialized job rather than toward dropping unwanted packets.
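To make the idea concrete, here is a minimal sketch (not any vendor's implementation) of pre-filtering: each packet is classified by traffic type and delivered only to the tools subscribed to that type, so unwanted packets are dropped before they ever consume tool bandwidth. The port-to-class mapping and tool names are hypothetical.

```python
# Hypothetical traffic classes keyed by destination port.
PORT_CLASSES = {
    5060: "voice",  # SIP signaling
    554:  "video",  # RTSP streaming
}

def classify(packet):
    """Map a packet (a dict with a 'dst_port' field) to a traffic class."""
    return PORT_CLASSES.get(packet["dst_port"], "other")

def broker(packets, subscriptions):
    """Deliver each packet only to the tools subscribed to its class.

    subscriptions: {tool_name: traffic_class}
    Returns {tool_name: [packets]}. Packets matching no subscription
    are dropped here, in the broker, instead of at the tool.
    """
    deliveries = {tool: [] for tool in subscriptions}
    for pkt in packets:
        cls = classify(pkt)
        for tool, wanted in subscriptions.items():
            if cls == wanted:
                deliveries[tool].append(pkt)
    return deliveries
```

With this in place, a voice tool subscribed as `{"voice_tool": "voice"}` never sees video or web packets at all; the filtering cost is paid once, in the broker, rather than once per tool.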
Then there is encrypted traffic: why make analytics tools receive data, only to drop it once they realize it can't be processed? And what about packet sizes? Many analytics tools analyze only the first 100 bytes or so of each packet, so why feed them a full 1,500-byte packet?
What's needed is a specialized type of Ethernet switch that does more than copy packets: one that can strip, groom and steer IP traffic to the right analytics tool automatically. This need has created a new market segment called the "Packet Broker", and the job of a packet broker is to take in terabits of traffic on 100Gbps, 40Gbps and 10Gbps links and feed the right data to the right tool on 1Gbps or 10Gbps links.
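One common way to spread a fast ingress link across several slower tool ports is to hash each flow's 5-tuple, so every packet of a given flow consistently lands on the same tool instance. A sketch of that idea, with hypothetical port names (the hash function and port count are illustrative assumptions, not a description of any specific product):

```python
import zlib

# Hypothetical 10Gbps egress ports feeding identical tool instances.
TOOL_PORTS = ["tool-1", "tool-2", "tool-3"]

def pick_egress(src_ip, dst_ip, src_port, dst_port, proto):
    """Hash the flow 5-tuple to choose an egress port, so all packets
    of one flow go to the same analytics tool instance."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    return TOOL_PORTS[zlib.crc32(key) % len(TOOL_PORTS)]
```

Because the mapping is deterministic, no per-flow state is needed in the broker, yet each tool sees complete flows rather than fragments of many.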
Brocade specializes in building Packet Brokers for mobile operators, and unlike offerings from vendors who designed for enterprise use cases, the Brocade Packet Broker is built to scale to the very high workloads and bandwidths of growing LTE networks.
Indeed, Brocade's customers are seeing a 50% reduction in the cost of scaling up analytics tools and servers after introducing our Packet Broker technology.
In the next installment, we'll look at how Packet Brokers can keep a specific user's traffic consistently flowing to the same analytics tool, regardless of where that user roams.