Big Data for Telcos: cutting OPEX, improving service quality – and deploying services faster

Big Data has won acclaim – and rightfully so – for helping marketers improve the efficiency of their campaigns, assisting doctors in making diagnoses, fighting fraud, detecting hacker attacks, and even predicting financial markets. While most of the attention has been on consumer applications, Big Data analytics and techniques can be applied to telecommunications and carrier networks as well. In this article we shall discuss how Lifecycle Service Orchestration techniques can be used to cut the operational costs of carrier networks, improve the quality of delivered services, and enable more rapid provisioning of new network services.

First, let us review where the Big Data on a network comes from. Depending on the network architecture and how it is instrumented, raw data can be extracted at a very granular level – the origin and destination of individual packets, the route taken, and the elapsed time to traverse that route. At a higher level, data can include the customers, the services, the type of information being carried, and the nature of the connection for each link (metro fiber link, cellular data, enterprise LAN). Not all real-time data may be available for every route and link, but the more information that is available, the more accurate the picture that emerges of the network, its capacity and utilization, and the performance of its network services.
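To make that concrete, here is a minimal sketch of what one raw measurement record might look like once a collector has normalized it. The field names, values and the Python representation itself are illustrative assumptions, not a description of any particular vendor's schema.

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class FlowRecord:
    """One normalized measurement record; all field names are hypothetical."""
    timestamp: datetime        # when the measurement was taken
    src: str                   # origin of the packet or flow
    dst: str                   # destination
    route: List[str]           # ordered list of hops traversed
    latency_ms: float          # elapsed time to traverse the route
    link_type: str             # e.g. "metro_fiber", "cellular", "enterprise_lan"
    customer_id: Optional[str] = None   # higher-level context, if known
    service_id: Optional[str] = None

# Example record as it might arrive from a network probe (invented values).
sample = FlowRecord(
    timestamp=datetime(2015, 6, 1, 12, 0, 0),
    src="10.0.0.5",
    dst="10.0.8.20",
    route=["edge-1", "agg-3", "core-7", "edge-9"],
    latency_ms=14.2,
    link_type="metro_fiber",
    customer_id="cust-042",
    service_id="evc-1001",
)
print(sample)

Each record on its own says little; it is the aggregation of millions of such records that makes the analytics described below possible.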

What can be done with this data? The place to start is by understanding the network itself, encompassing both the physical topology of the network and the services running on it. While network management tools have been able to discover and map a network’s static topology for many years, truly understanding the services is much trickier. In part this is because it requires analyzing all the raw data, and in part because the service information model is constantly changing in real time as users change what they are doing, IP addresses change their edge access points (think about mobile users), load balancers adapt to dynamic demand, and network routes are updated to improve performance.

That’s where modern Big Data analytics techniques come in. First, one needs to make sense of the raw data from a myriad of operations systems, and from the network itself, to build a real-time, real-world service information model of the network. This critical step requires machine learning techniques to determine how network elements are interconnected and how services are delivered over that infrastructure. Second, the accurate, up-to-date service information model then forms the basis for sophisticated predictive analytics based on measurements received from the network in real time. This helps operators understand where performance is degrading on their network, which resources are overutilized, and where future traffic problems may occur, so that plans can be drawn up to build out or reallocate resources.
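As a rough illustration of the predictive step, the sketch below fits a simple linear trend to hourly utilization samples for one link and projects when the link would cross a capacity threshold. The 80% threshold, the sample data and the trend fit are assumptions made purely for illustration; production analytics would use far richer models than a straight line.

def fit_linear_trend(samples):
    """Least-squares slope and intercept for (hour, utilization) pairs."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    var = sum((x - mean_x) ** 2 for x, _ in samples)
    slope = cov / var if var else 0.0
    return slope, mean_y - slope * mean_x

def hours_until_saturation(samples, threshold=0.8):
    """Project when utilization crosses the threshold; None if it will not."""
    slope, intercept = fit_linear_trend(samples)
    if slope <= 0:
        return None                  # flat or falling trend: no predicted problem
    last_hour = samples[-1][0]
    crossing = (threshold - intercept) / slope
    return max(0.0, crossing - last_hour)

# Hourly utilization (fraction of capacity) for one core link, trending upward.
history = [(hour, 0.55 + 0.01 * hour) for hour in range(24)]
eta = hours_until_saturation(history)
if eta is None:
    print("no saturation predicted")
else:
    print(f"link predicted to reach 80% utilization in {eta:.1f} hours")

The point is the workflow, not the math: measurements feed a model, the model projects forward, and the projection tells the operator where to act before customers notice anything.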

To make Big Data analytics results consumable, advanced graphical user interfaces (GUIs) are required. Network operations teams can then view, on a single pane of glass, an accurate representation of their network and services, how their network infrastructure is being used, and capacity, utilization and performance analytics. Completing the full cycle, Lifecycle Service Orchestration software automates the actions required to keep service quality at the highest possible level.

In other words, Big Data and predictive analytics combine to make large carrier networks more resilient. With large networks scaling to millions of network paths and terabytes of network measurements per day, only Big Data techniques can provide the proactive guidance that operators require to anticipate and satisfy future customer demand.

While there are a number of areas where Big Data analytics and machine learning can benefit Lifecycle Service Orchestration, let’s return to the three points listed above: cutting the operational costs of carrier networks, improving the quality of delivered services, and enabling more rapid provisioning of new network services – whether those services are carried over physical network elements or virtual network functions (VNFs).

Reducing operational costs: OPEX can be reduced by avoiding problems before they occur. Providing network operations teams with a service information model layered on top of the physical topology gives them useful information about performance, capacity and utilization, enabling a more proactive allocation of resources – not only potentially reducing emergency truck rolls, but also allowing the allocation of less expensive (or more efficient) resources.
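A toy example of that layering: the sketch below maps a handful of hypothetical services onto a hypothetical physical topology, totals the committed bandwidth per link, and flags the links that deserve proactive attention. The element names, services, capacities and the 70% review threshold are all invented for illustration.

from collections import defaultdict

# Physical topology: links between network elements with nominal capacity (Gbps).
links = {
    ("edge-1", "agg-3"): 10,
    ("agg-3", "core-7"): 40,
    ("core-7", "edge-9"): 10,
}

# Service information model: each service is pinned to an ordered path
# and carries a committed information rate (Gbps).
services = {
    "evc-1001": {"path": ["edge-1", "agg-3", "core-7", "edge-9"], "cir": 2.0},
    "evc-1002": {"path": ["edge-1", "agg-3", "core-7"], "cir": 6.0},
    "evc-1003": {"path": ["agg-3", "core-7", "edge-9"], "cir": 4.0},
}

# Aggregate committed bandwidth per physical link to spot where proactive
# re-allocation (rather than an emergency truck roll) will be needed.
load = defaultdict(float)
for svc in services.values():
    hops = svc["path"]
    for a, b in zip(hops, hops[1:]):
        load[(a, b)] += svc["cir"]

for link, capacity in links.items():
    used = load[link]
    flag = "REVIEW" if used / capacity > 0.7 else "ok"
    print(f"{link}: {used:.1f}/{capacity} Gbps committed [{flag}]")

Even this trivial version shows why the service layer matters: the physical topology alone cannot say which links are quietly approaching their commitments.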

Improving service quality: The service information model understands not only the routes, but also the nature of the services. For example, knowing that VoIP or video services must stay within specific performance parameters, the Big Data algorithms can determine which problems are likely to occur and when. Proactive measures can then be taken to ensure that quality targets are met, thereby satisfying or exceeding customer requirements for that traffic.
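A quality check of this kind might compare each measurement against per-class objectives, as in the minimal sketch below. The threshold values are illustrative assumptions, not the exact figures of any standard or contractual SLA.

# Illustrative per-class performance objectives (assumed values, not a standard).
SLA = {
    "voip":  {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1.0},
    "video": {"latency_ms": 400, "jitter_ms": 50, "loss_pct": 0.5},
    "data":  {"latency_ms": 1000, "jitter_ms": None, "loss_pct": 2.0},
}

def violations(service_class, measured):
    """Return the metrics that exceed the objective for this service class."""
    objectives = SLA[service_class]
    out = []
    for metric, limit in objectives.items():
        if limit is not None and measured.get(metric, 0) > limit:
            out.append(metric)
    return out

# One measurement for a VoIP service: latency and loss are fine, jitter is not.
measurement = {"latency_ms": 120, "jitter_ms": 42, "loss_pct": 0.2}
print(violations("voip", measurement))   # ['jitter_ms'] -> trigger proactive action

In practice the interesting part is what happens next: the orchestration layer can reroute, reprioritize or reallocate resources before the jitter becomes an audible problem for the customer.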

More rapid services provisioning: When a request comes in to add or upgrade connectivity between locations in a massive, heavily loaded carrier network, it can be difficult to determine whether the capacity is already present, or whether the network must be upgraded to accommodate the new customer requirements. Furthermore, the deployment steps can be painstakingly manual. Thanks to Lifecycle Service Orchestration using Big Data, new customer services can be implemented in a day – instead of in weeks.
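The capacity check behind that speed-up can be sketched as follows: for a requested bandwidth between two sites, verify that every hop on a candidate path has enough headroom, and otherwise report which link needs a build-out so the orchestration workflow can plan it. The topology, headroom figures and function below are invented for illustration.

# Spare capacity (Gbps) on each directed link, derived from the service model
# and live measurements; the figures here are assumptions for the example.
headroom = {
    ("site-A", "agg-1"): 3.0,
    ("agg-1", "core-1"): 12.0,
    ("core-1", "agg-2"): 1.0,
    ("agg-2", "site-B"): 5.0,
}

def check_request(path, requested_gbps):
    """Admit the service if every hop has headroom, else list the bottlenecks."""
    bottlenecks = [link for link in zip(path, path[1:])
                   if headroom[link] < requested_gbps]
    if not bottlenecks:
        return "provision now"
    return f"upgrade needed on: {bottlenecks}"

candidate = ["site-A", "agg-1", "core-1", "agg-2", "site-B"]
print(check_request(candidate, 2.0))   # the core-1 to agg-2 hop lacks headroom

When the answer is "provision now", the remaining work is configuration that orchestration software can push automatically – which is how a weeks-long process collapses into a day.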

Given the vast scope, scale and complexity of today’s service provider networks, encompassing IP VPNs, MPLS, Carrier Ethernet, Ethernet over SONET, and mobile technologies, Lifecycle Service Orchestration is becoming a key enabler of network agility. Carriers must be able to respond quickly to customer demand and to changing network utilization – Lifecycle Service Orchestration, enhanced by Big Data analytics, is the “secret sauce” that will enable carriers to become increasingly competitive and responsive. The market is growing fast, and new technologies like SDN and NFV generate even more network Big Data. Being able to understand the network in real time, and respond in real time, is essential.

Tools like CENX’s Cortx Service Orchestrator demonstrate the benefits of Big Data analytics and machine learning to network operators, helping to cut OPEX, improve service delivery and enable the rapid deployment of new services. By using real-time data to build and maintain a service information model, and then layering predictive analytics and GUI-based search capabilities on top of that model, the service provider can, for the first time, truly understand the entire network – and where that network is going.

In many ways, the way Big Data is used behind the scenes by network operators is similar to the Big Data applications we hear about from retailers, from healthcare, from scientists, and from the finance industry. Big Data analytics connects millions – or billions – of tiny bits of information to draw conclusions, make predictions, solve problems, create opportunities, and improve customer service. It’s at NASA, it’s on Wall Street, and now it’s in the network.