Mobile data traffic is estimated to grow more than 10 times from 2015 to 2021. The deluge of data and new service requirements will emerge not only from smartphones, but also from Internet of Things (IoT) devices. For telco operators, the need to deliver newer and faster services is a mandate for survival. The network functions virtualization (NFV) infrastructure that telco operators are building in their data centers today must meet the requirements of this imminent deluge, which will only intensify with the adoption of 5G networks. Delivering the largest number of innovative services at the lowest NFV infrastructure cost is a fundamental challenge facing all telco operators today.
Figure 1 – Mobile data traffic growth driven primarily by video, gaming, data, and IoT applications
An Efficient & Agile Server Infrastructure is Key
The server is the core building block of the NFV infrastructure. It hosts the applications – web, business logic, networking, and security – in virtual machines (VMs). These applications are the source of the revenue-generating services that operators deliver to mobile and IoT devices and customers. More than 60 percent of monthly data center costs are attributed to servers and the associated power and cooling. As such, improving the efficiency of each server in the NFV infrastructure is the first area telco operators need to address. Next comes the operational efficiency of managing the server farm with cloud orchestration tools such as OpenStack. Let’s look at these two aspects in turn.
Figure 2 – Server monthly costs based on 3-year server and 10-year infrastructure amortization. Source: Amazon Web Services
What Impacts Server Efficiency
To execute and deliver the necessary services, VMs and the applications in them need access to adequate server resources: CPU cores, memory, storage, security policy rules, networking bandwidth, and analytics processing, among others. Depending on the applications they host, VMs in an NFV infrastructure have different profiles based on their resource needs – some VMs are compute-intensive, others are memory-intensive, and some are I/O-intensive.
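In an OpenStack-managed NFV infrastructure, these VM profiles are typically expressed as flavors. The sketch below is illustrative only – the flavor names, sizes, and core counts are assumptions, not figures from this document – but it shows how compute-, memory-, and I/O-intensive profiles might be declared, with extra specs pinning the I/O-heavy VM to dedicated cores on a single NUMA node near the NIC.

```shell
# Hypothetical flavors for the three VM profiles (names and sizes are
# illustrative assumptions).
openstack flavor create --vcpus 16 --ram 16384 --disk 40 compute-intensive
openstack flavor create --vcpus 4  --ram 65536 --disk 40 memory-intensive

# For an I/O-intensive profile, dedicated (pinned) cores on one NUMA
# node avoid cross-node traffic between the VM and the NIC.
openstack flavor create --vcpus 8 --ram 16384 --disk 40 \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1 \
  io-intensive
```

`hw:cpu_policy` and `hw:numa_nodes` are standard Nova flavor extra specs; the scheduler uses them when placing VMs of each profile.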
VMs all need security and networking services that are delivered using a virtual switch (e.g., Open vSwitch or OVS) or virtual router (e.g., Contrail vRouter), managed using OpenStack networking. They need to report real-time analytics, and in many cases, the virtual switch or virtual router collects analytics data on behalf of the VMs and reports it to a centralized analytics-processing engine.
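One concrete way a virtual switch can export analytics on behalf of its VMs is Open vSwitch's built-in sFlow support, which samples traffic on a bridge and streams records to a central collector. This is a sketch; the bridge name, agent interface, collector address, and sampling parameters below are illustrative assumptions.

```shell
# Point OVS sFlow at a central analytics collector (address illustrative).
ovs-vsctl -- --id=@sf create sflow agent=eth0 \
    target="192.0.2.10:6343" header=128 sampling=64 polling=10 \
  -- set bridge br-int sflow=@sf
```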
When traditional server networking technologies are used – such as 10/25/40Gb Ethernet network interface cards (NICs) – delivering the needed security, networking, and analytics services with a software virtual switch or virtual router consumes as many as 12 CPU cores, starving the VMs and their applications of compute. I/O throughput to VMs is constrained to less than 6Gb/s for Internet Mix (IMIX) traffic. Traffic from mobile and IoT devices translates into connections and connection-setup load by the time it reaches the data center servers; in these scenarios, connection setup rates are constrained to about 5,000 per second, with as many as 12 additional CPU cores consumed processing connection-setup logic in server software. As a result, compute-intensive VMs cannot get the CPU cores they need, and network-intensive VMs cannot get the I/O throughput they need to perform at full potential. The result is poor server efficiency, with output per server reduced to as little as one-sixth of what is expected.
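A back-of-the-envelope calculation makes the core drain concrete. The 12-core figures come from the text above; the 36-core server size is an assumption for illustration, not a number from this document.

```python
# Sketch of the efficiency loss described above, under an assumed
# 36-core dual-socket COTS server.
TOTAL_CORES = 36        # assumption: typical dual-socket server
VSWITCH_CORES = 12      # consumed by software vswitch/vrouter (from text)
CONN_SETUP_CORES = 12   # consumed by connection-setup logic (from text)

cores_for_vms = TOTAL_CORES - VSWITCH_CORES - CONN_SETUP_CORES
core_fraction = cores_for_vms / TOTAL_CORES

print(f"Cores left for VMs: {cores_for_vms} ({core_fraction:.0%} of the server)")
```

Under these assumptions, two-thirds of the server's cores are doing infrastructure work rather than hosting revenue-generating VMs, before the I/O and connection-rate ceilings are even considered.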
Server Infrastructure Operational Efficiency
Managing a large pool of identical servers is much easier than managing silos of different configurations. An OpenStack-managed homogeneous server farm is ideal for the NFV infrastructure – or any data center infrastructure, for that matter. With traditional NICs (sometimes called commodity NICs) in servers, this goal of a homogeneous data center server farm can be achieved, but the efficiency of each server suffers. When traditional NICs are used with software-based virtual switches or virtual routers, the needs of specific VM profiles can be met only with dedicated network configurations.
For example, a VM that requires low I/O throughput can be serviced by a configuration where the software-based virtual switch or virtual router runs in the kernel, with four to six CPU cores allocated to that processing. To deliver higher I/O throughput, the virtual switch or virtual router can instead run in user space using the Data Plane Development Kit (DPDK), with eight to ten CPU cores allocated. Finally, the highest I/O throughput can be delivered to certain VM profiles using technologies such as single-root I/O virtualization (SR-IOV). The good news is that servers can be configured to meet the needs of each VM profile. The bad news is that the result is silos of differently configured servers: VMs cannot be placed or moved freely, which negatively impacts the operational efficiency of the NFV server infrastructure.
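The three silos can be sketched as configuration fragments. These are illustrative only – bridge names, the PMD core mask, the network name, and the port name are assumptions – but they show why each server ends up committed to one datapath style.

```shell
# Silo 1: kernel-datapath OVS (low throughput, kernel handles forwarding).
ovs-vsctl add-br br-int

# Silo 2: userspace OVS with DPDK; poll-mode driver threads pinned to
# dedicated cores (0x3fc = 8 cores, an illustrative mask).
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3fc
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev

# Silo 3: SR-IOV; the VM attaches to a virtual function directly,
# bypassing the virtual switch entirely.
openstack port create --network net0 --vnic-type direct sriov-port0
```

Because a VM attached via SR-IOV bypasses the vswitch, it cannot simply be live-migrated to a kernel-OVS or DPDK host – which is exactly the placement constraint described above.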
Figure 3 – Network configuration silos resulting in significantly reduced operational efficiencies
It Is Time for SmartNICs
Whether it’s the efficiency of each server or the operational efficiency of the entire data center and NFV infrastructure, SmartNICs can deliver solutions to these issues. SmartNICs are programmable network interface cards optimized for commercial off-the-shelf (COTS) server and open source-based deployments, delivering complete OpenStack-managed solutions using the latest versions of Open vSwitch (OVS) and Contrail vRouter.
In the area of individual server efficiency, SmartNICs can boost output per server by up to 6X, delivering the following benefits:
- 10+ CPU core savings
- 5X+ I/O throughput to VMs while keeping intact all of the rich networking services available with OVS and Contrail vRouter
- 20X+ higher connection setup rate
- 10X+ improved price/performance for real-time analytics
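The headline 6X figure above translates directly into cost per unit of service output. The arithmetic below is illustrative only: the $100 baseline is an arbitrary placeholder, and the incremental cost of the SmartNIC hardware itself is ignored.

```python
# Illustrative cost-per-output arithmetic using the text's 6X figure.
baseline_cost_per_server = 100.0  # placeholder monthly cost units
baseline_output = 1.0             # normalized per-server output
smartnic_output = 6.0             # up to 6X output (from the text)

cost_per_output_before = baseline_cost_per_server / baseline_output
cost_per_output_after = baseline_cost_per_server / smartnic_output

print(cost_per_output_before, cost_per_output_after)
```

Holding server cost roughly constant, a 6X gain in output per server cuts cost per unit of output to one-sixth of the baseline.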
In addition, SmartNICs enable a homogeneous OpenStack-managed COTS server infrastructure, with seamless and quick onboarding of customer VMs and third-party virtual network functions (VNFs), while maintaining full VM mobility. As a result, the use of SmartNICs significantly improves server infrastructure operational efficiency across the entire data center.