
SR-IOV: A Trend in Enhanced Performance

by Shashi.Sastry on 07-25-2013

In my previous blog, I briefly discussed the Network Functions Virtualization (NFV) movement and the reasons why traditional Service Providers (SPs) are adopting NFV. One of the major requirements for NFV is to achieve virtualized performance and scalability comparable to what is now considered the norm with dedicated hardware. Among the various trends that will define I/O virtualization, the two most distinct ones for consideration in NFV are:

  • Platform or server performance is increasing with the availability of multi-core CPUs.
  • 10 GbE is being widely adopted as the basic I/O.

 

In the virtualized environment, Layer 2 (L2) performance and throughput match line rate even at 10G, thanks to L2 switching performance improvements in the virtual switches. For NFV, and in this blog, the performance discussion concerns Layer 3 (L3) services such as firewalls, routing, and load balancers. The quest is to find how higher packet throughput can be achieved when a combination of such services is configured on a virtual networking device, i.e. a virtual machine running firewall, routing, load-balancing, and similar functions.

The performance degradation for L3 services in the virtual environment can be attributed to the components that sit between the Virtual Machine (VM) interface and the server’s hardware Network Interface Card (NIC).



 

Figure 1: A generic depiction of the layers between the VM interface and physical NIC.

These layers generally consist of the hypervisor’s virtual switches, system-level drivers, hypervisor OS drivers, and so on. I/O techniques such as Peripheral Component Interconnect passthrough (PCI passthrough) and Single-Root I/O Virtualization (SR-IOV) are ways to bypass these components and tie the NIC directly to the interfaces of the virtual machine for higher throughput and performance.

Let’s look at SR-IOV as an I/O technique, used with Intel’s architecture, to achieve the target 10G performance for NFV. SR-IOV is defined by a PCI-SIG specification, with Intel as one of the major contributors. The main idea is to replicate resources so that each VM gets its own memory space, interrupts, and DMA (Direct Memory Access) streams, and to connect each VM directly to the I/O device so that the main data movement can occur without hypervisor involvement.
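
As a concrete illustration, here is a minimal Python sketch of how a Linux host exposes and enables these replicated resources through the standard SR-IOV sysfs attributes (sriov_totalvfs and sriov_numvfs). The interface name "eth0" and the VF count are assumptions for illustration only, not values from any specific deployment.

# Minimal sketch: probe and enable SR-IOV VFs on a Linux host via sysfs.
# Assumes a Linux kernel with SR-IOV sysfs support and an SR-IOV-capable
# NIC whose physical function (PF) appears as "eth0" (an assumed name).
from pathlib import Path

PF_IFACE = "eth0"  # hypothetical PF interface name
DEVICE = Path(f"/sys/class/net/{PF_IFACE}/device")

def max_vfs() -> int:
    """Return the number of VFs the PF advertises, or 0 if SR-IOV is absent."""
    total = DEVICE / "sriov_totalvfs"
    return int(total.read_text()) if total.exists() else 0

def enable_vfs(count: int) -> None:
    """Ask the PF driver to create `count` VFs (requires root)."""
    (DEVICE / "sriov_numvfs").write_text(str(count))

if __name__ == "__main__":
    supported = max_vfs()
    print(f"{PF_IFACE} supports up to {supported} VFs")
    if supported:
        enable_vfs(min(4, supported))  # carve out a handful of VFs

Once the VFs are created, each one shows up as its own PCI device that can be handed to a VM.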

 

In traditional I/O architectures, a single core has to handle all the Ethernet interrupts for all incoming packets and deliver them to the different virtual machines running on the server. Two interrupts are required: one on the core servicing the NIC, to handle the incoming packet and determine which VM it belongs to, and a second on the core assigned to the VM, to copy the packet to the VM for which it is destined. This results in increased latency because the hypervisor handles every packet destined for the VMs.



 

Figure 2: SR-IOV yields higher packet throughput and lower latency

 

To achieve some of the benefits stated in Figure 2, SR-IOV introduces the idea of a Virtual Function (VF). Virtual Functions are ‘lightweight’ PCI function entities that contain the resources necessary for data movement. With Virtual Functions, SR-IOV provides a mechanism by which a single Ethernet port can be configured to appear as multiple separate physical devices, each with its own configuration space. The Virtual Machine Manager (VMM, on the hypervisor) assigns one or more VFs to a VM by mapping the VF’s actual configuration space into the VM’s configuration space.
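
To make the VF-to-VM mapping concrete, the following Python sketch discovers the PCI addresses of the VFs a PF has carved out (via the virtfn* symlinks Linux creates) and emits a libvirt-style <hostdev> stanza that a KVM/libvirt VMM could use to hand one VF to a guest. The interface name and the choice of libvirt are assumptions for illustration; other hypervisors expose equivalent mechanisms.

# Minimal sketch: list VF PCI addresses and render a libvirt <hostdev>
# element per VF, under the assumption of a KVM/libvirt host.
import os
from pathlib import Path

PF_IFACE = "eth0"  # hypothetical PF interface name
DEVICE = Path(f"/sys/class/net/{PF_IFACE}/device")

def vf_pci_addresses():
    """Each VF appears as a virtfn<N> symlink pointing at its PCI address."""
    return [os.path.basename(os.readlink(link))  # e.g. "0000:03:10.0"
            for link in sorted(DEVICE.glob("virtfn*"))]

def hostdev_xml(pci_addr: str) -> str:
    """Render a libvirt <hostdev> element for the given VF PCI address."""
    domain, bus, rest = pci_addr.split(":")
    slot, function = rest.split(".")
    return (
        "<hostdev mode='subsystem' type='pci' managed='yes'>\n"
        f"  <source><address domain='0x{domain}' bus='0x{bus}' "
        f"slot='0x{slot}' function='0x{function}'/></source>\n"
        "</hostdev>"
    )

if __name__ == "__main__":
    for addr in vf_pci_addresses():
        print(hostdev_xml(addr))

Attaching the generated stanza to a guest definition is what actually maps the VF’s configuration space into the VM.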


 

Figure 3: A physical NIC is carved into multiple VFs, which are assigned to guest VMs (Intel).

 

When a packet comes in, it is placed into a specific VF pool based on its MAC address or VLAN tag. This allows packets to be DMA-transferred directly to and from the VM, bypassing the hypervisor and the software switch in the VMM. Because the hypervisor is not involved in moving the packet between the hardware interface and the VM, the bottlenecks in that path are removed.
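
As a rough sketch of how that MAC/VLAN-based steering is set up from the host side, the Python snippet below pins a MAC address and VLAN tag to a VF using the standard iproute2 "ip link" command. The interface name, VF index, MAC address, and VLAN ID are illustrative assumptions.

# Minimal sketch: assign a MAC and VLAN to a VF so the NIC can demultiplex
# inbound packets into that VF's queue pool (requires root).
import subprocess

PF_IFACE = "eth0"                # hypothetical PF interface name
VF_INDEX = 0                     # first VF carved out of the PF
VF_MAC = "52:54:00:12:34:56"     # illustrative MAC address
VF_VLAN = 100                    # illustrative VLAN ID

def pin_vf(iface: str, vf: int, mac: str, vlan: int) -> None:
    """Run: ip link set dev <iface> vf <vf> mac <mac> vlan <vlan>."""
    subprocess.run(
        ["ip", "link", "set", "dev", iface,
         "vf", str(vf), "mac", mac, "vlan", str(vlan)],
        check=True,
    )

if __name__ == "__main__":
    pin_vf(PF_IFACE, VF_INDEX, VF_MAC, VF_VLAN)

With the MAC and VLAN pinned, the NIC itself decides which VF pool an inbound frame belongs to, so no hypervisor lookup is needed.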

 

SR-IOV is supported on most hypervisors, such as Xen, KVM, Hyper-V 2012, and ESXi 5.1. Newer servers and 10G NICs with SR-IOV support are required (and virtualization is a must for NFV). While SR-IOV is one way to achieve high packet throughput, actual deployments will give rise to discussions about hardware support (hypervisor, NICs), traffic separation, and so on. Another issue is that even though the physical NIC can be carved into multiple VFs (the number depends on the NIC and the hypervisor), there are practical limits on how many VMs can share the NIC. Techniques for VM mobility, where a VM moves from a VF on one server to a VF on another, have yet to be explored. The open question is whether customers will require VM mobility, or whether we can assume that these VMs are immobile and tied to the SR-IOV server they are initially deployed on. While SR-IOV promises to deliver high packet throughput, the exact nature of these improvements for L3 services will ultimately depend on a vendor’s software networking architecture or vendor-enforced license throttling.

 

Like NFV, SR-IOV is a new and exciting paradigm shift. In the months to come, there will be lots of activity to define the solutions and use cases that lend themselves to the deployment of virtual networking software. As NFV gathers momentum, it will be interesting to learn what the DevOps environment will look like; perhaps that will be the topic of my next blog.