I/O virtualization video: SR-IOV, MR-IOV, NICs and more
Date: Sep 26, 2012
In his 2012 Storage Decisions Chicago presentation titled "Innovations in storage networking: Next-gen storage networks for next-gen data centers," Dennis Martin, founder and president of Arvada, Colo.-based Demartek LLC, discusses I/O virtualization (IOV), what components are required to make it work and how the technology can benefit IT shops. View the video above and read Martin's remarks below to learn more about I/O virtualization using the Single Root IOV (SR-IOV) and Multi Root IOV (MR-IOV) specifications, and how these two specs can be used with network interface cards (NICs), RAID controllers and Fibre Channel host bus adapters (FC HBAs) to achieve a more efficient data center.
I/O virtualization isn't the same as server virtualization, but the two are highly complementary. The idea of virtualization -- take a virtual server, for example -- is to separate the physical from the logical. The nice thing about virtual machines (VMs) is that you can spin them up easily and you don't have to buy new hardware for each one; you can just move them onto another server.
What you're doing is taking one piece of hardware and representing it as multiple pieces of hardware. It looks like a server, but it's really just software. You can combine things, split things … that’s the idea with virtualization in general.
Think about that on the I/O side. What if you could do that with NICs, RAID controllers, FC HBAs or anything else that mounts on a slot in a server? What if you could virtualize that, split it out, combine it or rearrange it?
For example, you may be doing NIC teaming, which makes the network connection look like a 2-gig pipe, 4-gig pipe or whatever "team" you want to set up.
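NIC teaming of this kind is usually configured in the OS or hypervisor rather than on the card itself. As a rough sketch of what Martin describes, here is how two ports could be bonded into a single logical link on Linux using the kernel bonding driver; the interface names eth0 and eth1 are placeholders, and the commands require root:

```shell
# Create a bond device in 802.3ad (LACP) mode -- two 1-gig NICs
# then present themselves as a single logical ~2-gig pipe.
ip link add bond0 type bond mode 802.3ad

# Member interfaces must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring up the aggregated link.
ip link set bond0 up
```

The switch ports on the other end of the cables would need a matching link-aggregation (LACP) configuration for this mode to work.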
Let's take it to the next level. This specific case is called SR-IOV. In this diagram we have a box, and inside the box I have three VMs, some kind of hypervisor -- pick your favorite vendor -- and some kind of adapter. Notice I don't say NIC or HBA; it’s any kind of adapter that fits in a PCI Express (PCIe) bus.
In a world where you don't have SR-IOV and you do this, the VMs have to share the adapter. What organizes the sharing? The hypervisor, right? And does the hypervisor ever get in the way? Yes. So what you'd like to do is offload all that management to the card, which is what SR-IOV is all about. An SR-IOV-capable card has the intelligence to manage the virtual connections so the hypervisor doesn't have to, which means you get a few CPU cycles back for other things because the work is now offloaded to the card. You might be able to throw another VM on there or put a bigger app on there.
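On Linux, an SR-IOV-capable adapter exposes this through a sysfs knob: the driver can be asked to carve the physical function into several virtual functions (VFs), each of which can be passed straight through to a guest. A minimal sketch, assuming a capable card and driver behind the hypothetical interface name eth0 (root required):

```shell
# Ask the driver to create four virtual functions (VFs); each VF
# can be handed to a VM directly, bypassing the hypervisor's
# software switch.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The VFs appear as additional PCIe functions on the same device.
lspci | grep -i "Virtual Function"
```

Writing 0 to the same file removes the VFs again; the maximum count the card supports is reported in the neighboring sriov_totalvfs file.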
That's the idea here. Think about this with NICs, FC HBAs, disk controllers and RAID controllers. Think about it even for expensive things like PCIe SSDs. What if you could share them with all the guests, but you didn't have to have the hypervisor do that for you; you could just do it in the hardware?
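Whether a particular adapter can do this at all depends on the card advertising the SR-IOV extended capability on the PCIe bus. One quick way to check on Linux is with lspci; the device address 0000:03:00.0 below is a placeholder for the card in question:

```shell
# Look for the SR-IOV extended capability on a specific device;
# lspci reports it as "Single Root I/O Virtualization (SR-IOV)".
lspci -s 0000:03:00.0 -vvv | grep -i "SR-IOV"
```

If the grep prints nothing, the card (or its firmware) does not expose SR-IOV and the sharing has to stay in the hypervisor.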
Now let's take it one more level and break it out. This is a very similar picture to the SR-IOV example, but instead of VMs running inside of one physical server, you have three physical servers that can have VMs running within each of them. In this case, one adapter card can be shared by the three physical servers by placing the adapter card into an external PCIe chassis. This is an MR-IOV scenario. So if your single physical server wasn't filling up the 8-gig FC pipe, maybe two or three servers would.
Think about another server that only had slots in it -- no CPU, just slots -- and you put your 10-gig NICs, 16-gig FC HBAs or whatever you want into that, and then you put a PCIe card in your server, and then you have a cable just running into this chassis and then it finds the card out there.
Now you can share the cards. Just think about an FC SAN. When you have a SAN, you have all the storage out there, and no one server owns a particular piece of storage; you just carve it up as a pool and say "These LUNs go to that guy, those LUNs go to that guy." Same idea, but now it's with the cards. You don't really own the FC HBA, the 10-gig NIC or the RAID controller. It's not owned by the server, but it's used by the server.