During the VMware keynote session today there was a brief discussion of the upcoming concept of VMware vVOLs. Today, a virtual machine sits on a VMFS created on a storage LUN, or on an NFS share. An individual virtual machine consists of many files and, in the case of VMFS-based VMs, sits on a piece of storage that is potentially shared with other virtual machines. For a couple of reasons that's not a great thing. First, if the array is used to replicate the VMFS (either locally or remotely) then all of the VMs within that VMFS get replicated, which can be wasteful and overly complex to manage. Second, from a storage array perspective, the LUN is the lowest level of granularity for performance and QoS; since the array has no way to determine the individual contents of the LUN, it can't prioritise workload by VM.
vVOLs are the answer to the shared-VMFS problem. The vVOL (a bit like a Hyper-V VHD) becomes the single container for the entire contents of a VM, including all of its associated metadata. This finer level of granularity means that a vVOL-aware storage array can replicate just that virtual machine and give it a specific level of performance.
I have no insight into how VMware and the storage vendors intend to implement vVOLs, but I can see two options.
NFS – On NAS shares, a vVOL could simply be a single file with metadata to identify it as a vVOL. The storage system manages this file (aware that it represents a VM), providing all the features of prioritised access, replication and so on. The internal format of the file would determine the VM contents, presumably with a header to store metadata and the remainder consisting of pages of data representing FBA blocks of the logical disk, much as a VHD works today. As the VM grows, the file grows.
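To make the idea concrete, here is a minimal sketch of what such a file layout might look like: a small identifying header followed by fixed-size data pages mapping to FBA blocks. Everything here (the magic number, field layout and page size) is a hypothetical illustration, not any actual vVOL or VHD format.

```python
import struct

# Hypothetical on-disk layout for an NFS-backed vVOL, modelled loosely on
# the VHD approach described above: a fixed header identifying the file
# as a vVOL, followed by data pages holding FBA blocks of the logical disk.
PAGE_SIZE = 4096            # assumed page size (one FBA block run per page)
MAGIC = b"VVOL"             # hypothetical magic number marking the file type
HEADER_FMT = ">4sIQ"        # magic, format version, logical disk size (bytes)
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def make_header(disk_size):
    """Serialise the hypothetical vVOL header."""
    return struct.pack(HEADER_FMT, MAGIC, 1, disk_size)

def parse_header(blob):
    """Return (version, disk_size), or raise if the magic doesn't match."""
    magic, version, disk_size = struct.unpack_from(HEADER_FMT, blob)
    if magic != MAGIC:
        raise ValueError("not a vVOL file")
    return version, disk_size

def page_offset(block_number):
    """File offset of the data page holding a given logical block."""
    return HEADER_SIZE + block_number * PAGE_SIZE

hdr = make_header(20 * 1024**3)        # a 20 GiB logical disk
version, size = parse_header(hdr)
```

Because only pages that have actually been written need to exist in the file, the container can start small and grow with the VM, exactly as described above.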
Block – On block storage arrays, a vVOL could simply be a LUN. Today, LUNs can be thin provisioned on most storage arrays, so a vVOL could be created as a thin provisioned LUN within a thin pool, at the maximum size permitted by the underlying storage. This allows the vVOL to grow as necessary, and QoS can easily be applied to an individual LUN. However, block-based storage has more issues. First, there is usually a limit to the number of LUNs that can be created on an array, which could be a limiting factor. Second, LUNs presented over both iSCSI and Fibre Channel use the SCSI protocol, referencing a target and a device (LUN), with a limit on the number of devices per target. Although vSphere 5 allows 256 targets per HBA, there is a limit of 256 LUNs per host, far too low to be practical if each vVOL consumes a LUN. This restriction, and the inherent problems in performing discovery of thousands of LUNs using the SCSI protocol, means that as currently defined, one vVOL per LUN won't work. This has to be the main area on which the storage vendors are focusing, namely how to overcome the limitations of SCSI, which is embedded in iSCSI, FCoE and Fibre Channel.
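Some back-of-the-envelope arithmetic shows how quickly the per-host device limit bites. The 256-LUN figure is the vSphere 5 limit quoted above; the cluster size in the usage line is an illustrative assumption, as is the simplification of one vVOL (and hence one LUN) per VM.

```python
# vSphere 5 cap on SCSI devices (LUNs) visible to a single host.
MAX_LUNS_PER_HOST = 256

def lun_shortfall(vms_in_cluster):
    """With one LUN per vVOL, every host in a shared-storage cluster
    must see a LUN for every VM it might run, so the per-host device
    limit effectively caps the whole cluster. Returns how many VMs
    would be left unaddressable."""
    return max(0, vms_in_cluster - MAX_LUNS_PER_HOST)

# An illustrative 1,000-VM cluster leaves 744 VMs with no addressable LUN.
shortfall = lun_shortfall(1000)
```

And that is before counting boot LUNs, RDMs or VMs with multiple virtual disks, all of which would consume devices from the same 256-entry budget.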
NFS seems like the simpler option to implement, and perhaps we'll see it as a first step. However, remembering that EMC owns VMware, block is bound to be treated with equal priority. To make vVOLs work, the storage vendors will have to either fix the SCSI issue with clever discovery and mapping techniques, or come up with a totally new way of interfacing with objects on the array. One suggestion was to use object-based storage. Today those platforms use REST protocols over HTTP, which is ill-suited to high-volume I/O and doesn't easily allow sub-object updates. In any case, this would mean throwing out all of the existing IP and investment in current technology, which is not going to happen.
The Architect’s View
vVOLs make complete sense as a way to scale virtual machine growth. However, today's storage protocols pose significant obstacles to achieving vVOL granularity. Storage vendors won't throw out their existing architecture, but will most likely modify their hardware implementations in some way. Yet again, NFS could serendipitously overtake block as the preferred vVOL platform.
Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.