Following on from my first post on VVOLs, I did a little research on the two technology previews presented at this and last year's VMworld – sessions INF-STO2223 and VSP3205 respectively. The sessions fill in a few background ideas and seem to have evolved slightly over the past 12 months. Here are some of the highlights.
The I/O Demultiplexer is a way to simplify connectivity between the VM and the storage array. In the latest presentation, the I/O Demux has been relabelled the Protocol Endpoint and should be either SCSI or NFS compliant (I suspect they mean protocol-agnostic). As I mentioned previously, unless VMware are talking about fundamentally redesigning the SCSI protocol, then for block storage there still needs to be the concept of initiator and target to represent host and storage. Both presentations are at pains to point out that the I/O Demux is not a LUN or mount point, so what exactly is it?
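To make the demultiplexing idea concrete, here is a minimal sketch of how a Protocol Endpoint might behave: a single target-like address that holds no data of its own, but routes each I/O to the per-VM volume named in the request. All class and method names here are my own illustration, not VMware's actual design.

```python
class VVol:
    """A per-VM storage object living behind a Protocol Endpoint."""
    def __init__(self, vvol_id):
        self.vvol_id = vvol_id
        self.blocks = {}  # block address -> data

class ProtocolEndpoint:
    """Looks like a SCSI target to the host, but stores nothing itself.
    It simply demultiplexes each I/O to the bound VVol it addresses."""
    def __init__(self):
        self.vvols = {}

    def bind(self, vvol):
        # A VVol must be bound to the endpoint before I/O can reach it.
        self.vvols[vvol.vvol_id] = vvol

    def write(self, vvol_id, lba, data):
        if vvol_id not in self.vvols:
            raise LookupError("VVol not bound to this endpoint")
        self.vvols[vvol_id].blocks[lba] = data

    def read(self, vvol_id, lba):
        return self.vvols[vvol_id].blocks.get(lba)

pe = ProtocolEndpoint()
vm_disk = VVol("vm42-disk0")
pe.bind(vm_disk)
pe.write("vm42-disk0", 100, b"hello")
print(pe.read("vm42-disk0", 100))  # b'hello'
```

The point of the sketch is that the endpoint is a routing construct, not a container – which is consistent with VMware's insistence that it is neither a LUN nor a mount point.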
A capacity pool is a logical pooling of storage set up by the storage administrator, from which the VMware admin can create VVOLs. This means responsibility for pool creation (its layout, location, performance) stays with the storage team, but the virtualisation team have the flexibility to allocate VVOLs on demand within that pool of capacity. In most respects, it seems that a capacity pool is no more than today's VMFS LUN or an NFS share/mount point, but consistently named across both protocols.
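The division of responsibility described above can be sketched in a few lines: the storage admin sizes the pool once, and the virtualisation admin carves VVOLs from it on demand without touching the underlying layout. This is a hypothetical model of the concept, not an actual API.

```python
class CapacityPool:
    """Set up once by the storage admin; consumed by the VMware admin."""
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb
        self.allocated_gb = 0
        self.vvols = {}

    def create_vvol(self, vvol_name, size_gb):
        # Allocation fails if the pool lacks free capacity.
        if self.allocated_gb + size_gb > self.size_gb:
            raise ValueError("insufficient free capacity in pool")
        self.allocated_gb += size_gb
        self.vvols[vvol_name] = size_gb
        return vvol_name

    def free_gb(self):
        return self.size_gb - self.allocated_gb

# Storage admin creates the pool; VMware admin allocates within it.
pool = CapacityPool("gold-tier", size_gb=1000)
pool.create_vvol("vm42-disk0", 100)
print(pool.free_gb())  # 900
```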
Array communication will be managed by a new Vendor Provider plugin on the storage system, previously described as a Data Management Extension. I'm never comfortable about array-based vendor plugins, as I think they are usually a kludge to make two incompatible devices work together. To me the vendor provider already has the smell of an SMI-S provider. These never get natively implemented and usually sit on a management server that has to be available for the storage admin to manage the array. VMware need to be clear about whether this provider will be native or not, as a non-native provider only introduces additional complications. Of course, the vendor provider plugin is probably needed because neither the SCSI nor NFS protocols could be modified to provide the additional management commands VMware wanted or needed.
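The concern here is that the provider is a control-plane component sitting outside the data path: management operations (create, snapshot, delete) travel over a separate channel, and if that channel lives on a management server rather than natively on the array, it becomes an extra availability dependency. A rough sketch of that split, with entirely hypothetical interface names:

```python
from abc import ABC, abstractmethod

class VendorProvider(ABC):
    """Control-plane API the hypervisor would call; each array vendor
    implements it. Data I/O does NOT flow through this interface -
    it stays on the SCSI/NFS data path."""

    @abstractmethod
    def create_vvol(self, pool, size_gb): ...

    @abstractmethod
    def snapshot_vvol(self, vvol_id): ...

    @abstractmethod
    def delete_vvol(self, vvol_id): ...

class ExampleArrayProvider(VendorProvider):
    """A toy in-memory implementation, standing in for a vendor's plugin."""
    def __init__(self):
        self.next_id = 0
        self.objects = {}

    def create_vvol(self, pool, size_gb):
        self.next_id += 1
        vvol_id = f"{pool}-vvol-{self.next_id}"
        self.objects[vvol_id] = {"size_gb": size_gb, "snapshots": 0}
        return vvol_id

    def snapshot_vvol(self, vvol_id):
        self.objects[vvol_id]["snapshots"] += 1

    def delete_vvol(self, vvol_id):
        del self.objects[vvol_id]

provider = ExampleArrayProvider()
vid = provider.create_vvol("gold-tier", 100)
provider.snapshot_vvol(vid)
print(vid)  # gold-tier-vvol-1
```

Whether this interface runs natively on the controller or on a bolt-on management server is exactly the question raised above; the code is the same either way, but the failure domain is not.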
As I mentioned in my previous article, I can see how VVOLs could be easily implemented on NAS systems. In fact, Tintri already provide the VVOL features on their arrays today. I took the time at VMworld to chat to Tintri co-founder Kieran Harty to get his view on the VVOL technology and how it might affect them. In his view, VVOLs will take another 18-24 months to fully mature, during which time Tintri already have a lead with the product they are shipping today. However, it's also true to say that today's NAS vendors could easily add code that recognises the files comprising a VM.
From a block perspective, the spectre of SCSI LUN count still looms. I expect the I/O Demux is a fix to get around this problem. VMFS LUNs will be renamed capacity pools and VVOLs will be sub-LUN objects. Hardware Assisted Locking, introduced in vSphere 4.1, enables locking of parts of a VMFS in a much more efficient fashion (i.e. locking the parts of the VMFS that represent a VVOL). All that's missing to deliver VVOLs is a way of mapping exactly which VMFS blocks belong to a VVOL and ensuring the host and storage array both know this level of detail. One issue that still stands out here is in delivering QoS (Quality of Service). Today a VM can be moved to a VMFS that offers a specific service level in terms of performance and capacity. As that VMFS is a LUN, I/O attributes in the array are easily set at the VMFS/LUN level. This includes I/O processing at the storage port on the storage array. Tagged Command Queuing enables I/O processing to be optimised by reordering the processing of queued I/O requests when there are multiple LUNs on a shared storage port. Updated 18/10/12 – See the 2nd related link – VVOL containers are simply LUNs or NFS shares.
However, if a VMFS stays as a LUN and VVOLs are logical subdivisions of a LUN, then somehow additional QoS information needs to be provided to the array in order for it to determine the priority order in which to process requests. Today that's done by LUN, but a more granular approach will be needed. How will this be achieved? Will the array simply know the LBA address ranges for each VVOL and use that information? Even if this is the case, today's storage arrays will require significant engineering changes to make this work, and what's not clear is how a shared array serving non-VMware storage will interoperate.
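To illustrate the LBA-range idea floated above: the array would keep a map of LBA ranges to VVOLs, each carrying its own priority, and consult it when ordering queued I/O. This is purely speculative on my part – a sketch of one possible mechanism, not a documented design.

```python
import bisect

class SubLunQosMap:
    """Maps LBA ranges within one LUN to VVols and their QoS priorities."""
    def __init__(self):
        self.starts = []  # sorted range start LBAs
        self.ranges = []  # (start, end, vvol_id, priority), same order

    def add_range(self, start_lba, end_lba, vvol_id, priority):
        idx = bisect.bisect(self.starts, start_lba)
        self.starts.insert(idx, start_lba)
        self.ranges.insert(idx, (start_lba, end_lba, vvol_id, priority))

    def priority_for(self, lba):
        # Find the range containing this LBA; unknown LBAs get priority 0.
        idx = bisect.bisect(self.starts, lba) - 1
        if idx >= 0:
            start, end, vvol_id, prio = self.ranges[idx]
            if start <= lba <= end:
                return vvol_id, prio
        return None, 0

qos = SubLunQosMap()
qos.add_range(0, 99_999, "vm-low", priority=1)
qos.add_range(100_000, 199_999, "vm-high", priority=5)

# Reorder a queue of pending I/Os (identified by LBA), highest priority first
# - the sub-LUN analogue of what Tagged Command Queuing does per LUN today.
queue = [150_000, 42, 120_000]
queue.sort(key=lambda lba: qos.priority_for(lba)[1], reverse=True)
print(queue)  # [150000, 120000, 42]
```

Even in this toy form the engineering implication is visible: the array must track and consult per-range metadata on the I/O fast path, which is exactly the kind of change existing controllers weren't built for.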
As a side note, HP have released a preview of an HP 3PAR system working with VVOL storage. You can find the video via Calvin Zito's blog at HP.
The Architect’s View
VVOLs could certainly be a step forward and I'm relishing the chance to understand the full technical details. The concept is a good thing, as it abstracts storage specifics away from the virtualisation admin. At this stage there are still too many unknowns to determine how easy VVOLs will be to implement; however, VMware will no doubt make them a mandatory part of vSphere in the future, so we had better get used to dealing with them now.
- VMware vVOLS – More Than Just Individual LUNs?
- Virtual Vols VVOLs Tech Preview with Video
- Netapp and Vmware: VVOLs Tech Preview
Comments are always welcome; please indicate if you work for a vendor as it’s only fair. If you have any related links of interest, please feel free to add them as a comment for consideration.