Optimising Storage Architectures for SSD



Last week I attended Hitachi’s 2012 Blogger Day.  Aside from catching up with some old friends, we were presented with material under NDA which will see the light of day soon.  In the meantime, I want to talk about a press release Hitachi made while I was still on holiday (and clearly missed as I returned home, somewhat jetlagged).

Previously I’ve discussed how solid-state arrays need to be optimised in their design to get the best out of the technology.  Traditional arrays were designed to cope with the hard drive as the slowest component in the architecture; their IP was built around squeezing the best performance out of spinning media.  Startups in the all-flash array market have applied exactly the same principle to a different medium: their products are designed to get the best out of solid-state media, working with both its strengths and its weaknesses.

Whilst I still believe that all-flash arrays built from the ground up will have the advantage (especially in delivering low latency rather than purely high IOPS), Hitachi’s announcement of their Flash Acceleration firmware release for VSP (which looks to have been made in typically understated style) shows that current hardware can be tweaked to be more efficient with SSD technology.  In fact, the improvements are significant, with an all-flash VSP producing a claimed 1,000,000 IOPS.  I questioned Patrick Allaire (Marketing VP for VSP, who possibly needs to tweet a little more) on this magical 1 million number, which, let’s face it, is purely a marketing figure.  He indicated that lab testing had pushed workloads to higher values (around 1.2m IOPS), but the 1 million mark sends the message Hitachi are looking to convey: that their VSP architecture continues to deliver on performance.

The enhancements provide 3x scalability on the current VSP in terms of IOPS, with a 65% reduction in I/O response time.  Incidentally, they also improve performance and throughput on arrays built with traditional disks.  The only downside to this new firmware release is that it comes as a chargeable item (albeit with a free trial first).  I think if Hitachi want competitive advantage in this market, they should release this firmware as a free upgrade, as it would show commitment to delivering the best possible products to their customers.

Of course, Hitachi are not the first “top six” storage vendor to claim high performance; however, from what I can see, they are the first to put a number to their array’s capability.  Hitachi did present a slide indicating EMC had quoted the VMAX 40K at 810,000 IOPS during EMC World 2012.  As I didn’t attend that event, I can’t comment on the accuracy of that figure; I have tried to corroborate the number through EMC blogs, presentation material and so on, without success.  In fact, Chad Sakac’s EMC blog at Virtual Geek has a post highlighting the technical benefits of the 40K without a single quantifiable performance figure.  If anyone has a referenceable source then please let me know and I will update this post.

What’s Next?

Hitachi also announced some details of their new flash controller architecture.  This is due to offer greater sustained throughput, 5+ years’ endurance, zero-block compression/dedupe and security functionality.  Look out for more on that as the news becomes public.  I have a feeling that this is only the start of an evolving strategy and we will see many more announcements in the coming months.

The Architect’s View

The all-flash array market is maturing nicely.  We can see products at all levels: flash replacing traditional HDDs in arrays, dedicated flash appliances and, of course, flash integrated into the host.  Hitachi have some way to go to catch up with this expansive marketplace, and the Flash Acceleration code is only a first step.  Already, other vendors are moving to deliver converged flash solutions; that is, integrating the intelligence between host and array flash to provide added value but, perhaps more importantly, to secure customer lock-in.  Hitachi needs to make sure the future doesn’t lock them out.


Disclaimer: I recently attended the Hitachi Bloggers’ Day 2012.  My flights and accommodation were covered by Hitachi during the trip, however there is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time when attending the event.  Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.



Comments are always welcome; please indicate if you work for a vendor as it’s only fair.  If you have any related links of interest, please feel free to add them as a comment for consideration.

About Chris M Evans
