Storage in 2011: The best and worst case scenario

As the year dribbles to its conclusion, SearchStorage ANZ asked leading industry figures to give us their best and worst case scenarios for storage in 2011.

Predictions stories generally offer a bright outlook, so here at SearchStorage ANZ we thought we could give industry figures a chance to get that out of their system ... and then offer their views of the worst things that could happen to storage in 2011. So without any further ado, let’s start with their ...

Best-case scenarios

Clive Gold, Marketing CTO, EMC ANZ: New flash technology results in a step change in capacity and durability!

“This will accelerate the adoption of flash as part of storage infrastructure, both within storage arrays and built into servers. As the growing ‘speed of access’ problem with massive data stores is solved, it gives rise to new and interesting computing models like social-network mining and predictive pattern recognition.”

John Martin, Principal Technologist, NetApp Australia and New Zealand: The Government sector is well placed to lead business in storage strategy

As a result of the Gershon Review, the Federal Government is in a great position to lead business by standardising and publishing benchmarks for storage efficiency.

To enable this, Federal Government IT departments should start tracking storage costs as a separate line item and report the findings publicly on a regular basis. This would provide a benchmark against which businesses could compare themselves, and would highlight the impact of inefficient storage management practices on IT, government departments and the environment.

The benchmarking should include metrics such as the following (a rough sketch of how some of these could be computed appears after the list):

• Total storage spend

• Storage efficiency (raw storage purchased vs data used for production purposes)

• Energy costs, measured by end users’ storage consumption

• Dollar cost per input/output operation per second (IOPS)

• Clear guidelines around data retention and the implications for privacy
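
To make the first few metrics concrete, here is a minimal Python sketch of how a department might derive storage efficiency and dollar cost per IOPS from its own figures. All names and numbers below are invented placeholders for illustration, not data from any agency.

```python
# Illustrative sketch: computing the storage benchmarks suggested above.
# All figures below are invented placeholders, not real agency data.

raw_tb_purchased = 500.0            # raw capacity bought (TB)
production_tb_used = 180.0          # data actually used for production (TB)
annual_storage_spend = 1_250_000.0  # total storage spend (AUD)
delivered_iops = 50_000             # sustained IOPS the infrastructure delivers

# Storage efficiency: production data vs raw capacity purchased.
efficiency = production_tb_used / raw_tb_purchased

# Dollar cost per IOPS.
cost_per_iops = annual_storage_spend / delivered_iops

print(f"Storage efficiency: {efficiency:.0%}")          # 36% on these figures
print(f"Cost per IOPS: ${cost_per_iops:.2f}")           # $25.00 on these figures
```

Published consistently, even two ratios this simple would let agencies and businesses compare like with like from one reporting period to the next.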

Adrian De Luca, Hitachi Data Systems: Don’t exceed SLAs, just meet them

As much as our bosses tell us to exceed expectations, when it comes to storage we should be content with meeting them. That’s because over-provisioning your storage infrastructure translates to more expense. Take the time to understand your storage consumption, not just utilisation. Review performance requirements, not just performance results, and revisit architectural decisions made in the past to see if they still make sense. With this information, evaluate some of the new storage options available today (e.g. SAN, NAS, iSCSI, FCoE, SAS, SSD, dense trays) to determine the right solution for you. Don’t think in terms of products and features; instead, evaluate services and capabilities.
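
One way to make De Luca’s distinction concrete: utilisation is a point-in-time ratio of used to provisioned capacity, while consumption is the rate at which used capacity grows. The following Python sketch illustrates the difference; the monthly figures and variable names are illustrative assumptions, not data from any real array.

```python
# Sketch: utilisation (a point-in-time ratio) vs consumption (a growth rate).
# All sample data below is hypothetical.

provisioned_tb = 200.0
# Used capacity sampled at each month's end (TB) -- invented figures.
monthly_used_tb = [120.0, 126.0, 133.0, 141.0]

utilisation = monthly_used_tb[-1] / provisioned_tb  # how full we are right now

# Consumption: average month-on-month growth in used capacity.
growth = [b - a for a, b in zip(monthly_used_tb, monthly_used_tb[1:])]
avg_monthly_consumption_tb = sum(growth) / len(growth)

# Months of headroom left at the current consumption rate.
headroom_months = (provisioned_tb - monthly_used_tb[-1]) / avg_monthly_consumption_tb

print(f"Utilisation: {utilisation:.0%}")                          # ~70%
print(f"Consumption: {avg_monthly_consumption_tb:.1f} TB/month")  # 7.0
print(f"Headroom: {headroom_months:.0f} months")                  # ~8
```

On these made-up numbers, an array that looks comfortably under-used at 70% utilisation still has only about eight months of headroom, which is exactly the kind of insight a utilisation figure alone won’t give you.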

Jeremy Babb, Senior Storage Architect, Practice Manager, Unisys Asia Pacific

With the Gershon Review pushing down capex spend by federal government agencies, storage vendors and systems integrators will have to find new ways to get storage implemented in these areas. The positive is that the appetite for storage in the federal government space is growing, so we need to find ways to share the cost burden of implementing large storage systems with the federal government. If we can achieve this, the growth potential for 2011 is good; we would expect 25-30%, and perhaps as much as 50%, growth for the period.

Along with published business plans for 2011, the new Basel III and, for insurers, Solvency II regulatory requirements will push for not only larger economic capital reserves but also greater audit and reporting transparency. This in itself does not mean more storage; however, we believe these regulatory changes will drive increased demand for it. In the financial sector we anticipate 30-40% growth for 2011, as companies’ determination to keep everything for reporting purposes drives the growth.

Peter McCallum, Softection

The best case would be federal legislation put in place regarding data storage: a single piece of legislation covering all data, both paper and electronic, that identifies data types (personal, medical, corporate, etc.), sets a retention period for each type, and establishes a federally-funded audit process rather than self-regulation.

Such legislation would drive corporate policy, which in turn would drive security and storage requirements and knock out the arguments described above.

Adrian Sharkey, Managing Director, Quantum ANZ

I believe that the best innovation in the storage industry for 2011 will be the development of a self-sustaining data centre powered by a ventilation system that automatically siphons away exhaust heat and converts it into electricity which, in turn, powers the data centre in a never-ending cycle!

Worst-case scenarios

Simon Sharwood, Editor, SearchStorage ANZ: More industry consolidation

2010 saw Sun, Isilon, Compellent and 3Par acquired by larger companies, all of which promised to take good care of the newly-acquired products in order to meet increased demand for storage. IBM snaffled Storwize, too.

A glummer view of these buyouts is that in under a year the industry has lost its five most successful mid-sized players (by revenue), all of which were reasonably vigorous innovators who pushed the industry forward and may also have put downward pressure on leading vendors’ prices.

More industry consolidation would therefore surely be a bad thing in 2011, as a shrinking pool of enterprise-grade suppliers is hardly likely to create more choice or more innovation for buyers.

Clive Gold, Marketing CTO, EMC ANZ: Software vendors convince users to use direct attached storage.

The simplicity argument for direct-attached storage (DAS) results in a massive blowout of complexity and ongoing operational costs. In a move that is the IT version of “back to the stone age”, data once again becomes captive to a server, operating system and application, ensuring vendor lock-in and inhibiting the adoption of cloud computing.

Jeremy Babb, Senior Storage Architect, Practice Manager, Unisys Asia Pacific

Cloud will grow in popularity, and this will drive storage growth in 2011. Multiple cloud vendors all trying to out-price and out-service each other will mean each vendor needs storage that drives cost efficiencies and capability down to cents per GB in order to win business. This market is the hardest to predict: if business takes to cloud, storage may grow at rates as high as 100-200%; alternatively, we may see smaller, steadier growth of 15-20% as businesses dip their toes in to see whether they like what they are getting.

John Martin, Principal Technologist, NetApp Australia and New Zealand:

Storage gets treated as a commodity, and the costs and waste of inefficient and inflexible storage infrastructures remain hidden within server and “stack” infrastructures. Implementing standardised benchmarking would go a long way toward preventing this and help make storage performance transparent.

This was first published in December 2010
