When it comes to storage, the future is virtual

By David Braue

Nov 01, 2004: There's nothing sexy about hard drives; increasingly, there's nothing profitable about them either. Little surprise that storage vendors have spent recent years hyping philosophies like centralised storage and information lifecycle management (ILM). Both require expensive consulting services to implement, and both rely on sophisticated, high-margin software to buffer IT staff from the nagging complexity of keeping track of data across so many hard drives.

Delivering a technical infrastructure to match vendors' promises has been another challenge entirely. Storage area networks (SANs) have been popular for centralising storage, while technologies like iSCSI and FCIP allow previously closed SAN environments to be addressed using common technologies such as IP packets and commodity Gigabit Ethernet connections.

ILM, on the other hand, remains elusive in both its definition and its execution. The consensus among vendors is that it's a way of proactively managing information according to its importance to the business, which, of course, changes over time. Rather than forcing companies to buy expensive Grade A storage for all their data, ILM should improve storage efficiency by automatically cascading information between different classes of disk.
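
To make the idea concrete, here is a minimal sketch of the kind of policy engine ILM implies. The tier names, thresholds and data sets are invented for illustration; no vendor's actual product works exactly this way.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataSet:
    name: str
    tier: str                  # hypothetical tiers: fibre_channel, serial_ata, tape_archive
    last_accessed: datetime
    business_critical: bool

def target_tier(ds: DataSet, now: datetime) -> str:
    """Choose a tier from how recently the data was used and how important it is."""
    age = now - ds.last_accessed
    if ds.business_critical or age < timedelta(days=30):
        return "fibre_channel"   # hot or critical data stays on Grade A disk
    if age < timedelta(days=365):
        return "serial_ata"      # cooler data cascades to cheap disk
    return "tape_archive"        # cold data is archived

def cascade(datasets: list[DataSet], now: datetime) -> None:
    """Move each data set to its target tier (the physical move is abstracted away)."""
    for ds in datasets:
        dest = target_tier(ds, now)
        if dest != ds.tier:
            print(f"migrating {ds.name}: {ds.tier} -> {dest}")
            ds.tier = dest
```

Run periodically over a catalogue of data sets, a loop like this is essentially what "automatically cascading information between different classes of disk" amounts to.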

That's currently easier said than done. Each generation of enterprise storage philosophy has introduced a higher layer of abstraction: first by consolidating data, then by consolidating interfaces. But it's now clear that actually implementing ILM requires one more major shift. If consolidation and connectivity defined the past five years in the storage industry, virtualisation and policy-based management, both core requirements for ILM, will take centre stage for the near future.

Setting rules for the virtualisation game

Virtualisation allows companies to create virtual disk volumes that might span several physical hard drives.

It was a fundamental concept behind SANs, but in those environments has been limited to the expensive Fibre Channel drives built into SAN arrays.
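
At its simplest, virtualisation is an address-translation exercise: the volume a host sees is stitched together from extents that may live on different physical drives. The sketch below, with made-up disk names, illustrates that mapping; it is a conceptual illustration, not any particular array's implementation.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    disk: str     # identifier of the physical drive
    start: int    # first block used on that drive
    length: int   # number of blocks in the extent

class VirtualVolume:
    """A virtual volume concatenated from extents on several physical drives."""

    def __init__(self, extents: list[Extent]):
        self.extents = extents

    def locate(self, virtual_block: int) -> tuple[str, int]:
        """Translate a virtual block number into (physical disk, physical block)."""
        offset = virtual_block
        for ext in self.extents:
            if offset < ext.length:
                return ext.disk, ext.start + offset
            offset -= ext.length
        raise ValueError("block lies beyond the end of the volume")

# A 300-block volume spread across two drives.
volume = VirtualVolume([Extent("disk0", 0, 100), Extent("disk1", 500, 200)])
print(volume.locate(150))   # ('disk1', 550): the host never needs to know
```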

In an ILM world, however, the complete storage environment necessarily includes several different types of drives, ranging from high-performance disks suited to high-volume transactional environments to the lower-speed ATA and Serial ATA drives becoming important as a means of low-cost data archiving. Each type of drive has traditionally required different management tools, meaning that customers work much harder than they need to when moving data between different types of storage.

Effective ILM increases the number of terabytes of storage that each storage manager can handle. To do this, a virtualised storage infrastructure must be aware of each type of storage installed, and accommodate the differences automatically. That might be easier if customers were willing to rip out all existing storage and standardise on new boxes.

However, progress in the storage world is always evolutionary: vendors have had to work within the constraints of existing environments to deliver on the promise of virtualisation and ILM.

Hitachi Data Systems fired the first salvo in a newly reinvigorated battle with the September launch of its TagmaStore Universal Storage Platform (USP). Claiming a maximum 330TB of internal storage capacity, the USP pairs loads of storage space with the ability to attach to and manage, octopus-style, up to 192 other storage systems the customer may already have in place.

The USP accomplishes this by pretending it's a conventional Windows Server system, to which any modern storage device is designed to attach. Once they're on speaking terms, the USP manages the storage arrays along with its own storage to create any number of 'virtual private storage machines' that HDS claims can provide up to 32 petabytes of total virtual storage. Built-in policy enforcement shields storage administrators from the details of the various storage systems, providing simplified management that reduces the effort required to track the information.
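
Conceptually, and only conceptually, the arrangement looks like the sketch below: one controller folds external arrays into a common pool and carves that pool into per-tenant slices. The class, names and figures are invented to illustrate the idea; they are not HDS's actual interface.

```python
class VirtualisationController:
    """Toy model of array-based virtualisation: external arrays sit behind
    one controller, and their combined capacity is carved into tenant pools."""

    def __init__(self, internal_tb: float):
        self.capacity_tb = internal_tb          # the controller's own disk
        self.externals: dict[str, float] = {}   # arrays attached behind it
        self.pools: dict[str, float] = {}       # the 'virtual private storage machines'

    def attach_external_array(self, name: str, tb: float) -> None:
        """Fold an existing array's capacity into the common pool."""
        self.externals[name] = tb
        self.capacity_tb += tb

    def carve_pool(self, tenant: str, tb: float) -> None:
        """Give a tenant a virtual slice, regardless of which arrays back it."""
        free = self.capacity_tb - sum(self.pools.values())
        if tb > free:
            raise ValueError("not enough capacity behind the controller")
        self.pools[tenant] = tb

controller = VirtualisationController(internal_tb=330)
controller.attach_external_array("legacy_array_1", tb=20)   # hypothetical existing box
controller.carve_pool("finance", tb=50)
```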

Karen Sigman, senior director of global channels with HDS, positions the USP in terms of the benefits it provides to customers rather than focusing on speeds and feeds. "It's more a functionality discussion," she explains. "Customers want to be able to reduce their TCO, improve functionality, and deliver simplicity. Simplicity is a big, big thing here, and [virtualisation] delivers that by consolidating everything."

HDS competitors haven't taken long to follow suit. HP, through a licensing deal with HDS, released its own version of the USP, called the StorageWorks XP12000. Network Appliance is building virtualisation technology from partner NuView and recently acquired Spinnaker Networks into its core Data ONTAP operating system, while EMC is finding new ways to capitalise upon its acquisition of virtualisation leader VMware.

While every vendor is pursuing virtualisation, however, their respective strategies are still widely divergent. HDS, HP and Network Appliance, for example, see virtualisation as being driven by the storage array. EMC believes virtualisation should be driven by servers, an approach stemming from its ownership of a server virtualisation company. IBM has taken yet another tack with its TotalStorage SAN Volume Controller, a custom-built Linux server that plugs into the SAN and manages virtualisation separately from existing servers and storage arrays.

"Some virtualisation implementations are focused on connections but not the actual storage," says Garry Barker, senior storage consultant with IBM Australia-New Zealand. "If the product lives in a server, you ca see output from that server but can't do common things simultaneously across the servers. We've taken the approach of keeping it outside so that we have visibility of everything we'd like to have in the virtual environment."

The road to ILM

IBM's approach proved beneficial to TransAction Solutions, an IT service provider that installed two IBM SVCs alongside a 4.3TB IBM SAN array to better manage infrastructure that processes over 8 million transactions monthly for a dozen Australian credit unions.

"The virtualisation technology allows us to deploy and redeploy storage capacity [for different customers] in minutes," says TransAction Solutions general manager Guy Light. "Previously, it would have taken us anything up to weeks to specify the solution, seek bids, order and implement the new hardware."

Specific virtualisation technologies may work well in isolated environments, but an industry-wide definition of virtualisation is still years down the road. While vendors duke it out over how virtualisation should properly be handled, customers will do well to remember the interoperability problems that plagued early SANs.

To head off incompatibility, the Storage Networking Industry Association (SNIA) has weighed into the debate by incorporating an extensible definition of virtualisation within its SMI-S 1.1 specification.

In the long term, storage virtualisation will provide a consistent, interoperable abstraction layer that masks the individual traits of each storage element. This capability is essential if heavily hyped visions of ILM are ever to become a reality: however good the long-term vision, ILM requires high-level abstraction so users don't have to think about which array is storing which type of data.

Data can simply be stored by the applications, then automatically prioritised and moved between storage tiers according to virtualisation rules.
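
In code terms, the point is that the application-facing call never changes; only the placement rules do. The sketch below is hypothetical throughout: the rules, tier names and metadata fields are invented to show a rule table deciding placement while the application simply stores data.

```python
# Hypothetical placement rules, evaluated top to bottom; the first match wins.
RULES = [
    (lambda meta: meta.get("class") == "transactional", "fibre_channel"),
    (lambda meta: meta.get("age_days", 0) > 365,         "tape_archive"),
    (lambda meta: True,                                   "serial_ata"),
]

def place(metadata: dict) -> str:
    """Return the tier chosen by the first matching rule."""
    for matches, tier in RULES:
        if matches(metadata):
            return tier
    return "serial_ata"  # unreachable: the catch-all rule always matches

def store(name: str, data: bytes, metadata: dict) -> None:
    """The application just calls store(); the rules, not the caller, pick the tier."""
    tier = place(metadata)
    print(f"{name}: {len(data)} bytes -> {tier}")   # the actual I/O is abstracted away

store("orders.db", b"...", {"class": "transactional"})   # lands on fast disk
store("fy2002_backup.tar", b"...", {"age_days": 700})    # cascades to the archive tier
```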

In today's functional vacuum, customers still need to tread carefully and conservatively. "There is value in these technologies and they're very interesting," says Dr Kevin McIsaac, Asia-Pacific research director with META Group, who believes it will be 18 to 24 months before storage virtualisation is consistent enough for widespread deployment.

"But there is no compelling business case that means everyone should stampede for them. There are management benefits: customers may have bought storage from multiple vendors and, by layering this over the top, could reduce complexity and construct a very good business case. But if you've predominantly got one vendor in a well-shared SAN, it's not a really obvious must-do."
