Once esoteric, SANs aim for mass market appeal

With their ability to pool and manage all sorts of storage resources, Storage Area Networks (SANs) have fast become common in medium and large-sized businesses that have both the financial and human resources to get them up and running.

SANs work, and they work well: of this there is no question. Ernst & Young, for example, recently implemented a redundant SAN to consolidate more than 10TB of enterprise data that was previously spread across a broad variety of servers, including many inherited from Andersen Australia, which Ernst & Young bought in 2002.

That purchase increased the merged company's storage requirements by 50 percent overnight, forcing it to consider the best storage strategy for the long term. "We have a continuing dependency on our data and we need a system that provides the most advanced method for managing mission-critical information," says CIO Stephen Arnold.

"We are increasingly using technology to develop, store and retrieve our company communications. As part of the merger we had to integrate Andersen's and Ernst & Young's operations into one seamless firm."

The growing popularity of SANs is borne out by shipment figures: IDC reports a 7.5 percent annual increase in SAN shipments, which it believes will help SANs secure 65 percent of the US external disk systems market by 2007, when that market will be worth $US5.8 billion. By contrast, the market for network attached storage (NAS) is growing at just 5.4 percent annually, according to IDC.

Rewriting the cost equation

Although they make strategic sense as a way of consolidating storage, getting the right SAN infrastructure to suit business needs has proved to be very important, and very expensive. That has made SANs most common in large businesses and government departments, which IDC believes will, along with healthcare, be the largest adopters of SANs by 2007. NAS, by contrast, will be most common in the business, legal and financial services industries, while direct attached storage (DAS) will remain most popular amongst retail banking and utilities.

Much of SANs' high-end focus stems from their high price: Fibre Channel (FC) switches remain an expensive proposition, and the proliferation of commodity Intel-based servers makes full SAN connectivity uneconomical, since FC host bus adapters (HBAs) can cost more than the servers themselves. Furthermore, SAN configuration and management expertise is expensive, as are the myriad tools needed to properly virtualise and manage SAN resources.

Cost and complexity issues have been the reason many companies have stuck with NAS, which has evolved to appliance-like simplicity and plugs directly into existing enterprise networks. Yet NAS devices are based on files rather than blocks, which limits customers' ability to use virtualisation to improve management of their resources and allow connectivity with all sorts of different machines. SANs, by contrast, are inherently designed for virtualisation and connectivity to all sorts of systems, which is another reason why they're popular amongst larger companies.

Last year, NAS leader Network Appliance fired the first shot in the convergence war with the introduction of its gFiler, a hybrid NAS and SAN solution that provides NAS-like file management but integrates directly with existing SANs.

Marketed as a bridge between the two worlds, this approach has been replicated in products from several other storage equipment providers. Anecdotal evidence suggests that simply layering NAS functionality over a SAN isn't necessarily as easy as vendors would have us believe, but it can nonetheless be a valuable tool for companies that want to combine existing and new storage infrastructure.

Even more important for SAN customers, however, is the proliferation of standards such as FCIP (Fibre Channel over Internet Protocol), iFCP (Internet Fibre Channel Protocol) and iSCSI (Internet Small Computer System Interface), all of which are designed to extend the reach and capabilities of SANs beyond the costly confines of Fibre Channel-only subdomains. These protocols are becoming widely supported in Fibre Channel equipment and conventional switches, allowing SCSI and Fibre Channel commands and data to be sent over standard IP-based internal or external networks.

Internally, use of iSCSI means that commodity servers can be hooked up to a SAN using cheap Gigabit Ethernet links; this makes it much more feasible to use a SAN as a central storage area for every server in the company. "SAN investments are coming under a lot more scrutiny," says Tim Smith, marketing manager with Hitachi Data Systems. "That's why we're going to see this convergence take hold: you are able to show a fairly significant TCO by reducing the number of servers, and also by being able to back up more hosts [onto the SAN] by leveraging iSCSI."
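To make the idea concrete, here is a rough sketch of how a commodity Linux server might attach to an iSCSI target over ordinary Ethernet using the open-iscsi command-line tools. The portal address and target IQN below are hypothetical, and the commands require a real iSCSI array to run against; this is an illustration, not a recipe for any particular vendor's gear.

```shell
# Ask the array's portal (hypothetical address) which targets it exports
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to one discovered target (hypothetical IQN); the kernel then
# presents it as a local block device, e.g. /dev/sdb, which can be
# partitioned and mounted like any directly attached disk
iscsiadm -m node -T iqn.2004-01.com.example:array.lun0 \
    -p 192.168.10.50:3260 --login
```

The point of the example is the economics: the only hardware the server needs is a standard Gigabit Ethernet port, rather than a Fibre Channel HBA that may cost more than the server itself.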

The externally focused FCIP and iFCP protocols are less about cost savings than reliability: they allow SANs to be extended across existing IP-based wide area networks, facilitating disaster recovery and continuous replication of data across hundreds or thousands of kilometres. In the past, such replication was cost prohibitive, since dedicated Fibre Channel links quickly become impractical over long distances given the cost of in-ground fibre. Using IP, however, makes full-time data replication between sites quite practical.

The weight of inevitability

After years of growth, the storage market has consolidated its resources and marketing around several key players: IBM, HP, Hitachi Data Systems, EMC, and StorageTek dominate a market where ongoing demand and technological improvement continues to push prices down so quickly that increases in shipments are barely keeping revenues flat.

With the proliferation of storage in companies of all sizes, however, many of these companies have shifted their focus towards storage resource management (SRM)-an evolving field that basically sits above the nitty-gritty of storage related administration to improve the management of information.

SRM tools from most vendors, including software-only competitors like Veritas and Computer Associates, are built on robust virtualisation capabilities that allow ever finer control over storage resources. The need for better virtualisation is so strong that EMC, for one, recently bought server virtualisation specialist VMware, while Veritas purchased virtualisation player Ejasent. Microsoft will soon release its own Microsoft Virtual Server, although the technology, acquired with the buyout of Connectix last year, will initially be focused more on server than storage virtualisation.

Better control over SAN resources will be increasingly matched by policies that help manage the flow of information between media throughout its lifecycle. At the same time, vendors are empowering their systems to gain a level of autonomy in monitoring and reacting to changes in the computing environment: for example, a storage system could automatically provision additional virtual Sun Microsystems Solaris servers during times of peak application demand, then convert the resources to a Microsoft Windows Server 2003-based web server at other times.

"Use of SRM gives you insight as to what your data looks like," says Richard Collins, Managing Director of Interwoven Australia/New Zealand. "From there, you can make a very calculated judgment call in terms of what sort of infrastructure you need. You need to understand data and who owns the data."

With the current focus on virtualisation of resources, the flexibility of SANs is coming into its own in a way that file-based NAS devices simply cannot match. As prices for SAN-related gear drop and connectivity options increase, the need for NAS will slowly decrease. Backed by protocols that lower the cost of SAN computing, it's clear why SANs continue to strengthen their position as the future of corporate storage.
