Don't be afraid of storage diversity
June 5, 2009: The trap many customers fall into is trying to have too much of a good thing when it comes to storage management, writes John Martin.
There is often a temptation to succumb to the siren call of the "single pane of glass" theory of IT management when it comes to managing storage. While this approach looks good on a slide deck, the attempt to manage everything through one tool means dumbing down the storage devices to a lowest common denominator, which stops customers from getting the most out of their investment.
The "one screen to rule them all" approach can lead to bloat-ware with long lists of firmware requirements and agents that are impossible to install, configure, set up, maintain or use. This often causes more issues for users, but the good news is there are better alternatives. Storage management means different things to different people, and there is no shortage of software available to help customers with their challenges. Independent studies conducted by organisations such as Gartner and Mercer Consulting have confirmed it costs more to manage a terabyte of storage than it does to acquire the capacity in the first place.
A typical storage environment has many different components, many of which interact with each other and need to be managed. This includes Fibre Channel Switches, Disk Arrays (FC and iSCSI), NAS devices, Tape Libraries and Host Profiles (including HBAs).
Vendors often provide users with a dedicated user interface that exploits the power and functionality within their devices. These "homogeneous" management interfaces may have options for advanced functionality such as centralised reporting across multiple devices or integration with enterprise management frameworks. Because vendors design these interfaces to exploit the full functionality of a device, they are often the best choice for customers looking to implement advanced features such as data deduplication or virtual SANs. The real pain starts when a user has to learn five or six different management tools, each with unique capabilities for specific actions, none of which contributes to the overall management of the storage environment. You might think that only happens when a company has a mixed-vendor equipment environment. That's incorrect: it can very well happen even if all the equipment came from the same vendor.
In an attempt to address this, the Storage Networking Industry Association (SNIA) created an industry standard for managing different storage devices: the Storage Management Initiative Specification (SMI-S). According to SNIA, SMI-S gives compliant applications the ability to cater to a number of different storage management disciplines, including configuration, discovery, provisioning and trending, security, asset management, compliance and cost management, event management and data protection.
While some vendors rely on SMI-S for their homogeneous management tools, the real value for SMI-S comes when a single heterogeneous management tool is used to give a broad view of a storage environment and the way these devices interact. This function is increasingly valuable as storage moves from being an ‘interesting set of devices with capabilities’ to part of a ‘service driven IT infrastructure’ where technical terms can be easily understood by an organisation’s business units. Instead of users being told that they are getting “Fibre Channel LUNs in a RAID-10 configuration with synchronous replication”, the users can ask for 500GB of capacity capable of over 10,000 operations per second with a recovery target of less than ten minutes, and the IT department uses the most efficient ways of meeting that service level.
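The shift from device-level requests to service-level requests described above can be illustrated with a short sketch. The tier names, IOPS limits and backend descriptions below are invented for illustration; a real service catalogue would be defined by the IT department against its actual arrays.

```python
# Hedged sketch: a hypothetical service catalogue mapping business-level
# requests (capacity, performance, recovery target) to backend storage
# configurations. All tier names and attributes are illustrative.

TIERS = [
    # (name, max supported IOPS, recovery target in minutes, backend config)
    ("gold",   20000,  5, "RAID-10 FC LUNs, synchronous replication"),
    ("silver", 10000, 15, "RAID-5 SAS LUNs, asynchronous replication"),
    ("bronze",  2000, 60, "SATA LUNs, nightly snapshot backup"),
]

def choose_tier(capacity_gb, iops, recovery_minutes):
    """Return the cheapest tier that meets the requested service level."""
    for name, max_iops, rto, backend in reversed(TIERS):  # cheapest first
        if iops <= max_iops and recovery_minutes >= rto:
            return name, backend
    raise ValueError("no tier satisfies this service level")

# The user from the example asks for 500GB at 10,000 IOPS with a
# recovery target under ten minutes; the catalogue picks the backend.
tier, backend = choose_tier(500, 10000, 10)
```

In this sketch the user never sees RAID levels or replication modes; the IT department is free to change how each tier is implemented without renegotiating the service level.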
Although SMI-S covers most areas of storage management, there are some capabilities, such as Information Lifecycle Management and archiving, which the specification does not address. Even if there were a perfect SMI-S management application, there would still be a requirement for other applications that fall under the broad heading of storage management. There is still a considerable amount of work being done to unify complete suites of software to address every conceivable storage management function. The enormity of this task and the rapid evolution of data storage technology mean that most organisations will continue to rely on a range of different tools to manage their data storage infrastructure.
In the face of complexity, the keys to successful storage management are good processes, such as those embodied in ITIL. It is important not to place too much focus on software tools without first ensuring that the processes and service level offerings are right. In some large IT environments, storage administrators bypass the GUIs of management tools and use command line interfaces or APIs in their own custom-written scripts. This allows them to focus on streamlining their internal processes rather than trying to conform to the workflows preferred by external management tools.
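The scripting approach described above can be sketched as follows. The `arraycli` command and its flags are hypothetical stand-ins for a real array's command-line tool; the point is that the administrator composes CLI calls into a repeatable step that fits their own change process.

```python
# Hedged sketch: building a provisioning step around a hypothetical
# array CLI ("arraycli" and its flags are invented for illustration).
import shlex

def provision_lun_command(array, volume, size_gb, host):
    """Build (but do not run) the CLI invocation for one provisioning step."""
    return shlex.split(
        f"arraycli --array {array} lun create "
        f"--name {volume} --size {size_gb}g --map-to {host}"
    )

cmd = provision_lun_command("array01", "oradata01", 500, "dbhost01")
# A real script would then hand cmd to subprocess.run(cmd, check=True)
# and log the change as part of configuration tracking.
```

Because the script, not a vendor GUI, owns the workflow, steps such as approval checks and change logging can be inserted wherever the organisation's process requires them.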
For many, the best way of supporting a process-driven environment is not to try to automate the entire process with a vendor-supplied package, but to provide storage administrators with the information and discrete workflow tools they need to support their own processes. This means focusing on activities such as alert/event management, capacity reporting, performance reporting and configuration tracking.
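As one example of such a discrete tool, capacity reporting can be as simple as aggregating per-array usage into a single view. The array names and figures below are invented sample data, not output from any real management product.

```python
# Hedged sketch: a minimal capacity-reporting step of the kind the text
# recommends. Input records are invented sample data.

def capacity_report(arrays):
    """Summarise percentage used per array and flag anything over 80% full."""
    report = []
    for name, used_tb, total_tb in arrays:
        pct = 100.0 * used_tb / total_tb
        report.append((name, round(pct, 1), pct > 80.0))
    return report

sample = [("fc-array-a", 30.0, 50.0), ("nas-filer-b", 18.0, 20.0)]
report = capacity_report(sample)
# fc-array-a is 60% full; nas-filer-b is 90% full and gets flagged.
```

A small, transparent report like this slots into whatever review or procurement process the organisation already runs, rather than forcing a vendor tool's workflow onto the team.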
The use of automatic discovery and the overall scalability of the solution may also be important, depending on the size and rate of change of the storage environment. By supporting good IT processes, storage management software can help drive the transformation of an IT organisation from managing devices to managing service levels.

We will see more innovation in data storage technology in the next three to five years than we have experienced in the last 20 years. This makes the job of managing storage environments much more difficult. The arrival of "scale out" and "cloud" storage alone means old storage paradigms may no longer apply.
I predict server virtualisation will make it harder to map where storage is being utilised on servers. Fibre Channel over Ethernet and iSCSI will blur the lines between storage and networking groups. The increasing use of Flash and high-density SAS drives, the resurgence of direct attached (local) storage, new applications such as web 2.0, and the explosive growth of digital content and its metadata will combine to create very different performance/reliability trade-offs than we see today.
These scenarios present new challenges for storage administrators and for the creators of storage management tools. For the foreseeable future, there will still be a requirement for careful planning, intelligent design and approaches which reduce the management burden. But right now, don't be lured by the siren call.
John Martin is a Consulting Systems Engineer - Data Protection and Retention, with NetApp Australia and New Zealand.