Virtualisation Serve and Volley

June 20th, 2006: With server hardware growing ever more powerful, Matthew Overington looks at how to use virtualisation technology to consolidate resources, streamline testing, and maximise ROI

Times are tight in the world of IT. After the boom and bust of the late twentieth century, a reality check saw budgets heavily slashed and administrators called upon to make better use of resources through consolidation. With IT budgets under strain and return on investment (ROI) a primary business driver, it's imperative to be able to demonstrate efficient use of company resources. It can be difficult to sneak a new server into the budget under the watchful eye of the CFO, but thankfully there's another option.

There's heavy momentum behind enterprise-class virtualisation, and both major chip manufacturers have already announced that they'll build virtualisation hooks into upcoming server-class products. Virtualisation is a significant trend in big business, with heavy hitters like Microsoft and HP offering enterprise-grade solutions for customers. It allows IT managers to maximise return on investment while significantly reducing the total cost of ownership and simplifying management.

Good Management

While most virtual server products are designed with similar functionality in mind, one key differentiator is the management tools. As you don't have physical machines to log into, managing virtual servers differs slightly from managing dedicated systems.

Virtual management tools offer the ability to allocate resources between virtual servers running on a system – processor time, memory, hard disk space, network and screen access. While just about every solution on the market allows admins to set thresholds and resource limits per virtual machine, a few will even allow you to allocate resources dynamically. This is extremely handy for servers that occasionally see high loads. Microsoft is leading the Dynamic Systems Initiative (DSI), which is focused on producing self-managing dynamic systems to help reduce downtime and free up administrators for other tasks.
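
To make the allocation idea concrete, here's a rough sketch in Python of the sort of policy a dynamic allocator applies: grant each guest what it's actually using, up to an admin-set cap, then share any spare capacity among the guests that could use more. The host size, guest names and figures are invented for illustration; this isn't any vendor's actual API.

HOST_MEMORY_MB = 8192    # total memory on the host (assumed figure)

guests = {
    # name: observed working set (MB) and admin-set ceiling (MB), all hypothetical
    "web01": {"demand": 1500, "cap": 3072},
    "sql01": {"demand": 4200, "cap": 4096},
    "test01": {"demand": 600, "cap": 2048},
}

def rebalance(guests, host_memory_mb):
    """Grant each guest its demand (up to its cap), then split any spare
    memory among guests still below their caps, in proportion to headroom."""
    grants = {name: min(g["demand"], g["cap"]) for name, g in guests.items()}
    spare = max(0, host_memory_mb - sum(grants.values()))
    hungry = {n: guests[n]["cap"] - grants[n]
              for n in grants if grants[n] < guests[n]["cap"]}
    total_headroom = sum(hungry.values()) or 1
    for name, headroom in hungry.items():
        grants[name] += min(headroom, int(spare * headroom / total_headroom))
    return grants

if __name__ == "__main__":
    for name, megabytes in rebalance(guests, HOST_MEMORY_MB).items():
        print(f"{name}: {megabytes} MB")

Run periodically, a policy along these lines stops a lightly loaded development guest from hoarding memory that a busy database guest could use, which is exactly the behaviour dynamic resource allocation promises.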

One example of elegant management integration is Microsoft’s Virtual Server 2005 R2, which is designed to work with Microsoft Operations Manager 2005 (MOM). Redmond offers a MOM 2005 Management Pack for Virtual Server that allows administrators to map guest-host relationships. Microsoft also throws in a web console for Virtual Server 2005 R2 to help ease remote administration.

According to HP, few data centres utilise more than 40% of their available resources at any given time. HP points to server applications being resource-intensive in short bursts, and to the fact that many servers only run a single application at a time. Servers are configured to handle peak loads without skipping a beat, which means they're under-utilised at anything below the peak. Virtualisation offers a way for tech decision-makers to consolidate resources and maximise the return on their hardware investments.
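
A back-of-the-envelope calculation shows why those utilisation figures matter. The numbers below are assumptions for illustration, not HP's: ten legacy boxes ticking over at 25% of capacity represent only two and a half servers' worth of real work, which fits comfortably on one larger host kept below an 80% ceiling.

# Illustrative consolidation arithmetic; all figures are assumptions.
legacy_servers = 10
average_utilisation = 0.25      # typical load on each legacy server
host_capacity_multiple = 4.0    # new host is roughly 4x one legacy box
host_ceiling = 0.80             # don't run the consolidated host hotter than this

steady_state_load = legacy_servers * average_utilisation   # in legacy-server equivalents
host_budget = host_capacity_multiple * host_ceiling        # likewise

print(f"Load {steady_state_load:.1f} vs budget {host_budget:.1f}:",
      "fits" if steady_state_load <= host_budget else "does not fit")

Peak loads still need headroom, of course, which is where the dynamic allocation described above earns its keep.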

"Virtualisation is fast emerging as one of the most effective ways to combat server sprawl," says Nick Gault, president and CEO of XenSource, which makes virtualisation software based on the open-source Xen hypervisor.

One of the more compelling reasons to switch to server virtualisation is the saving on hardware, as you can pour your budget into a single system and then fit it out with a copy of a virtual server product and whatever operating systems you want to run on it. Bear in mind that you'll need separate licences for every OS running on the server – even for multiple instances of the same version of operating system. For example, say you want to run two copies of Windows 2000 on a single box – one for development and testing, and the other for live production. You'll need two Windows 2000 licences. It's not a massive issue when you consider that you'd need two licences across two physical machines anyway, and the return on investment on a single system is much higher than on multiple systems, because you can keep the server running close to full load all the time. It also dramatically reduces management time (and cost), as there's only a single physical system to support instead of two.

On the other side of the fence, running several operating systems or environments on a single physical system introduces a significant risk: what happens if the hardware fails? In a multi-server environment, it's possible to either wait while a resource is repaired and brought back online, or fail over to another system during the downtime. In a virtual environment, a server failure could cause users to lose access to several resources at once, so it's vital to have a comprehensive failover plan in case anything goes awry. Thankfully, modern server hardware features comprehensive monitoring, and it's straightforward to configure systems to alert the admin in case of failure.

There's a bevy of server-class virtualisation technologies available to suit a wide range of target markets, from Unix clusters to beefy Windows boxes. And recent developments have meant that Microsoft's Virtual PC and Virtual Server product lines are challenging traditional market leaders like VMware.
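
The alerting half of that failover plan needn't be elaborate. The following is a minimal sketch, not any vendor's monitoring agent: it simply probes each virtual server host on a known TCP port and emails the administrator if a host stops answering. The host names, port numbers and mail settings are all hypothetical.

# Minimal availability check for virtual server hosts (illustrative only).
import smtplib
import socket
from email.message import EmailMessage

HOSTS = {"vmhost01.example.local": 3389, "vmhost02.example.local": 22}  # hypothetical
SMTP_SERVER = "mail.example.local"   # hypothetical mail relay
ALERT_TO = "admin@example.local"     # hypothetical recipient

def is_up(host, port, timeout=5.0):
    """Return True if the host accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def alert(host):
    """Email the administrator that a host failed its check."""
    msg = EmailMessage()
    msg["Subject"] = f"Virtual server host {host} is not responding"
    msg["From"] = "monitor@example.local"
    msg["To"] = ALERT_TO
    msg.set_content(f"{host} failed its availability check; begin the failover procedure.")
    with smtplib.SMTP(SMTP_SERVER) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    for host, port in HOSTS.items():
        if not is_up(host, port):
            alert(host)

In practice you'd run something like this from a machine outside the virtual environment, so the monitor doesn't die alongside the host it's watching.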

HP Virtual Partitions

HP has invested heavily in virtualisation technology, and offers a compelling solution for high-end Unix customers. HP-UX 11i Virtual Partitions (vPars) is designed to run multiple instances of HP's HP-UX 11i Unix operating system on a single machine. Each virtual partition can run its own applications separate from the other instances of the OS. Applications and name spaces are kept isolated under each instance, so it's impossible for one partition to contaminate another.

HP-UX 11i Virtual Partitions can dynamically partition hardware to allocate resources to each vPar as required. It's possible to allocate more than one processor to each partition, so CPUs can be assigned to a vPar at times of heavy load, then released to another partition when demand decreases again.

IBM

Not surprisingly, IBM's Virtualisation Engine is focused on core enterprise applications and supports both IBM Server and IBM TotalStorage systems. It employs a number of IBM's existing technologies and is designed to leverage open-source hypervisor technology. The Enterprise Workload Manager (EWLM) manages workloads and directs them towards processors that are under-utilised, and is designed to support a range of heterogeneous systems and network resources as required.

Microsoft Virtual Server 2005 R2

Microsoft hadn't offered a virtualisation product of its own until it acquired Connectix, launching the Virtual PC and, later, Virtual Server products to an enthusiastic market. Release 2 (R2) of Microsoft Virtual Server 2005 was launched late in 2005, and it introduced network (PXE) booting to allow automated deployment.

The software is designed to run on Windows Server 2003, and Microsoft sees four distinct markets for its product: consolidating hardware, hosting legacy applications, automating test environments for developers, and disaster recovery. Microsoft specifically claims that its product is ideal for disaster recovery, as virtual servers can be migrated to other host machines in the event of an emergency, thus maintaining uptime.

Microsoft Virtual Server 2005 R2 is a key part of the Dynamic Systems Initiative, which includes the Redmond-based company and around 20 other vendors, and is focused on delivering systems that can dynamically manage themselves. Memory, hard disk, processor and system resources are all divvied up by the software, and allocated based on need to maximise performance.

VMware

VMware is arguably the best-known virtualisation software for both desktop and server environments. The company offers a number of virtual server products to suit a range of customers, from the technical professional (VMware Workstation) through to the enterprise data centre (VMware GSX and ESX Server). What's more, the company also offers a number of tools to help migrate, deploy and manage a virtual environment.

The company offers VirtualCenter for managing virtual assets; it allows the admin to manage large numbers of servers from a single location, create and destroy virtual servers, perform zero-downtime hardware maintenance, and enforce configuration standards. The company also offers VMotion – an elegant tool that allows administrators to migrate running servers from one host system to another without interrupting service.

VMware has arguably the most advanced toolset for operating virtual environments, whether on the desktop or in the enterprise. Its client list is impressive, too: internet job-search engine Monster managed to retire 75 servers from its data centre by moving across to VMware.

The Monster deployment makes full use of iSCSI storage technology, and VMware handled the interface without a hitch.

In mid-2002, Monster's data centre was, "... at capacity with 230 racks of gear," according to Brian McCarthy, Monster's director of operations analysis. "It was too hot and consumed too much power. We also had computers being shipped to us from other sites, including 8-CPU and larger servers."

By deploying VMWare GSX Server and migrating to virtual machines, McCarthy was able to shed more than a quarter of the hardware in the data centre, and with it, save US$275,000.

Grid Computing

Grid computing was first developed to link supercomputers in separate geographical locations, but has evolved to incorporate desktop PCs as well. At first glance, grid computing is the antithesis of virtualisation: the idea is essentially to link a number of systems scattered across different locations to share a single processing load.

Arguably the best-known grid computing application is the SETI@home project, which uses spare processor cycles on volunteers' PCs to help search for extraterrestrial life.

Grid computing relies heavily on open standards to ensure that disparate, heterogeneous systems can communicate effectively with one another. Scalability is a key concern, as a grid can comprise as many as several million resources. At the base is the grid fabric, which communicates over a network such as the internet or a LAN. Above that sits the core grid middleware, which handles services like resource allocation, storage and process management. Above that again sits the user-level grid middleware, which includes the development environments and programming tools that service the grid, and the top layer is the grid application itself, running on host PCs.

Grid computing allows organisations and individuals with under-utilised resources to open them up to network users. For example, a university may need a huge amount of processing power to crunch through enrolments a couple of times per year. While it's inefficient to purchase large, beefy servers just to handle that load, grid computing would allow the network administrator to poach processing cycles from every connected computer on campus, carving through the processing load in no time. More information can be found at www.gridcomputing.com.
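
Underneath the middleware, the pattern is simple: carve a big job into independent chunks, farm them out, and combine the partial results. The Python below is a toy illustration of that pattern only, using local worker processes to stand in for grid nodes; real grid middleware adds discovery, scheduling and security on top of the same idea.

# Toy coordinator/worker split; local processes stand in for grid nodes.
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for the real work a grid node would perform on its slice."""
    return sum(x * x for x in chunk)

def split(data, parts):
    """Carve the workload into roughly equal, independent chunks."""
    size = max(1, len(data) // parts)
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    workload = list(range(1_000_000))       # the "enrolment crunch", hypothetically
    chunks = split(workload, parts=8)       # one chunk per available node
    with Pool(processes=8) as pool:
        partials = pool.map(process_chunk, chunks)
    print("Combined result:", sum(partials))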

Momentum

Software virtualisation is a trend that's not going to disappear in the near future. In fact, as server hardware becomes progressively more powerful and incorporates advanced virtualisation technologies, expect to see an increase in virtual server deployments – particularly in smaller businesses where resources are tight, and it's crucial to maximise leverage of existing assets.

According to Mendel Rosenblum, Chief Scientist at VMware, "In the coming years, virtual machines will move beyond their simple provisioning capabilities and beyond the machine room to provide a fundamental building block for mobility, security and usability on the desktop."

Most enterprise-class virtualisation technologies are vendor-centric; they're designed to work with a certain hardware set or server product and are only supported if you purchase a combined hardware and software solution.

The virtualisation application you choose for your data centre or server environment will largely depend on your existing infrastructure. For example, if you've got a significant Unix investment, it pays to deploy a Unix-compatible virtual server product like HP's Virtual Server Environment.

Where products like Microsoft Virtual Server 2005 R2 come into their own is in consolidating legacy systems. Though Redmond would like to see its entire user base upgrade from Windows NT Server 4.0 to Windows Server 2003, the fact is that some businesses have legacy systems that specifically demand NT Server 4.0 in order to run. Microsoft's solution is to run Windows NT Server 4.0 in a virtual environment on a Windows Server 2003 box to boost reliability and manageability.

Server hardware manufacturers are also rallying behind the virtualisation movement, and both Intel and AMD are scheduled to launch enterprise-class processors with hooks built in for virtual server applications. These hooks will let virtual server software trap privileged guest operations and switch between guests with help from the hardware, promising more responsive virtualisation and more seamless switching between hosted operating environments. Every serious server-class virtualisation technology already supports 64-bit processors, disk arrays and dual-channel memory, allowing it to make full use of a host system's resources.

At the end of the day, understanding and working with virtualisation technology is a growing part of managing a data centre or IT department. With tech budgets under pressure from management, any technology to help consolidate resources, maximise return on investment, and simplify the life of administrators is a godsend.
