Keeping cool under pressure

By Stuart Finlayson

The physical meltdown of a company's data centre is undoubtedly one of the most disastrous events that could befall any organisation. Stuart Finlayson talks to someone whose company is dedicated to ensuring that such an eventuality never occurs.

The modern-day data centre is becoming ever more sophisticated in terms of the technology within, with small, rack-mounted blade servers having largely replaced the much bulkier mainframe setup. Equipment density within racks is increasing, which creates complex cable management issues as well as a heightened risk of overheating within such infrastructures.

The consequences of equipment failure as a result of heat damage, one of the primary causes of systems downtime, are not difficult to grasp. As well as the losses incurred by the downtime itself, which can run into the thousands, if not hundreds of thousands, of dollars an hour, there are increased servicing costs and a shortened equipment life.

Such delicate and sophisticated technology requires an equally sophisticated means of keeping it running at a safe and optimal temperature. One company whose expertise lies in providing this vital service to organisations is American Power Conversion (APC). It is a service that comes at a price, though, with some rack cooling systems carrying a price tag upwards of $50,000. So what does a dedicated rack cooling system offer that a regular air conditioning system does not, to justify such an outlay?

Caroline Gest, marketing manager for APC Australia & New Zealand, explains that there's much more to the process than meets the eye. "Removing heat is a complex task that needs to be addressed by professional solutions. Comfort air conditioning, or standard air conditioning, is not the solution, for many reasons. Firstly, comfort air conditioning was designed to cool a room for the comfort of people; it is not designed to cool equipment that can easily reach over 40 degrees in 20 minutes. At that temperature a server would ultimately crash, resulting in an ungraceful, uncoordinated shutdown in an effort to avoid melting. That means an interruption to operations, data loss, and possible damage to or loss of hardware. On top of that, people and equipment require different levels of humidity. The optimal environment for people, which is the aim of comfort air conditioning, requires much more humidity than the optimal environment for electronic equipment, hence the need for precision cooling in a computer room or data centre.

"Comfort air conditioning is not designed to constantly and consistently cool down equipment. A lot of operations today are 24 x 7 and cannot afford the air conditioning in their building to be down or not functioning 100 percent of the time - businesses cannot afford downtime. The estimated lifespan of the average comfort air-conditioning system is based on eight to ten hours of operation, not around-the-clock, 24 x 7 operations required in a data centre-so these units wear out very quickly.

"Secondly, cooling down sensitive IT equipment is not only about cooling but also heating, humidifying and dehumidifying. The temperature and humidification of the air needs to be precise, correct and steady. A sudden variation in temperature can lead to thermal shock. To avoid thermal shock electronic equipment should be maintained at a constant 21 degrees centigrade and shouldn't go up or down more than one degree in any ten minute period."

It follows that a number of technological issues must be considered before installing a cooling system if heat problems in the data centre and server room environment are to be avoided. The aim, says Gest, is to deliver cool air to the inlet of the servers while minimising the mixing of the hot air coming off the servers with the cool air being delivered to them.

"Careful consideration must be given to the location of the precision cooling units within the room to ensure good delivery of conditioned air to the floor grilles via the raised floor but that also minimise mixing of the returning hot air with the cold room air. In other words, you can make as much cold air as you want but if you don't get it to where it is required (at the server), you're just wasting your time and money.

"A good raised floor height is required with a zone set aside for air delivery to provide a path for the conditioned air to travel from the precision cooling unit to the floor grilles. Problems that arise are cable trays and other sub-floor obstructions impede airflow to areas of the data room. This basically means you provide sub standard cooling to those restricted areas and hot spots are likely to develop, causing heat related failures. Also delivery of conditioned air can be compromised by air leaks through missing floor tiles and cable cut-outs; floor tiles should be replaced and cable penetrations sealed as best as possible.

"Along with the positioning of the precision cooling units the actual position of your racks is very important when it comes to getting the best out of your cooling.

"Implementation of a hot aisle/cold aisle arrangement ensures minimal mixing of conditioned air with the hot exhaust off the servers, increasing the efficiency of the cooling system. Furthermore, the use of blanking panels over unused positions within the rack will minimise mixing of conditioned air and hot exhaust air.

"Poorly located delivery or return vents are very common and can erase almost all of the benefits of a hot-aisle-cold-aisle design. The key to air delivery vents is to place them as close to the equipment air intakes as possible and keep the cool air in the cold aisles. For under-floor air distribution, this means keeping the vented tiles in the cold aisles only. Overhead distribution can be just as effective as a raised floor distribution system, but again, the key is that the distribution vents be located over the cold aisles, and for the vents to direct the air directly downward into the cold aisle (not laterally using a diffusing vent).

"In either overhead or under-floor systems, any vents located where equipment is not operational should be closed, since these sources return air to the precision cooling unit at lower temperatures, increasing dehumidification and decreasing Computer Room Air Conditioning (CRAC) performance."

So there you have it! But could one not be forgiven for thinking that hardware vendors should incorporate cooling technology into their builds as a matter of course, rather than leaving the customer to purchase such an expensive addition to the original investment? On the face of it this would appear to make perfect sense but, according to Gest, the specialised nature of the business makes such a scenario highly unlikely.

"Cooling is a very specific field that requires expertise. APC are experts in cooling technology. When you know how much damage inefficient cooling can do to the most sophisticated IT equipment, you must leave it to the experts in the field.

"It is an interesting question though. The fact is that server and storage vendors put a lot of time and money into improving the internal cooling systems within their equipment. However, with the trend towards smaller and smaller rack-mounted equipment, equipment densities have increased so much that these internal cooling mechanisms aren't enough. In the modern data centre, an integrated and manageable precision cooling system is essential. It comes back to providing a solid physical foundation for the equipment so the IT executive can provide an available network without needing to become a cooling or power specialist."
