The way we control temperatures inside data centres has changed dramatically over the years. Advances in computer technology, combined with a better understanding of how best to lay out a data centre, have led to an acceptance that data centres can operate at much higher temperatures than previously thought, without compromising the high levels of reliability their customers expect.
To understand this change in thinking, we first have to look at how data centres have changed in the decades since they were introduced in the ’60s and ’70s. Back then, scientists would work at their desk alongside computers. There would be an abundance of paper in the workspace, an item that has long since become virtually obsolete in modern data centres.
This led to two important considerations for IT managers. Firstly, the space had to maintain a temperature at which individuals could work comfortably. Typically, this was accepted to be around 21°C, and it became the norm that the servers held within these facilities should also operate in this climate.
Secondly, relative humidity had to be kept at a level of around 50% to avoid damaging the high-quality paper used by the machines. In order to achieve this level, the cooling system would often have to supply air as low as 11°C to keep the whole room close to the 21°C target.
However, while the IT equipment no longer requires this level of cooling and such stringent relative humidity targets, attitudes towards data centre cooling have, in some circumstances, stayed the same. Considering the levels of loss that can result from a server failure, it is unsurprising that some IT departments continued to take a conservative approach to close climate control.
Yet as IT equipment and people started to separate, the space that was once filled by desks began to fill with server racks. Modern infrastructure hardware was also evolving to tolerate higher temperatures, so thresholds could be raised without affecting its power or performance.
Air cooling – in the traditional sense – involves the use of computer room air conditioners (CRACs) to convert warm air to cool air by rejecting heat to the outside. CRACs can be used in a number of basic configurations that focus on cooling the entire room, just a row or just a rack. Whole-room air conditioning situates CRACs in such a way that a certain temperature is maintained fairly evenly throughout the room.
When you consider that most of the electrical energy going into a data centre will turn into heat at some stage, you can begin to understand the challenge of cooling these spaces. This is a large amount of energy: it is estimated that in 2017 the global data centre industry used 541 terawatt-hours of electricity.
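The rule of thumb above can be made concrete: because virtually all the electrical power drawn by IT equipment is ultimately dissipated as heat, the continuous cooling load roughly equals the electrical load. A minimal sketch, using an illustrative 10 kW rack (the figure is an assumption, not from the article):

```python
# Sketch: cooling load tracks electrical load, since nearly all power
# drawn by IT equipment ends up as heat. Figures are illustrative.

rack_power_kw = 10.0             # assumed electrical draw of one rack
heat_output_kw = rack_power_kw   # ~100% of the input power becomes heat

hours_per_year = 24 * 365
annual_heat_kwh = heat_output_kw * hours_per_year

print(f"Heat to remove: {heat_output_kw} kW continuous")
print(f"Annual thermal energy: {annual_heat_kwh:,.0f} kWh")  # 87,600 kWh
```

Multiplied across hundreds of racks, this is why removing heat efficiently dominates a facility's non-IT energy bill.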
Rather than trying to combat this with cooling alone, then, it is much simpler, more effective and more energy efficient to take the hot air outside. To do this, data centre layout designs have been refined to isolate warm air from cool air. An improved understanding of aisle containment and its benefits has substantially changed the way data centres can be cooled.
Instead of pumping volumes of cold air into the whole room – an inefficient approach, because the warm and cold air mix – data centres can isolate the hot air ejected from server outlets and remove it. The overall air temperature falls, lowering the required cooling load. In fact, cooling systems can now supply air at temperatures in the mid-20s °C without affecting the performance or reliability of the servers.
The rise of modern cooling systems that capitalise on this 'heat removal' philosophy has also enabled data centre managers to maintain a climate that maximises the reliability and performance of the IT hardware in their facility while improving their power usage effectiveness (PUE) rating at the same time. These systems can often pay for themselves very quickly when compared with more traditional climate control solutions.
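PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so a value approaching 1.0 means almost no cooling or distribution overhead. A short sketch, with illustrative (assumed) annual figures for a conventionally cooled room versus a heat-removal design:

```python
# Sketch: power usage effectiveness (PUE) = total facility energy / IT energy.
# The example consumption figures below are assumptions for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Lower is better; 1.0 would mean zero non-IT overhead."""
    return total_facility_kwh / it_equipment_kwh

conventional = pue(1_800_000, 1_000_000)   # heavy CRAC overhead -> 1.8
heat_removal = pue(1_200_000, 1_000_000)   # contained, heat-removal design -> 1.2

print(f"Conventional PUE: {conventional:.1f}")
print(f"Heat-removal PUE: {heat_removal:.1f}")
```

At the same IT load, the difference between those two ratios is pure overhead energy, which is where the quick payback mentioned above comes from.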
For instance, indirect evaporative cooling systems exploit the temperature difference between the indoor and outdoor environment by passing indoor and outdoor air through a plate heat exchanger. When outdoor temperatures are higher, it can utilise adiabatic humidification to recreate this temperature difference for heat rejection.
There are also solutions that deliver ‘free cooling’ by using a water circuit as a go-between when the outdoor air is colder than the indoor conditions.
When the temperature difference between outdoors and indoors is smaller, the system operates in mixed mode, where free cooling and direct expansion run simultaneously. The system benefits from the 'cube root' principle: if the free cooling circuit can provide 20% of the required duty, the direct expansion circuit has 20% less to do, and because power scales with roughly the cube of duty, this saves nearly 50% in energy consumption.
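The 'cube root' arithmetic above can be checked in a few lines. Assuming power scales with the cube of the remaining duty (the fan affinity law relationship the article appears to invoke), offloading 20% of the duty to free cooling leaves 0.8³ ≈ 0.51 of the original power draw:

```python
# Sketch of the 'cube root' principle: power scales roughly with the
# cube of the required duty, so a 20% reduction in duty cuts energy
# use by far more than 20%. Illustrative arithmetic only.

free_cooling_share = 0.20
remaining_duty = 1.0 - free_cooling_share   # direct expansion handles 80%
relative_power = remaining_duty ** 3        # 0.8 ** 3 = 0.512
saving = 1.0 - relative_power               # ~0.488

print(f"Remaining power fraction: {relative_power:.3f}")
print(f"Energy saving: {saving:.1%}")       # ~48.8%, i.e. 'nearly 50%'
```

This is why even partial free cooling delivers a disproportionately large energy saving.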
All things considered, it is clear why operators should always consider heat removal a valuable part of the cooling process, whether they are undertaking an expansion or designing an entirely new facility.
The rise of new technologies such as adiabatic evaporation and free cooling, together with rising energy prices, is only going to make this trend all the more popular.
The energy efficiency benefits, when allied to cost savings, will make heat removal an increasingly desirable addition for data centres looking to maximise their profitability, while delivering a high level of reliability for their customers.