Data centres are at the heart of the internet, and internet traffic continues to rise exponentially, driving data centre power consumption upwards. The traffic is dominated by social networking, gambling, gaming and mobile phone use, with video content (such as YouTube) being the largest single generator of data.
Faster broadband only serves to encourage wider usage, and many governments worry about data centre power growth despite having digital agendas to increase access.
Against this background we see the rapid rise of ‘the Cloud’, but what is this cloud? Simply put, it is just another form of data centre – one owned and operated to offer applications and storage on a buy-as-you-need basis.
As competition between Cloud providers rises, they try to differentiate themselves on brand and price. But one cost – energy – dominates their business models, so they are driven to build facilities with high ICT utilisation (squeezing every bit of processing and storage capacity out of their ICT hardware) and very low overhead energy, getting their PUE (Power Usage Effectiveness) as close to 1 as possible. In practice, the best-in-class have already approached the limit, with less than 10% of the ICT energy used to cover losses and provide cooling, i.e. a PUE below 1.10.
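The PUE arithmetic is simple enough to sketch. The figures below (10 MW of ICT load, 950 kW of overhead) are illustrative assumptions, not data from any particular facility:

```python
# Illustrative PUE arithmetic. PUE = total facility energy / ICT energy,
# so a PUE below 1.10 means overheads of less than 10% of the ICT load.
# The 10 MW / 950 kW figures are assumptions chosen for illustration.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over ICT power."""
    return (it_kw + overhead_kw) / it_kw

# A best-in-class facility: 10 MW of ICT load carrying under 1 MW of
# cooling and distribution losses.
best_in_class = pue(10_000, 950)
print(round(best_in_class, 3))  # 1.095 – comfortably below 1.10
```

Note that PUE says nothing about how productively the ICT load itself is used, which is why the providers chase high ICT utilisation as well as low overhead.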
These Cloud facilities are the factories of our digital age and can be huge – more than 20 MW in many cases – and energy can represent over 60% of their operating costs, hence the drive to minimise power consumption.
Now, the ICT hardware turns all of its incoming power into heat (and digital services), so that heat has to be moved from inside the facility to the external ambient.
Traditionally, when ICT hardware was thought to need tight temperature and humidity control, that heat removal was achieved with precision air-conditioning. But times have changed rapidly, and that precision is no longer needed.
The problem with air-conditioning is that it limits the PUE to around 1.40. In response, the industry moved away from precision cooling and developed so-called ‘free-cooling’ coils, whereby cold external air is used in place of compressor operation whenever it is cold enough outside. The higher the internal temperature, the more free-cooling can be achieved, but a limit still remains in the warmer summer months.
Free cooling – all the time
So where does the ‘wet’ come into it? Well, in temperate climates – Europe, for example – the air is usually dry when it is hot (unlike the Tropics, where humidity is constantly high), so evaporative and adiabatic systems have found application, taking advantage of the wet-bulb temperature instead of the dry-bulb temperature. Take the UK: the hottest dry-bulb on record is 34.5°C, but the wet-bulb at that moment was only 23°C. Adding water therefore enables the cooling system to achieve free-cooling for far longer – in fact, 100% of the year across the UK and Northern Europe.
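The headroom argument can be sketched with the article’s UK figures. The 3°C evaporative ‘approach’ (how close the cooled air gets to wet-bulb) and the 27°C supply-air setpoint below are assumptions for illustration, not figures from the article:

```python
# Sketch of the evaporative free-cooling headroom argument for the UK.
# Wet-bulb figure is from the article; the approach and setpoint are
# illustrative assumptions.

WET_BULB_WORST_CASE_C = 23.0   # UK wet-bulb at the record dry-bulb moment
EVAP_APPROACH_C = 3.0          # assumed: air leaves ~3 degC above wet-bulb
SUPPLY_SETPOINT_C = 27.0       # assumed maximum allowable supply-air temp

achievable_supply = WET_BULB_WORST_CASE_C + EVAP_APPROACH_C
# Even at the historic worst case, evaporatively cooled air stays below
# the setpoint, so the compressors never need to run.
print(achievable_supply <= SUPPLY_SETPOINT_C)  # True
```

On these assumptions the worst-case achievable supply is 26°C, which is why wet systems can deliver free-cooling all year in such climates.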
Now, this can consume a lot of water, a valuable resource in many parts of the world, so rain-water harvesting and the use of grey water are preferable to potable utility water. The highest consumption in the UK would be in the order of 1,000 m³ of water per MW of heat removed per year – equivalent to the annual usage of about 30 domestic dwellings. So we need to consider several aspects:
- How far does the PUE fall (and how many kWh of electricity are saved) per m³ of water?
- What is the source of the water, and does it carry any embedded energy (as potable water does)?
- How much water is saved in the (thermal) power station cooling towers by reducing the local PUE, and what is the source of that water (usually river or sea)?
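The first question can be made concrete with some rough arithmetic. The PUE figures (c.1.40 with air-conditioning, 1.10 best-in-class) and the 1,000 m³/MW/year water estimate come from the discussion above; the 1 MW ICT load is an assumption for the sketch:

```python
# Rough kWh-saved-per-m3 arithmetic for the first question above.
# PUE and water figures are from the article; the 1 MW load is assumed.

IT_LOAD_MW = 1.0
HOURS_PER_YEAR = 8760
PUE_AIRCON = 1.40            # typical limit with compressor-based cooling
PUE_EVAP = 1.10              # best-in-class with evaporative assistance
WATER_M3_PER_MW_YEAR = 1000  # upper UK estimate from the article

mwh_saved = IT_LOAD_MW * (PUE_AIRCON - PUE_EVAP) * HOURS_PER_YEAR
kwh_per_m3 = mwh_saved * 1000 / WATER_M3_PER_MW_YEAR
print(f"~{mwh_saved:.0f} MWh/year saved, ~{kwh_per_m3:.0f} kWh per m3")
# → roughly 2628 MWh/year, i.e. ~2.6 MWh of electricity per m3 of water
```

On these assumptions, each cubic metre of water displaces over two and a half MWh of electricity, which frames the trade-off the three questions are probing.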
It does seem rather incongruous, if not plain odd, that faster broadband can be enabled by using more water…
Professor Ian Bitterlin