A survey of experts by the Uptime Institute has highlighted the latest technologies with the most disruptive potential, and the sector is being urged to keep a close eye on these promising innovations. Louise Frampton reports
If you knew an earthquake was coming, you would want to know “how big the impact is going to be, how fast it will happen and how likely it is to happen”, according to Uptime Institute vice-president research Rhonda Ascierto.
Speaking at a conference hosted by Data Centre Dynamics, she highlighted the top 10 technologies believed to have the most potential to change the data centre world.
The Uptime Institute conducted research to identify disruptive technologies, asking leading experts to rate them on potential impact and speed of change.
So what are the top technologies to watch?
Distributed resiliency
Distributed resiliency involves spreading workloads across sites using networks, data replication, load balancing and traffic switching. Effectively, resiliency migrates up to the IT level. The ‘pros’ of this approach include higher availability of business services and less need for gensets, while the ‘cons’ are that the costs – including networking – are unclear.
Ascierto pointed out that “not all workloads like to be moved – particularly legacy applications”. She explained that the disruptive driver behind this technology is the potential to improve business survivability without expensive single-site facility infrastructure.
Experts gave this a disruptive rating of 3.91 out of a possible 5, the highest of all the scores received for the technologies. (A score of 5 was defined as ‘prepare for competitive, disruptive change now’; 4 as ‘assess how it will affect your business – some impact likely soon’; 3 as ‘watch closely but impact not immediate’; 2 as ‘background development – no need to watch closely’; and 1 as ‘impact remote, unlikely or unrealistic’.)
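The article does not prescribe an implementation, but the traffic-switching idea behind distributed resiliency can be sketched in a few lines. The site names, weights and health flags below are purely illustrative: when a site fails its health check, requests are redistributed across the surviving sites instead of relying on single-site facility redundancy.

```python
import random

# Hypothetical multi-site estate; names, weights and the simulated
# outage are illustrative, not from the Uptime Institute research.
SITES = {
    "london": {"healthy": True, "weight": 3},
    "frankfurt": {"healthy": True, "weight": 2},
    "dublin": {"healthy": False, "weight": 1},  # simulated site outage
}

def route_request(sites):
    """Weighted load balancing across healthy sites only.

    Resiliency migrates up to the IT level: a failed site simply
    drops out of the pool, rather than being kept alive by
    expensive single-site infrastructure such as gensets.
    """
    healthy = {name: s["weight"] for name, s in sites.items() if s["healthy"]}
    if not healthy:
        raise RuntimeError("no healthy sites available")
    names = list(healthy)
    weights = [healthy[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

# Every request lands on a healthy site; the failed site receives none.
for _ in range(10):
    assert route_request(SITES) in ("london", "frankfurt")
```

Note the trade-off Ascierto raises: this only works for workloads that tolerate being moved, which is exactly where legacy applications struggle.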
Chiller-free data centres
The trend is towards less mechanical refrigeration, but most operators retain it for backup. The next step is complete elimination. This would offer lower capex and simpler maintenance, while making more power available for the IT load. Ascierto pointed out that IT managers remain wary of wide temperature bands and there is some trepidation. However, fears over IT failure rates are likely to diminish over time, and pressure to reduce excess capex and opex on cooling will continue to grow. This approach achieved a high disruptive rating of 3.89.
Micro modular embedded data centres
Micro modular embedded data centres were also scored as having high potential for disruption. The benefits, according to the Uptime Institute, include ‘plug and play’ installation and rapid delivery.
“We are hearing of 12-week deliveries,” commented Ascierto. “These modular data centres cost more than low-spec server closets and require different operational practices. However, we think the next wave of edge computing, with IoT, is going to drive the demand for these pre-fabricated, modular data centres.”
Experts gave the modular data centres a disruptive rating of 3.75.
Storage class memory
Storage class memory was also investigated by the researchers. Uptime Institute described this as the ‘holy grail’ of computing – combining the persistence of storage with the speed of operational memory. Intel is reported to have been sampling storage class memory this year.
“If the power goes out, you can still access the data,” Ascierto explained. The ‘pros’ include instant hibernation/recovery and faster data access, but the ‘cons’ include marginal gains in storage arrays and the fact that it is unproven.
“By bringing the data closer to the processors, it could radically change the white space layout and IT architectures. You may not need, for example, 2N UPS coverage for non-critical workloads,” said Ascierto. This technology area scored a disruptive rating of 3.64.
“If this technology hadn’t been in development for so long, I think people would have shown more confidence and it would have scored much higher. I think this could be really impactful and disruptive, but there has been a long wait for it to come to market.”
Data centre management as a service (DMaaS)
This is where real-time data is encrypted and transported from the data centre to the supplier’s cloud, where it is pooled with many other customers’ data for machine/deep learning.
“Having this vast amount of data enables the prediction of events and outcomes with much higher accuracy – perhaps, even predicting things that you wouldn’t be able to forecast without this vast amount of data. The pro is that, ideally, you can lower your risk via additional scrutiny and it could lead to new best practices,” said Ascierto.
“However, some people are jittery about their monitored data going over a wide area network to a cloud, whether or not this is justified. There is a reliance on third parties, as well as latency issues.
“Personally, I am very bullish about DMaaS, but I don’t think it will replace on-prem monitoring and DCIM. It isn’t going to be an ‘either, or’ scenario. I think this will be used to augment on-prem approaches today,” Ascierto said. DMaaS achieved a disruptive rating of 3.63.
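DMaaS providers use machine and deep learning on pooled data; as a deliberately simple stand-in for those models, the sketch below flags a reading that is normal against one site’s history but abnormal against the wider fleet. The temperature values and the z-score threshold are invented for illustration.

```python
import statistics

# Illustrative pooled inlet-temperature telemetry (°C) from many
# customers' sites; the figures are made up for this sketch.
fleet_inlet_temps = [22.1, 22.8, 23.0, 22.5, 23.2, 22.9, 23.1, 22.7,
                     22.6, 23.3, 22.4, 23.0]

def is_anomalous(reading, pooled, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from
    the pooled fleet mean.

    A vast pooled dataset is what lets a DMaaS supplier predict
    events that a single site's history could never reveal; real
    services use far richer machine/deep learning models than this
    simple z-score check.
    """
    mean = statistics.fmean(pooled)
    stdev = statistics.stdev(pooled)
    return abs(reading - mean) / stdev > threshold

print(is_anomalous(22.9, fleet_inlet_temps))  # in line with the fleet: False
print(is_anomalous(31.5, fleet_inlet_temps))  # well outside it: True
```

This also illustrates the ‘con’ in the article: the fleet statistics only exist because every customer’s telemetry crosses a wide area network to the supplier’s cloud.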
Silicon photonics
Silicon photonics were also highlighted. These are fibre-optic links integrated directly into semiconductor chips, removing the need for discrete electrical-optical conversion. They are cheaper and faster than copper, but availability is limited, there is a risk of vendor lock-in and the economics only work at scale.
“Even the hyperscalers aren’t doing this in the white space just yet, but they probably will in the next couple of years,” said Ascierto.
The disruptive driver behind this technology is the ability to provide IT subsystem disaggregation and pooling without loss of performance. In addition, there is potential for much higher resource utilisation. The technology received a disruptive rating of 3.59.
Open source infrastructure
This includes the Open Compute Project and Open19. Uptime Institute describes this as ‘the next stage of IT commoditisation’. It means more rack-level integration of power and relaxed climatic specifications.
“The big promise of open source is that it can reduce costs for IT and infrastructure capex and opex by a significant amount. However, the big con for widespread adoption is the fact that there needs to be a more mature supply chain. A lot more work needs to be done to get enterprise grade support and service,” said Ascierto.
The driver for this approach is the potential to cut costs and improve efficiency. It scored a disruptive rating of 3.54.
Software defined power
This is where power is a pooled resource and matched dynamically to IT load needs. It involves approaches such as automated power capping, re-routing, storing and discharging energy.
“The benefits include much higher utilisation rates and dynamic capacity management, but it is not necessarily straightforward – particularly if you are going to be shifting loads, because you need to integrate data about the equipment, the power source, the power quality and the IT apps that are running on the equipment. Some see this as increasing risk; you are effectively shifting the risk to the software. But one of the drivers for software defined power is it will enable leaner power capacity, lower redundancy and higher utilisation, which means lower capex,” Ascierto commented.
Software defined power scored a disruptive rating of 3.42. “This surprised me; I thought it would be higher,” said Ascierto.
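Automated power capping, one of the approaches Ascierto lists, can be sketched as follows. The rack budget and server demands are hypothetical, and a real system would also weigh the power source, power quality and the IT applications running on each machine, as she notes.

```python
# Hypothetical pooled rack power budget, in watts (assumed figure).
RACK_BUDGET_W = 10_000

def assign_power_caps(demands_w, budget_w=RACK_BUDGET_W):
    """Match a pooled power budget dynamically to per-server demand.

    If total demand fits within the budget, every server gets what it
    asks for; otherwise each cap is scaled back proportionally. This
    is the lever that permits leaner power capacity, lower redundancy
    and higher utilisation - and therefore lower capex.
    """
    total = sum(demands_w.values())
    if total <= budget_w:
        return dict(demands_w)
    scale = budget_w / total
    return {server: round(w * scale) for server, w in demands_w.items()}

# Demand (12 kW) exceeds the 10 kW pool, so caps are scaled to fit.
caps = assign_power_caps({"srv1": 4_000, "srv2": 5_000, "srv3": 3_000})
assert sum(caps.values()) <= RACK_BUDGET_W
```

The risk Ascierto identifies is visible even in this toy version: correctness now depends on the software seeing accurate, timely demand data.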
Direct liquid cooling
This involves delivering liquid, directly or indirectly, to chips. There are two major types: cold plates (liquid in a heat sink) on chips or full immersion. The pros are that this technology requires lower power, due to the elimination of fans, and has the potential for higher reliability. The cons include added complexity and the need for operational changes.
“People are not used to having servers in vats of liquid and how to maintain that; it is foreign. We also tend to see direct liquid cooling in facilities that were built for air cooling, so it is an added cost on top of infrastructure that has already been built,” Ascierto commented.
The driver for this technology is that it supports higher sustained processing speeds and more IT capacity in a power envelope. Direct liquid cooling achieved a disruptive rating of 3.33.
“This is lower than I would have expected… As we see more artificial intelligence workloads, I think direct liquid cooling is going to enjoy a real renaissance,” added Ascierto. “Personally, I would put that score higher.”
Data centre microgrids
These provide localised energy sources for increased energy security. They are often tied to the utility but can disconnect (island mode). The advantages of microgrids include energy assurance and security, but there are added capex costs involved and, in areas such as the US, utility costs are still fairly low.
“Building and operating microgrids is also a whole other area of expertise,” said Ascierto. Microgrids achieved a disruptive score of 3.20, the lowest of the technologies surveyed. “However, if we had a major, extended utility outage, the focus on longer-term energy generation on or near site would shift entirely,” she added.
Some 600 data centre end users were also canvassed for their views, which showed some marked differences from the expert panel. In summary, the experts were more bullish than end users on chiller-free data centres, direct liquid cooling, silicon photonics and micro-modular data centres, while the end users were more bullish on storage-class memory and open-source infrastructure.
“Ultimately, capacity planning is – and always will be – critical. The data centre is already being disrupted, and will become more efficient. I would encourage you to assess the top technologies now,” Ascierto concluded.