Should edge data centres be treated as mission-critical sites, and how can we make this a reality? Kevin Brown, CTO of Schneider Electric’s IT Division, warns that the edge will need to become a lot more resilient in the future, but there are significant challenges ahead. Louise Frampton reports.
With strong market drivers and demand for more distributed IT architecture to support emerging technologies, the shift to edge represents one of the most profound opportunities to modernise today’s legacy infrastructure and data centre ecosystem. However, there is still a great deal of confusion around what ‘the edge’ actually is, according to Kevin Brown.
“On the one hand, we are saying this is the next multibillion-dollar business and on the other, we are saying ‘what is it?’” he comments. “In 2012, Cisco introduced the concept of Fog Computing [an architecture that extends cloud computing to the edge of the network] and this is when we really started talking about ‘the edge’. This was around seven years ago, yet we are still trying to agree a definition. It is pretty complex, we don’t know exactly what it is going to look like, or what the opportunity is, but we know it is going to be big.”
Schneider Electric predicts there will be three types of data centre in the future: large, centralised cloud data centres; regional edge data centres of about 1-5MW, servicing local areas; and ‘local edge’ sites, which may be anything from one rack up to around 100kW of IT.
“What seems certain is the new hybrid computing architecture will require a more robust edge infrastructure. Users are now asking how fast an app will load on their device, not just if it will load, and they expect responsiveness,” says Brown.
The local edge infrastructure will be widely dispersed and will need to be effectively managed to ensure resilience, which will present significant challenges. Brown argues that when we discuss resilience according to the Uptime Institute’s Tier system, we talk in percentages of uptime. In his view, this can give a false sense of security: the difference between 99.98% uptime for a Tier III data centre and 99.67% uptime for a Tier I data centre doesn’t sound like a huge difference. However, it is much more meaningful when expressed as hours of downtime per year – roughly 1.6 hours vs 29 hours.
“I would argue that edge data centres are mostly Tier I. We have got pretty good at building large data centres for Tier III… The challenge arises when you connect these together. If you take a centralised data centre with a downtime of 1.6 hours per year and connect it to a local edge data centre running at Tier I, the availability for the person that is dependent on that edge data centre goes down. It is worse than Tier I when you do this – it goes from 29 hours of downtime for the local edge data centre to 31 hours. In fact, many edge data centres are worse than Tier I.
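The arithmetic behind these figures is straightforward series availability: a user who depends on both the central site and the edge site is affected whenever either is down, so for rare, independent outages the downtimes roughly add. A quick sketch using the Uptime Institute’s published tier percentages (the figures in the article are rounded):

```python
HOURS_PER_YEAR = 8760  # non-leap year

def downtime_hours(uptime_pct):
    """Annual downtime implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

tier3 = downtime_hours(99.982)  # Tier III: ~1.6 hours/year
tier1 = downtime_hours(99.671)  # Tier I:  ~28.8 hours/year

# A user depending on BOTH sites in series is up only when both are up,
# so the combined availability is the product of the two.
combined_uptime_pct = (99.982 / 100) * (99.671 / 100) * 100
combined = downtime_hours(combined_uptime_pct)

print(f"Tier III alone: {tier3:.1f} h/yr")
print(f"Tier I alone:   {tier1:.1f} h/yr")
print(f"In series:      {combined:.1f} h/yr")  # ~30.4 h/yr
```

The series figure of roughly 30.4 hours per year is what the article rounds to 31 hours: adding an edge site makes the end user’s experience slightly worse than the edge site alone.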
“A lot of chaos happens. There are no local staff taking care of them. I visited a Tier III data centre and had an armed guard following me around. Yet, if I visit a retailer with an edge data centre, the janitor has access into the closet… this is playing out in real time.”
Brown warns that we need to start treating edge sites as mission critical data centres. Although this presents a challenge, there are three aspects that contribute to making the edge more resilient:
• An integrated ecosystem
• Management tools
• Analytics and AI
Physical infrastructure vendors, IT equipment manufacturers, system integrators and managed service providers will have to work together differently in the future, particularly in terms of the supply chain. Solutions will need to be delivered to site fully configured.
“All of it needs to come together with a thorough understanding of the customer application and be delivered at multiple locations worldwide, leveraging existing staff. This is part of the challenge,” comments Brown.
He says that Schneider Electric is already making progress in terms of an integrated ecosystem and has been working closely with HPE, Scale Computing, Cisco, StorMagic and Dell EMC on delivering resilient edge data centre solutions, including a standardised and robust micro data centre that can be monitored and managed from any location.
The Micro Data Centre (DC) Xpress allows IT equipment to be pre-installed by the customer, partner or integrator before shipment, and features complete data centre physical infrastructure and management software in a self-contained and secure enclosure. These micro data centres are also certified by leading converged and hyperconverged IT vendors.
Users of distributed IT architectures are also faced with the issue that they have no expert, onsite IT staff to deal with the issues that can arise across multiple, dispersed locations.
“Customers say that they receive so many alerts that they don’t know what to do with them; they don’t know which ones are important.
“It is one thing when you are in a big data centre, when you have lots of highly trained staff running a networked operation, with 10 screens up on the wall and educated people sifting through the data. They understand what is going on. But it is a different scenario when you have 3,000 micro data centres – it can be a nightmare; no one knows what they are doing,” comments Brown.
“There are issues over who is accessing the equipment and even whether the user can get the IP address – the problems can range from very complex issues to the very basic,” he continues.
Need for effective management tools
Brown argues that management tools must move to a cloud-based architecture. This will allow thousands of geographically dispersed edge sites to have the same level of manageability already provided for large data centres.
“With a cloud-based architecture, you can pay-as-you-grow and start with what you need. It’s easy to scale, upgrades are automatic, and it has up-to-date cybersecurity. Most importantly, this approach enables access from anywhere, at any time, from any device,” Brown explains.
He adds that multiple players in the ecosystem can see the same data and work from the same exact dashboard at the same time.
Ultimately, end users want to know they have a problem before it is too late, and this is the potential of predictive analytics.
However, edge data centres need to be managed holistically as opposed to being managed as a collection of individual devices – ie the UPS or the PDUs. At the same time, it is important not to lose the granularity of the data.
Brown reveals that Schneider Electric is addressing the need for management tools with the introduction of a cloud-based data centre infrastructure management (DCIM) solution that facilitates resiliency optimisation. EcoStruxure IT Expert allows secure monitoring and visibility of all IoT-enabled physical infrastructure assets, including power and cooling – anywhere, at any time.
The solution addresses the data centre industry’s need to simplify how data centres, distributed IT and local edge environments are managed. By providing proactive recommendations and consolidated performance and alarm data, the solution can reduce alarm noise and improve overall site resiliency.
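To illustrate what “reducing alarm noise” can mean in practice – a hypothetical sketch, not Schneider Electric’s actual EcoStruxure logic – raw device alerts streaming in from many sites can be deduplicated and grouped per site, so an operator sees one prioritised incident per location rather than the flood of individual messages Brown’s customers describe:

```python
from collections import defaultdict

# Hypothetical raw alerts: (site, device, severity), severity 1 = most critical
raw_alerts = [
    ("store-042", "UPS-1", 1),
    ("store-042", "PDU-3", 3),
    ("store-042", "PDU-4", 3),
    ("store-117", "cooling-1", 2),
    ("store-117", "cooling-1", 2),  # duplicate from a flapping sensor
]

def consolidate(alerts):
    """Collapse per-device alerts into one summary per site,
    deduplicating repeats and keeping the worst severity seen."""
    by_site = defaultdict(set)
    for site, device, severity in alerts:
        by_site[site].add((device, severity))  # set drops exact duplicates
    summary = {}
    for site, devices in by_site.items():
        worst = min(sev for _, sev in devices)  # lowest number = most critical
        summary[site] = {"worst_severity": worst, "alert_count": len(devices)}
    return summary

for site, info in sorted(consolidate(raw_alerts).items()):
    print(site, info)
```

Here five raw alerts collapse into two site-level incidents, with the site containing the critical UPS alarm surfacing first by severity – the kind of triage that otherwise requires “10 screens up on the wall and educated people sifting through the data”.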
“It is no longer just on-premises staff that can see the data. Experts from Schneider Electric, managed service providers, IT vendors and everyone in the ecosystem can look at the exact same data at the exact same time.
“This is the power that comes with it. Everyone is working from the same data set. It is also the only way you can move to machine learning and artificial intelligence; you need a large enough data set,” says Brown.
He outlines four key ingredients for artificial intelligence:
• A secure, scalable, robust cloud architecture
• A data ‘lake’ with massive amounts of normalised data
• A talent pool of subject matter experts
• Data scientists to develop the algorithms
“It is our experience that once you have these ingredients, which provide a solid foundation, you can start doing something interesting. You can become more predictive and help data centre operators know when there’s a problem before it occurs,” Brown concludes.