Kevin Brown, CTO and SVP of Innovation at Schneider Electric’s IT Division, and Wendy Torell, Senior Research Analyst at Schneider Electric’s Data Center Science Center, talk about how Edge computing needs a razor-sharp focus on reliability

Centralised data centres are a familiar concept today, with most organisations buying into the idea that cloud computing, virtualisation and specialist expertise can combine to deliver resilient, scalable and cost-effective IT services from large, remote data facilities.
At the same time, a number of factors are forcing companies to consider moving some key IT assets to the edge of the network, closer to the users, be they customers or employees. The upshot is that, in reality, many organisations will deliver specific applications and services from a variety of data centres, ranging from smaller in-house networking facilities to gigantic colocation centres. This has implications for the overall levels of security and resilience that may be expected.
It has long been taken for granted that large centralised data centres have the highest standards for such functions as data backup, failover systems and physical security. Backups are performed regularly and punctiliously; there is ample storage and server redundancy to take up the slack in the case of equipment failure; power and cooling systems are highly redundant; and physical security is strictly enforced to prevent unauthorised access to sensitive areas by those with malicious intent.
Further down the data centre chain, some or all of these functions may not be as readily available. A micro data centre installed in a spare office, network closet or basement is unlikely to merit its own security guard or have the same level of redundancy that pertains in larger facilities.
Furthermore, the sort of applications hosted locally may be used by only a minority of staff, but they are also more likely to be proprietary applications, specific to the organisation and critical to the business’s well-being. Therefore, when calculating the overall availability of IT services, it is important to take into account the variance between the different data centres so that an organisation can attain a true picture of the strength or vulnerability of its IT assets.
Recent research carried out by Schneider Electric proposes that the overall availability of IT services to an organisation should be based on a holistic view of the organisation’s data centres, and that a score-card methodology be adopted so that a dashboard can be drawn up depicting system-level availability. This produces metrics showing that the relatively poorer levels of availability from smaller sites can have a disproportionately large effect on overall IT availability.
For example, a user might depend on applications hosted by two data centres: a centralised Tier 3 facility with 99.98% availability (1.6 hours of annual downtime) and a local Tier 1 site with 99.67% availability (28.8 hours of annual downtime). The total availability of the two systems in series (meaning a failure occurs if either system fails) is the product of the two availabilities, 99.98% × 99.67% = 99.65%, resulting in a total annual downtime of 30.7 hours.
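To make the arithmetic concrete, the short sketch below (a hypothetical illustration, not part of the Schneider Electric methodology) computes the combined availability and expected annual downtime for any number of data centres on which a service depends in series, using the rounded Tier figures from the example above.

```python
from math import prod

HOURS_PER_YEAR = 8760  # 365 days; leap years ignored for simplicity

def series_availability(availabilities):
    """Combined availability of systems in series: the product of the
    individual availabilities, since a failure of any one system
    takes down the whole service."""
    return prod(availabilities)

def annual_downtime_hours(availability):
    """Expected hours of downtime per year for a given availability."""
    return HOURS_PER_YEAR * (1 - availability)

# Figures from the example: a Tier 3 central facility and a Tier 1 edge site
tier3 = 0.9998   # rounded; the unrounded Tier 3 figure of 99.982% gives ~1.6 h/yr
tier1 = 0.9967   # rounded; roughly 28.8 h/yr

combined = series_availability([tier3, tier1])
print(f"Combined availability: {combined:.4%}")                               # ~99.65%
print(f"Combined annual downtime: {annual_downtime_hours(combined):.1f} h")   # ~30.7 h
```

The same function extends naturally to a score-card view: feed it the availability of every facility a given application traverses and the product is the system-level figure a dashboard would report.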
That’s the bad news.
The good news is that what gets measured gets managed, and visibility of the situation allows steps to be taken to improve availability throughout an organisation.
New best-practice measures can be adopted to improve availability at the edge of a network. Simple steps, such as moving equipment to locked rooms or installing biometric access controls (now cheap enough to be deployed throughout a network), can boost security appreciably.
Remote monitoring software is now flexible enough to take account of IT assets distributed across a wide geographical area as well as those housed centrally. Monitoring can therefore be consolidated on a centralised platform, providing similar levels of management and reporting throughout an organisation.
For power and cooling, one should consider monitoring temperature and humidity levels at all sites and introducing redundant power paths to maintain availability. A similar focus on redundancy for network connectivity should also be considered, depending on how critical the locally hosted application is.
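As a purely illustrative sketch, the snippet below shows the kind of temperature and humidity threshold check a consolidated monitoring platform might apply across distributed sites; the thresholds, site names and readings are assumptions for the example, not Schneider Electric product behaviour, and real deployments would follow the equipment vendor’s environmental specifications.

```python
from dataclasses import dataclass

# Hypothetical alert thresholds chosen for illustration only
TEMP_RANGE_C = (18.0, 27.0)
HUMIDITY_RANGE_PCT = (20.0, 80.0)

@dataclass
class SiteReading:
    site: str
    temperature_c: float
    humidity_pct: float

def check_reading(reading: SiteReading) -> list[str]:
    """Return an alert message for each environmental value that
    falls outside its configured range."""
    alerts = []
    if not (TEMP_RANGE_C[0] <= reading.temperature_c <= TEMP_RANGE_C[1]):
        alerts.append(f"{reading.site}: temperature {reading.temperature_c:.1f} C out of range")
    if not (HUMIDITY_RANGE_PCT[0] <= reading.humidity_pct <= HUMIDITY_RANGE_PCT[1]):
        alerts.append(f"{reading.site}: humidity {reading.humidity_pct:.0f}% out of range")
    return alerts

# Example readings from a central facility and an edge closet (made-up data)
for r in [SiteReading("central-dc", 22.5, 45.0), SiteReading("branch-closet-07", 31.2, 18.0)]:
    for alert in check_reading(r):
        print(alert)
```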
As data is exchanged between local and central facilities, one must take into account the challenges that may emerge as a consequence, including service disruption, latency and, in some cases, network reliability. Fortunately, the availability of monitoring software, cost-effective security and high-availability edge solutions means that these challenges can be met.