One of the principal outcomes of the trend toward outsourcing to the cloud is the elevated importance of the applications, data processing and data storage that remain on-premise. However, these services are often located in edge data centers that were not designed to meet either the availability expectations of the always-on generation or the reliability needs of business-critical applications.
The centralized cloud was originally conceived for applications such as email, payroll and social media, which were not considered time-critical. Migrating critical applications to the cloud has proven more challenging, however, as it has become apparent that latency, bandwidth, security and other regulatory requirements need to be addressed.
Maintaining some business-critical applications on-premise is one way to exercise the level of control needed to meet such goals. The result is that customers end up operating a "hybrid environment" consisting of a mix of cloud, medium-to-large regional facilities, and/or smaller, localized on-premise data centers.
At the same time, the growth of IoT and automation in utilities, as well as in industrial, manufacturing and processing operations, has increased the need to process and store all kinds of data much closer to the point of production. This enables real-time control together with the rapid decision making needed for efficient, often money-saving plant operations.
For obvious reasons, tackling latency is a top priority in any performance-sensitive industry. From consumers streaming movies to traders executing stock trades, and for enterprises, factories and processing plants bringing products and services to market, a latency difference of just a few milliseconds can mean the difference between success and failure. The emergence of distributed, or edge, data centers and the strategy of placing data processing and storage close to the point of use give users fast, low-latency access to relevant information.
An edge data center might physically comprise anything from a few racks in a small room or closet to a 1-2 MW facility. Crucially, these sites also host network connectivity to the cloud. Unfortunately, many edge data centers were hastily developed with little thought given to redundancy or availability. With more of IT happening in the cloud, employees cannot be productive if the access point goes down. Without monitoring, these facilities become a potential cause of persistent and costly downtime.
Enter cloud-based DCIM solutions known as DMaaS (Data Center Management as a Service, also called IMaaS). DMaaS is an innovation that offers IT solution providers and managed service providers the opportunity to address exactly the monitoring and management requirements of distributed environments, edge data centers and hybrid architectures.
DMaaS includes easy-to-implement cloud-based monitoring and reporting services, the essential first steps toward ensuring that facilities meet goals for availability, reliability and efficiency. These cloud-based DCIM solutions also enable partners to build stronger relationships with their customers by offering accurate, insightful information for maximum protection of critical equipment, along with recommendations on how to optimize the performance of data processing facilities and reduce the cost of operations. In short, they supply the information you need to have a meaningful conversation with your customer. This technology equips you to proactively provide Certainty in a Connected World for your customers and ensure that their operations are humming along.
For more details about how cloud computing is driving a re-think of edge data center resiliency requirements, please download white paper 256.