The demand for high-performance infrastructure is soaring, and industry collaboration and standardisation initiatives are revolutionising the data centre ecosystem as a result. Paul Mellon, Operations Director, Stellium Datacenters, explores how the hyperscale landscape is evolving and the impact enterprise organisations are having.
Demand for high quality hyperscale data centres across the globe has never been higher. Hyperscale providers will account for more than half of worldwide data centre capacity by 2027, according to Synergy Research Group’s annual market report last year. Hyperscalers currently account for around 37% of global capacity, with almost 900 large data centres operated by this segment. Half of this capacity is in own-built data centres, and the other half is in leased facilities.
Clearly, hyperscalers have high expectations and demands. They require their data centres and those of their prospective partners to ensure their technology and applications are powered, cooled, protected and connected, when and how they want, irrespective of geographic location. Continuous IT availability through the provision of scalable, future-proofed power and resilient critical infrastructure is a prerequisite. So too are optimised energy efficiency, industry-leading PUEs, and compliance with environmental, security, quality and operational regulations.
Though hyperscalers are leading the charge for more space, power and connectivity, large enterprise organisations are a further contributing factor to exploding data centre growth. As with hyperscalers, they recognise that high-quality, purpose-built colocation data centres offer many benefits over the alternative of continuing to operate their own self-built and self-managed facilities. Increasingly, they are looking to deploy HPC workloads, which effectively require hyperscale-class data centre solutions.
Accelerating R&D
This is where standardisation and industry collaboration among technology vendors is already paying handsome dividends. For example, not long ago a data centre – albeit a fairly small one – might have been a cluster of 50 to 100 racks with a combined IT load of 100kW. A single rack can now accommodate 100kW with conventional chilled-water cooling, and this scales to 250kW with immersion cooling. These developments represent giant steps forward in terms of efficiency.
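To put those figures in perspective, a quick back-of-the-envelope calculation shows the per-rack density uplift implied by the numbers above. The per-rack averages and improvement factors below are illustrative arithmetic derived from the article's figures, not vendor specifications.

```python
# Density comparison using the figures quoted above (illustrative only).
legacy_cluster_kw = 100       # combined IT load of a small legacy cluster
legacy_racks = 50             # lower bound of the 50-100 rack range
legacy_kw_per_rack = legacy_cluster_kw / legacy_racks   # 2.0 kW per rack

chilled_water_kw_per_rack = 100   # a modern chilled-water-cooled rack
immersion_kw_per_rack = 250       # an immersion-cooled rack

print(f"Legacy density:       {legacy_kw_per_rack:.1f} kW per rack")
print(f"Chilled-water uplift: {chilled_water_kw_per_rack / legacy_kw_per_rack:.0f}x")
print(f"Immersion uplift:     {immersion_kw_per_rack / legacy_kw_per_rack:.0f}x")
```

Even taking the densest end of the legacy range, a single modern rack now carries what an entire room of equipment once did.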
The Open Compute Project (OCP) Foundation has been at the centre of such innovation, ensuring collaborative R&D is undertaken at a scale that short-circuits the development and standardisation process from decades to years, sometimes months. High power density and cooling, once the preserve of a small number of mainframe installations, is now readily available throughout the data centre community.
Having originated from Facebook and several other technology companies’ initiatives to create a data centre design that would significantly reduce cost and boost efficiency, the OCP has since evolved into a global organisation with over 8,000 engineers with members such as Arm, Meta, Google, HPE, Inspur Systems, Intel, Microsoft and NVIDIA.
One major result of this visionary collaborative approach is the level and choice of high-power density and cooling now readily available at all levels of the data centre community. This has allowed the migration of millions of IT business environments to the cloud, delivering far greater efficiency in terms of lower power usage, greater flexibility in how organisations choose to manage their services, and robust SLAs guaranteeing 99.98% service availability.
Predictability
However, OCP’s mission has also evolved to support the core tenets of efficiency, impact, openness, scale and sustainability, embracing a much wider vision that goes well beyond optimising compute, storage and network efficiency.
Its OCP Ready certification programme for colocation data centres such as Stellium is a good example of this. Further accelerating the road to standardisation for hyperscale-class facilities, OCP Ready requires colocation data centres to work with the OCP to demonstrate compliance with its rigorous criteria for power, cooling, IT technical space layout and design, facility management and control, and facility operations.
It allows prospective hyperscale and enterprise customers to identify colocation facilities where OCP IT equipment can be installed without complications. Because the programme’s standards are open, data centre operators also know in advance what a given customer rack will require in terms of size, capability and power before it arrives onsite, and can ensure the design and layout are ready to support this class of equipment.
The majority of Open Compute hardware is deployed as a fully populated rack. OCP Ready data centre facilities must be able to accommodate racks of these weights and dimensions, and to deploy multiple such racks at scale. Racks can weigh from 500kg to 1,500kg and stand 47U in height, with workloads ranging from 6.6kW to 36kW and beyond.
In addition, the density of compute needed to meet scale demands and efficiency goals raises the bar for rack power and cooling specifications. Cooling the racks will potentially require a range of solutions: immersion, chilled water/air, or direct-to-chip cooling. At the higher densities, the expectation is for PUE to be sub-1.1.
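For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power, so a sub-1.1 PUE means facility overhead (cooling, distribution losses) of less than 10% of the IT load. The sketch below applies the definition to the rack figures quoted above; the 3kW overhead figure is an assumed example, not a measured value.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# For a 36 kW OCP rack, a sub-1.1 PUE implies under 3.6 kW of
# cooling and distribution overhead attributable to that rack.
it_kw = 36.0
overhead_kw = 3.0   # assumed overhead for illustration
print(f"PUE = {pue(it_kw + overhead_kw, it_kw):.3f}")  # 1.083
```

At these densities the overhead budget per rack is tight, which is why immersion and direct-to-chip approaches, with their lower cooling energy costs, become attractive.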
In summary, driven by the demands of hyperscalers for design excellence, predictability and certainty, the ongoing standardisation of IT hardware and supporting infrastructure is also a positive leap forward for enterprise companies as they migrate workloads from on-premises facilities to colocation and the cloud. It provides peace of mind with access to hyperscale-class infrastructure, allowing easier deployment of HPC equipment and the ability to scale quickly and reliably in any geographic region.