Hyperscale data centres are rapidly evolving to meet the soaring demands of AI, cloud computing and sustainability. With AI workloads driving a shift to high-density rack designs and liquid cooling solutions, operators are rethinking infrastructure to accommodate increased power and heat dissipation, while maintaining sustainability targets such as greener energy, efficient hardware and carbon reduction strategies.
Innovations in modular design, advanced cabling and localised material sourcing are shaping the next generation of data centres. As hyperscalers scale up, flexibility, efficiency and sustainability remain at the forefront of their evolution. We hear the views of three experts on hyperscale data centres.
Rajesh Sennik, Head of Data Centre Advisory at KPMG UK:

AI is having a significant impact on data centres globally as they look to support rapidly growing demand through incremental growth in capacity and changes to design.
We expect to see data centres continue to grow to support cloud and enterprise workloads. Whilst there will be dips in demand and open questions around how much capacity is required long term to support model training, we continue to see sustained demand for AI projects. However, we expect demand will vary by geography. For example, there is a significant shift towards building new AI data centres in the Nordics, given that training models are not latency-dependent and hence there is greater flexibility over their location. In addition, the Nordics offer access to renewables, generally easier permitting and lower ambient temperatures.
A key consideration in the design and operation of AI data centres is rack density. Cloud workloads generally operate at a rack density of 10-15kW, while AI relies on high-performance computing nodes and tends to operate above 40kW. The servers therefore generate far more heat, which has to be dissipated, meaning AI data centre cooling systems have to be engineered to a much higher specification.
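As a rough illustration of those densities, the sketch below compares the heat a cooling system must reject for a row of cloud racks versus a row of AI racks. The row size of 20 racks is an assumption for illustration only; at steady state, essentially all electrical power drawn by the IT load is rejected as heat.

```python
# Back-of-the-envelope heat-load comparison for a cloud row vs an AI row,
# using the rack densities quoted above. The 20-rack row is an assumption.
RACKS_PER_ROW = 20  # assumed row size, for illustration

cloud_kw_per_rack = 12.5   # midpoint of the 10-15 kW cloud range
ai_kw_per_rack = 40.0      # lower bound quoted for AI racks

cloud_row_heat_kw = RACKS_PER_ROW * cloud_kw_per_rack
ai_row_heat_kw = RACKS_PER_ROW * ai_kw_per_rack

print(f"Cloud row heat load: {cloud_row_heat_kw:.0f} kW")   # 250 kW
print(f"AI row heat load:    {ai_row_heat_kw:.0f} kW")      # 800 kW
print(f"Ratio: {ai_row_heat_kw / cloud_row_heat_kw:.1f}x")  # 3.2x
```

Even at the lower bound of the AI range, the same row produces over three times the heat, which is why air-based systems designed around cloud densities fall short.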
Traditional air-cooled systems are no longer appropriate at AI rack densities, and we are seeing a major shift to liquid cooling, using technologies such as liquid-to-air cooling, direct liquid cooling, full immersion cooling and direct-to-chip cooling. While this has profound implications for the cooling systems themselves, we have noted that for data centre operators the major implication is the introduction of additional liquid distribution systems and incremental pipework into the halls.
Cloud computing continues to drive sustained demand for data centres as we see exponential growth in workloads moving to the cloud to support applications such as analytics, ERP, social and collaboration. The major impact on data centres from cloud computing is the sustained demand for capacity.
Sustainability is also a key consideration for data centres, with hyperscalers mandating aggressive net zero targets for 2030. Operators are taking a multi-solution approach (there is no silver bullet), looking at renewable power usage, efficient cooling systems, energy-efficient hardware, green building practices, carbon offsets/PPAs (power purchase agreements), intelligent server virtualisation and consolidation, demand response and analytics to optimise usage.
We are seeing the development of new technologies that help server loads operate at optimal efficiency, leading to major energy savings. We are also seeing sustainability requirements drive innovative solutions, such as using the excess heat from data centres (heat recovery) to feed local heat networks.
Adam Asquith, Technical Director, Black & White Engineering:

Hyperscale data centres are rapidly evolving to support growing demands using a multi-pronged approach. AI computing is supported by dedicated hardware, with racks and servers typically comprising higher-TDP GPUs, which are better suited to the parallel processing requirements imposed by AI and Machine Learning.
These racks necessitate different, and more innovative, power and cooling delivery strategies to deal with the higher IT load densities in operation. This specialised hardware is being integrated into existing (re-purposed) and new-build facilities alongside 'more traditional' cloud compute clusters under a hybrid deployment pattern, intended to provide flexibility at scale.
Air and 'high density' liquid cooling can be installed and operated alongside each other in the same critical space, using combinations of CDUs (coolant distribution units) and CRAHs (computer room air handlers) arranged to provide a layer of redundancy and resilience. LV power can be delivered to the racks via overhead busway and row/room PDUs using standard power formations and architecture.
Sustainability is a key consideration; developers and operators require unprecedented levels of flexibility and scale, delivered at speed, to drive competition and meet industry demands. Operating temperatures and the resulting plant and equipment efficiencies have often dominated discussions of energy consumption, and this may still be the case, but a more holistic approach is now being considered, one that accounts for embodied energy and carbon throughout the various stages of a facility's life cycle. Strategies are being devised that allow developers and owners to quantify carbon and then focus attention on how it can be managed, in a fashion that permits net zero qualification.
Newer designs target modularity, repetition and controlled preassembly, so supply chains and contractors can achieve more efficient production, shorter timescales and less waste. Alternative, greener materials formed from recycled content or aggregates are being considered, along with products and equipment carrying a 'green' or 'carbon' passport. Locally sourced materials are sought to reduce transport and logistics energy consumption.
Workflows and processes are being developed to streamline and optimise operational efficiency; in some instances, automation and robotics are being leveraged.
More is also being done outside the facility envelope, with concentrated efforts towards reducing and offsetting the carbon footprint through the adoption of green energy, via investment and collaboration with local utility and energy providers. This will only increase as we approach 'giga scale' and further strain is placed on existing energy networks. Measures being taken to offset carbon emissions include sequestration and carbon capture projects, often involving wider communities.
Sebastian Sassi, VP of Sales, Atlantic Vision:

Hyperscale data centres are evolving in a lot of ways, and most of them are around making better use of the whitespace available while providing more power, cooling and density.
In the fiber passives and optical glass manufacturing space, we're seeing newer small-form-factor connectors like the MDC gain popularity in data centres. There's a trend towards higher-fiber-count, larger-outer-diameter trunk cables, rather than extended runs of simplex and duplex patch cords, to connect rack to rack and bay to bay. In a nutshell, hyperscalers want this hardware for scalability and efficiency.
By using a trunk cable form factor that bundles many fibers into a single cable, space in the cable raceways and routing areas is conserved. But we're seeing an increasing challenge: demand for cables like this is resulting in longer lead times for production of MPO (multi-fiber push on) and MTP (multi-fiber termination push on) connectorised glass.
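The raceway-space argument for trunk cables can be sketched with a simple cross-sectional-area comparison. The cable diameters below are illustrative assumptions, not vendor specifications, and the fiber counts are chosen only so that one trunk matches the capacity of the patch-cord bundle.

```python
import math

# Rough cross-sectional-area comparison: one 144-fiber MPO trunk cable
# versus 72 duplex patch cords carrying the same total fiber count.
# Outer diameters (OD) are assumed values for illustration only.
TRUNK_OD_MM = 9.0    # assumed OD of a 144-fiber trunk cable
DUPLEX_OD_MM = 2.0   # assumed OD of a duplex patch cord
N_DUPLEX = 72        # 72 duplex cords ≈ 144 fibers

def area(diameter_mm: float) -> float:
    """Circular cross-sectional area in mm^2."""
    return math.pi * (diameter_mm / 2) ** 2

trunk_area = area(TRUNK_OD_MM)
patch_area = N_DUPLEX * area(DUPLEX_OD_MM)

print(f"Trunk cable cross-section: {trunk_area:.0f} mm^2")
print(f"72 duplex cords:           {patch_area:.0f} mm^2")
print(f"Raceway area saved:        {1 - trunk_area / patch_area:.0%}")
```

Under these assumed dimensions the trunk occupies well under a third of the raceway cross-section of the equivalent patch-cord bundle, which is the space saving the quote describes.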
Reported lead times for multimode and singlemode trunk cables are extending out to three or four months. Before the AI boom, it would have been three to four weeks.
Hyperscalers running vast facilities have enormous challenges around power capacity and space for structured cabling solutions, so connectorised optical glass manufacturers are facing a crunch to maintain production capacity for the dense, flexible, high-performance cabling solutions needed to make this growth a reality.
Production facilities are definitely aware of this trend. The demands placed on hyperscale data centres by burgeoning growth in automation, distributed computing, virtualisation, and Generative AI make for a perfect storm. There will be a lot more need to make efficient use of space, even on massive data centre projects. Supply chains that serve the hyperscale facilities are adapting to the unprecedented demand for connectivity.