What to consider for data centre cooling in a shifting landscape

Keith Dunnavant, VP Offer Strategy & Portfolio Management, Munters, reviews cooling basics, current trends and peeks into the future of data centre cooling.

As AI’s rapid evolution fuels the need for High-Performance Computing, advanced engineering and optimised cooling solutions have become crucial to ensuring data centre efficiency, reliability and adaptability to an uncertain future. By integrating innovative technologies and tailored cooling systems, modern data centres are achieving new standards of performance, driving down energy costs and minimising environmental impact.

Precision vs practicality: Finding the right balance in thermal management

In today’s data centres, effective thermal management is vital, but the good news is that it doesn’t require pinpoint accuracy. Instead, it is about maintaining proper coolant flow (air or liquid) at temperatures within a suitable range that keeps equipment operating reliably and efficiently.

ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) provides guidance for engineers designing data centre cooling systems. The fifth edition of its Thermal Guidelines for Data Processing Environments details the recommended and allowable temperature ranges.

The current recommendation is for server supply air to fall between 18-27°C (64.4-80.6°F). This range helps ensure server reliability while also enabling efficient cooling strategies. The allowable temperature range is much broader and varies with server classification; Class A1 servers are the most restrictive, with an allowable range of 15-32°C (59-89.6°F).

Seasonal ambient temperature variations present opportunities to refine cooling strategies. For instance, during colder winter conditions, supply air can be delivered cooler with little to no HVAC energy penalty, reducing server energy consumption and enhancing server reliability and lifespan. Winter supply air temperatures of 18.3°C (65°F) are common, but in general, targeting a supply air temperature around 24°C (75°F) is often seen as the sweet spot, balancing reduced server fan energy use with manageable HVAC costs.

When possible, the data centre should be designed to allow temperatures to drift slightly higher during peak summer ambient conditions, permitting the HVAC equipment capacity and cost to be reduced. Additionally, the peak power usage is reduced with this strategy, favourably impacting the electrical infrastructure and leading to a lower total cost of ownership.

In 2021, ASHRAE introduced a new environmental class for high-density, air-cooled equipment, class H1. These high-density products that use high-powered processing components require a tighter control envelope, with supply air temperature recommended between 18-22°C (64.4-71.6°F). For this reason and many others, it is more effective, and in many cases essential, to cool high-performance servers with liquid — or more commonly, a combination of liquid and air.
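
To make the envelope arithmetic concrete, the short Python sketch below checks a supply-air temperature against the ranges quoted above (A1 recommended 18-27°C and allowable 15-32°C; H1 recommended 18-22°C). The function and dictionary names are illustrative, and classes or limits not cited in this article are deliberately left out.

```python
# Minimal sketch: classify a supply-air temperature against the ASHRAE
# envelopes cited in this article. Only the ranges quoted above are included;
# other classes (and the H1 allowable range) are omitted rather than guessed.

ENVELOPES_C = {
    "A1": {"recommended": (18.0, 27.0), "allowable": (15.0, 32.0)},
    "H1": {"recommended": (18.0, 22.0)},  # allowable range not cited here
}

def check_supply_air(temp_c: float, server_class: str = "A1") -> str:
    """Return 'recommended', 'allowable' or 'out of range' for the given class."""
    env = ENVELOPES_C[server_class]
    lo, hi = env["recommended"]
    if lo <= temp_c <= hi:
        return "recommended"
    if "allowable" in env:
        lo, hi = env["allowable"]
        if lo <= temp_c <= hi:
            return "allowable"
    return "out of range"

if __name__ == "__main__":
    for t in (18.3, 24.0, 30.0):
        print(f"{t:.1f} C -> A1: {check_supply_air(t, 'A1')}")
```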

Ultimately, the challenge lies in balancing cost, energy efficiency, reliability and serviceability. New innovations are great, but if they don’t check all four boxes, they miss the mark.

Exploring modern cooling: Innovations in air and liquid systems

The global data centre cooling market shows no signs of slowing. It is expected to surge from US$16.84 billion in 2024 to an impressive US$42.48 billion by 2032, reflecting a robust Compound Annual Growth Rate (CAGR) of 12.3%. The primary driver of this growth is AI, of course.
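
As a quick sanity check on those market figures, the snippet below derives the implied compound annual growth rate from the 2024 and 2032 values quoted above.

```python
# Quick check that the quoted market figures and CAGR are consistent:
# US$16.84bn (2024) growing to US$42.48bn (2032) over eight years.
start, end, years = 16.84, 42.48, 2032 - 2024
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~12.3%, matching the figure cited
```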

  • Air cooling

Still the most common data centre cooling method, air cooling involves circulating cool air to absorb and dissipate heat generated by the equipment. Hot-aisle (or cold-aisle) containment is an essential element of any efficient design.

Air cooling can be achieved in many ways, but a common trend has been to use perimeter-mounted Computer Room Air Handling (CRAH) units to ‘flood’ the room with cool air. CRAH units have fans to move the air and coils that absorb heat into circulating chilled water or refrigerant. Alternatives include in-row coolers, downflow CRAHs delivering air to raised floors, and above-rack coolers configured in various ways, to name a few.
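
To illustrate the scale of airflow a CRAH-based design implies, here is a rough sketch that converts an IT heat load and an assumed air temperature rise into a required volume flow. The air properties (density ~1.2 kg/m³, specific heat ~1.005 kJ/kg·K) and the 500 kW / 12 K example are illustrative assumptions, not design values.

```python
# Minimal sketch: estimate the air volume flow needed to carry a given IT
# heat load at a chosen air temperature rise (delta-T).
# Assumes approximate properties for air near room conditions.

AIR_DENSITY = 1.2   # kg/m^3, approximate
AIR_CP = 1.005      # kJ/(kg*K), approximate

def crah_airflow_m3s(it_load_kw: float, delta_t_k: float) -> float:
    """Volume flow (m^3/s) so that load = density * cp * flow * delta-T."""
    return it_load_kw / (AIR_DENSITY * AIR_CP * delta_t_k)

if __name__ == "__main__":
    # e.g. a hypothetical 500 kW room with a 12 K rise across the IT equipment
    flow = crah_airflow_m3s(500.0, 12.0)
    print(f"~{flow:.1f} m^3/s (~{flow * 2118.9:,.0f} CFM)")
```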

  • Liquid cooling

Liquid cooling directly targets heat sources, such as CPUs (Central Processing Units) and GPUs (Graphics Processing Units), by circulating a coolant fluid through cold plates that absorb heat from the processors, or alternatively, using immersion cooling approaches. The most common cooling fluid used today for cold plate applications is high-purity water blended with 25% propylene glycol (PG25) plus inhibitors to limit corrosion and biological growth.

However, two-phase fluids (refrigerants) are gaining momentum because they improve heat transfer potential, eliminate biological concerns and reduce the risk of damaging IT gear in the event of a leak. These fluids, PG25 or refrigerant, are circulated in a closed loop, normally by a Coolant Distribution Unit (CDU) mounted in the rack or externally, which interfaces with facility chilled water or refrigerant that ultimately transports the heat to the atmosphere.

A new approach introduced by Munters and ZutaCore integrates Munters SyCool (thermosiphon-based heat rejection) with ZutaCore’s HyperCool (in-rack two-phase thermal management system). This novel waterless approach provides end-to-end two-phase heat rejection.

ASHRAE and OCP (Open Compute Project) both offer guidance for liquid cooling designs. The full suite of ASHRAE's documentation is available by subscribing to its Datacom Encyclopedia, while OCP runs multiple ongoing working groups, including one focused on fluid pipe distribution.

  • Hybrid cooling

Hybrid cooling combines both air and liquid cooling methods. The heat from CPUs and GPUs is removed with liquid (PG25 or refrigerant) delivered to cold plates, while most other server components are air-cooled. Typically, 70-85% of server heat can be liquid-cooled, leaving a residual of 15-30% to be air-cooled. Moving forward, it is likely that more cooling work will be done by liquid, which is more efficient.
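
A minimal sketch of that split follows, using the 70-85% liquid-capture range quoted above; the 100 kW rack size is a hypothetical figure for illustration only.

```python
# Minimal sketch: split a rack's heat load into liquid-cooled and air-cooled
# portions using the 70-85% liquid capture range mentioned above.

def split_heat(rack_kw: float, liquid_fraction: float) -> tuple[float, float]:
    """Return (liquid_kw, air_kw) for a given liquid capture fraction."""
    if not 0.0 <= liquid_fraction <= 1.0:
        raise ValueError("liquid_fraction must be between 0 and 1")
    liquid_kw = rack_kw * liquid_fraction
    return liquid_kw, rack_kw - liquid_kw

if __name__ == "__main__":
    for frac in (0.70, 0.85):
        liquid, air = split_heat(100.0, frac)  # hypothetical 100 kW rack
        print(f"{frac:.0%} capture -> {liquid:.0f} kW liquid, {air:.0f} kW air")
```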

Adaptability and customisation: Critical for optimal cooling

Customisable cooling solutions are crucial for both air and liquid cooling because each data centre design can have unique constraints that often cannot be met with standard cooling products. Liquid cooling at large scale is relatively new, so data centre owners, consulting engineers, contractors and equipment suppliers must work in close collaboration to achieve an optimal outcome.

What lies ahead: The future of cooling

We are in a period of rapid innovation as hybrid air and liquid-cooled data centres are deployed at an unprecedented scale. It will take several years for the industry to define best practices and adopt standards.

Fuelled by the increase in computational demand created by Generative AI, almost 50GW of data centre capacity is projected to be added to the US grid by 2028.

With increasing demand for data centres and their ability to support heavier workloads, advanced cooling solutions will continue to evolve. Embracing the emerging best practices and technologies will become more important than ever before, and as we look toward the future, one thing is clear: this is an exciting time to engage in the conversation surrounding data centre cooling.

Some liquid cooling considerations:

  • The capacity of individual liquid cooling loops impacts pipe sizing, the potential damage resulting from a leak and the optimal sizing of supporting CDUs, and it has overall cost implications
  • Liquid cooling systems must ensure compatibility between the working fluid and all materials that contact the fluid. The current piping material of choice is 316 stainless steel. Maximum velocity of the fluid within the piping, to minimise potential erosion, is recommended at 2.7 m/s. It is suggested that the circulating single-phase fluids be filtered to 25 microns to prevent fouling of microchannels in the cold plates
  • A common design metric for single-phase technology working fluids is 1.5 litres/minute per kW of heat rejection, which for PG25 results in a 10°C (18°F) temperature difference between the supply and return fluid (a worked sketch of this relationship follows this list)
  • Pumps within CDUs should derive their power from a UPS (Uninterruptible Power Supply) source and be sized to allow for the hydronic losses from discharge, through the entire hydronic circuit, and back. CDUs with single pumps (VFD controlled) provide better efficiency and control relative to CDUs with multiple pumps and VFDs
  • Redundant CDUs are needed to provide the best resiliency and simplicity, as opposed to CDUs that provide redundant internal components. Components like expansion tanks are ideally installed in the common hydronic loop piping, and not internal to CDUs
  • New considerations can be found in ASHRAE’s Liquid Cooling: Resiliency Guidance for Cold Plate Deployments
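
The sketch below ties together three of the rules of thumb above: the 1.5 litres/minute-per-kW flow metric, the roughly 10°C supply/return temperature difference for PG25, and the 2.7 m/s velocity ceiling. The PG25 properties used (density ~1,020 kg/m³, specific heat ~3.9 kJ/kg·K) are approximations, and the 1MW loop size is an illustrative assumption, not a recommendation.

```python
import math

# Minimal sketch linking the single-phase rules of thumb quoted above:
# flow metric, PG25 supply/return delta-T and maximum pipe velocity.

FLOW_L_MIN_PER_KW = 1.5   # design metric quoted above
PG25_DENSITY = 1020.0     # kg/m^3, approximate
PG25_CP = 3.9             # kJ/(kg*K), approximate
MAX_VELOCITY = 2.7        # m/s, erosion limit quoted above

def loop_check(heat_kw: float) -> None:
    flow_l_min = FLOW_L_MIN_PER_KW * heat_kw
    flow_m3_s = flow_l_min / 1000.0 / 60.0
    mass_flow = flow_m3_s * PG25_DENSITY               # kg/s
    delta_t = heat_kw / (mass_flow * PG25_CP)          # K, expect ~10
    min_area = flow_m3_s / MAX_VELOCITY                # m^2, to stay under 2.7 m/s
    min_id_mm = math.sqrt(4.0 * min_area / math.pi) * 1000.0
    print(f"{heat_kw:.0f} kW loop: {flow_l_min:.0f} L/min, "
          f"delta-T ~{delta_t:.1f} C, min pipe ID ~{min_id_mm:.0f} mm")

if __name__ == "__main__":
    loop_check(1000.0)  # hypothetical 1 MW liquid-cooling loop
```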
