Gordon Johnson, Senior CFD Manager, Subzero Engineering, argues that the future is undoubtedly hybrid, and by using air cooling, containment and liquid cooling together, owners and operators can optimise and future-proof their data centre environments.
Many data centres are seeing power density per IT rack climb to levels that just a few years ago seemed extreme and out of reach, but that today are considered common, all while continuing to rely on air cooling. In 2020, for example, the Uptime Institute found that, driven by compute-intensive workloads, racks with densities of 20kW and higher are becoming a reality for many data centres.
This increase has left data centre stakeholders wondering if air-cooled IT equipment (ITE), along with containment used to separate the cold supply air from the hot exhaust air, has finally reached its limits and if liquid cooling is the long-term solution. The answer is not as simple as yes or no, however.
Moving forward it’s expected that data centres will transition from 100% air cooling to a hybrid model encompassing air and liquid-cooled solutions with all new and existing air-cooled data centres requiring containment to improve efficiency, performance and sustainability. Additionally, those moving to liquid cooling may still require containment to support their mission-critical applications, depending on the type of server technology deployed.
One might ask why the debate of air versus liquid cooling is such a hot topic in the industry right now. To answer this question, we need to understand what’s driving the need for liquid cooling, what the other options are, and how we can evaluate those options while continuing to use air as the primary cooling mechanism.
Can air and liquid cooling coexist?
For those newer to the industry, this is a position we’ve been in before: air and liquid cooling coexisted successfully for years, with substantial amounts of heat removed via intra-board air-to-water heat exchangers. That continued until the industry shifted primarily to CMOS technology in the 1990s, and we’ve relied on air cooling in our data centres ever since.
With air the primary medium used to cool data centres, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) has worked to make the technology as efficient and sustainable as possible. Since 2004, its technical committee TC 9.9 has published a common set of criteria for cooling IT servers, developed with the participation of ITE and cooling system manufacturers, entitled ‘Thermal Guidelines for Data Processing Environments’.
ASHRAE has focused on the efficiency and reliability of cooling ITE in the data centre, and several revisions of the guidelines have been published, the latest released in 2021 (the fifth edition). This edition introduces a new class of high-density air-cooled ITE (the H1 class), aimed at cooling high-density servers and racks, with a trade-off in energy efficiency because lower supply air temperatures are recommended to cool the ITE.
As to the question of whether or not air and liquid cooling can coexist in the data centre white space, it’s done so for decades already, and moving forward many experts expect to see these two cooling technologies coexisting for years to come.
What do server power trends reveal?
It’s easy to assume that, when it comes to power and cooling, one size fits all, both now and in the future – but that’s not accurate. It’s more important to focus on the actual workload of the data centre we’re designing or operating. In the past, a common rule of thumb with air cooling was that once you went above 25kW per rack it was time to transition to liquid cooling. But the industry has since made improvements, enabling data centres to cool up to and even beyond 35kW per rack with traditional air cooling.
Scientific data centres, with largely GPU-driven applications such as Machine Learning, AI and compute-heavy analytics including crypto mining, are the parts of the industry typically transitioning towards liquid cooling. For other workloads, such as cloud and most enterprise applications, densities are rising but air cooling still makes sense in terms of cost. The key is to look at the issue from a business perspective – what are we trying to accomplish with each data centre?
What’s driving server power growth?
Up to around 2010, businesses largely used single-core processors, transitioning to multi-core processors as they became available. Power consumption remained relatively flat with these dual- and quad-core processors, which enabled server manufacturers to design for lower airflow rates when cooling ITE, resulting in better overall efficiency.
Around 2018, with process sizes continuing to shrink, processors with higher core counts became the norm. With these chips approaching their performance limits, the only way to keep delivering new levels of performance for compute-intensive applications is to increase power consumption. Server manufacturers have been packing as much as they can into servers, but because of rising CPU power consumption, some data centres have had difficulty removing the heat with air cooling, creating a need for alternative cooling solutions, such as liquid.
Server manufacturers have also been increasing the temperature delta across servers for several years, which has been great for efficiency: the higher the temperature delta, the less airflow is needed to remove a given amount of heat. But here too manufacturers are reaching their limits, leaving data centre operators to increase airflow in order to cool high-density servers and keep up with rising power consumption.
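The airflow/temperature-delta trade-off described above follows directly from the sensible-heat equation. The sketch below is illustrative only (it is not from the article, and the air density and specific heat values are assumptions for roughly sea-level conditions); it estimates the airflow needed to remove a given rack heat load at a given server temperature rise.

```python
# Illustrative sensible-heat calculation: Q = P / (rho * cp * dT)
RHO_AIR = 1.2       # kg/m^3, assumed air density near sea level
CP_AIR = 1005.0     # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # conversion from m^3/s to cubic feet per minute

def required_airflow_cfm(rack_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to remove rack_kw of heat when the air
    warms by delta_t_k kelvin passing through the servers."""
    m3_per_s = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_k)
    return m3_per_s * M3S_TO_CFM

# A 35kW rack at two different server temperature deltas:
print(required_airflow_cfm(35, 10))  # ≈ 6,150 CFM
print(required_airflow_cfm(35, 20))  # ≈ 3,075 CFM
```

Doubling the temperature delta halves the required airflow, which is why shrinking deltas at the server level force operators to push more air through the room.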
Additional options for air cooling
Thankfully, there are several approaches the industry is embracing to successfully cool power densities up to and even greater than 35kW per rack, often with traditional air cooling. These options start with deploying either cold or hot aisle containment. If no containment is used, rack densities should typically be no higher than 5kW, with additional supply airflow needed to compensate for recirculation and hot spots.
At some point, high-density servers and racks will also need to transition from air to liquid cooling, especially with CPUs and GPUs expected to exceed 500W per processor or higher in the next few years. But this transition is not automatic and isn’t going to be for everyone.
Liquid cooling is not going to be the ideal solution or remedy for every future cooling requirement. The choice between liquid and air cooling depends on a variety of factors, including location, climate (temperature and humidity), power density, workload, efficiency, performance, heat reuse and the physical space available. This highlights the need for data centre stakeholders to take a holistic approach to cooling their critical systems. Moving forward, it will not and should not be a question of considering only air or only liquid cooling. Instead, the key is to understand the trade-offs of each cooling technology and deploy what makes the most sense for the application.