Ireland’s largest university upgrades data centre cooling system with Schneider

The Future Campus project at University College Dublin (UCD) called for space occupied by facilities plant and equipment to be given up for development to support the student population. Total Power Solutions, an Elite Partner to Schneider Electric, worked with UCD’s IT Services organisation to upgrade the university’s primary data centre cooling system, providing greater resilience for its HPC operations while releasing valuable real estate. Tom Cannon, Enterprise Architecture Manager at University College Dublin, discusses the project in further detail.

Introduction: Data centres at Ireland’s largest university

University College Dublin (UCD) is the largest university in Ireland, with a total student population of about 33,000. It is one of Europe’s leading research-intensive universities with faculties of medicine, engineering, and all major sciences as well as a broad range of humanities and other professional departments. 

The university’s IT infrastructure is essential to its successful operation for academic, administrative and research purposes. The main campus at Belfield, Dublin, is served by two on-premises data centres that support all the IT needs of students, faculty and staff, including high-performance computing (HPC) clusters for computationally intensive research. The main data centre, in the Daedalus building, works in conjunction with a smaller on-premises facility and hosts all the centralised IT, including storage, virtual servers, Identity and Access Management, business systems, networking and network connectivity.

“Security is a major priority, so we don’t want researchers having servers under their own desks,” said Tom Cannon, Enterprise Architecture Manager at University College Dublin. “We like to keep all applications inside the data centre, both to safeguard against unauthorised access — as universities are desirable targets for hackers — and for ease of management and efficiency.”

Challenges: Ageing cooling infrastructure risks downtime and reputational damage

Resilience is a key priority for UCD’s IT Services. In addition, with the campus located close to Dublin’s city centre, real estate is at a premium: there are continuing demands for more student facilities and, consequently, a need for support services such as IT to make more efficient use of space. Finally, there is a pervasive need to maintain services as cost-effectively as possible and to minimise environmental impact, in keeping with the university’s broader commitment to sustainability.

As part of a major strategic development of the university’s facilities called Future Campus, the main Daedalus data centre was required to free up outdoor space occupied by mechanical plant and make it available for use by another department. The IT Services organisation took this opportunity to revise the data centre cooling architecture to make it more energy and space efficient, as well as more resilient and scalable.

“When the data centre was originally built, we had a large number of HPC clusters and consequently a high rack power density,” said Cannon. “At the time, we deployed a chilled-water cooling system as it was the best solution for such a load. However, as the technology of the IT equipment has advanced to provide higher processing capacity per server, the cooling requirement has reduced considerably even though the HPC clusters have greatly increased in computational power.”

One challenge with the chilled-water system was that it relied upon a single set of pipes to supply the necessary coolant, which represented a single point of failure. Any issue with the pipework, such as a leak, could therefore threaten the entire data centre with downtime. This could create problems at any time of year; were it to occur at a critical moment, such as during exams or registration, the impact on the university community would be severe. The reputational damage, both internal and external, would also be significant.

Solution: Migration to Schneider Electric Uniflair InRow DX cooling resolves reliability, scalability and space challenges

UCD IT Services took the opportunity presented by the Future Campus project to replace the existing chilled-water cooling system with a new solution based on Schneider Electric’s Uniflair InRow Direct Expansion (DX) technology, which uses a refrigerant vapour compression and expansion cycle. The condensing units have been located on the roof of the data centre, freeing up significant ground space on the site formerly used for the cooling plant.

Following an open tender, UCD selected Total Power Solutions, a Schneider Electric Elite Partner, to deliver the cooling upgrade project. Total Power Solutions had previously carried out several power and cooling infrastructure installations and upgrades on the campus and is considered a trusted supplier to the university. Working with Schneider Electric, Total Power Solutions was responsible for designing an optimal solution to meet the data centre’s needs and for integrating it into the existing infrastructure.

A major consideration was to minimise disruption to the data centre layout, keeping in place the Schneider Electric EcoStruxure Row Data Centre System (formerly called a Hot Aisle Containment Solution, or HACS). The containment solution is a valued component of the physical infrastructure, ensuring efficient thermal management of the IT equipment and maximising the efficiency of the cooling effort by minimising the mixing of the cooled supply air and hot return – or exhaust – airstream.

The new cooling system provides a highly efficient, close-coupled approach which is particularly suited to high-density loads. Each InRow DX unit draws air directly from the hot aisle, taking advantage of higher heat-transfer efficiency, and discharges room-temperature air directly in front of the IT load. Placing the units in the row yields 100% sensible cooling capacity and significantly reduces the need for humidification.

Cooling efficiency is a critical requirement for operating a low-PUE data centre, but the most obvious benefit of the upgraded cooling system is the built-in resilience afforded by the 10 independent DX cooling units. No longer is there a single point of failure; the system now has sufficient redundancy that if one of the units fails, the others can pick up the slack and continue delivering cooling with no impairment of the computing equipment in the data centre.

“We calculated that we might just have managed with eight separate cooling units,” said Cannon, “but we wanted the additional resilience and fault tolerance that using 10 units gave us.” Additional benefits of the new solution include its efficiency: the system is now sized according to the IT load and avoids overcooling the data centre, which both reduces energy use and improves its PUE.

In addition, the new cooling system is scalable according to the potential requirement to add further HPC clusters or accommodate innovations in IT, such as the introduction of increasingly powerful but power-hungry CPUs and GPUs. “We designed the system to allow for the addition of four more cooling units if we need them in the future,” said Cannon. “All of the power and piping needed is already in place, so it will be a simple matter to scale up when that becomes necessary.”
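The article does not state the capacity of an individual InRow DX unit, but the sizing logic Cannon describes can be sketched in rough terms: divide the design IT load by the sensible capacity of one unit, round up, and add redundancy headroom. The figures below are illustrative assumptions chosen only to match the quoted arithmetic (a load of under 100kW, roughly eight units required, 10 installed, provision for four more); they are not UCD’s actual design values.

```python
import math

def units_required(it_load_kw: float, unit_capacity_kw: float, redundant_units: int) -> int:
    """Units needed to carry the IT load, rounded up, plus redundancy headroom."""
    return math.ceil(it_load_kw / unit_capacity_kw) + redundant_units

# Illustrative assumptions only: per-unit capacity is a hypothetical figure
# chosen so that roughly eight units carry a sub-100kW load, as quoted above.
IT_LOAD_KW = 100.0        # design IT load ("less than 100kW")
UNIT_CAPACITY_KW = 12.5   # assumed sensible capacity per InRow DX unit
REDUNDANT_UNITS = 2       # extra units for fault tolerance

installed = units_required(IT_LOAD_KW, UNIT_CAPACITY_KW, REDUNDANT_UNITS)
print(installed)          # -> 10 units installed
# Power and piping are already provisioned for four more units,
# allowing growth to 14 if further HPC clusters are added.
```

The real unit count would of course depend on the actual per-unit capacity and on how the load is distributed along the hot aisle; the sketch simply shows how a modest redundancy margin sits on top of the base requirement.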

Implementation: Upgrading a live environment at UCD

It was essential that the data centre kept running as normal, with no downtime, while the new system was installed. The IT department and Total Power Solutions adopted what Cannon calls a ‘Lego block’ approach: first consolidating some of the existing servers into fewer racks, then moving the new cooling units into the freed-up space. The existing chilled-water system continued to function while the new DX-based system was installed, commissioned and tested. Finally, the obsolete cooling equipment was decommissioned and removed.

Despite being implemented at the height of the COVID-19 pandemic, with its restrictions on movement and disruption to global supply chains, the project ran to schedule and the new equipment was successfully installed without any disruption to IT services at UCD.

Results: A cooling boost for assured IT services and space freed for increased student facilities

The new cooling equipment has resulted in an inherently more resilient data centre, with ample redundancy to ensure reliable delivery of all hosted IT services should one of the cooling units fail. It has also freed up valuable real estate that the university can deploy for other purposes.

As an example, the building housing the data centre is also home to an Applied Languages department. “They can be in the same building because the noise levels of the new DX system are so much lower than the chilled-water solution,” Cannon said. “That is clearly an important issue for that department, but the DX condensers on the roof are so quiet you can’t tell they’re there. It’s a much more efficient use of space.”

With greater virtualisation of servers, the overall power demand of the data centre has been dropping steadily over the years. “We have gone down from a power rating of 300kW to less than 100kW over the past decade,” said Cannon. The Daedalus data centre now houses 300 physical servers, while a total of 350 virtual servers run across the two campus data centres.

To maximise efficiency, the university also uses EcoStruxure IT management software from Schneider Electric, backed up with a remote monitoring service that keeps an eye on all aspects of the data centre’s key infrastructure and alerts IT Services if any issues occur.

The increasing virtualisation has seen the Power Usage Effectiveness (PUE) ratio of the data centre drop steadily over the years. PUE is the ratio of a facility’s total power consumption to the power used by the IT equipment alone, and is a well-understood metric for electrical efficiency; the closer the rating is to 1.0, the better. “Our initial indications are that we have managed to improve PUE from an average of 1.42 to 1.37,” said Cannon.

“However, we’re probably overcooling the data centre load currently, as the new cooling infrastructure settles. Once that’s happened, we’re confident that we can raise temperature set points in the space and optimise the environment to make the system more energy efficient, lower the PUE and benefit from lower operating costs.”
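As a rough worked example of the PUE arithmetic described above: assuming an IT load of 100kW (the article quotes a rating of less than 100kW, so the absolute figures here are illustrative rather than measured), the reported PUE values translate into facility overhead as follows.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_load_kw

# Working backwards from the reported PUE figures with an assumed 100kW IT load.
it_load_kw = 100.0
for total_kw in (142.0, 137.0):   # total facility power implied by PUE 1.42 and 1.37
    overhead_kw = total_kw - it_load_kw
    print(f"PUE {pue(total_kw, it_load_kw):.2f}: {overhead_kw:.0f}kW of cooling and other overhead")
```

Under these assumptions, the improvement from 1.42 to 1.37 corresponds to roughly 5kW of continuous overhead saved, which accumulates over a full year of operation.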

The overall effects of installing the new cooling system are therefore greater resilience and peace of mind; more efficient use of space in support of the university’s core mission of teaching; and a more efficient IT infrastructure and, consequently, a more sustainable operation into the future.
