Data centre design is more than just appearance – it’s an intricate blend of logistics, energy efficiency and technology implementation. Voices from the industry share their views on achieving a balance between aesthetics, infrastructure and network innovations, stressing the necessity of synchronising real-world applications with the adaptive technologies which form the backbone of our digital systems.
Chris Mason, Senior Architect, studioNWA
What is euphemistically known as ‘the cloud’ is, in reality, remote storage and processing of data in physical locations. These locations are data centres, and they form an intrinsic part of the modern world and how it works.
Like most storage buildings throughout history, the design of data centres is driven primarily by the technical requirements of what takes place inside them rather than by consideration of their external expression or context. While there can be a visual appeal to the honest expression of a building’s function, in the case of data centres the result tends to be a boxy structure.
Data centres are often deliberately anonymous. The move from localised to remote data storage was promoted on the idea of removing the physical in favour of the notional cloud. On a practical level, there is also a security advantage in not drawing attention to their location. There are exceptions. Some data centre designs embrace the scale and pure geometry the functional requirements allow, achieving genuinely sculptural results, such as the Portugal Telecom data centre in Covilhã.
Whilst anonymity is an acceptable approach in many locations, the requirement for low latency, the availability of power or other advantageous adjacencies will increasingly place data centres in more sensitive environments.
Where data centres exist in urban areas, the issues are particularly acute. Their scale and lack of transparency are more suited to industrial environments than to a typical urban context. There are also issues with noise and the necessary security measures at site perimeters.
Good design principles can be employed to mitigate these issues:
- Making the most of the outward-facing aspects of the project, the offices and landscaping
- Providing publicly accessible functions such as co-working spaces and tech hubs at the interface with the public realm
- Designing the exterior of the buildings to reflect the principles that govern the design of the equipment within them: well-considered, adaptable, carefully constructed components using quality materials and made to last
As the need for data storage and the awareness of its environmental impact grow, anonymity may no longer work as a default position. Data centres will have to perform better environmentally and learn to be good neighbours.
Tim Mitchell, Sales Director, Klima Therm
Data centres produce a lot of heat, which can very easily be captured, recycled and used in district heating systems. The barriers to widespread adoption of this approach are not technological. The market has all the machinery and skills required to use the heat created by data centres but there are legal and practical challenges – for out-of-town data centres in particular.
The legal questions of who is responsible for which elements of the system, and for the energy fed into and taken from it, are significant at the outset, but certainly not insurmountable: where there’s a will there’s a way, and the balance of carrot (e.g. financial incentives for participants) and stick (national or local planning rules) must be found to drive uptake.
The practical challenges of what to do with the heat generated by these ‘out-of-town’ data centres can be solved by using the ‘smart city’ concept: grouping net heat generators, like data centres, with industries or buildings that are net heat users, such as primary manufacturing facilities. For example, siting data centres near hospitals, hotels, leisure centres and housing developments provides a readymade, constant market for their heat. Again, the carrot/stick balance must be carefully managed to ensure a win-win situation for all participants.
The benefits of this approach are reciprocal. Moving heat to an ambient loop can make the data centre more efficient than if this heat were rejected to the atmosphere, as in an air-cooled chiller. This efficiency means less primary energy is required to run the data centre. The principle of one energy input and two useful energy outputs is a massive benefit to the overall carbon footprint of all buildings and infrastructure connected to the loop.
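As a rough illustration of that principle, the sketch below estimates how much useful heat a hypothetical data centre could deliver to a district loop via heat pumps. Every figure in it (the 10MW IT load, the 80% recoverable fraction, the heat pump COP of 4) is an assumption chosen purely for the arithmetic, not data from any contributor or real facility.

```python
# Illustrative only: a hypothetical 10 MW IT-load data centre exporting waste heat
# to a district heating loop via heat pumps. All figures are assumptions.

it_load_mw = 10.0             # assumed IT electrical load (MW)
heat_recovery_fraction = 0.8  # assumed share of IT load recoverable as low-grade heat
heat_pump_cop = 4.0           # assumed COP when lifting heat to loop temperature

recovered_heat_mw = it_load_mw * heat_recovery_fraction               # low-grade heat available
heat_pump_electricity_mw = recovered_heat_mw / (heat_pump_cop - 1)    # extra electricity to lift it
delivered_heat_mw = recovered_heat_mw + heat_pump_electricity_mw      # useful heat into the loop

print(f"Low-grade heat recovered:   {recovered_heat_mw:.1f} MW")
print(f"Heat-pump electricity:      {heat_pump_electricity_mw:.1f} MW")
print(f"Heat delivered to the loop: {delivered_heat_mw:.1f} MW")
```

Under these assumed numbers, the electricity that powers the computing also ends up delivering roughly as much heat again to the loop, which is the ‘one input, two useful outputs’ effect described above.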
Alastair Waite, Senior Manager, Global Data Centre Market Development, CommScope
It’s clear AI is affecting data centre construction, deployments and network architecture design in general. From a regulatory standpoint, power-hungry AI has been increasing the difficulty of securing approval for the building of new data centres; regulators are hyperconscious of the environmental footprint of data centres and their impact on local communities.
It will be interesting to see how the designation of data centres as Critical National Infrastructure (CNI) by the UK government will affect all stages of their lifecycle: site location, construction and operation. The main changes will involve increased government oversight, reporting and potential audits, along with new standards specific to data centres to ensure they meet high security and operational benchmarks.
AI also continues to be a challenging factor in day-to-day data centre design and architecture. For example, processing large AI workloads requires GPU servers to have significantly higher connectivity between them, but because of power and heat constraints there is a limit to the number of servers that can be installed in each rack. This leads to a situation where each GPU server connects to a switch within its row or room, requiring far more inter-rack fibre cabling, running 400G and 800G connections, than previously seen in cloud data centres.
However, this is problematic. AI and Machine Learning (ML) algorithms are highly sensitive to latency – similar to High-Performance Computing – meaning AI clusters need to keep GPU servers close together, with most connections limited to 50 metres. That being said, not all data centres can accommodate GPU racks as a single dense cluster. These racks easily require over 40kW of power, forcing traditionally cooled data centres to spread them out. That was never a problem for conventional workloads, but it pushes AI cable runs towards that distance limit.
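To make the power-versus-distance tension concrete, here is a minimal sketch. The 40kW rack cap and the 50-metre reach come from the text above; the 512-server cluster, the 10kW per GPU server, the liquid-cooled 100kW rack figure, the 0.6m rack pitch and the single-row layout are assumptions for illustration only.

```python
# A minimal sketch of the power-vs-distance tension described above.
# All figures are illustrative assumptions, not vendor or site data.

def row_length_m(servers, kw_per_server, rack_cap_kw, rack_pitch_m=0.6):
    """Racks needed under a per-rack power cap, and the resulting single-row length."""
    per_rack = int(rack_cap_kw // kw_per_server)   # servers that fit per rack
    racks = -(-servers // per_rack)                # ceiling division
    return racks, racks * rack_pitch_m

MAX_REACH_M = 50.0   # server-to-switch reach limit cited in the text

# Hypothetical 512-server AI cluster at an assumed 10 kW per GPU server.
for label, cap_kw in [("traditionally cooled, 40 kW/rack", 40.0),
                      ("liquid cooled, 100 kW/rack (assumed)", 100.0)]:
    racks, length = row_length_m(512, 10.0, cap_kw)
    verdict = "within" if length <= MAX_REACH_M else "beyond"
    print(f"{label}: {racks} racks, ~{length:.0f} m of row, {verdict} the {MAX_REACH_M:.0f} m reach")
```

The single-row abstraction obviously understates real cable routes, which have to navigate aisles and trays, but it shows why denser cooling and the short reach limits of AI clusters go hand in hand.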
Cabling innovations allow data centres to navigate these narrow and congested GPU server-to-switch pathways and the increased cabling complexity that comes with AI. Innovations like rollable ribbon fibre allow up to six 3,456-fibre cables to fit into a four-inch duct, doubling the density of traditional fibres and helping to keep GPU-enabled servers fed with the huge amounts of data they need to process Large Language Models (LLMs). Coupled with new dense connector technologies like the MPO-16 connector, network designs can provide both high-density connectivity and support for mainstream IEEE high-speed roadmap speeds up to 1.6Tb. This is essential for future-proofing networks in preparation for AI.
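The density claim lends itself to some quick arithmetic. The comparison below simply halves the quoted rollable-ribbon figure to represent traditional cable, per the ‘doubling the density’ statement; it is not a measured duct-fill calculation.

```python
# Back-of-envelope duct-fill arithmetic based on the figures quoted above.
# The traditional-cable count is derived by halving the rollable-ribbon count,
# following the "doubling the density" claim; it is an assumption, not a measurement.

fibres_per_cable = 3456
rollable_ribbon_cables_per_duct = 6   # quoted fit in a four-inch duct
traditional_cables_per_duct = rollable_ribbon_cables_per_duct // 2

rollable_fibres = fibres_per_cable * rollable_ribbon_cables_per_duct
traditional_fibres = fibres_per_cable * traditional_cables_per_duct

print(f"Rollable ribbon:   {rollable_fibres:,} fibres per four-inch duct")
print(f"Traditional cable: {traditional_fibres:,} fibres per four-inch duct")
```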