As market demand for computing power grows (fueled in part by the explosion of generative artificial intelligence), engineers must rethink data center design. In response, facilities are adding high-density racks and servers to meet current and future capacity needs. However, these high-density data centers have intensive power and cooling requirements that create significant challenges for engineers tasked with designing efficient, sustainable facilities.

Increasing IT power density drives up a facility’s electrical demand and heat load until, eventually, traditional air-cooling methods can no longer remove heat efficiently enough to maintain optimal system performance. In response, many data centers are integrating liquid cooling systems.

What is liquid cooling?

Liquid cooling in data centers refers to the practice of using liquid to transport heat away from IT equipment. Instead of the large air ducts and fan systems of air-cooled designs, liquid cooling systems employ significantly smaller pumps and pipes to move large quantities of heat. Liquid cooling can be a highly efficient and effective heat removal method, circulating water or coolant directly to a component to regulate its temperature.

For decades, data centers have used liquid cooling as a boutique solution for small applications, but the recent escalation in AI applications requiring high-density chips is driving wider adoption in contemporary data center design. Low-density chips don’t generate nearly as much heat as high-density ones, making air cooling an effective thermal management method in those facilities. However, as rack density continues to increase alongside growing computing demand, air cooling eventually reaches a practical limit. At that point, liquid must be used: water has roughly four times the specific heat of air and, because it is far denser, can carry thousands of times more heat per unit volume.
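
A back-of-the-envelope comparison makes that gap concrete. The Python sketch below estimates the volumetric flow of air versus water needed to carry away the heat of a hypothetical 100 kW rack at a 10 K coolant temperature rise; the load, temperature rise and fluid properties are typical textbook values assumed for illustration, not figures from this article.

```python
# Back-of-the-envelope: volumetric flow needed to remove a heat load,
# from Q = rho * V_dot * c_p * dT, solved for V_dot.
# All property values are typical textbook figures (assumptions).

def flow_m3_per_s(q_watts: float, rho: float, c_p: float, delta_t: float) -> float:
    """Volumetric flow (m^3/s) needed to carry q_watts at a rise of delta_t kelvin."""
    return q_watts / (rho * c_p * delta_t)

Q = 100_000.0  # hypothetical 100 kW rack
DT = 10.0      # 10 K coolant temperature rise

air = flow_m3_per_s(Q, rho=1.2, c_p=1005.0, delta_t=DT)      # air near 20 C
water = flow_m3_per_s(Q, rho=997.0, c_p=4186.0, delta_t=DT)  # water near 25 C

print(f"Air:   {air:.2f} m^3/s")         # ~8.3 m^3/s of airflow
print(f"Water: {water * 1000:.2f} L/s")  # ~2.4 L/s of water
print(f"Ratio: {air / water:,.0f}x")     # water moves ~3,500x less volume
```

The water loop needs only about 2.4 L/s where the equivalent airflow is roughly 8.3 m³/s, a difference of more than three orders of magnitude in volume, which is why pumps and pipes can be so much smaller than ducts and fans.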

In addition to increased efficacy, deploying liquid cooling offers key sustainability benefits. Waste heat in liquid form is easier to harness and reuse as a heat source in commercial and industrial settings and surrounding communities, as well as to support operations in sectors such as pharmaceuticals, food and beverage, hospitality, health care, and agriculture. Heat recovery and reuse can significantly reduce carbon footprint, enabling end users to avoid generating additional emissions to produce the heat their operations require.

When designing liquid cooling systems to drive data center cooling efficiency, engineers must consider several design factors, including flexibility, adaptability and scalability. These elements are critical to ensuring that both existing and future data centers are poised to contribute to carbon emission reduction and heat reuse in industrial and commercial settings.

Four pipe design considerations

To successfully deploy liquid cooling and heat recovery systems in data centers, engineers must approach pipe design differently than they have historically. Here are four factors to consider for the best system design.

1. Design flexibility

The liquid cooling system layout must be adjusted each time a facility’s rack density increases to ensure optimal performance. The challenge is that the industry is in a period of rapid growth and transformation, and the future of data center design remains to be defined. Will the standard rack densities be 25 kW, 50 kW, 100 kW or 200 kW per IT rack? How often, and how quickly, will owners need to upgrade or expand capacity? Modular design is the key to meeting today’s and tomorrow’s facility requirements.
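
Each step up in density changes the hydraulics. As a rough illustration, the sketch below estimates the water flow a single rack would need at each of those candidate densities; the 10 K coolant temperature rise and the water properties are assumed values for illustration, not figures from any particular design.

```python
# Required water flow per rack at the candidate densities above,
# using V_dot = P / (rho * c_p * dT). The 10 K rise is an assumption.

RHO, C_P, DT = 997.0, 4186.0, 10.0  # kg/m^3, J/(kg*K), K

for kw in (25, 50, 100, 200):
    lps = kw * 1000 / (RHO * C_P * DT) * 1000  # liters per second
    print(f"{kw:>3} kW rack -> {lps:.1f} L/s (~{lps * 60:.0f} L/min)")
```

Going from 25 kW to 200 kW per rack takes the per-rack flow from roughly 0.6 L/s to nearly 5 L/s, which cascades into larger branch piping, manifolds and pumps. A modular layout localizes those changes instead of forcing a whole-facility redesign.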

Mechanical pipe-joining and equipment modules are central to achieving a modular system design that can keep pace with changing demand. Integrating small, adaptable modules into the system design will enable rapid deployment at scale and simplify redesign. Welding, flanging and threading are viable pipe-joining alternatives, but modifying these systems requires a greater time and labor investment at the installation stage to reroute or change pipe sizes. Flexible, modular design also leaves the door open for data centers to harness and redirect waste heat in the future. Designers can retrofit data centers for heat provision by adding a heat exchanger to the modular system.

2. Scalability

The industry-wide transition to liquid cooling systems is straining supply chains. Liquid cooling systems use a variety of valves and pipe diameters depending on the size of the system and the total number of racks. At a small scale (for example, 1 MW), securing materials in the required sizes is not a problem. However, inventories start to dry up when we talk about 50 MW or 300 MW facilities.
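
To see why scale changes the procurement picture, the sketch below roughly sizes a single distribution header at each of those facility scales. The 10 K temperature rise and 2 m/s target water velocity are assumed design points for illustration only.

```python
import math

# Rough header sizing: total flow V_dot = P / (rho * c_p * dT), then the
# minimum inside diameter for a target velocity, d = sqrt(4 * V_dot / (pi * v)).
# The 10 K rise and 2 m/s velocity are assumed design points.

RHO, C_P, DT, V_MAX = 997.0, 4186.0, 10.0, 2.0

for mw in (1, 50, 300):
    v_dot = mw * 1e6 / (RHO * C_P * DT)           # total flow, m^3/s
    d = math.sqrt(4 * v_dot / (math.pi * V_MAX))  # single-header diameter, m
    print(f"{mw:>3} MW -> {v_dot:.2f} m^3/s, single header ~{d * 1000:.0f} mm ID")
```

A 1 MW system can be served by a header of roughly 125 mm, a commodity size; a single 300 MW header would need to be more than 2 m across. In practice, the load is split across many parallel loops of common pipe sizes, which is exactly where standardized, repeatable modules and high-volume orders pay off.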

It’s imperative that designers find ways to standardize pipe systems without constraining innovation. The final design must remain flexible enough to be adapted easily, bringing us back to the importance of modularity. Switching to a modular approach enables project teams to standardize bills of materials and place orders at high volumes, streamlining delivery and supply chains and reducing embodied carbon emissions.

Engineers may also consider reusing pipe components from the original system during expansion. For example, smaller-diameter pipe removed during a recent upgrade can be repurposed in a new loop, so long as the material is in good condition. Incorporating this kind of construction circularity into the design process also helps projects reduce their carbon footprint and meet net-zero goals.

3. Water cleanliness

Water quality is critical to liquid-to-chip (LTC) cooling design. New high-density chips often use microchannel heat exchangers that can easily clog from impurities in the cooling liquid. Contamination sources include construction debris, biofouling and micro-corrosion inside the pipe. Microchannel heat exchangers require 25-micron filtration and ultra-clean manufacturing processes to avoid clogging.

Construction material selection and installation methods directly impact water cleanliness. Fused or welded connections can create internal contamination sources that are difficult to clean out. Mechanical pipe-joining solutions, by contrast, can support superior pipe cleanliness because they do not change the state of the pipe material, and they simplify the use of stainless steel, which is notoriously challenging to fuse.

4. Risk reduction

Pipe connection reliability is a pressing concern in mission-critical facilities with sensitive equipment. Failure at any joint when distributing liquid to a server room can be catastrophic. System engineers can help mitigate risk with their product and material specifications. A modular, standardized design can drive out risk because it creates repeatability during installation. In addition, modules can be thoroughly cleaned and tested before delivery to the job site.

System designers must also evaluate the pipe-joining method for its leak potential. For instance, a welded system will need to undergo rigorous weld quality testing to verify integrity, which might involve radiography and ultrasonic testing. Any imperfections will need to be reworked and retested. On the other hand, a grooved system with mechanical couplings can often be visually verified for proper installation, with maintenance-free reliability engineered into the product’s design for the facility’s lifespan.

How is data center heat recovery used around the world?

It’s common to see facilities incorporate direct heat recovery in their sustainable designs, repurposing excess thermal energy to heat spaces like offices. The key difference is one of scale: data centers, especially high-density ones, generate so much waste heat that they could easily supply entire water parks with hot water and likely still have a surplus to spare.

Data centers worldwide are finding eco-friendly ways to repurpose waste heat, turning it into an asset used to power surrounding businesses and communities. We see this in practice predominantly in Europe, although it is gaining traction in North America. While there are many applications and the system designs vary, some examples of the design in action include:

District heating — In Dublin, Amazon’s Tallaght data center delivers heat to more than 505,000 square feet of local public buildings, 32,000 square feet of commercial buildings and 135 affordable rental apartments as part of the Tallaght District Heating Scheme. In Denmark, Meta’s Odense data center boasts a heat recovery infrastructure that recycles 100,000 MWh of energy annually and can heat 6,900 homes (a figure sanity-checked in the sketch after this list).

Agriculture — In Norway, Green Mountain’s colocation data center helps to warm the world’s first land-based lobster farm. We see another example of these so-called “organic data centers” in Lévis, Quebec, where QScale’s Q01 Campus uses a waste heat recovery-and-reuse system to provide heat to surrounding greenhouses. It’s estimated to contribute enough energy to grow 82,200 tons of tomatoes.
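
As a quick plausibility check on figures like Odense’s, the snippet below divides the recovered energy by the number of homes served. The comparison point of roughly 15 MWh per year for a typical district-heated Danish home is an outside assumption, not a number from this article.

```python
# Plausibility check on the Odense figures quoted above. The ~15 MWh/yr
# heat demand of a typical district-heated Danish home is an assumption
# used only as a reference point.

recovered_mwh_per_year = 100_000
homes_served = 6_900

print(f"{recovered_mwh_per_year / homes_served:.1f} MWh per home per year")
# -> 14.5 MWh, in the right range for a household's annual heat demand
```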

A limitation of many of these projects is that the data center must be located near the heat recipient, because heat captured from air and evaporative cooling systems dissipates quickly in transit. Here is where the benefits of liquid cooling can make a profound impact: with its far greater heat-carrying capacity, hot liquid can be piped over much longer distances than hot air can be moved, enabling more communities to tap into a local data center’s resources.

The future of design and construction

Data centers are evolving into “hyperscale data center ecosystems” that must integrate with their communities to operate sustainably and in harmony. The doors to this new community design are open; all that is needed are the right systems to enable it. In the coming years, facility engineers will play a central role in designing reliable, adaptable cooling systems that can be tapped as assets for the general public’s benefit. Overcoming the challenges discussed in this article will contribute to the model’s success and bring our communities closer to a net-zero future.