Given today’s ever-changing market, some data center owners and operators may be wondering: What is the best way to keep up with advancing technology? How can a data center, regardless of age, continue to perform at peak efficiency and maintain maximum operational uptime?
Motivair’s Data Center and IT Cooling technology continues to forge an innovative path for data center owners, supercomputer operators and engineers looking to cool critical equipment and get maximum returns from their IT investment.
Here are three factors to consider when deploying liquid cooling in your data center:
EXISTING INFRASTRUCTURE – WHAT’S DRIVING THE NEED FOR CHANGE?
Legacy data centers have an important role to play in how you deploy liquid cooling. You need to have a solid understanding of the existing infrastructure you have today to accommodate what you will need tomorrow.
Equipment inside the white space is getting hotter and more dense. Data centers have seen an exponential growth in the need for high density servers and switches in the last five years – and that trend will continue.
Your equipment needs to be closely coupled to reduce latency and improve efficiency and performance. This, combined with the additional costs for supporting infrastructure, makes it less effective to spread out HPC systems. That is why data centers are seeing such an increase in density.
For example, “AI requires large amounts of data and processors, often coupled tightly when the model requires a single shared memory pool,” 451 Research noted in an April report. “Such equipment typically needs more power than generic servers. In fact, 1kW per rack unit (and there can be 30+ units in a rack) is the current trend for HPC/AI servers, leading to power use of 20-40kW per rack or more, and thus requiring different cooling at scale.”
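The arithmetic behind the 451 Research figures can be sketched in a few lines. This is an illustrative back-of-envelope estimate using the numbers quoted above; the function name and values are assumptions for illustration, not vendor specifications.

```python
def rack_power_kw(kw_per_rack_unit: float, populated_units: int) -> float:
    """Estimate total IT load for a rack of high-density HPC/AI servers."""
    return kw_per_rack_unit * populated_units

# At the quoted ~1 kW per rack unit, 20-40 populated units per rack:
low = rack_power_kw(1.0, 20)   # 20 kW
high = rack_power_kw(1.0, 40)  # 40 kW
print(f"Estimated rack load: {low:.0f}-{high:.0f} kW")
```

Even at the low end, 20 kW per rack is well beyond what traditional air-cooled designs were provisioned for.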
“By 2025, the number of micro data centers will quadruple due to technological advances such as 5G, new batteries, hyperconverged infrastructure, and various software-defined systems,” said Henrique Cecci, an analyst with Gartner, in “The Future of Enterprise Data Centers – What’s Next?”
Almost all of these sites will operate at significantly higher power densities than current generations of IT equipment.
IDENTIFY HIGH-DENSITY APPLICATIONS
The increasing watts per CPU on the roadmaps from Intel (Sapphire Rapids) and AMD (EPYC), coupled with the growing use of HPC and AI in complex processes, shows that liquid cooling is in the future for most data center operators.
As the use of HPC and AI CPUs expands, it will become clearer that the software requires the latest in computer hardware, the servers require the latest in cooling technology, and they are all interconnected. Power and cooling drive the servers; the servers drive HPC/AI. None of it operates without all the systems working in harmony.
Ideally, you need to run 30-50 kW per rack for high-density applications. Ask yourself: Can you do that today in your facilities? Can your facility handle the higher-wattage CPUs? Can it handle water-based rack or server cooling?
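The questions above amount to a simple capacity check. Here is a hypothetical sketch of that check; the function, the headroom figures, and the rack counts are illustrative assumptions, not measurements from any real facility.

```python
def can_support(racks: int, kw_per_rack: float,
                available_power_kw: float,
                available_cooling_kw: float) -> bool:
    """Return True if both power and cooling headroom cover the planned load."""
    demand_kw = racks * kw_per_rack
    return demand_kw <= available_power_kw and demand_kw <= available_cooling_kw

# Ten 40 kW racks against 500 kW of power and 450 kW of cooling headroom:
print(can_support(10, 40.0, 500.0, 450.0))  # True  (400 kW demand fits)
print(can_support(10, 50.0, 500.0, 450.0))  # False (500 kW exceeds cooling)
```

Note that cooling, not power, is often the binding constraint, which is exactly the gap liquid cooling is meant to close.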
Next-generation hardware will require liquid cooling due to the thermal challenges of higher-wattage chips in densely packed servers. Liquid cooling needs a separate secondary fluid loop because of fluid quality requirements. You will need to have system cooling water available, capacity available, and the necessary equipment to decouple from the primary chilled water source, using Coolant Distribution Units (CDUs).
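Sizing that secondary loop follows the standard heat-balance relation Q = ṁ · cp · ΔT. The sketch below estimates the coolant flow a CDU loop would need for a given rack load; the default values (water-like coolant, 10 °C rise) are illustrative assumptions, not equipment ratings.

```python
def required_flow_lpm(heat_load_kw: float, delta_t_c: float,
                      cp_kj_per_kg_c: float = 4.18,
                      density_kg_per_l: float = 1.0) -> float:
    """Coolant flow (litres/minute) needed to absorb heat_load_kw
    with a delta_t_c temperature rise across the secondary loop."""
    mass_flow_kg_s = heat_load_kw / (cp_kj_per_kg_c * delta_t_c)
    return mass_flow_kg_s / density_kg_per_l * 60.0

# A 40 kW rack with a 10 degC rise across the loop needs roughly:
print(f"{required_flow_lpm(40.0, 10.0):.1f} L/min")  # ~57.4 L/min
```

Runs like this make the capacity question concrete: the facility must be able to deliver that flow, at the right supply temperature, on the building side of the CDU.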
MATERIAL AND LIQUID COMPATIBILITY
The types of liquid required for these high-performance computers are often designed and developed by the computer or chip manufacturer for specific use with its system. Simple water will not work when it comes to liquid cooling. The fluids used to cool the IT equipment are highly purified, with virtually zero particulates.
To ensure optimal performance and reliability, the close-coupled loop for the computer system must be designed for use with these specialized fluids, taking into account industry-certified and laboratory-validated material compatibility specifications and filtration demands, in parallel with certain control and resiliency features.
Failure to address these needs will result in reduced life expectancy of the IT equipment and less-than-optimal server performance.