Data center thermal management technologies continue to evolve to serve the new demands of both large and small (edge) data centers. Co-location, cloud, enterprise, and edge facilities are utilizing a variety of cooling options – chilled water and pumped refrigerant systems, as well as aisle-, row-, and rack-level air or liquid cooling. Depending on the density of applications within a data center, more than one method of cooling may be in play.

Regardless of the cooling method selected, advanced thermal controls have become the critical “glue” that delivers just the right temperature and airflow to racks to ensure uptime, maximize efficiency, and reduce operating costs. The increasing use of high-density applications such as artificial intelligence, machine learning, high-performance computing, and data analytics is also creating new cooling demands for existing data centers.

Getting the best performance out of your data center cooling system is crucial to its overall efficiency and operating cost. Every data center is equipped with a variety of IT equipment, especially servers. Servers are fed with electric power, and when they are at work they generate heat. As the number of servers increases, the temperature inside the data center rises as well, which can affect the servers themselves. It is therefore very important to remove the heat generated by these servers to avoid breakdowns and to extend their life span.
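
As a rough rule of thumb, virtually all of the electrical power drawn by IT equipment is ultimately rejected into the room as heat, so the cooling load can be estimated directly from the IT load. The short sketch below illustrates the standard unit conversions; the 100 kW example load is an assumption chosen purely for illustration.

```python
# Rough cooling-load estimate: virtually all electrical power drawn by
# IT equipment is ultimately rejected into the room as heat.

BTU_HR_PER_WATT = 3.412        # 1 W of heat ~= 3.412 BTU/hr
BTU_HR_PER_TON = 12_000        # 1 ton of refrigeration = 12,000 BTU/hr

def cooling_load(it_load_kw: float) -> tuple[float, float]:
    """Approximate cooling load in BTU/hr and tons for a given IT load in kW."""
    heat_btu_hr = it_load_kw * 1_000 * BTU_HR_PER_WATT
    return heat_btu_hr, heat_btu_hr / BTU_HR_PER_TON

if __name__ == "__main__":
    btu_hr, tons = cooling_load(100)   # hypothetical 100 kW server room
    print(f"~{btu_hr:,.0f} BTU/hr, ~{tons:.1f} tons of refrigeration")
```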

Today’s cooling technologies must respond to several data center trends.

  • Data centers are getting larger. It’s not unusual to see projects more than 10 MW in size, which was unheard of just a few years ago. Thermal management must meet the goals of maximizing uptime, reducing costs, and increasing both efficiency and speed to market.
  • Conversely, some data centers are getting smaller. At the other end of the spectrum, the number of edge sites is exploding. The edge is becoming increasingly mission critical, requiring thermal management solutions that ensure availability while also delivering efficiency benefits that cascade across large distributed networks. As these are often unmanned, or “lights out,” facilities, remote thermal monitoring and control is a key factor in uptime and maintenance; a minimal monitoring sketch follows this list.
  • They’re also warmer than ever before. Chances are, you’ll no longer need a sweater or jacket when walking through a data center. ASHRAE’s 2016 thermal guidelines widened the allowable temperature range, which now spans 18°C (64.4°F) to 27°C (80.6°F). That opens up cooling options and potentially reduces both operational and capital expenditures. But it’s imperative to remember that reliability and availability remain at the fore.
  • In some places, data centers are building up instead of out. Building height has a direct correlation to the specified cooling technology. Chilled water solutions lend themselves well to multistory buildings (three stories or more). That means that while chilled water may not be a specifier’s first choice, it may be required due to height limitations for many current pumped refrigerant-based cooling solutions.
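
For the unmanned edge sites mentioned above, remote thermal monitoring usually comes down to polling temperature sensors and raising an alert when readings drift outside an allowed envelope. The sketch below is a minimal, hypothetical illustration: the sensor-reading function, site name, and thresholds are assumptions rather than references to any specific monitoring product.

```python
import random  # stand-in for a real sensor or BMS interface
import time

# Hypothetical thresholds, loosely based on the 18-27 degC envelope noted above.
LOW_C, HIGH_C = 18.0, 27.0

def read_inlet_temp_c() -> float:
    """Placeholder for a real sensor or BMS/SNMP query at a remote edge site."""
    return random.uniform(16.0, 30.0)

def check_site(site: str) -> None:
    temp = read_inlet_temp_c()
    if temp < LOW_C or temp > HIGH_C:
        # In practice this would be routed to a monitoring system or on-call staff.
        print(f"ALERT {site}: inlet temperature {temp:.1f} degC out of range")
    else:
        print(f"OK {site}: {temp:.1f} degC")

if __name__ == "__main__":
    for _ in range(3):
        check_site("edge-site-01")   # hypothetical site name
        time.sleep(1)
```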

Proper cooling equipment must be designed to maintain a constant temperature and to accurately control humidity in order to avoid static electricity or condensation. With the increasing density of servers in rooms, Heating, Ventilation, and Air Conditioning (HVAC) systems require advanced reliability and safety, highly efficient motor drives, and an increased level of integration. Precision air conditioners, also known as close control units (CCUs), close control air conditioners, computer room air conditioners (CRACs), or server room air conditioners, are used for the precise control of temperature and humidity required in a data center.
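
To make the condensation point concrete, a common check is to compare cold surface temperatures (for example, a chilled-water coil or supply duct) against the dew point of the room air. The sketch below uses the widely used Magnus approximation for dew point; the constants are standard, while the function names and example readings are ours, chosen purely for illustration.

```python
import math

# Magnus approximation constants (valid roughly for 0-60 degC).
A, B = 17.62, 243.12

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point from dry-bulb temperature and relative humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + (A * temp_c) / (B + temp_c)
    return (B * gamma) / (A - gamma)

def condensation_risk(surface_temp_c: float, room_temp_c: float, rh_pct: float) -> bool:
    """True if a surface is at or below the room air's dew point."""
    return surface_temp_c <= dew_point_c(room_temp_c, rh_pct)

if __name__ == "__main__":
    # Hypothetical readings: 24 degC room air at 50% RH, 12 degC coil surface.
    dp = dew_point_c(24.0, 50.0)
    print(f"Dew point: {dp:.1f} degC")
    print("Condensation risk on 12 degC surface:", condensation_risk(12.0, 24.0, 50.0))
```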

At IDCS, we have proven expertise in all the principal cooling methodologies currently deployed in server rooms and data centers, including: