by Sherman Ikemoto
The modern data center suffers from a serious operational flaw:
The facility and IT equipment cooling systems operate completely independently, as if they were never meant to work together.
The facility blindly pumps cooling air toward the IT equipment. In turn, the IT equipment blindly consumes the air that it finds at its intake vent. Little is done during operations to ensure that supply or exhaust air actually reaches its targets. In this sense, the cooling strategy for data centers is like trying to extinguish a raging house fire while blindfolded. The best you can do is hope the water hits the fire and that enough is being sprayed.
Three problems result from this operational flaw. The first is cost overruns. Water is wasted when fires are fought without regard to where the flames are located. Similarly, cooling system power is wasted when airflow is supplied without regard to airflow requirements and IT load locations. Cooling cost overruns can reach 50 percent or more under worst-case conditions.
The second is a decrease in IT equipment performance and reliability. The house is at less risk of being consumed by the fire when water is directed precisely at the base of the flames. Similarly, IT equipment is at significantly less risk of overheating when air is supplied to the intakes with more precision in terms of temperature and availability. (Failure rates increase exponentially with temperature; see the Arrhenius relation R = A·e^(−Ea/kT), where R is the failure rate, A is a constant, Ea is the activation energy, k is Boltzmann's constant and T is absolute temperature.) Reliable operation means everything to many data center operators. Financial institutions, for example, can lose revenue at staggering rates when servers slow down or fail due to ineffective cooling.
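The exponential sensitivity the Arrhenius relation implies can be made concrete with a short sketch. The activation energy (0.5 eV) and pre-exponential factor below are illustrative placeholder values, not vendor reliability data; real components have their own measured parameters.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def arrhenius_rate(temp_k, a=1.0, ea_ev=0.5):
    """Relative failure rate R = A * exp(-Ea / (k * T)).

    `a` and `ea_ev` are illustrative assumptions, not measured values.
    """
    return a * math.exp(-ea_ev / (K_B * temp_k))


def acceleration(t_low_c, t_high_c, ea_ev=0.5):
    """Factor by which the failure rate grows when intake air
    warms from t_low_c to t_high_c (degrees Celsius)."""
    t1, t2 = t_low_c + 273.15, t_high_c + 273.15
    return arrhenius_rate(t2, ea_ev=ea_ev) / arrhenius_rate(t1, ea_ev=ea_ev)


# A hot spot that raises intake air from 25 C to 45 C multiplies the
# failure rate several-fold under these assumed parameters (about 3.4x).
print(round(acceleration(25.0, 45.0), 1))
```

The point of the sketch is the shape of the curve, not the exact numbers: a modest rise in intake temperature compounds into a large increase in failure rate, which is why precise air delivery matters.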
The third, and possibly most critical, is loss of IT loading capacity by the data center. Like the charred remains of a house, capacity to house IT equipment is lost to hot spots that form as a result of airflow mismanagement. Many data centers lose 40 percent or more of their IT loading capacity to hot spots. This reduces data center life span and forces the enterprise to buy expensive new data center capacity much sooner than planned. The unexpected capital outlays can amount to hundreds of millions of dollars.
Why do these problems exist?
The current practice is to flood the data center blindly with cooling air, with little regard to how much is needed or where. As with fighting a house fire while blindfolded, one can imagine how IT reliability (life) and data center capacity (property) are put at risk by this approach to cooling.
With so much at stake, why has blind cooling of data center IT equipment set in?
The short answer is lack of education. The people responsible for data center cooling often manage office spaces as well. The normal practice for managing office environments is to control ambient temperature for human comfort. This is done with steady, slow-moving and evenly distributed airflow to ensure spatially consistent and comfortable temperatures at any location in the room.
This approach is being applied to the data center in the absence of information about how to remove the intense heat that is generated by IT equipment. In contrast, IT equipment needs high volumes of fast-moving air delivered precisely to its small intakes. This is like the need to fight a fire with a stream of water directed at the base of a flame. Further complicating the issue, IT equipment, like fire, is a moving target as changes occur on the data center floor.
Perhaps unwittingly, IT manufacturers and standards organizations have reinforced the notion that temperature is the only concern in the data center. To be fair, IT equipment power densities have until recently been low enough to permit reliable operation under this assumption. However, modern IT equipment has consumed the safety margin, and the hard decision to adopt airflow management techniques is upon us. For the modern data center, airflow is to IT equipment as water is to fire.
Fight data center fires like a pro.
Hardware manufacturers use computational fluid dynamics (CFD) modeling software to design the heat removal systems for the IT equipment. Given that a data center is physically a large box of electronics, it makes sense to adopt the same techniques, just on a larger scale.
This is now possible with 6SigmaDC software from Future Facilities. The CFD modeling capabilities of 6SigmaDC enable the data center manager to simulate the airflow and temperature conditions of any present or future state of the data center. Imagine fighting a fire with advance knowledge of where the flames will move over time. 6SigmaDC effectively enables data center managers to find and fix hot spots before they appear on the data center floor.
One important difference between IT equipment and data centers is that the internal configuration of a data center changes over time, so the need for CFD modeling is ongoing. Given the pace of change and the number of people typically involved in data center operations, how can a CFD model remain relevant?
The solution is to integrate the CFD model with software used to manage IT operations. This is available with the latest releases of nlyte and 6SigmaDC software. nlyte and 6SigmaDC automatically synchronize the IT asset database and the CFD model. At any time, the facilities engineering team can see the changes being planned by IT operations. In parallel, facilities can predict the risk of “fires” being started or simply see in advance the impact of planned IT changes on cooling system operation. If a negative impact on performance is predicted, facilities, for the first time, has the time to inform IT operations and work together on deployment options that benefit the entire data center. The result is lower operational costs; higher IT reliability; and more efficient and effective utilization of critical data center space, power, cooling and airflow resources.
This approach is called the Virtual Facility and it is available only from nlyte and Future Facilities.
Sherman Ikemoto is general manager for Future Facilities Inc., a leading supplier of data center design and modeling software and services. Prior to joining Future Facilities, Sherman worked at Flomerics Inc. as a sales, marketing and business development manager. Sherman has more than 20 years' experience in the field of thermal fluids and electronics cooling design. Sherman holds a Bachelor of Science degree from San Jose State University and a master's degree in mechanical engineering from Santa Clara University.
The post How Operating a Data Center Is Like Fighting a House Fire appeared first on Nlyte.