
High Density Cooling: A Practical Application For Water In The Data Center

By Bret W. Lehman, PE

It is a well-known fact that the latest computing technology is pushing the limits of today’s data centers in more ways than one. Most end users cite challenges in space utilization, power delivery, cooling, and even structural loading. Server form factors have shrunk from the multi-EIA-unit packages of yesterday to the sleek blade form factor, which allows as many as 84 servers to be packaged in a single rack. Consolidation of applications from larger legacy machines onto the smaller, more powerful blades creates both the ability and the desire to pack more servers into existing data center spaces. This increases the importance of collaboration between the business partners who share ownership of different portions of the data center, namely the IT and facilities organizations.

Georgia Tech’s Razor HPC cluster, at the Institute’s Center for the Study of Systems Biology (CSSB), demonstrates a cross-organizational, collaborative solution for space utilization and cooling. A water-cooled, rack-level heat exchanger was deployed to help create a very high density (300 W/sq ft) cooling solution within an existing facility where significant cooling limitations existed. In effect, the rack door heat exchanger solution allowed for the creation of an area with cooling density 10 times greater than the capabilities of the rest of the facility.

A number of user-imposed challenges required a more nimble implementation plan. First, the hosting environment for the cluster had to be of showcase quality. Because tours of the area were planned, the floor space occupied by the cluster had to be kept to a bare minimum. Excessive noise and discomfort from air movement likewise had to be minimized. Finally, an extremely tight schedule required the facility to be completed in roughly 30 days.

In order to meet these requirements, the strategic decision was made to employ the rack door heat exchanger. The device is a copper-tube, aluminum-fin, air-to-water heat exchanger that replaces the rear panel of a computer rack. Hot exhaust air from the servers passes across the heat exchanger coil, which removes approximately 55 percent of the rack heat load from the air stream before it enters the room. It is a completely open system, with no power or supplemental air movers required. It significantly reduces the burden on the room air conditioning system, cutting down on the air conditioning capacity that must be installed as well as the noise and discomfort associated with moving the air that performs the cooling function. It was decided to implement this technology only on the racks filled with high density blade servers.
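
To make the effect of the rear-door exchanger concrete, the short sketch below estimates how a single rack’s heat splits between the water loop and the room air. Only the roughly 55 percent removal figure comes from this article; the per-rack load of 24 kW (six blade chassis at an assumed 4 kW each) is a hypothetical value used purely for illustration.

```python
# Rough heat-split estimate for one rack fitted with a rear-door heat exchanger.
# Only the ~55 percent removal figure comes from the article; the per-rack load
# below is an assumed, illustrative value.

RACK_LOAD_KW = 24.0          # assumption: 6 blade chassis x ~4 kW each
HX_REMOVAL_FRACTION = 0.55   # share of rack heat removed by the rear-door coil

to_water_kw = RACK_LOAD_KW * HX_REMOVAL_FRACTION
to_room_kw = RACK_LOAD_KW - to_water_kw

print(f"Heat carried away by the water loop:   {to_water_kw:.1f} kW")
print(f"Residual heat left for the room CRACs: {to_room_kw:.1f} kW")
```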

The first challenge the heat exchangers resolved was underutilized floor space. With the heat exchangers in place, it became possible to fully load six blade chassis per cabinet, and the square footage required to house and cool the cluster was reduced to an optimal 1,000 sq ft. Next, removing such a large amount of heat from the room air stream significantly reduced the amount of air movement necessary for the cooling solution, thereby reducing noise and discomfort. Finally, the facility had four spare 20-ton air conditioning units on hand, which could provide exactly the amount of sensible air-side cooling required. This helped alleviate the final concern regarding the implementation schedule.
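
A back-of-the-envelope check, sketched below, suggests why four 20-ton units were sufficient for the air side. The 300 W/sq ft density, the 1,000 sq ft area, and the roughly 55 percent removal figure come from this article; the assumed share of the load sitting in blade racks and the nominal 3.517 kW-per-ton conversion are illustrative assumptions, and real CRAC sensible capacity varies by unit and operating conditions.

```python
# Back-of-the-envelope check of the air-side cooling load against four 20-ton
# CRAC units. Density, area, and the ~55 percent figure come from the article;
# the blade-rack share of the total load is an assumed, illustrative value.

KW_PER_TON = 3.517                 # 1 ton of refrigeration = 3.517 kW (nominal)
AREA_SQFT = 1000
DENSITY_W_PER_SQFT = 300

total_load_kw = AREA_SQFT * DENSITY_W_PER_SQFT / 1000.0   # 300 kW
blade_share = 0.80                 # assumption: fraction of load in blade racks
hx_fraction = 0.55                 # rack heat removed by the rear-door coils

removed_at_rack_kw = total_load_kw * blade_share * hx_fraction
air_side_load_kw = total_load_kw - removed_at_rack_kw

crac_capacity_kw = 4 * 20 * KW_PER_TON   # nominal capacity of four 20-ton units

print(f"Air-side load remaining for CRACs:     {air_side_load_kw:.0f} kW")
print(f"Nominal capacity of four 20-ton units: {crac_capacity_kw:.0f} kW")
```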

The entire high density cluster area was completely segregated from the remainder of the data center below the raised floor. This, along with the general layout of the key components, further optimized the cooling solution in three ways. First, a very high static pressure was generated at the perforated tile locations. Air was directed below the raised floor in the direction indicated by the blue arrows on the four computer room air conditioning (CRAC) units shown at the top of the figure. By partitioning the entire subfloor area, a dead-head situation was created in the perforated tile area, maximizing static pressure and air flow rates. Second, because the air conditioning units were located in close proximity to the rack exhausts, direct return of warm air to the unit intakes was ensured, optimizing unit efficiency. Finally, the hot aisle-cold aisle principle was taken to the extreme: a wall completely separating the warm and cold sides of the cluster, shown as the thick dashed line in Figure 1, guaranteed an absolute minimum of warm air recirculation, a problem that plagues many modern-day data centers.
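
The noise and air-movement benefit can also be illustrated with the standard sensible-heat relation q [BTU/hr] = 1.08 x CFM x dT [°F]. The sketch below compares the airflow needed to carry the full load entirely in air against the residual air-side load from the earlier sketch; the 20 °F supply-to-return temperature rise and both load figures are illustrative assumptions rather than measured values from this installation.

```python
# Airflow needed to carry a sensible heat load at a given air temperature rise,
# using the standard relation q [BTU/hr] = 1.08 * CFM * dT [F]. The loads and
# the 20 F rise are illustrative assumptions, not measured values.

def required_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    """Return the airflow (CFM) needed to remove a sensible load given in kW."""
    btu_per_hr = load_kw * 3412.0            # 1 kW = 3,412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

all_air_cfm = required_cfm(300.0)    # entire 300 kW load handled by room air
hybrid_cfm = required_cfm(168.0)     # residual load after the rear-door coils

print(f"All-air cooling: {all_air_cfm:,.0f} CFM")
print(f"Hybrid solution: {hybrid_cfm:,.0f} CFM")
```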

The introduction of a water-based rack option helped to create the desired showcase facility, with minimal floor space and air movement. Savings realized through this solution included significantly less air conditioning equipment, build-out of approximately 40 percent less raised floor space, and far less racking hardware. A fringe benefit was additional savings in operational costs, on the order of 15 percent less than a conventional, fully CRAC-based solution.

Increasing heat densities and the desire to pack more computing power into smaller spaces created a number of challenges for deployment of a powerful supercomputer for Georgia Tech’s CSSB.  A hybrid cooling solution featuring a water-based rack heat exchanger proved to be the most effective way to create an optimal solution within the parameters given.  The device’s high heat removal capability within the rack footprint allowed for maximum packing density for the blades in the cluster and achievement of an optimal floor space requirement.  This solution will serve as an effective model for how end users must drive collaboration between teams to achieve high density cooling solutions as they transition from today’s data center facilities forward into future designs. Engineering teams from IBM and Georgia Tech teamed with peers from BellSouth and Minick Engineering to create this high density solution within BellSouth’s Atlanta Regional Data Center.

Bret Lehman is Global Offerings Development Executive for IBM’s Site and Facilities Services business unit.
