Three Steps To Data Center Management

Modern data centers are complex beasts: with a variety of appliances running across a multitude of hardware configurations and physical layouts, they are the product of years of expansion and growth. One of the first things I ask customers is what management tools they are using.

There's always a laundry list, some open-source (like Nagios) and some from large software vendors (like IBM Tivoli). These IT management tools closely watch over systems, storage, networking, and applications — four of the primary assets in a data center. 

But when I ask what they use for energy management, there's usually a blank stare. Energy is the fifth major asset, yet few treat it as such today. As with any of these assets, when you run out of this finite resource, your services go down.

If you don't manage energy proactively, your performance suffers. And with pressure to reduce cost, improve performance, plan for increased or decreased capacity, and mitigate potential risks, management tools are the key to accomplishing these tasks.

Awareness of the importance of energy in the data center has grown dramatically in the last few years. Given energy's role in business continuity and its contribution to data center operating costs, 2010 is the logical time to get a handle on it.

Gartner released its prognostications for 2010 under the scary title "Critical Issues Facing Data Centre Managers Will Worsen in 2010." The entire discussion is about energy, the last unknown and unmanaged frontier in the data center. This section is particularly relevant:

Energy management can be effective only through advanced monitoring, modeling and measuring techniques and processes. Metrics form the bedrock for internal cost and efficiency programmes and Gartner urges datacentre managers and IT organizations to make this area a high priority, which will be essential for the adoption of so many new technologies and adherence to government policies.

So, how can IT managers act on Gartner’s advice?

Measure, Analyze, Act

Energy efficiency in the data center depends on the ability to clearly measure, analyze and ultimately act on strategic implementations. Once you proceed through these three steps, you must repeat them on a regular basis to keep continual data center changes in check.

Measure
It’s amazing how much power is inefficiently managed in the data center. A recent McKinsey report shows that the average data center uses only six percent of its server capacity and only 50 percent of its facility capacity.

First and foremost, you must figure out exactly how much energy you’re currently using and where you’re using it. Arm yourself with as much information as possible. Measure where and when you’re spending your power and work to set a new baseline for your consumption.

Also, keep in mind that the power coming into a data center is not the same as the power being used by the equipment. By measuring every step that the power takes along the way to the equipment, whether it’s through the uninterruptible power supply (UPS) or the power distribution system, you can get an accurate read on where the biggest opportunities for improvement exist.
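The step-by-step measurement described above can be sketched as a simple comparison of readings taken at each stage of the power chain. All of the figures below are invented for illustration; real values come from your own meters.

```python
# Hypothetical sketch: compare power readings taken at each stage of the
# delivery chain to locate the largest losses. All figures are invented.
readings_kw = {
    "utility_feed": 1000.0,   # power entering the facility
    "ups_output": 920.0,      # after UPS conversion losses
    "pdu_output": 890.0,      # after power distribution
    "it_equipment": 560.0,    # what the IT equipment actually draws
}

stages = list(readings_kw)
for upstream, downstream in zip(stages, stages[1:]):
    loss = readings_kw[upstream] - readings_kw[downstream]
    pct = 100 * loss / readings_kw[upstream]
    print(f"{upstream} -> {downstream}: {loss:.0f} kW lost ({pct:.1f}%)")
```

With numbers like these, the biggest opportunity is plainly the gap between the distribution system and the IT load, which is exactly the kind of finding the Measure step is meant to surface.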

This process holds for any setting that consumes power: data centers, commercial buildings, industrial facilities, your own home, and so on.

Make sure your measurements are thorough and detailed. How you collect the data matters less than recording exactly how much energy is being used, and where. Don’t take any shortcuts, since inaccurate data will lead you astray.

Analyze
In the Analyze Phase, you dive into the details to find precise power-saving remedies. Analyze the power readings you gathered in the measurement phase and determine which areas are consuming the most power.

Determine whether power is “stranded” (allocated but not used) or lost to “energy leaks” (power loss attributed to inefficient distribution). In fact, due to the overhead of cooling, lighting, and other factors, only about half of the energy coming into a data center actually reaches the servers.
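The "only about half" point can be expressed as a simple ratio, similar in spirit to the industry's PUE (Power Usage Effectiveness) metric. The numbers here are hypothetical, chosen to match the article's rough figure.

```python
# Hypothetical sketch: how much of the facility's incoming power actually
# reaches the IT equipment. A PUE-style ratio; figures are invented.
total_facility_kw = 1000.0  # everything: IT load, cooling, lighting, losses
it_load_kw = 500.0          # power actually consumed by IT equipment

pue = total_facility_kw / it_load_kw        # total power per watt of IT load
it_share = it_load_kw / total_facility_kw   # fraction reaching the servers
```

A ratio of 2.0 means every watt of useful IT work costs a second watt of overhead, which is where the analysis should direct its attention.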

What are the costs associated with each area of the data center? Are there any significant trends you’ve found? In the Analyze Phase, the goal is to take the massive amount of data you’ve gathered and put it into a succinct and understandable context.

By looking at all these points across the data center, you can pinpoint where your power losses occur. Once you identify this wasted energy, you can take steps to mitigate those losses as power travels to the IT equipment.

Act
Once you’ve done your measurements and have analyzed and identified the biggest consumers of energy, you’re ready to put together a strategy to reduce energy consumption, plan for future capacity, or develop plans to extend the life of your existing data center.

By putting policies and strategies in place, you can measure every tier of the data center—servers, storage, racks, cooling systems—and analyze and identify equipment that’s not being used in an efficient manner.

Once you’ve identified the things that aren’t performing up to your specifications, you can act by employing new policies, by consolidating equipment using new techniques such as virtualization, and by rescheduling jobs to run more efficiently.

Saving one watt at the server component level saves another 1.84 watts in upstream overhead without doing anything else, for a total saving of 2.84 watts.
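This cascade effect is just arithmetic. The 1.84 W overhead figure comes from the article; a minimal sketch of the multiplication:

```python
# The cascade effect as arithmetic: a watt saved at the server component
# level also avoids the overhead (power conversion, distribution, cooling)
# spent delivering that watt. The 1.84 W figure is from the article.
component_saving_w = 1.0
overhead_per_watt = 1.84     # upstream overhead avoided per watt saved

total_saving_w = component_saving_w * (1 + overhead_per_watt)
print(f"Total saving: {total_saving_w:.2f} W")  # 2.84 W per watt saved
```

The practical implication is that component-level savings, such as from consolidation or rescheduling, are worth nearly three times their face value.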

It’s not just about optimizing the facility and equipment. You can adjust how people perform their jobs in ways that could result in energy reduction as well. Day-to-day operations management determines what applications run where, and how applications are rolled out.

You can adjust where applications run to take advantage of the most efficient servers for that task. You can also incorporate new software, replace faulty equipment, and turn off equipment that isn’t being used.

Here’s another reason to act. According to the EPA, enterprises can save up to $4 billion annually in their data centers by becoming more efficient. Furthermore, many utilities are offering incentives for every kilowatt-hour saved, as long as you can prove the reductions.
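Because incentives are paid per kilowatt-hour, a steady wattage reduction compounds over a year. The rates below are invented for illustration; actual energy prices and rebate rates vary by utility.

```python
# Hypothetical rebate math: a continuous power reduction, valued at assumed
# energy and incentive rates. All rates are invented for illustration.
watts_saved = 10_000           # 10 kW of continuous reduction
hours_per_year = 24 * 365

kwh_saved = watts_saved / 1000 * hours_per_year   # kWh avoided per year

energy_rate = 0.10             # $/kWh, assumed utility price
incentive_rate = 0.08          # $/kWh, assumed utility rebate

annual_value = kwh_saved * (energy_rate + incentive_rate)
print(f"{kwh_saved:,.0f} kWh saved, worth ${annual_value:,.0f} per year")
```

Note that the incentive only pays out if you can prove the reduction, which is another argument for the measurement discipline described earlier.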

Today there are non-invasive software solutions for managing energy in the data center. Software solutions, such as those from Sentilla, can automate the continual process of measuring, analyzing, and acting on the real-time impacts of energy consumption.

Software can even manage energy use in the data center on both metered and unmetered equipment, so there’s no disruption or meters to install, and thus no excuse for delaying energy management as a key data center initiative.

Enterprise energy management software enables IT managers to gain control of every aspect of energy consumption in the data center. Performance and efficiency can be improved, downtime reduced, and expansion or consolidation can be rationalized with real data.

Joe Polastre is co-founder and chief technology officer at Sentilla, a company that provides enterprise software for managing power in the data center. Joe is responsible for defining and implementing the company’s global technology and product strategy.

Winner of the 2009 Silicon Valley/San Jose Business Journal 40 Under 40 award and named one of BusinessWeek’s Best Young Tech Entrepreneurs, Joe often speaks about energy management and the role of physical computing — where information from the physical world is used to make energy efficiency decisions. Before joining Sentilla, Joe held software development and product manager positions with IBM, Microsoft and Intel. Joe is active in numerous organizations, including The Green Grid, US Green Building Council, ACM and IEEE.

Joe holds M.S. and Ph.D. degrees in Computer Science from University of California, Berkeley, and a B.S. in Computer Science from Cornell University.