Not many years ago, server power consumption wasn’t a big concern for IT administrators. The supply of power was plentiful, and in many cases power costs were bundled with facility costs. For the most part, no one thought too hard about the amount of power going into servers.
What a difference a few years can make. In today’s ever-growing data centers, no one takes power for granted. For starters, we’ve had too many reminders of the threats to the power supply, including widely publicized accounts of catastrophic natural events, breakdowns in the power grid, and seasonal power shortages.
Consider these examples:
- In the wake of the March 2011 earthquake and tsunami and the loss of the Fukushima Daiichi nuclear power complex, Japan was hit with power restrictions and rolling power blackouts. The available power supply couldn’t meet the nation’s demands.
- In the United States, recurring brownouts and outages stemming from overextended infrastructure have struck California and the Eastern Seaboard, complicating the lives of millions of people.
- In Brazil and Costa Rica, power supplies are threatened by seasonal water scarcity for hydro generation, while Chile wrestles with structural energy scarcity and very expensive electricity.
Then consider today’s data centers, where a lot of power is wasted. In a common scenario, server power is over-allocated and rack space is underpopulated to cover worst-case loads. This is what happens when data center managers don’t have a view into the actual power needs of a server or the tools they need to reclaim wasted power.
All the while, data centers are growing larger, and power is becoming a more critical issue. In some cases, data centers have hit the wall; they are out of power and cooling capacity. And as energy costs rise, we’ve reached the point where some of the world’s largest data center operators consider power use to be one of the top site-selection issues when building new facilities. The more plentiful the supply of affordable power is, the better off you are.

All of this points to the need for policy-based power management. This forward-looking approach helps your organization use energy more efficiently, trim your electric bills, and manage power in a manner that allows demand to more closely match the available supply.
And the benefits don’t stop there: A policy-based approach also allows you to implement power management in terms of elements that are meaningful to the business instead of trying to bend the business to fit your current technology and power supply.
Ultimately, the case for policy-based power management comes down to this: It makes good business sense.
Using policy-based power management to rein in energy use
In today’s data centers, power-management policies are like the reins on a horse. They put you in control of an animal — power consumption — that has a tendency to run wild.
When paired with the right hardware, firmware and software, policies give you control over power use across your data center. You can create rules and map policies into specific actions. You can monitor power consumption, set thresholds for power use, and apply appropriate power limits to individual servers, racks of servers, and large groups of servers.
So how does this work? Policy-based power management is rooted in two key capabilities: monitoring and capping. Power monitoring takes advantage of sensors embedded in servers to track power consumption and gather server temperature measurements in real time.
The other key capability, power capping, fits servers with controllers that allow you to set target power consumption limits for a server in real time. As a next step, higher-level software entities aggregate data across multiple servers to enable you to set up and enforce server group policies for power capping.
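The interplay of these two capabilities can be sketched in a few lines. This is a hypothetical illustration, not a real management API: `read_power_watts` and `set_power_cap` stand in for whatever vendor-specific interface (for example, a baseboard management controller) actually supplies the readings and enforces the caps.

```python
# Minimal sketch of a group-level capping policy, assuming hypothetical
# read_power_watts / set_power_cap callables supplied by lower-level tooling.

def enforce_group_policy(servers, group_budget_watts, read_power_watts, set_power_cap):
    """Divide a group power budget evenly and cap any server over its share."""
    per_server_cap = group_budget_watts / len(servers)
    actions = {}
    for server in servers:
        if read_power_watts(server) > per_server_cap:
            set_power_cap(server, per_server_cap)   # enforce the policy limit
            actions[server] = per_server_cap
    return actions

# Demo with simulated readings (no real hardware involved).
readings = {"rack1-s1": 320, "rack1-s2": 180}
applied = enforce_group_policy(
    servers=list(readings),
    group_budget_watts=500,
    read_power_watts=readings.get,
    set_power_cap=lambda server, cap: None,   # no-op stand-in for a BMC call
)
print(applied)  # {'rack1-s1': 250.0}
```

Real policies are usually subtler than an even split (weighting by workload priority, for instance), but the monitor-compare-cap loop is the core pattern.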
When you apply power capping across your data center, you can save a lot of money on your electric bills. Just how much depends on the range of attainable power capping, which is a function of the server architecture.
For the current generation of servers, the power capping range might be 30 percent of a server’s peak power consumption. So a server that uses 300 watts at peak load might be capped at 200 watts, saving you 100 watts. Multiply 100 watts by thousands of servers, and you’re talking about operational savings that will make your chief financial officer stand up and take notice.

Dynamic power management takes things a step further. With this approach, policies take advantage of additional degrees of freedom inherent in virtualized cloud data centers, as well as the dynamic behaviors supported by advanced platform power management technologies. Power capping levels are allowed to vary over time and become control variables in their own right. All the while, selective equipment shutdowns — a concept known as “server parking” — enable further reductions in energy consumption.
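The static capping arithmetic above scales simply with fleet size. The numbers below come from the example in the text; the fleet size is an assumption for illustration.

```python
# Scaling the 300 W -> 200 W capping example across a fleet.
peak_watts = 300            # peak draw of one server, from the example
cap_watts = 200             # capped level, from the example
servers = 5000              # assumed fleet size

per_server_savings_watts = peak_watts - cap_watts            # 100 W per server
fleet_savings_kw = per_server_savings_watts * servers / 1000
print(fleet_savings_kw)     # 500.0 kW shaved off the fleet's peak draw
```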
Collectively, these advanced power management approaches help you achieve better energy efficiency and power capacity utilization across your data center. In simple terms, you’re in the saddle, and you control the reins.
Get bigger bang for your power buck
In today’s data centers, the name of the game is to get a bigger bang for every dollar spent on power. Policy-based power management helps you work toward this goal by leveraging hardware-level technologies that make it possible to see what’s really going on inside a server. More specifically, the foundation for policy-based power management is formed by advanced instrumentation embedded in servers. This instrumentation exposes data on temperature, power states and memory states to software applications that sit at a higher level, using technology that:
- Delivers system power consumption reporting and power capping functionality for individual servers, their processors, and the memory subsystem.
- Enables power to be limited at the system, processor and memory levels — all using policies defined by your organization. These capabilities allow you to dynamically throttle system and rack power based on expected workloads.
- Enables fine-grained control of power for servers, racks of servers, and groups of servers; with the appropriate hypervisor, this allows workloads to be migrated dynamically to optimal servers based on specific power policies.
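The last capability in the list above, policy-driven workload placement, can be sketched as a headroom calculation. This is a hypothetical illustration, not a real hypervisor API; the server names, caps, and estimated workload draw are all assumptions.

```python
# Hypothetical placement policy: among servers with known power caps and
# current draw, place a workload on the server with the most power headroom.

def pick_server(servers, workload_watts):
    """servers maps name -> (cap_watts, current_watts). Returns best fit or None."""
    headroom = {
        name: cap - current
        for name, (cap, current) in servers.items()
        if cap - current >= workload_watts        # only servers that can fit it
    }
    if not headroom:
        return None   # nothing fits; a policy might raise a cap or park the workload
    return max(headroom, key=headroom.get)

fleet = {"s1": (450, 400), "s2": (450, 300), "s3": (450, 350)}
print(pick_server(fleet, 100))  # s2
```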
Here’s an important caveat: When it comes to policy-based power management, there’s no such thing as a one-size-fits-all solution. You need multiple tools and technologies that allow you to capture the right data and put it to work to drive more effective power management — from the server board to the data center environment.
It all begins with technologies that are incorporated into processors and chipsets. That’s the foundation that enables the creation and use of policies that bring you a bigger bang for your power buck.
Build a bridge to a more efficient data center
Putting policy-based power management in place is a bit like building a bridge over a creek. First you lay a foundation to support the bridge, and then you put the rest of the structure in place to allow safe passage over the creek. While your goal is to cross the creek, you couldn’t do it without the foundation that supports the rest of the bridge structure.
In the case of power management, the foundation is a mix of advanced instrumentation capabilities embedded in servers. This foundation is extended with middleware that allows you to consolidate server information to enable the management of large server groups as a single logical unit — an essential capability in a data center that has thousands of servers.
The rest of the bridge is formed by higher-level applications that integrate and consolidate the data produced at the hardware level. While you ultimately want the management applications, you can’t get there without the hardware-level technologies.
Let’s look at this in more specific terms. Instrumentation at the hardware level allows higher-level management applications to monitor the power consumption of servers, set power consumption targets, and enable advanced power-management policies. These management activities are made possible by the ability of the platform-level technologies to provide real-time power measurements in terms of watts, a unit of measurement that everyone understands.
These same technologies allow power-management applications to retrieve server-level power consumption data through standard APIs and the widely used Intelligent Platform Management Interface (IPMI). The IPMI protocol spells out the data formats to be used in the exchange of power-management data.
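At the application level, consuming that IPMI data often amounts to parsing the output of standard tooling. The sketch below mimics the output of `ipmitool dcmi power reading`; exact field wording can vary by BMC vendor, so treat the sample text and the regular expression as best-effort assumptions.

```python
import re

# Sample text modeled on "ipmitool dcmi power reading" output (an assumption;
# real output varies by platform and firmware).
SAMPLE_OUTPUT = """\
    Instantaneous power reading:              220 Watts
    Minimum during sampling period:            10 Watts
    Maximum during sampling period:           250 Watts
    Average power reading over sample period: 218 Watts
"""

def parse_instantaneous_watts(text):
    """Pull the instantaneous reading, in watts, out of DCMI-style output."""
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", text)
    return int(match.group(1)) if match else None

print(parse_instantaneous_watts(SAMPLE_OUTPUT))  # 220
```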
Put it all together and you have a bridge to a more efficient data center.
Cash in on policy-based power management
When you apply policy-based power management in your data center, the payoff comes in the form of a wide range of business, IT and environmental benefits. Let’s start with the bottom line: A robust set of power-management policies and technologies can help you cut both operational expenditures (OpEx) and capital expenditures (CapEx).
At the OpEx level, you save money by applying policies that limit the amount of power consumed by individual servers or groups of servers. That helps you reduce power consumption across your data center.
How much can you save? Say that each 1U server requires 750 watts of power. If your usage model allows you to cap servers at 450 watts, you save 300 watts per machine. That helps you cut your costs for both power purchases and data center cooling. And chances are you can do this without paying server performance penalties, because many servers don’t use all of the power that has been allocated to them.
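Annualized, that per-server saving adds up quickly. The 300-watt figure comes from the example above; the fleet size and electricity rate are assumptions for illustration, and the result excludes the additional cooling savings mentioned in the text.

```python
# Annualizing the 750 W -> 450 W capping example from the text.
watts_saved_per_server = 750 - 450   # 300 W, from the example
servers = 1000                       # assumed fleet size
rate_per_kwh = 0.10                  # assumed electricity rate, $/kWh
hours_per_year = 24 * 365

kwh_saved = watts_saved_per_server * servers * hours_per_year / 1000
annual_savings = kwh_saved * rate_per_kwh
print(round(annual_savings))         # 262800, i.e. about $262,800 per year
```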
At the CapEx level, you cut costs by avoiding the purchase of intelligent power distribution units (PDUs) to gain power monitoring capabilities and by reducing redundancy requirements, which saves you thousands of dollars per rack.
More effective power management can also help you pack more servers into racks, and more racks into your data center, to make better use of your existing infrastructure. According to the Uptime Institute, each PDU kilowatt represents about $10,000 of CapEx, so it makes sense to make the best use of your available power capacity.
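The density math implied here is straightforward. The 10-kilowatt rack envelope is an assumption for illustration; the per-server figures come from the OpEx example above.

```python
# With a fixed rack power envelope, capping servers at 450 W instead of
# allocating 750 W per server lets more servers share the same rack.
rack_envelope_watts = 10_000                   # assumed rack power budget
servers_at_750w = rack_envelope_watts // 750   # 13 servers per rack
servers_at_450w = rack_envelope_watts // 450   # 22 servers per rack
extra_servers = servers_at_450w - servers_at_750w
print(extra_servers)  # 9 more servers in the same power envelope
```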
Baidu.com, the largest search engine in China, understands the benefits of making better use of existing infrastructure. It partnered with Intel to conduct a proof of concept (PoC) project that used Intel Intelligent Power Node Manager and Intel Data Center Manager to dynamically optimize server performance and power consumption to maximize the server density of a rack.
Key results of the Baidu PoC project:
- At the rack level, up to 20 percent additional capacity could be achieved within the same rack-level power envelope when an aggregated optimal power-management policy was applied.
- Compared with today’s data center operation at Baidu, the use of Intel Intelligent Power Node Manager and Intel Data Center Manager enabled rack densities to increase by more than 40 percent.
And the benefits of policy-based power management don’t stop at the bottom line. While this more intelligent approach helps you reduce power consumption, it also helps you reduce your carbon footprint, meet your green goals, and comply with regulatory requirements. Benefits like those are a key part of the payoff for policy-based power management.