How to Serve Up a More Efficient Data Center

by Cameron Walker

The e-mail that gets sent with a click of the mouse seems to appear in distant inboxes as if by magic. But behind all those e-mails (and everything from digital medical records to YouTube videos of cats) lie rooms and buildings packed with computer servers, routers and switches, uninterruptible power supplies, extensive cooling systems, and more—all working around the clock to make the information that people rely on available without a hitch. Whether a single server in a city office’s closet or an enormous complex devoted to data from a Silicon Valley powerhouse, these data centers rely on large quantities of energy to collect, transmit, and store data, and to keep their servers running.

On the outside, a data center might look like a standard office building. But inside, racks of servers—and the equipment that keeps the servers cool—draw much more power than a typical office. According to a 2011 report by Jonathan Koomey, a research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, in 2010 data centers around the world consumed approximately 1.3% of global electricity; in the United States, they may draw 2% or more of the nation’s electricity.

In the past, data centers were created by packing an empty room with servers, plugging them in, and leaving them running around the clock. The goal was to maximize uptime so that data would be available at a moment’s notice. But many data centers run at peak capacity only a few times a year. As a result, many can be “horribly inefficient,” said Brad Wurtz, president and CEO of Santa Clara, California-based Power Assure, which makes energy-management software for data centers.

That makes data centers prime targets for efficiency improvements. And many are taking big steps, whether by retrofitting existing centers or building new ones with an eye to using as little energy as possible—and saving money in the process.

Data centers use power usage effectiveness (PUE) as a measure of efficiency. PUE is the ratio of the total energy a facility consumes to the energy used by the IT equipment itself. The less overhead power a data center uses, particularly for cooling, the more efficient it is and the closer its PUE gets to 1. According to the Uptime Institute’s 2012 data center survey, the average PUE for data centers around the world is between 1.8 and 1.89. In comparison, the new data center at the National Renewable Energy Laboratory (NREL), designed both to support the Golden, Colorado laboratory’s computing needs and to showcase energy-efficient strategies for data centers, has a projected PUE of 1.06 or better.
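As a rough illustration of how the ratio works, here is a short Python sketch; the energy figures are invented for the example and are not measurements from any center mentioned in this article.

```python
# Sketch of the PUE calculation: total facility energy divided by the energy
# consumed by the IT equipment alone. All figures below are invented for
# illustration, not measurements from any data center named in the article.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: a value of 1.0 would mean zero overhead."""
    return total_facility_kwh / it_equipment_kwh

# A hypothetical center whose servers draw 1,000,000 kWh a year while cooling,
# lighting, and power distribution add another 850,000 kWh of overhead:
print(pue(1_850_000, 1_000_000))   # 1.85, close to the survey average
# The same IT load with only 60,000 kWh of overhead, the kind of margin NREL targets:
print(pue(1_060_000, 1_000_000))   # 1.06
```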

But existing data centers can make big strides in improving efficiency, too. “Pretty much any data center could get down to 1.4 or 1.5, which would save millions of dollars,” Wurtz said.

Cool It Down

One place for data centers looking to become more energy efficient to start is how they keep their servers cool. Servers use electricity and produce heat as a result—and the more servers, the hotter it gets. Data centers need to find a way to deal with all this heat, because servers run the risk of crashing if temperatures climb too high.

Traditionally, air-conditioning units have been deployed to keep servers running smoothly. But air conditioners, particularly when they run constantly, can draw an enormous amount of energy; their fans generate heat of their own even as they circulate cool air.

Many data centers are looking at new ways of cooling to save both energy and money. “Exploiting environmental means of cooling is one of the latest things people are doing to get greater efficiency for their data centers,” said Jeffery Broughton, the systems department head at the National Energy Research Scientific Computing Center at the Lawrence Berkeley National Laboratory (LBNL). The Laboratory’s own on-site data center, scheduled to go online in 2015, will draw in outside air through the basement, taking advantage of the temperate San Francisco Bay Area climate and the LBNL’s hilly site. Fans will blow the air into the computer room on the floor above.

Cooling is a major focus of the California Energy Commission’s work in data center energy efficiency, according to Energy Commission Chair Robert B. Weisenmiller: “Over the past decade the Energy Commission has invested in research that identified the high energy cost of data center cooling, developed research road maps to address cooling costs, and funded innovative data center projects aimed at improving energy efficiency.”

Recently, the Energy Commission participated in a program with Vigilent, an El Cerrito-based firm that provides intelligent energy management systems for buildings, including data centers, to monitor and analyze eight State of California data centers and provide automated control of cooling. These eight very different data centers, ranging from a 667-square-foot (sqft) CalTrans data center to the state’s 40,000 sqft Gold Camp data center, saved an average of 40% on energy—more than 2.3 gigawatt-hours a year—and slashed $240,000 from the state’s annual utility bill.
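For a rough sense of how those figures fit together, the sketch below assumes an illustrative electricity rate of $0.10 per kilowatt-hour; the rate is an assumption, not a number reported by the program.

```python
# Sanity-check the reported savings under an assumed electricity rate.
# The $0.10/kWh figure is an illustrative assumption, not a rate quoted by
# the Vigilent program or the State of California.

annual_savings_kwh = 2_300_000        # 2.3 GWh of avoided energy use per year
assumed_rate_usd_per_kwh = 0.10       # hypothetical average utility rate

print(annual_savings_kwh * assumed_rate_usd_per_kwh)
# ~$230,000 per year, in the same ballpark as the $240,000 trimmed from the
# state's yearly utility bill
```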

Monitoring and automating a data center’s cooling equipment can be just as important as upgrading the equipment itself. Some data center managers assume that once a bank of servers is getting too hot, it’s time to add more space, said Vigilent product manager Dan Mascola. But by installing sensors that monitor temperature and other environmental conditions, a data center can reorganize its servers and cooling equipment to make the best use of both. Once that data is compiled into a model, controls can be placed on each cooling element, switching each part of the cooling system on and off as needed to respond to the servers’ workloads.
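The sketch below shows the general shape of that kind of closed-loop control; the names, setpoint, and readings are hypothetical, and this is not Vigilent’s software or any vendor’s actual API.

```python
# Minimal sketch of sensor-driven cooling control of the kind described above:
# each cooling unit runs only when a temperature sensor it influences reads
# above a setpoint. All names, readings, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class CoolingUnit:
    name: str
    sensor_readings_f: list[float]   # inlet temperatures this unit affects
    running: bool = False

SETPOINT_F = 77.0   # assumed allowable server inlet temperature

def control_step(units: list[CoolingUnit]) -> None:
    """Turn each unit on or off based on the hottest sensor it serves."""
    for unit in units:
        unit.running = max(unit.sensor_readings_f) > SETPOINT_F

units = [
    CoolingUnit("crac-1", [72.5, 74.0]),   # cool aisle: unit can stay off
    CoolingUnit("crac-2", [78.3, 76.1]),   # hot spot: unit switches on
]
control_step(units)
print([(u.name, u.running) for u in units])   # [('crac-1', False), ('crac-2', True)]
```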

Utility and other incentives may be available for making changes like these. In California, Digital Realty’s 135,000 sqft data center used a combination of grants and incentives to replace conventional single-speed fans with more efficient variable-speed models. The new fans are linked to a control system that lets them respond to IT server inlet air temperature and better distribute air throughout the space. The upgrade cut the energy needed to cool the facility by two-thirds. A grant from the California Energy Commission’s Public Interest Energy Research (PIER) program, plus rebates from Southern California Edison, meant the project paid for itself within a year.
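Much of that saving follows from the fan affinity laws: airflow scales roughly with fan speed, while fan power scales roughly with the cube of speed. The speeds in the sketch below are illustrative, not measurements from the Digital Realty retrofit.

```python
# Why variable-speed fans pay off: by the fan affinity laws, power draw scales
# roughly with the cube of fan speed, so modest slowdowns yield large savings.
# The speed values are illustrative, not data from the retrofit described above.

def relative_fan_power(speed_fraction: float) -> float:
    """Approximate power draw relative to full speed (affinity-law estimate)."""
    return speed_fraction ** 3

for speed in (1.0, 0.9, 0.8, 0.7):
    print(f"{speed:.0%} speed -> ~{relative_fan_power(speed):.0%} of full power")
# 70% speed draws roughly a third of full power, in line with the two-thirds
# cooling-energy savings cited above
```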

Changing the target temperature can also make a huge difference. In the past, data center managers didn’t feel safe unless the center felt like a meat locker inside; employees often had to wear jackets and gloves. “The more that you have to chill things down, the more energy that you’re using to accomplish that,” said Broughton. Computer manufacturers have started to develop servers that can run safely at higher temperatures. Data centers are now trying to run at much balmier temperatures (Google data center techs reportedly work in shorts and T-shirts)—a change that “goes a long way to improving energy efficiency,” Broughton said.

Serve It Up

Servers themselves used to be very inefficient, said Dennis Symanski, senior project manager of the Electric Power Research Institute (EPRI), a nonprofit utility research group; but now they pack much more computing power while using less energy. EPRI worked with the U.S. Environmental Protection Agency’s Energy Star program to write test protocols for computer servers; those sourcing new servers can look for the Energy Star rating. The next Energy Star target will cover all of the associated equipment that supports a data center, including switches, routers, and the hubs that take in information and transmit it among servers.

Replacing older, inefficient servers can be a big efficiency boost, as server efficiency roughly doubles every 18 months. An audit of data center energy use—including server use—can help data centers optimize their efficiency. A center can add a few new servers to take on the bulk of the workload, keeping older servers for backup, or simply replace the old servers altogether. New servers can pay for themselves in power savings alone within a year. The conventional wisdom was to replace servers every five years, said Wurtz of Power Assure. “Now it makes sense to replace every two to three years, and you come out ahead.”

Sometimes a data center might not have to invest in new servers at all. Many data centers buy a new server whenever they need to add a new function: one server for payroll, another for billing, another for Web access. And in most data centers, all of these servers draw power around the clock, even when they aren’t needed. Virtualization software, however, allows a single physical server to run multiple virtual servers—so payroll, billing, and Web access can all run on the same machine and the other two physical servers can be unplugged, said Symanski, whose research for EPRI focuses on how to make data centers more efficient.
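A short sketch of the consolidation arithmetic; the wattages and the assumption that one host can absorb all three workloads are invented for illustration.

```python
# Sketch of the savings from virtualization: three lightly used physical
# servers collapsed onto one host. Wattages and workloads are assumptions
# made for illustration only.

servers_w = {"payroll": 300, "billing": 300, "web": 300}   # mostly idle machines
consolidated_host_w = 400                                   # one busier host

before_w = sum(servers_w.values())          # 900 W drawn around the clock
after_w = consolidated_host_w               # 400 W for the same three workloads
saved_kwh_per_year = (before_w - after_w) / 1000 * 24 * 365
print(f"{saved_kwh_per_year:.0f} kWh avoided per year")    # 4380 kWh
# Every watt avoided at the server also avoids the cooling and distribution
# overhead captured by the facility's PUE.
```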

Future advances in data center efficiency will depend heavily on IT measures—and creating incentives for those working in this field to develop these next steps, said Matt Stansberry of the Uptime Institute, which studies data center efficiency trends. “IT organizations that are willing to take a systematic approach, starting at the application and data layers—consolidating applications and servers, de-duplicating data, removing comatose but power-draining servers, building redundancy into the applications and IT architecture rather than physical systems—will drive the next wave of efficiency gains.”

Cutting-Edge Data Centers

Data centers being built from the ground up are taking advantage of the latest in technology and design to keep their energy usage—and costs—low.

NREL’s new data center is built like an aquarium, with clear floor tiles through which visitors can see the servers and all the equipment that supports them. One of its energy-saving methods is using liquid, not air, to cool servers right at the computer chip; this direct cooling captures at least 90% of the servers’ heat, and the data center’s heat exchangers handle the remainder that is dissipated into the air. “A juice glass full of water has the same cooling capacity as a roomful of air,” said Steve Hammond, NREL’s director of computational science. The chips themselves can safely operate at up to 150°F.
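Hammond’s comparison comes down to volumetric heat capacity; the sketch below uses standard, rounded physical constants rather than anything specific to NREL’s system.

```python
# Behind the "juice glass versus roomful of air" comparison: water stores far
# more heat per unit volume than air. Rounded textbook constants; nothing here
# is specific to NREL's cooling system.

water_vol_heat = 1000 * 4186    # J per cubic meter per kelvin (density x specific heat)
air_vol_heat = 1.2 * 1005       # J per cubic meter per kelvin

print(f"Water holds ~{water_vol_heat / air_vol_heat:,.0f}x more heat per unit volume than air")
# roughly 3,500x, which is why piping liquid to the chip removes heat so effectively
```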

This data center is also integrated with the rest of the building; almost all of the heat produced by the data center can be used to warm the offices, labs, and other spaces. Data center heat can even be used to melt snow from walkways. “We do this because it makes sense and because it’s part of our mission. Others do it because it makes dollars and cents,” said Hammond. “It cost us less to build this than a typical data center.” NREL’s data center is expected to save a million dollars annually because of its range of energy-saving practices and technologies.

Although its first focus is the center’s energy efficiency, NREL also plans to investigate integrating its on-site 4-megawatt photovoltaic system with the data center. Other data centers are building in new ways to generate and store their own electricity. Apple’s new North Carolina data center will generate 60% of its own energy from on-site sources, including 50 biogas-powered fuel cells and a 100-acre solar array.

In Cheyenne, Wyoming, Microsoft is building a zero-carbon data center with the help of a neighboring wastewater treatment facility. Waste will be converted into biogas, which will then power a 300-kilowatt fuel cell. Like data centers, sewage treatment plants need to be constantly available, and Microsoft estimates that this project, dubbed the Data Plant, could be the first of many such partnerships between the two industries.

Another Wyoming data center that incorporates efficiency into its design is the National Center for Atmospheric Research (NCAR)-Wyoming Supercomputing Center, also in Cheyenne. The Boulder, Colorado-based NCAR has scientists working with enormous amounts of data to better understand Earth’s climate, among other things; the new Wyoming data center, opened in October 2012, has a projected PUE of less than 1.1. Along with taking advantage of highly efficient server and cooling technology, the center has been built modularly, so that it runs only the servers it needs now, at the capacities they’re designed for; when the time comes to expand, the center can integrate even more efficient products as they become available. The center takes advantage of Wyoming’s high elevation, low humidity, and year-round cool temperatures to use ambient cooling as many as 363 days a year.

Wyoming’s Data Center Push

Wyoming’s cool climate—and warm welcome—has attracted a range of data centers that are trying to operate efficiently. Over the past five years the state has made attracting data centers a goal, said Ben Avery, director of the business and industry division of the Wyoming Business Council, the state’s economic development agency, which also houses the State Energy Office. The governor’s office has set aside a $15 million fund to help counties, cities, and towns build the infrastructure they need to be attractive hosts for data centers. Municipalities can apply for state grants on behalf of a data center company to reimburse utility power expenses, and data centers are also eligible for sales tax and permitting exemptions.

Some environmental groups have taken companies to task for locating data centers in Wyoming and homing in on the state’s low-cost, coal-fired power. Avery pointed out that the state also has a substantial amount of wind energy, and more and more companies have been looking to incorporate green power into their data centers. The NCAR-Wyoming Supercomputing Center worked with utility Cheyenne Light, Fuel & Power to purchase 10% of the data center’s energy from a neighboring wind farm.

One business that has used state grants to help cover the cost of power is Green House Data, which has a 10,000 sqft data center and is in the process of building a second, 35,000 sqft facility in Cheyenne. Using ambient cooling, modular design, and a floor layout that optimizes energy use, among other things, the current data center reaches a PUE of close to 1.2.

One of the benefits of being in the state is the business-friendly environment, said Green House Data president Shawn Mills. Power prices are stable, as are taxes. Because energy use is such a large component of their business, “understanding what tomorrow’s going to look like is almost as important as what it looks like today,” Mills said. Green House Data’s facility pays the same price for electricity year-round, any time of day—a consideration that has been incorporated into their data centers’ design.

According to Mills, Green House Data was able to work directly with their local utility company in planning both of their data centers, which was a huge boon to the project and to energy savings. Getting involved early in the planning process with large power consumers and understanding their energy use needs and patterns can make huge differences in the process, he said.

Cheyenne Light, Fuel & Power has worked with Green House Data, NCAR, Microsoft, and others in the planning process for new data centers. Interaction between the utility and the wider community is as critical as state incentives, said utility spokesperson Sharon Fain. When a data center is first proposed for a community, “it’s important for folks in that community to have the resources—the utility, the city, an economic development group—to come together and sit down and develop that partnership,” Fain said.

Big and Small

Although large, highly efficient data centers like these get the most recognition, data centers are everywhere—in hospitals, schools, and municipal buildings, on a single floor of a high-rise office building, or in a closet in a police station. But large data centers are much more likely to pursue efficiency measures and to track down incentives to support them. “Small data center operators often don’t have the dedicated resources to deploy highly efficient data center technologies, let alone chase down some kind of incentive program,” said Stansberry of the Uptime Institute.

Utility incentive programs for commercial and industrial customers may apply to efficiency improvements in data centers of a range of sizes. “A lot of customers don’t know that utility incentives exist for these types of projects,” said Mascola of Vigilent. Many technologies are so new that utilities haven’t developed specific incentive programs for them, but contacting the utility may still turn up tailored programs and incentives. The Database of State Incentives for Renewables & Efficiency is in the process of creating a searchable list of incentives available to data centers; at least 14 states have incentive programs, many through utilities, that could be applied to data center efficiency measures.

LBNL has worked with several smaller data centers in the San Francisco Bay Area to determine how they could improve efficiency. For two local cities, Benicia and Martinez, LBNL researchers found that once municipalities bought new computing equipment, the new equipment was so much more efficient that the older servers could be retired. The result was empty rack space and the opportunity to combine the multiple smaller data centers scattered around a municipality into a single location.

Some of the biggest challenges to improving data center efficiency have more to do with people than technology. Whether moving small data centers to a single location or shelling out for more-efficient fans and servers, everyone has to be on board, from the IT folks to the facilities manager to the person who pays the bills. According to NREL’s Hammond, “You have to break down some of the organizational barriers and get everyone going in the same direction to realize more efficiency and financial savings.”
