Amanda Gasse for Zondits, August 10, 2015. Image credit: Tom Raftery
Server rooms are energy hogs. The U.S. Department of Energy (DOE) estimates that data centers consume 2% of our national electricity use, at 100 billion kilowatt-hours a year. That number continues to rise as storage demands grow, cloud computing progresses, and the Internet of Things expands. The average commercial office building spends about 25% of its annual energy bill on server operations. The good news is that even small data centers can save energy with smart choices in equipment and best practices. Conserving energy will result in significant financial savings and reduce your environmental impact. Here are our five recommendations for improving data center energy efficiency.
Server virtualization offers a way to consolidate servers by allowing you to run multiple workloads on one physical host server. A “virtual server” is a software implementation that executes programs like a real server. Multiple virtual servers can run simultaneously on one physical host server. Therefore, instead of operating many servers at low utilization, virtualization combines the processing power onto fewer servers that operate at higher total utilization. (See Figure 1 below.)
- Due to these benefits, virtualization has become commonplace in large data centers. A 2011 survey of over 500 large enterprise data centers found that 92% use virtualization to some degree. Of those, the ratio of virtual servers to physical host servers averaged 6.3 to 1, and 39% of all servers were virtual.
Savings and Costs
- Virtualization enables you to use fewer servers, thus decreasing electricity consumption and waste heat. One watt-hour of energy savings at the server level results in roughly 1.9 watt-hours of facility-level energy savings by reducing energy waste in the power infrastructure (power distribution unit, UPS, building transformers) and reducing energy needed to cool the waste heat produced by the server.
- Virtualization enables the repurposing and decommissioning of some existing servers. According to the Uptime Institute, decommissioning a single 1U rack server can annually save $500 in energy, $500 in operating system licenses, and $1,500 in hardware maintenance costs.
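The figures above can be combined into a rough, back-of-the-envelope estimate. The sketch below is illustrative only: the 1.9x facility multiplier and the Uptime Institute's non-energy decommissioning savings ($500 in OS licenses plus $1,500 in hardware maintenance) come from the points above, while the server count, wattage, and electricity rate are assumptions chosen for the example.

```python
# Back-of-the-envelope estimate of annual savings from server consolidation.
# The 1.9x facility multiplier and per-server decommissioning figures come
# from the article; server count, wattage, and rate are assumed inputs.

FACILITY_MULTIPLIER = 1.9   # facility-level Wh saved per server-level Wh saved
NON_ENERGY_SAVINGS = 2000   # $/yr per retired 1U server ($500 OS license + $1,500 maintenance);
                            # the $500 energy component is counted via the kWh math below
HOURS_PER_YEAR = 8760

def consolidation_savings(servers_retired, avg_server_watts, rate_per_kwh):
    """Return (facility-level kWh saved per year, total dollars saved per year)."""
    server_kwh = servers_retired * avg_server_watts * HOURS_PER_YEAR / 1000
    facility_kwh = server_kwh * FACILITY_MULTIPLIER
    dollars = facility_kwh * rate_per_kwh + servers_retired * NON_ENERGY_SAVINGS
    return facility_kwh, dollars

# Assumed example: retire 20 servers drawing 300 W each, at $0.10/kWh
kwh_saved, dollars_saved = consolidation_savings(20, 300, 0.10)
```

Even at modest assumed rates, the non-energy savings (licenses, maintenance) dominate, which is why decommissioning idle servers pays back so quickly.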
Most modern IT equipment takes in cold air via the front of the unit and exhausts hot air out of the back. If servers are placed in rows with the fronts of the racks (and servers) all facing the same direction, then the airflow direction is consistent throughout the rows of racks. However, if several parallel rows of racks are placed with the same orientation, a significant efficiency problem arises: the hot exhaust air from the first row of racks gets sucked into the “cool” air intakes of the second row. With each successive row, the air temperature increases as hot air is passed from one row of servers to the next. (See Figure 2 below.)
To overcome this problem, the rows of server racks should be oriented so that the fronts of the servers face each other. In addition, the backs of the server racks should also face each other. This orientation creates alternating “hot aisle/cold aisle” rows. Such a layout, if properly organized, greatly reduces energy losses and also prolongs the life of the servers. (See Figure 3.)
Savings and Costs
- Hot aisle/cold aisle arrangements lower cooling costs by better managing airflow, thereby accommodating lower fan speeds and increasing the use of air-side or water-side economizers. When used in combination with containment, DOE estimates a 20% to 25% reduction in fan energy use.
- PG&E’s experience with hot aisle/cold aisle retrofits indicated that the payback was greater than two years.
- CRAC (computer room air conditioning) unit fans consume a lot of power and tend to account for 5% to 10% of a data center’s total energy use. (Typically, only cooling compressors use more energy in a data center.) Most CRAC units are unable to vary their fan speeds with the data center server load, which tends to fluctuate. Because data center environments constantly change, variable-speed fan drives (or VSDs for short) should be used wherever possible. VSD retrofits are available for many CRAC and CRAH (computer room air handling) units.
- To sufficiently cool equipment and provide more than one backup unit, data centers concurrently operate multiple CRACs around the clock. This can cause short-cycled cooling and extreme static pressures in the plenum, wasting energy. (A CRAC unit compressor that short-cycles turns itself on and off too frequently—thus reducing efficiency.) VSDs save energy when the data center load fluctuates.
Savings and Costs
- A reduction of 10% in fan speed reduces that fan’s use of electricity by approximately 25%. A 20% speed reduction yields electrical savings of roughly 45%.
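These savings follow from the fan affinity laws, under which a fan's power draw scales roughly with the cube of its speed. The sketch below shows the idealized cube-law calculation; real-world savings are somewhat lower because of fixed losses, which is consistent with the slightly more conservative figures cited above.

```python
def fan_power_fraction(speed_fraction):
    """Fan affinity law: power scales roughly with the cube of speed.

    Returns the fraction of full-speed power drawn at a given speed fraction.
    """
    return speed_fraction ** 3

# Idealized savings from slowing a fan down:
savings_10 = 1 - fan_power_fraction(0.90)  # 10% slower -> ~27% less power
savings_20 = 1 - fan_power_fraction(0.80)  # 20% slower -> ~49% less power
```

The cube law is why even small speed reductions from a VSD retrofit yield outsized electrical savings.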
A long-term monitoring study of 19 data centers by the Uptime Institute concluded that only 60% of the cool air being pumped into the data center was cooling equipment. It also found that 10% of the data centers had hot spots. (Data center hot spots refer to server input air conditions that are either too hot or too dry, according to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) TC 9.9 guidelines.)
Sound airflow management strategies are becoming even more important as data centers accommodate modern high-density server racks, which demand 20 kW to 30 kW of power per rack versus 2 kW per rack just a few years ago—and generate ten or more times the amount of heat per square foot.
- Diffusers should be positioned to deliver cool air directly to the IT equipment. At a minimum, diffusers should not be placed such that they direct air at rack or equipment heat exhausts.
- Blanking panels are fundamental to efficient airflow control in server racks. On the front of server racks, unused rack spaces (open areas) are covered with blanking plates so that air passes through the equipment rather than around it. Blanking panels decrease server inlet air temperatures as well as increase the temperature of air returning to the CRAC, both of which improve operational efficiency.
Savings and Costs
- Adding a single 12″ blanking panel to the middle of a server rack can yield 1% to 2% energy savings.
- Blanking panels cost approximately $4 to $12 per 1U panel. Assuming 10 U of empty space per rack, the costs will be $40 to $120 per rack.
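Those two figures are enough for a simple payback estimate. The sketch below is illustrative: the panel cost and savings fraction come from the ranges above, but the rack power draw and electricity rate are assumptions, and a real analysis would base the savings percentage on measured cooling energy.

```python
HOURS_PER_YEAR = 8760

def blanking_panel_payback(panel_cost, rack_kw, savings_fraction, rate_per_kwh):
    """Simple payback in years for blanking panels on one rack (illustrative).

    savings_fraction is the share of the rack's associated energy cost
    avoided by the panels (the article cites 1% to 2% per panel).
    """
    annual_energy_cost = rack_kw * HOURS_PER_YEAR * rate_per_kwh
    annual_savings = annual_energy_cost * savings_fraction
    return panel_cost / annual_savings

# Assumed example: $80 of panels on a 5 kW rack, 1.5% savings, $0.10/kWh
years = blanking_panel_payback(80, 5.0, 0.015, 0.10)
```

Under these assumptions the panels pay for themselves in roughly a year, which is why they are usually one of the first airflow fixes recommended.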
- An air-side economizer (see Figure 13 below) brings outside air into a building and distributes it to the servers. Instead of being re-circulated and cooled, the exhaust air from the servers is simply directed outside. If the outside air is particularly cold, the economizer may mix it with the exhaust air so its temperature and humidity fall within the desired range for the equipment.
- The air-side economizer is integrated into a central air handling system with ducting for both intake and exhaust; its filters reduce the amount of particulate matter, or contaminants, that are brought into the data center.
- Because data centers must be cooled 24/7, 365 days per year, air-side economizers may even make sense in hot climates, where they can take advantage of cooler evening or winter air temperatures. For various regions of the United States, Figure 14 shows the number of hours per year with ideal conditions for an air-side economizer.
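The hour-counting logic behind a chart like Figure 14 can be sketched in a few lines. The temperature band below is an assumption loosely based on the ASHRAE TC 9.9 recommended inlet envelope (18°C to 27°C, or about 64°F to 81°F); a real economizer controller would also check humidity or dew point before admitting outside air.

```python
# Illustrative sketch of economizer-hours logic. The temperature band is an
# assumption based on the ASHRAE TC 9.9 recommended inlet envelope; humidity
# checks, which a real controller needs, are omitted for brevity.

RECOMMENDED_HIGH_F = 80.6  # upper end of the assumed inlet band (27 °C)

def economizer_can_run(outside_temp_f):
    """True if outside air could supply air within the inlet band.

    Air colder than the band can be tempered by mixing in warm server
    exhaust (as described above), so only air above the band rules the
    economizer out in this simplified model.
    """
    return outside_temp_f <= RECOMMENDED_HIGH_F

def economizer_hours(hourly_temps_f):
    """Count hours in a dataset when the economizer could carry the load."""
    return sum(1 for t in hourly_temps_f if economizer_can_run(t))
```

Feeding a year of hourly weather data for a site into `economizer_hours` gives the kind of per-region totals Figure 14 summarizes.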
Savings and Costs
- Intel IT conducted a proof-of-concept test that used an air-side economizer to cool servers with 100% outside air at temperatures of up to 90°F. Intel estimates that a 500 kW facility will save $144,000 annually and that a 10 MW facility will save $2.87 million annually. The company also found no significant difference in server failure rates between cooling with outside air and cooling with a conventional HVAC system.