Profiles of three new data centers—Orlando Health in Florida, Avnet Inc. in Arizona, and BendBroadband in Oregon—illustrate the revolutionary changes that have improved data center energy efficiency and return on investment.
By Lyn Corum
A quiet revolution in data centers over the past five years, spurred by escalating energy costs, has produced innovations that go beyond energy efficiency. These include free cooling and data center infrastructure management software, both of which have led to lower power costs. A move to cloud computing and independent data centers has also brought changes.
The push for energy efficiency came in 2007 from The Green Grid, a non-profit global consortium founded by a group of companies dedicated to advancing energy efficiency in data centers and business computing ecosystems. Data centers have also sought certifications from the US Green Building Council’s LEED program or the Environmental Protection Agency’s Energy Star program, further motivating them to increase efficiency.
An August 2013 survey by the Uptime Institute reflected the latest trends. The Uptime Institute provides certification, education, and professional services for the global data center and emerging digital infrastructure industry. In its survey of 1,000 data center facilities, the institute found that energy efficiency is now a lower priority for IT managers, probably because their biggest gains were made five years ago. One of the most significant findings was the shift in spending away from enterprise-owned data centers and toward third-party data center providers.
The Green Grid created a set of metrics; the simplest, power usage effectiveness (PUE), is now widely used as an energy efficiency guide for data centers. PUE is defined as a facility’s total power, measured at the utility meter, divided by IT equipment power. Total power includes IT power, losses in the power distribution, lighting power, and cooling power.
A PUE of 3.0 means the facility draws three times the power needed to run the IT equipment alone: if a server demands 500 W, the data center pulls 1,500 W from the utility grid to deliver it, with the balance consumed by the processes that support the functioning of the IT equipment. An ideal PUE value is 1.0.
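As a back-of-the-envelope illustration of that arithmetic, here is a minimal Python sketch of the PUE calculation. The function name is my own; the only figures taken from the article are the 500 W/1,500 W example and the 1.65 survey average cited below.

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    if it_equipment_power_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_power_kw / it_equipment_power_kw

# The 500 W server example: 1.5 kW at the utility meter to deliver 0.5 kW of IT load.
print(pue(1.5, 0.5))    # 3.0
# A facility with 1 MW of IT load at the 2013 survey average draws 1.65 MW in total.
print(pue(1.65, 1.0))   # 1.65
```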
The Uptime Institute survey found that self-reported average PUE improved from 1.89 in 2012 to 1.65 in 2013.
In a 2011 white paper discussing energy efficiency in data centers, the Green Grid recommends installing variable speed drives in original equipment manufacturer (OEM) applications, upgrading computer room air handler (CRAH) units, implementing rack airflow management, moving CRAH controls from return air temperature to rack inlet temperature, and increasing the temperature set points of the CRAH and chiller units.
The Green Grid is also recommending, based on ASHRAE data for North America, that data centers can operate using air-side economizers if IT managers can allow “occasional incursions” to as high as 40°C (104°F).
Avnet, Inc. is taking this advice and is planning to gradually raise the threshold temperatures in its data center in Chandler, AZ, over the next two years.
The Green Grid says there are great opportunities for energy and capital savings if the data center is allowed to operate using free cooling, without chillers or cooling towers. Capital and operating costs are reduced, and reliability is higher because there are fewer components that can fail. For more information, the white paper detailing this can be downloaded from the Green Grid website; look for WP46, Updated Air Side Free Cooling Maps.
Sustainability at Orlando Health
Orlando Health, a private, non-profit healthcare network based in Orlando, FL, realized that healthcare reform would demand more efficient systems for patient records and billing, and embarked on creating a state-of-the-art data center to support them. To accomplish this, it partnered with Chatsworth Products Inc., headquartered in Chatsworth, CA.
Three years in the planning, Orlando Health designed a data center with a flexible layout that will grow with the community, adhere to internal green initiatives, and carry a sustainable carbon footprint. It wanted to lower its overall PUE, and to recycle materials and manage waste to minimize environmental impacts.
Orlando Health chose Chatsworth’s F-series TeraFrame cabinets, which are equipped with vertical exhaust ducts that direct air out of the cabinet and into an isolated return air path above the drop ceiling. Isolating the hot return air improved the use of cool air and allowed the total number of computer room air-conditioning units to be reduced from 16 to three. A fourth unit serves as backup and is normally shut off.
Cable trays were mounted on the cabinets rather than anchored to the ceiling above. This allows convenient access to network and power connections and makes it easier to move cabinets if necessary. By installing overhead power and cabling, Orlando Health expects to decrease expenses in the long run through reduced maintenance and the elimination of re-work and excessive cleaning.
Patricia Wood, the Orlando Health technical services manager who managed the building of the new data center, says 120 new cabinets were installed, double the number in the old facility. Many are not yet needed and are reserved for expansion. Management chose to lease a new location and has migrated servers and storage controllers to the new data center in three phases.
The contemporary cabinets allowed Orlando Health to free up space and house more rows of cabinets and equipment because the air-conditioning needs were greatly reduced. Cabinets can easily be serviced or replaced because of the overhead power and cabling. Their glacier white color also reduces lighting costs.
Orlando Health removed an existing raised floor to create additional height in the space, allowing for overhead power and network cable pathways, and a larger return space above the drop ceiling for the hot exhaust air. These changes accommodate more equipment than would typically be allowed in a traditional layout.
|Photo Credit: BendBroadband
One of two KyotoCooling wheels being installed at the new BendBroadband Vault under construction in 2011
With a slab floor, the data center eliminated the airflow restrictions that a plenum under a raised floor can create, which would require excess air to be pushed through as makeup air. Air can therefore be moved more efficiently and ultimately cool more with less. The slab floor is also much cleaner: dust and other particles that collect in under-floor plenums are no longer drawn into the front of systems in the room, a problem that can force the systems to work harder to take in air.
Chatsworth also designed, specifically for Orlando Health, air boxes for side-breathing switches, which are installed in the same cabinets as the servers. Having the switches in the same cabinets had created airflow issues that increased the heat load within the cabinet. The air boxes allow cool air to circulate around the switches within the cabinets, providing an additional flexible and energy-efficient element in the overall design and reducing the amount of cabling routed outside the rack. Chatsworth is now selling the air boxes to other customers, according to Wood.
Wood says, “We’ve had the same cooling capacity, twice the amount of floor space to cool, and we don’t have any type of heat issues at all. In fact, the temperature is staying 10 to 12 degrees cooler than what our other facility was [cooled at].”
Technology Evolution at Avnet
Unlike Orlando Health, Avnet Inc. modernized its 25-year-old data center over the past eight years, in what Bruce Gorshe, Avnet’s director of data center operations, described as a continual evolution of technology.
Avnet, a Fortune 500 company, describes itself as one of the largest global distributors of electronic components, computer products, and embedded technology with revenues of $25.5 billion in fiscal year 2013. It is headquartered in Phoenix, AZ.
Avnet’s 13,000-square-foot data center is made up of four distinct areas. The back room contains the auxiliary power equipment, including the 2-MW standby Caterpillar diesel generator and an Emerson UPS. The network center room holds the routers and other infrastructure, while the front room houses the large storage area networks, along with servers and cabinets. The command center is where the technicians and analysts work; environmental monitoring of both indoor and outdoor air is done there.
|Photo Credit: Orlando Health
In Orlando Health’s data center, Chatsworth’s vertical air ducts are installed next to TeraFrame cabinets. The air ducts direct air out of cabinets and into an isolated return air path above the drop ceiling. Note the cable trays mounted on top of the cabinets and the overhead power lines and cabling above them.
A 500,000-square-foot warehouse is located nearby and serves as the company’s logistics center, where orders are packed and shipped. Gorshe describes the revamping of the lighting infrastructure, which reduced cooling loads by 40 to 50 tons. In the warehouse, 450 metal halide lamps were replaced with T5 fluorescents (five tubes in each fixture) along with motion detection sensors. The T8s in the data center were also replaced with T5s. Daylight sensors were installed to turn off the lights in the data center when outside light is abundant. Gorshe says everybody was excited about the new lighting and the savings it brought. The two electric power companies—Arizona Public Service and Salt River Project—provided studies, guidance, and rebates.
The air-conditioning systems that cool the data center are a bit unusual. Gorshe says the Arizona air is full of particulates and requires heavy filtration if it is used. To save money, management instead chose a system in which Freon is circulated through the data center’s 28 air-conditioning units, which range in size from 3 tons to 20 tons. Massive circulation fans on the roof—5 kW each—blow air across radiators through which the Freon circulates, cooling it.
A separate air conditioner with an air-side economizer brings in outside air when it is 10 degrees lower than the data center return air. Gorshe says the data center intake air temperature is set at 72–73°F, while return air is 85–90°F. He says they only need a 10-degree difference between intake air and outside air.
“It used to be intake air couldn’t be more than 72 degrees, but it has evolved,” he says. “You can run a system with intake air up to the 80s and 90s.”
For every one degree the temperature in the data center is raised, you save 3% to 4% in cooling power, Gorshe adds, and he will be raising the intake air temperature over the next two years to 78–79 degrees, as discussed above.
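Taken at face value, that rule of thumb compounds with each degree. The short Python sketch below estimates the effect of a setpoint increase like Avnet’s; the compounding assumption, the 3.5% midpoint, and the 100-kW baseline are illustrative assumptions, not figures from Gorshe.

```python
def cooling_power_after_raise(baseline_kw: float, degrees_raised: float,
                              savings_per_degree: float = 0.035) -> float:
    """Estimate cooling power after raising the intake-air setpoint, assuming the
    quoted 3-4% saving (midpoint 3.5%) compounds with each degree raised."""
    return baseline_kw * (1.0 - savings_per_degree) ** degrees_raised

# Raising intake air from roughly 72-73 F to 78-79 F is about a 6-degree change.
print(round(cooling_power_after_raise(100.0, 6), 1))   # ~80.8 kW on a 100-kW baseline
```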
Air-conditioning units are under the data center’s raised floors, and hot and cold aisle containment has been achieved using plastic curtains of the kind typically used at warehouse loading docks. Exhaust air ducts are located in the hot aisles, and the exhaust air is drawn into the air-conditioning units. Gorshe says it is very efficient.
Seeing Into the Servers
Data center infrastructure management (DCIM) integrates IT and facilities services, creating a data center–wide view. It includes planning, management, optimization software, and services for space, power, and cooling within the data center. The software reports power usage effectiveness (PUE), provides automated control, data center visualization, scenario analyses, and integrated tool management.
Virtualization software partitions a physical server into smaller virtual servers to help maximize server resources. Each virtual server runs its own operating system and can be independently rebooted. Virtualization also conserves space, since several physical machines can be consolidated into one server running multiple virtual environments.
Keeping track of these now-invisible virtual servers creates an opportunity for tracking software. Furthermore, on the facilities side of the data center, these massive consolidations create hotspots and the need for more variable power and cooling management. DCIM software can fill these roles.
“People really are using servers at 10% capacity, and this means they are not being used effectively,” says Terrence Clark, senior vice president for infrastructure management solutions at CA Technologies, headquartered in Islandia, NY, one of several companies offering DCIM software. “There are capacity management products that analyze the processing and storage usage of servers going out into the future. Then, the DCIM software will identify the additional capacity. We are marrying together capacity in the physical area and capacity in computing and storage.”
Clark commented on the changes that have occurred in data centers over the past 15 years. “Data centers are now more powerful, servers are smaller, and heating and cooling is more dynamic. And, management is very different. Instead of filling out spreadsheets, software collects information, such as power consumption, in real time. Temperature parameters can be set and monitored, and signals can be sent to variable frequency drives in cooling units to speed up or slow down fans to reduce power consumption.”
In another example, Clark says if the uninterruptible power supply is taken out for maintenance, IT staff needs to know what impact this will have on any given server. DCIM software can help IT staff identify the root cause of any issue more quickly, “because you know the relationships in the infrastructure,” he says.
Clark says in the past it didn’t matter where equipment was placed in a data center. However, as servers became more powerful, organizations needed to understand what equipment they have and where it is connected, and DCIM can help with that.
Clark uses an example of a financial services company that wants to expand its product line to offer mobile banking. The IT requirements include CPU, memory, and network specifications. Using capacity management, the company can see if they have the capacity to add this IT infrastructure. If the data center needs additional equipment, DCIM software can identify where to place it, and what additional power is needed. Once the mobile banking application is developed, the software can provide the assurance the IT infrastructure will deliver the product.
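The article does not describe CA Technologies’ interfaces, but the placement logic Clark describes can be pictured with a simple, hypothetical Python sketch. The Rack structure, field names, and figures below are invented for illustration, not drawn from any DCIM product.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rack:
    name: str
    power_capacity_kw: float   # rack power limit at the PDU or breaker
    power_used_kw: float       # power currently drawn
    free_rack_units: int       # open U positions for new equipment

def place_new_servers(racks: List[Rack], needed_kw: float, needed_u: int) -> Optional[Rack]:
    """Return the first rack with enough spare power and space, or None if the
    data center needs additional equipment."""
    for rack in racks:
        headroom = rack.power_capacity_kw - rack.power_used_kw
        if headroom >= needed_kw and rack.free_rack_units >= needed_u:
            return rack
    return None

racks = [Rack("A01", 8.0, 7.5, 10), Rack("A02", 8.0, 4.0, 18)]
target = place_new_servers(racks, needed_kw=2.5, needed_u=4)
print(target.name if target else "additional capacity required")   # A02
```

In practice, DCIM tools weigh many more constraints, such as cooling capacity, network ports, and redundancy zones, before recommending a placement.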
“It’s really being agile, delivering the application on time, that determines the company’s success,” says Clark.
When a business sees demand increasing to the point where it needs to consider expanding its IT services and air-conditioning requirements, IT management must decide whether to build another data center at great expense—perhaps as much as $500 million—or whether it can develop the stranded capacity in its existing servers, using DCIM or similar software to identify the capacity it needs and defer the expense. “This can provide value to the business,” says Clark.
Generally speaking, Clark says, there are always different ways to improve the thermodynamics of the heating and cooling systems. In the newer, smaller data center systems, heat is more localized and this has allowed engineers to innovate.
“You only want to cool spaces where heat is generated.”
Avnet is using DCIM software by Vigilant, and the data center has cut air-conditioning costs in half, says Gorshe.
“First, all the computer room air conditioners were moved, so they were clustered together. This prevents one conditioner humidifying on one side of the room, and another dehumidifying the air on the other side. Vigilant software runs an algorithm to idle less efficient cooling units if they are not needed. Previously, all air conditioning units were run 24/7, at 20 kilowatts to 30 kilowatts each. Further, Vigilant deployed 200 sensors throughout the data center, and these sensors report temperatures to the software which adjusts the running times of the air-conditioning units.”
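Gorshe does not describe the algorithm in enough detail to reproduce it, but the general idea (sensor readings deciding how many cooling units need to run) can be sketched loosely in Python. The setpoint, the per-unit cooling figure, and the function itself are illustrative guesses, not the vendor’s logic.

```python
import math

def units_to_run(sensor_temps_f, setpoint_f=75.0, per_unit_cooling_f=3.0):
    """Decide how many cooling units to keep running based on how far the hottest
    rack-inlet sensor reading is above the setpoint. Purely illustrative; a real
    DCIM controller also models airflow, unit efficiency, and redundancy."""
    excess = max(0.0, max(sensor_temps_f) - setpoint_f)
    return max(1, math.ceil(excess / per_unit_cooling_f))

readings = [72.5, 74.0, 79.5, 73.0]    # degrees F reported by sensors around the room
print(units_to_run(readings))           # 2 units needed; the rest can be idled
```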
Avnet also installed CA Technologies’ ecoMeter to monitor and report on power consumption in the data center and get an accurate measure of its PUE. Gorshe says that for a 25-year-old data center, one would expect the PUE to be 2.0 to 2.1. Avnet’s data center, however, has a PUE of 1.6.
|Photo Credit: Airius
The Air Pear’s air turbine moves a column of air from a high ceiling to the floor.
Gorshe says, “Our sweet spot is monitoring at the breaker level. There will be more standards and controls coming out, and the benefit is knowing what your numbers are. By seeing how much power you are drawing at any one time and where you’re drawing it, you can shift the work load to different times and save dollars by taking advantage of lower electrical rates.”
He says the data center is also using virtualization. It used to have one server per application and operating system, plus physical storage, and this was very inefficient: it translated into a large number of employees and high power usage. By partitioning mid-range and small servers, central processing unit utilization can be increased to 90% to 100%, increasing efficiency and return on investment. The data center now has 2,500 systems on approximately 550 physical servers.
Gorshe says they’ve done the same with storage by going to a storage area network, eliminating tape drives and updating to disk drives. Instead of backing up redundant copies of data, the data center now uses deduplication, he adds. It is a specialized data compression technique for eliminating duplicate copies. A number of products on the market group similar data, then identify duplicates and replace them with pointers, or small references, to the original data. Gorshe says this technique has reduced backups by 92%.
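As a simplified picture of how those pointers work (a generic content-hash approach, not the particular product Avnet uses), fixed-size-chunk deduplication can be sketched as follows; real products typically use variable-size chunking and more compact references.

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks, store each unique chunk once, and
    record a pointer (the chunk's hash) for every occurrence."""
    store = {}      # hash -> chunk bytes, kept only once
    pointers = []   # ordered hashes from which the original data can be rebuilt
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        pointers.append(digest)
    return store, pointers

store, pointers = deduplicate(b"abcd" * 4096)    # highly repetitive sample input
print(len(pointers), "chunks referenced,", len(store), "stored")   # 4 chunks referenced, 1 stored
```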
Kyoto Cooling Wheel
The Kyoto cooling wheel was invented by four young engineers in Amersfoort, The Netherlands, and named after the Kyoto Accord. There are now 76 installations located in 10 countries using the technology patented by KyotoCooling B.V., founded in 2007.
Dr. Bob Sullivan, a senior staff member with the Uptime Institute, has been a supporter of the Kyoto wheel. Interviewed for a 2011 story on GreenBiz.com, he describes the Kyoto fan as an aluminum honeycomb wheel that absorbs heat in one airflow stream and dissipates it in another.
In a case study of the State of Montana’s new data center written by Chatsworth Products Inc., Sullivan explains how the cooling wheel works.
“There are two isolated circulation paths, rather than an intake and exhaust path,” he says. “The computer room is configured with an isolated hot aisle where the hot air is circulated. The wheel is constantly rotating at low speed, and the hot wheel rotates into a chamber where outside air is circulated through the honeycomb. The wheel rotates in a stream of Montana’s cooler outside air, which is returned to the ambient environment carrying the computer room heat load with it.”
Further increasing energy savings in the new data center, Chatsworth Products’ vertical exhaust ducts were installed in cabinets and work in partnership with the KyotoCooling fan. They direct hot air out of the cabinets and into an isolated return path above the drop ceiling without fans. This allows the hot air to be completely separated from the cool air supply.
This free cooling produces large energy savings with the elimination of compressors and traditional cooling equipment—80% more energy efficient, according to the planning manager in Montana’s Architecture & Engineering Division.
In the GreenBiz.com interview, Sullivan says this cooling technique also isolates the computer room from the ambient environment and all the problems associated with bringing huge volumes of outside air into the data center, including dirt, fine combustion particles, gases, and air with a low or high dew point.
The wheel was designed specifically for data centers and is scalable from 100 kW to 850 kW of cooling capacity. According to KyotoCooling, the wheel can deliver up to 85% savings over traditional data center air handling solutions.
According to its American distributor, Air Enterprises, energy efficiency measurements in American and European data centers where Kyoto wheels have been installed show PUE values ranging from 1.05 to 1.15.
The Vault Reduces Cooling 35%
BendBroadband, a cable company headquartered in Bend, OR, built a data center called the BendBroadband Vault that provides colocation, cloud, backup, and disaster data center services to the area. Built in 2011, it was LEED Gold certified the same year and won EPA’s Energy Star designation for energy efficiency. It holds a Tier III certification from the Uptime Institute for construction and security.
BendBroadband Vault installed two KyotoCooling wheels, producing a 35% reduction in cooling loads and a 62% reduction in interior fan power. According to its website, a 152.9-kW, 624-panel solar array provides 16% of the data center’s energy needs during peak daylight hours, or 3% of total building power use. The solar panels cover 15,000 square feet on the south-facing roof of the 30,000-square-foot building, which was adapted from two warehouses. The facility has achieved a PUE of less than 1.2.
BendBroadband also purchases the remaining power not supplied by the PV panels from renewable sources through Pacific Power’s Blue Sky Green-e Energy program.
Air Pear Increases Air Flow
The Air Pear is another development in cooling. Airius founder Ray Avedon, who invented the Air Pear, worked with company engineers to design the air turbine to continuously move a column of air from a high ceiling to the floor and mix the warm air with the cold air near the floor. According to the company, this can lower HVAC operating costs, increase comfort, balance air temperature and humidity, and improve indoor air quality.
Stator blades within each Air Pear increase airflow circulation in both summer and winter months. A typical Airius installation includes a series of units mounted just below the ceiling, evenly spaced throughout a facility, working in concert to improve comfort and reduce HVAC energy consumption.
Taylor Horowitz, a sales representative with Airius, says the Air Pear has been installed in at least one data center and can provide significant benefits there.
Author’s Bio: Lyn Corum is a technical writer specializing in energy topics.