The total cost to set up and build out a single data center now routinely approaches one billion dollars for the big players such as Microsoft, Google, Yahoo, Facebook, and large financial institutions. Third-party "real estate" colocation service providers can set up infrastructure at lower cost, on a continuum that ranges from nearly as good as the big players' facilities to basically a few servers in an office with a technician. Costs to a customer for colocation services are driven equally by technology requirements and the competitive landscape in a particular region.
Whether you operate your own data center or use a colocation provider, operational expenses will remain a significant cost, with the primary cost driver being power. Google has been a pioneer in researching and putting in place methods to increase efficiency and reduce the operational cost of data center operations. It has also taken extraordinary steps to share what it has learned to help others. While the concepts were known before, most would agree that the way Google has executed on them has set a high benchmark for others to follow.
Five key areas identified by Google are:
1. Power Usage Efficiency
2. Airflow Management
3. Temperature Settings
4. Free Cooling
5. Optimized Power Distribution
Power Usage Efficiency (PUE)
Power usage efficiency, simply put, is the total power used by the data center divided by the power actually needed to operate the servers. A perfect score would be 1.0. The average for most corporate data centers is above 2.0, while the most efficient Google and Microsoft centers operate between 1.1 and 1.2. A key point when using PUE is to measure it as close to real time as possible so that adjustments can be made quickly. Best practice is to integrate PUE measurements into the Building Management System, which is continuously monitored.
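As a minimal sketch, the ratio can be computed directly from two meter readings (the kW figures below are hypothetical, not measurements from any particular facility):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage efficiency: total facility power divided by IT load."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings, e.g. pulled periodically from a Building Management System
typical_corporate = pue(total_facility_kw=2100, it_equipment_kw=1000)  # above 2.0
efficient_center = pue(total_facility_kw=1150, it_equipment_kw=1000)   # 1.1-1.2 range
print(f"Corporate: {typical_corporate:.2f}, Efficient: {efficient_center:.2f}")
```

Feeding such readings into the BMS on a short polling interval is what makes the near-real-time adjustment described above possible.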
Airflow Management
Best practice is to use Computational Fluid Dynamics (CFD) modeling to identify hot spots, air mixing and leaks. However, even basic measuring tools can help identify areas to improve. The key is to monitor regularly, as almost no data center remains static for long and even simple maintenance can have an unforeseen impact. Four simple, relatively low cost solutions that often result are:
1. Installing “meat locker” hangers to control airflow
2. Using blanking panels to seal empty rack space
3. Creating simple sheet metal airflow plenums and separators
4. Creating “hot hut” areas in a particular zone
Temperature Settings
In 2008, Christian Belady, General Manager of Microsoft Data Center Services, and his Microsoft coworker Sean James operated a rack of servers in a tent for eight months with perfect uptime. Experiments like this, as well as recommendations by organizations such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), have challenged previous norms about temperature and humidity in data centers. Most data centers now operate with a wider band of acceptable humidity and at warmer temperatures.
Free Cooling
Despite being able to operate at warmer temperatures, data centers still generate a tremendous amount of heat, and operating large air conditioners or chiller units is costly. Free cooling means literally finding no or low cost ways to provide cooling. Much of this depends on the actual location of the data center, with solutions including:
1. Evaporative cooling
2. Ambient cold air
3. Area water sources
Optimized Power Distribution
The delivery of power from the grid to the servers in a traditional data center goes through several conversions. Even small power losses at each of these conversions can, on a cumulative basis, result in excess operational cost of up to $100 per server per year in extreme cases. A benchmark practice is to minimize conversion loss by moving the UPS directly to the server level. This is easier when it is your own data center and when you have more active control over the server design; it can potentially save up to $50 per server per year. To really maximize savings, taking active control of the server design, or at least its configuration, is beneficial. 'Off the shelf' servers are usually not optimized for your specific needs and come with power consuming hardware and features that you do not need.
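To see how small per-stage losses compound, the sketch below multiplies out a chain of conversion efficiencies and prices the lost power. The stage names, efficiency values, server wattage and electricity rate are all illustrative assumptions, not figures from the text or from any real facility:

```python
# Illustrative conversion-stage efficiencies (assumed, not measured).
STAGE_EFFICIENCIES = {
    "transformer": 0.98,
    "ups_double_conversion": 0.92,
    "pdu": 0.98,
    "server_psu": 0.90,
}

def delivered_fraction(efficiencies: dict) -> float:
    """Fraction of grid power that actually reaches the server components."""
    frac = 1.0
    for eff in efficiencies.values():
        frac *= eff
    return frac

def annual_loss_cost(server_load_watts: float, efficiencies: dict,
                     dollars_per_kwh: float = 0.10) -> float:
    """Yearly cost of the power lost in conversions, for one server."""
    draw_watts = server_load_watts / delivered_fraction(efficiencies)
    loss_watts = draw_watts - server_load_watts
    return loss_watts / 1000 * 8760 * dollars_per_kwh  # 8760 hours per year

print(f"Delivered to server: {delivered_fraction(STAGE_EFFICIENCIES):.1%}")
print(f"Annual conversion-loss cost: ${annual_loss_cost(300, STAGE_EFFICIENCIES):.2f}")
```

Under these assumed numbers roughly a fifth of the grid power never reaches the server, which is how cumulative losses reach the tens of dollars per server per year; eliminating a lossy UPS stage shrinks the product directly.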
To make matters worse, off the shelf servers generally do not come with higher end power protection, distribution and monitoring components. This has to do with the usual focus being on processor and memory, as well as server makers trying to cut costs where they can. Even if you are not building your own servers, it is worth confirming with the provider that they are using high end components from a company such as Texas Instruments for critical power features.
Given that most colocation agreements now have actual metered power as the major operational cost element, it is important for anyone with a data center operation to continuously monitor operational performance and improvements for PUE, airflow, temperature, cooling and power distribution. The benefit is lower total cost. For large organizations, the reduced energy footprint is also an important metric for a variety of Corporate Social Responsibility measures.
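With metered power as the dominant line item, the payoff of the improvements above can be sized with simple arithmetic: the annual bill is IT load times PUE times hours times rate. The 500 kW load and $0.10/kWh rate below are hypothetical assumptions for illustration:

```python
def annual_power_cost(it_load_kw: float, pue: float,
                      dollars_per_kwh: float = 0.10) -> float:
    """Yearly metered-power bill implied by an IT load and a facility PUE."""
    return it_load_kw * pue * 8760 * dollars_per_kwh  # 8760 hours per year

# Hypothetical 500 kW IT load: improving PUE from a typical 2.0 to 1.5
before = annual_power_cost(500, 2.0)
after = annual_power_cost(500, 1.5)
print(f"Annual savings from the PUE improvement: ${before - after:,.0f}")
```

Even a modest PUE improvement moves the bill by six figures at this scale, which is why continuous monitoring of all five areas pays for itself.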