IT Services manages physical buildings on campus which contain the server and network hardware for data and communications, with specialized requirements for power and cooling. Stanford's data center qualifies as a Tier 2 facility, which means it has the backup systems to provide at minimum 99 percent reliability, and it is also extremely energy-efficient. Power consumption is a key metric for data centers, since it takes considerable energy to keep the machines cool, in addition to the energy required to run the actual systems, the UPS (Uninterruptible Power Supply) systems, DC (Direct Current) systems, and HVAC (Heating, Ventilation, and Air Conditioning) systems. Using the de facto standard for measuring data center efficiency, Power Usage Effectiveness (PUE), the data center at Forsythe rates as efficient.
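PUE is the ratio of the total energy entering the facility to the energy actually consumed by the IT equipment:

    PUE = total facility energy / IT equipment energy

A PUE of 1.0 would mean every watt delivered goes to computing; the closer a facility gets to 1.0, the less energy is being spent on cooling, power conversion, and other overhead.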
Not surprisingly, the data center is barely able to keep up with demand, which is increasing at a rapid rate. The data center group supports the needs of the general campus and hospital as well as some research computing, which has five times the power requirements. As a result, there are plans to create new data center spaces capable of supporting higher density systems. Colocation — having third parties host Stanford equipment at their data centers — is another possibility that is under review.
IT Services manages three data center computing facilities. The primary computing facility is located at Forsythe Hall, and contains approximately 17,000 square feet of raised floor. While this three-story facility contains electrical and mechanical elements found in most traditional legacy data centers, there are some notable enhancements that allow for greater redundancy and flexibility during times of maintenance or distress. The typical customers of the Forsythe data center are those that, due to the critical nature of their computing systems, require maximum uptime, support, and security. This facility is staffed around the clock. To meet increased demand, the Data Center group is wrapping up Phase 2 of a renovation project at Forsythe, adding 35 more computing cabinets, six of which will support up to 16kW of computing load per cabinet.
A secondary computing facility is located at Sweet Hall, and contains approximately 1,800 square feet of non-raised floor. Because of its location, lack of services, system capacities, and redundancies, its use is limited to non-essential systems and applications. It is operated as a lights-out facility. Continued use of this site is currently under review. There is also an Auxiliary Data Center (ADC), which is located in the Tri-Valley area. The ADC was commissioned in April 2009 to support the University's business continuity needs in the event that a level 3 or greater incident on the Peninsula were to adversely affect the Forsythe data center. This facility is equipped with UPS (Uninterruptible Power Supply) and emergency generator capabilities, and operated as a lights-out environment.
In addition, the Data Center department is also in charge of 13 Electronic Communications Hubs (ECHs), which are distributed throughout the campus and at select off-campus locations. These small facilities are primarily used to support the network and communications infrastructure that distributes voice and data services to the campus and hospital communities. Each of the ECHs is equipped with back-up generator capabilities, and most are outfitted with UPS system protection; all are operated as lights-out facilities.
The Forsythe data center has incorporated temperature control monitoring systems to ensure that computing equipment is receiving appropriate airflow to achieve maximum useful equipment life. This system has allowed for hot spots to be identified and remediated. An under-floor pressure monitoring system is also in place and allows for the measurement and management of the air pressure under the raised floor. To most effectively manage the temperature, air pressure, and electrical and emergency system components, wireless sensors and current transformers (CTs) have been deployed in Forsythe to bring all the data points together into a unified monitoring application. Localized cooling systems, in the form of computer room air handlers (CRAHs), are being used to address hot spots. Installation of variable fan drives (VFDs) has allowed us to reduce the speed of the CRAHs, resulting in substantial chilled-water cost savings; the combination of the two has cut those cooling bills by 40 percent.
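As a rough illustration of how such sensor data can be turned into actionable information, the sketch below flags racks whose inlet temperature exceeds a threshold; the threshold value, rack identifiers, and reading format are assumptions made for the example, not the actual Forsythe configuration.

```python
# Illustrative sketch only: a minimal hot-spot check of the kind a unified
# monitoring application might perform. The 27 C inlet threshold and the
# sensor-reading format are assumptions, not Forsythe's actual configuration.

def find_hot_spots(inlet_temps_c, threshold_c=27.0):
    """Return rack IDs whose inlet temperature exceeds the threshold.

    inlet_temps_c: dict mapping rack ID -> latest inlet temperature (Celsius),
    as reported by the wireless sensors.
    """
    return sorted(rack for rack, temp in inlet_temps_c.items() if temp > threshold_c)

# Example readings (hypothetical):
readings = {"F1-R07": 24.5, "F1-R12": 28.3, "F2-R03": 26.9}
print(find_hot_spots(readings))  # ['F1-R12'] -> candidate for CRAH or louver adjustment
```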
In terms of the electrical systems, localized electrical distribution products, such as Starline Track Busway and in-row FDC units, are being incorporated into the design to minimize the cost of branch-circuit adds, moves, and changes. In addition, using these types of distribution systems has allowed the data center team to reduce the amount of under-floor electrical cabling that would have otherwise negatively impacted the flow of air under the raised floor. The use of these electrical distribution systems has also helped to reduce electrical contractor costs by 60 percent. The use of overhead ladder racking systems to support low-voltage cabling distribution has also reduced under-floor clutter, and has become the standard design for areas being repurposed and renovated.
Aperture, a new system for data center facilities management, is in the process of being installed. It provides asset management, a way to document the floor layout and what is in each rack, forming an important database of details needed for efficient operation. It is a consolidated, web-based platform that will store data collected by many Computing Services workgroups to inventory all passive and active equipment hosted in the IT Services-managed spaces. It will include links to power and cooling records so that impacts to electrical and mechanical capacities can be understood before equipment is installed within the data center; provide improved planning and reporting tools to ensure that available resources such as billing, space, and power are being optimized; and provide the ability to accurately predict, and consistently report on, available space and utilities to aid in short- and long-range facility planning.
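For illustration only, the sketch below shows the kind of asset record such a platform consolidates, linking an installed system to its owner, rack location, and supporting power circuit; the field names and values are hypothetical and do not reflect Aperture's actual data model.

```python
# A minimal sketch of the kind of asset record such a database consolidates.
# Field names and values are illustrative assumptions, not Aperture's schema.
from dataclasses import dataclass

@dataclass
class HostedAsset:
    asset_tag: str         # inventory identifier
    owner: str             # department or research group billed for the space
    location: str          # facility / floor / rack / rack unit
    branch_circuit: str    # electrical circuit feeding the equipment
    rated_power_kw: float  # nameplate or measured draw, for capacity checks

example = HostedAsset(
    asset_tag="ST-000123", owner="School of Engineering",
    location="Forsythe/2F/Rack F1-R12/U24",
    branch_circuit="PanelA-CB14", rated_power_kw=0.45,
)
```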
A new Capacity Manager position has recently been filled to focus on capacity planning. As more capacity is deployed and a history of it develops, comparing forecasts with what actually happened will become a powerful tool for future capacity planning. It will also help home in on the right dimensions of data collection: which variables are critical, which are less important, and what level of detail is worth collecting.
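A minimal sketch of that forecast-versus-actual comparison is shown below; the quarterly load figures are hypothetical and serve only to show the kind of check that becomes possible as the history grows.

```python
# Illustrative sketch: comparing a capacity forecast against actuals, the kind
# of check the Capacity Manager role could run as history accumulates.
# The numbers below are hypothetical.

def forecast_error(forecast, actual):
    """Return the signed percentage error of each period's forecast vs. actual."""
    return [100.0 * (f - a) / a for f, a in zip(forecast, actual)]

forecast_kw = [410, 430, 455, 480]   # projected critical load per quarter
actual_kw = [405, 445, 450, 495]     # measured critical load per quarter
print([round(e, 1) for e in forecast_error(forecast_kw, actual_kw)])
# [1.2, -3.4, 1.1, -3.0] -> a systematic bias in the forecasts would show up here
```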
The Data Center group is about to embark on Phase 3 of its renovation of Forsythe, which will involve building an additional 5,000 square feet of computing space. The intent is to create a space that is well supported in terms of infrastructure but allows researchers to have greater control of their hosted space, an environment as similar as possible to the space they might have in their own schools in terms of computer utilization. Detailed programming of the allocated space will be essential to determine design criteria, the pricing model, and use processes for this set of clients. With this experience, the schools will be in a better position to make the financial decision to continue the low-cost hosting or move up to the more feature-rich, default level of services predominantly used by the business community. The goal is to increase usage of the enterprise Data Center among campus departments and encourage them to take advantage of an efficient, cost-effective, sustainable, and technologically current operation. However, many campus organizations have opted for local computer facilities. With concerted and creative effort, the value proposition of IT Services' Data Center and hosting facilities will be so compelling that schools, departments, and hospitals will no longer feel the need for local hosting and, in fact, will prefer to have their needs met by IT Services' offerings.
A new multimillion dollar data center devoted specifically to research computing has also been placed on a fast track. The idea is to use a modular building system that will permit future expansion; each module will contain 180 computing cabinets, with 20kW per cabinet. Thinking ahead, the Data Center group is also going to do a colocation feasibility study: the costs and the network connection provided are two key issues.
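As a rough capacity check, each module at full build-out would represent 180 × 20 kW = 3.6 MW of computing load, before the additional cooling and distribution overhead reflected in the facility's PUE.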
In the pursuit of ever-greater energy efficiency, some innovations have recently been implemented. Should they result in substantial energy savings, these technologies will continue to be used. In the Forsythe HVAC systems, motorized louvers will be installed into the CRAH ductwork; they are designed to seal off the floor when a cooling system is turned off so that the cool air will not backdraft into the return air plenum. Another new technology being deployed is computer cabinets with rear door coils; the coils act as radiators do on cars, removing the heat being discharged from the server rather than exhausting it into the room (and raising the temperature).
To help with short- and long-term facility planning, a utilization history of the data center, an archive of previous usage, has been established. Some details are already readily available, and more will be developed as metrics and measurements are put into the everyday process of data center operation. In particular, this information should capture time-of-year and time-of-day maximums and minimums. At a minimum, this history will record events of a magnitude that impacts client services, and it will eventually grow to include what remediation was put into place to ensure such events do not affect clients in the future.
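The sketch below illustrates one such summary, extracting time-of-day maxima from logged readings; the timestamps and load values are hypothetical.

```python
# Minimal sketch of the kind of summary a utilization archive could support:
# extracting time-of-day maxima from logged readings. Values are hypothetical.
from collections import defaultdict

def peak_by_hour(samples):
    """samples: iterable of (hour_of_day, value). Returns {hour: max value}."""
    peaks = defaultdict(float)
    for hour, value in samples:
        peaks[hour] = max(peaks[hour], value)
    return dict(peaks)

critical_load_kw = [(9, 512.0), (14, 548.5), (14, 551.2), (23, 487.3)]
print(peak_by_hour(critical_load_kw))  # {9: 512.0, 14: 551.2, 23: 487.3}
```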
A few enhancements have been considered but are not currently part of the strategic plan. In order to know the electrical capacity and its current state, more detail on the power being used in each branch circuit would help in making sound decisions regarding area development and expansion; however, the cost of implementing a branch-circuit power monitoring system remains high enough to make it impractical for now. To simplify asset management, IT Services at large is considering an RFID (Radio Frequency Identification) system. A revised process for manually scanning equipment in Forsythe has proved easy enough that an RFID solution is not deemed immediately necessary, though it may be implemented in the future.
- Improve compliance-reporting capabilities by developing automated means to perform quarterly audits of "who can get in," so that authorized personnel must re-validate access need at prescribed intervals.
- Continue review process for each facility in order to determine strengths and weaknesses. Develop action plans to remediate weaknesses so that greater fault tolerance can be achieved.
- Solidify partnerships with hospital maintenance teams to improve support at sites that fall under their areas of responsibility.
- Incorporate sound design principles in new and existing computing facilities that address the space, power, and cooling architectures of modern computing equipment, as well as monitoring and historical archiving.
- Develop computing Facility Design Standards (FDS) that clearly articulate design methods and specialized systems for central computing and communications facilities. Based upon programmatic goals, incorporate environment-friendly, modular design principles to maximize flexibility.
- Bring additional clients into the Livermore facility. Additional business continuity requirements will undoubtedly arise and will need to be collected, along with their cost implications.
- Improve utilization history: Identify what details are at hand, what systems can feed this historical data, and what tools and expertise are needed to aid in making decisions based on the data available.
- Evaluate the new automated environmental monitoring now used in Forsythe Hall's data center and assess its suitability for use in Livermore, the main ECHs, and the proposed data center.
- In Phase 3, develop a basic hosting service, including a design, use processes, and a pricing model that is cost-competitive with running servers in-house.
- Generate cost comparisons of hosting in IT Services' data centers vs. hosting in commercial colocation facilities.
- Upgrade infrastructure in the ECHs in preparation for an expected shift in the type of equipment housed there, moving away from legacy telecom equipment.
- Evaluate options for further growth of the administrative data center: additional renovation phases in new parts of Forsythe, and reworking the second floor both to take advantage of space freed up by shifts in telecommunications technology and to increase rack density and improve the server-to-rack ratio.
- Explore chilled-water options to make the best use of the cooling capacity they provide.
For the Forsythe Data Center:
- Complete Forsythe Phase 3 renovation project.
- Install motorized louvers on CRAH systems.
- Install overhead ladder racking in operational zones 1 and 2.
- Expand Building Management System (BMS).
- Perform colocation feasibility study.
For Sweet Hall:
- Determine future use of Sweet Hall data center space.
For the ECHs and other IT Services facilities:
- Perform back-up battery refresh at 6 ECH sites.
- Expand UPS system at 1 ECH site.
- Upgrade HVAC system at 1 ECH site.
- Upgrade back-up HVAC system at CATV head-end.
- Upgrade to auto-start generator at 1 ECH site.
- Install auto-start generator at Pine Hall.
For asset management:
- Complete Aperture Phase 1 roll-out in early Fall 2010.
- Perform an audit linking each branch electrical circuit to the customer equipment it serves; incorporate the results into Aperture.
Measures of success
- A usable archive of the data center's utilization history, including rack inlet temperatures, under-floor air pressure, total critical power in use, PUE trends, chilled-water supply and return temperatures, UPS battery status, and other metrics as determined.
- A set of FDS and run books available for many of the infrastructure systems in the data center.
- An up-to-date asset and location database for the Forsythe facility, available online to selected individuals and groups.
- More campus customers wanting to host systems in Forsythe Hall. These customers could be research clients, or from the schools or the hospital, with servers that run specific applications unique to their businesses.