IT Services is focused on delivering a multi-level storage infrastructure, based on a low-cost foundation, which provides a set of storage solutions for the most common needs across the university. There are areas where a central service can provide the maximum benefit for the least cost, leveraging economies of scale. These storage solutions should be available with ranges of pricing, security, reliability, and availability that can be matched to the requirements of the data being stored. Users should be provided with the tools to simplify ordering, using, and managing storage and backups.
Every service requires storage, and it is often a major cost component. It is a fundamental need of each school and department to protect their information and intellectual property. Considerable resources are required to do so, and the backup demand should be aggregated and satisfied by a central service to achieve economies of scale, consistent policies, and attractive rates. Business Continuity and Disaster Recovery (BCDR) strategies also rely heavily on storage infrastructure.
For small allocations (less than 100GB) of storage without any special requirements, AFS (Andrew File System) file servers are cost-efficient and already integrated with existing services. By continuing to build on that infrastructure, IT Services can improve client experiences with central storage incrementally and without significant investment in deploying new services. The community is expressing strong demand for low-cost central file services, which is projected to become a major new business for IT Services. Individual schools, departments, and even workgroups in IT Services currently run dedicated file servers; aggregating that demand and satisfying it with a scalable and cost-effective central service will help protect Stanford's information.
Technology trends that IT Services is following as it develops its strategy in this area:
- Server virtualization is becoming the default method of increasing functionality without adding infrastructure, and it drives consolidation from isolated server-based storage to central shared storage.
- A shift from tape to disk as the backup media target, through virtual tape libraries (VTL), disk-to-disk replication, and combinations of the two that leverage faster and more flexible media.
- Data de-duplication is an active area of industry development. It increases storage efficiency and counteracts the proliferation of duplicate data generated by poor data management practices and virtual server sprawl.
- Pooled and tiered storage within storage systems (e.g., high-speed Fibre Channel (FC) or Serial Attached SCSI (SAS) disks and low-speed Serial ATA (SATA) disks in the same system).
- Utilization improvements through de-duplication, thin provisioning and automated data movement between tiers.
- Convergence on 1Gbps and 10Gbps Ethernet as the common networking technology.
- NFSv4 continues to fall short of most of its promises; the advanced management capabilities that would allow it to duplicate the functionality of AFS still see little real-world deployment.
- SMB 2.0 is the latest revision to the CIFS/SMB (Common Internet File System/Server Message Block) protocol offering increased scalability and data integrity. Deploying solutions that fully support the specification is essential.
- Replacement of standalone CIFS/SMB file servers with dedicated appliances.
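The de-duplication trend noted above rests on content-addressed chunking: identical chunks hash to the same key and are stored once. The following is a minimal illustrative sketch, not any vendor's implementation; it uses fixed-size chunks and an in-memory dictionary, whereas production systems typically use variable-size chunking and persistent stores.

```python
import hashlib

def dedupe_store(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks keyed by SHA-256; duplicate chunks are stored once."""
    store = {}   # content hash -> chunk bytes (each unique chunk stored exactly once)
    recipe = []  # ordered list of hashes needed to reconstruct the original data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # only the first copy of a chunk is kept
        recipe.append(digest)
    return store, recipe

def reconstruct(store: dict, recipe: list) -> bytes:
    """Rebuild the original byte stream from the chunk store and the recipe."""
    return b"".join(store[h] for h in recipe)
```

For data with many repeated blocks (virtual machine images, user home directories), the chunk store grows far more slowly than the logical data, which is the utilization gain the trend list describes.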
Strategy
- Replicate data into redundant data centers and de-duplicate data in each data center, but not between them.
- Build archive and backup services that are attractive to researchers.
- Pursue a centrally funded backup offering combined with file storage.
- Deploy a reporting interface for users to view usage, billing, and settings.
- Make adjustments to the backup service so that data is protected soon after it is changed.
- Work with individual schools and departments to identify data classification and storage optimization issues.
- Evaluate cloud storage options.
- Increase the use of storage efficiency technologies.
- Continue to leverage external providers for desktop and laptop backup.
- Characterize data based on access method, performance, and restore process to best align services with needs.
- Replace storage platforms that do not offer storage efficiency technologies with ones that do.
- Deploy a central reporting tool to satisfy chargeback, capacity-planning, and performance-reporting needs.
- Advertise WebAFS (FileDrawers) as a web interface to AFS.
- Develop a rate model and charging system for larger AFS quota increases.
- Consistent with the university's information security policy, build separate (or sufficiently segregated) environments for Restricted Data using separate servers only accessible via VPN (virtual private network) or WebAFS (FileDrawers).
- Deploy the new NetApp system with disk-to-disk replication between campus and Livermore.
- Migrate existing clients from CIFS file servers and EMC Celerra onto NetApp.
- Identify use cases for cloud storage evaluation.
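Protecting data "soon after it is changed" implies an incremental, change-detecting backup rather than a fixed nightly window. The following is a hypothetical sketch of that idea (the function name and hash-index approach are illustrative assumptions, not the actual backup service): each run compares a content hash against the previous run's index and copies only files that changed.

```python
import hashlib
import os
import shutil

def incremental_backup(src_dir: str, dst_dir: str, index: dict) -> list:
    """Copy to dst_dir only the files whose content changed since the last run.

    `index` maps relative path -> last-seen SHA-256 digest and is updated in place,
    so running this function frequently approximates near-continuous protection.
    """
    changed = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            rel = os.path.relpath(path, src_dir)
            if index.get(rel) != digest:          # new or modified file
                dest = os.path.join(dst_dir, rel)
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(path, dest)          # preserve timestamps/metadata
                index[rel] = digest
                changed.append(rel)
    return changed
```

Run on a schedule measured in minutes (or triggered by file-system change notifications), an approach like this narrows the window of unprotected changes compared with a once-daily backup.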
Measures of success
- Lower per-unit rates.
- Client feedback on rate changes and reporting tool.
- Results of the BCDR (Business Continuity and Disaster Recovery) test.
- Better utilization will lead to charging back more of the purchased storage, resulting in lower per-unit rates.
- Increased number of clients, including penetration into the research computing arena.
- Usage statistics for WebAFS (FileDrawers), paid AFS space, restricted AFS space, and CIFS space.
- Reduction in the number of file servers managed by IT Services, Computer Resource Consulting, and clients.