Petascale Institute Virtualized Lustre Scalability Research
EMSL Project ID
24101
Abstract
This project will run during MPP2 downtimes so as not to affect other users. Achieving aggregate bandwidth scaling of storage devices with parallel-distributed file systems is a highly desirable goal. Current and future supercomputer clusters will require storage bandwidth sufficient to support thousands to hundreds of thousands of clients, or risk being performance-limited by storage bottlenecks.
We propose to re-task the local I/O subsystems on the HPCS2 cluster currently housed in the Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory in Richland, Washington. The cluster comprises 978 dual-processor HP rx2600 Itanium2 systems, 570 of which have a 0.5 TB RAID file system. We plan to reconfigure these systems to run a virtualized Lustre Object Storage Server (OSS) and client to achieve very wide striping and high aggregate bandwidth. We plan to test the scalability of this approach, which should be linear, over node counts ranging from small to large. Initial tests indicate aggregate performance should scale in excess of 100 GB/s.
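The linear-scaling expectation can be illustrated with a back-of-envelope projection. The per-node bandwidth figure below is an assumption for illustration only (the proposal does not state measured per-RAID rates); it is chosen so that all 570 OSS-capable nodes land near the stated 100 GB/s target.

```python
# Back-of-envelope projection of aggregate Lustre OSS bandwidth under
# the linear-scaling hypothesis. PER_NODE_MB_S is an assumed value,
# not a measured result from the HPCS2 cluster.
PER_NODE_MB_S = 180.0  # assumed sustained RAID bandwidth per OSS node (MB/s)

def projected_aggregate_gb_s(node_count: int,
                             per_node_mb_s: float = PER_NODE_MB_S) -> float:
    """Project aggregate bandwidth (GB/s) assuming perfectly linear scaling."""
    return node_count * per_node_mb_s / 1000.0

# Sweep node counts from small to large, as the test plan describes.
for nodes in (8, 64, 256, 570):
    print(f"{nodes:4d} OSS nodes -> ~{projected_aggregate_gb_s(nodes):.1f} GB/s")
```

Under this assumed per-node rate, 570 OSS nodes project to roughly 102 GB/s, consistent with the initial-test figure quoted above; any deviation from linearity in practice would show up as a shortfall against this projection.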
Project Details
Project type
Limited Scope
Start Date
2007-02-09
End Date
2007-03-12
Status
Closed
Released Data Link
Team
Principal Investigator
Team Members
Related Publications
SC07 Booth Poster