PERCS Productivity Assessment     


Wendy Kellogg, John Richards, Calvin (Cal) Swart

PERCS Productivity Assessment - overview


PERCS is a large project spanning multiple divisions at IBM that is building a complete petascale computer system, including the hardware, the full software stack and the associated networking and storage systems. The work is sponsored by the High Productivity Computing Systems (HPCS) program at DARPA.

The system, which will be capable of performing in excess of a quadrillion floating-point operations per second, depends on the parallel programming model. Once a niche approach used only in High Performance Computing (HPC), parallel programming is rapidly becoming mainstream as laptops and desktop computers incorporate multiple processor cores rather than a single core in their CPUs. The PERCS machine, built from more than 100,000 processors, presents special challenges for the programmers and system administrators who will use it to solve problems in basic and applied science.
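To make the parallel programming model concrete, the sketch below shows the SPMD (single program, multiple data) pattern that most large-scale parallel codes follow: every processor runs the same executable and distinguishes itself by its rank. It is a minimal illustration written in C with MPI for this overview, not code drawn from PERCS itself.

    /* Minimal SPMD example in C + MPI: every process runs this same program
     * and learns its identity (rank) and the total process count (size).
     * Illustrative only; not taken from any PERCS application. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime           */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?             */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total?    */

        /* On a petascale machine, "size" could exceed 100,000. */
        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut the runtime down cleanly   */
        return 0;
    }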

Our focus since 2005 has been to understand human interaction with parallel systems, particularly the writing, porting and optimizing of parallel codes, with an eye towards improving tool support for this work. We have approached the problem of understanding the work of parallel programmers in several ways, including fieldwork, empirical studies, and modeling.

Field Work: We began collecting field data in 2005 at a number of facilities in industry, government and academia where parallel programmers work. Government field sites include Lawrence Livermore National Laboratory (LLNL), the Department of Energy's National Energy Research Scientific Computing Center (NERSC), Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), the National Security Agency (NSA), the National Aeronautics and Space Administration (NASA) and Sandia National Laboratories. Academic sites include the Pittsburgh Supercomputing Center (PSC), the National Center for Supercomputing Applications (NCSA) and the San Diego Supercomputer Center (SDSC). We have also collected data inside IBM across a number of different divisions.

Empirical Studies: We have conducted three comparative empirical studies to understand the challenges that different parallel programming models pose for both relatively inexperienced parallel programmers and experts. In 2005, working with the PSC, we conducted a study comparing language uptake and effectiveness (C+MPI, UPC, and X10) with student programmers. In 2007 we conducted a study at NERSC looking at experienced parallel programmers using C+MPI. In 2008 we conducted a study at Rice University comparing languages (C+MPI and X10) and experience levels, working primarily with novices. All three studies used a Scalable Synthetic Compact Application (SSCA #1) designed for the HPCS program, providing a common basis for comparison.
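For readers unfamiliar with the models compared in these studies: C+MPI expresses parallelism as explicit messages exchanged between ranks, while UPC and X10 instead expose a partitioned global address space and lighter-weight constructs for remote data access. The fragment below is a minimal illustration of the explicit message-passing style; it is a sketch written for this overview and is not taken from the study materials or the SSCA #1 code.

    /* Explicit message passing in C + MPI: rank 0 sends a value to rank 1.
     * In PGAS languages such as UPC or X10, the same data movement would be
     * expressed through shared or global data structures rather than explicit
     * send and receive calls. Requires at least two ranks; illustrative only. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        int value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }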

Modeling Work: We have employed modeling approaches to gain a detailed understanding of the steps involved in using the various software tools that support writing and running parallel programs. We began this work with an IBM-internal approach called Complexity Modeling but have more recently adopted the cognitive modeling approach embodied in CogTool.




Associated Teams

Social Computing Group