Cloud & Data-Intensive Computing - ESPAS 2012


First International Workshop on

Extreme Scale Parallel Architectures and Systems (ESPAS 2012)

To be held in conjunction with the 7th International Conference on High-Performance and Embedded Architectures and Compilers (HiPEAC 2012)

January 23-25, 2012 | Paris, France

The 1st International Workshop on Extreme Scale Parallel Architectures and Systems (ESPAS) will bring together researchers working on Experimental Infrastructures for Exascale Research and Development. Work in this area includes investigation of experimental components and systems for extreme-scale, simulation methods and tools targeting extreme scale, benchmarking, workload generation tools, and other related experimental systems and methods.

The workshop encourages publication and discussion of disruptive approaches to the challenges of research and development for systems that do not yet exist. Of equal importance is the creation of a palette of scientific methods and experimental infrastructures (in software and/or hardware) to evaluate novel ideas (technologies, algorithms, systems). Pressing issues that need to be addressed in this context include the scalability of experiments, the validation and extrapolation of scientific results, and the characterization of expected workloads and their synthetic generation. Given the high cost of ownership and the limited access to the top end of parallel systems, it is important to pursue experimental architectures and systems composed of off-the-shelf components, configured, modified, or enhanced in such a way that they can provide insight into aspects of exascale systems.

Topics of interest include, but are not limited to:

    • Testbed design and evaluation
    • Experimental clusters/systems targeting extreme scale
    • Workload generation and benchmarking
    • Analytical modelling and simulation of systems
    • Techniques for extrapolation of experimental results to extreme-scale
    • Validation of projection/extrapolation techniques
    • Suitability/adaptability of commercial, off-the-shelf components (COTS)
    • Cost, energy, performance and resilience
    • Methodologies and tools

Workshop Program

2:30-2:40: Welcome
Georgios Theodoropoulos, Exascale Systems, IBM Research & Development Lab, Dublin, Ireland

2:40-4:00: Session I

      • Present and future of PAS2P: a tool for performance prediction in high scale machines, Emilio Luque, Computer Architecture and Operating Systems Department, Universitat Autònoma de Barcelona.
        Abstract: Predicting the performance of parallel applications is becoming increasingly complex. The best performance predictor is the application itself, but the time required to run it thoroughly is an onerous requirement. Based on an application's message-passing activity, PAS2P (Parallel Application Signatures for Performance Prediction) identifies and extracts representative phases, which are the basic components used to create the Parallel Application Signature. By extracting the application signature, PAS2P seeks to predict the behaviour of message-passing applications on different systems and to determine which of them will provide the best application performance. The average accuracy of the predicted execution times on different parallel computers is over 96%. The future of PAS2P is driven by the requirements of the petascale and exascale era, in which HPC systems will scale up in compute node and processor core counts.
      • Codesign for Exascale, Georgios Theodoropoulos, Exascale Systems, IBM Research & Development Lab, Ireland.

4:00-4:30: BREAK

4:30-5:50: Session II

      • Scaling To A Million Cores And Beyond: A Basic Understanding Of The Challenges Ahead On The Road To Exascale, Christian Engelmann, System Research, Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN.
        Abstract: On the road toward multi-petascale and exascale HPC, the architectural trend clearly points in one direction: HPC systems will dramatically scale up in compute node and processor core counts. By 2020, an exascale system may have up to 1,000,000 compute nodes with 1,000 cores per node. This substantial growth in concurrency causes parallel application scalability issues due to sequential application parts, synchronizing communication, and other bottlenecks. Investigating parallel algorithm performance properties at this scale, and with these architectural properties, for HPC hardware/software co-design is crucial to enabling extreme-scale computing. The presented work uses the Extreme-scale Simulator (xSim) performance investigation toolkit to identify the scaling characteristics of a simple Monte Carlo algorithm from 1 to 16 million MPI processes on different multi-core architecture choices. The results show the limitations of strong scaling and the negative impact of employing more, but less powerful, cores for energy savings.
      • The Confluence of Exascale and Embedded Computing, Sudip Dosanjh, Extreme-Scale Computing, Sandia National Laboratories, Albuquerque, NM.
        Abstract: Energy is the primary design constraint in high-performance computing (HPC) today. Without significant innovation, future exascale systems, which will be a thousand times more powerful than today's petascale computers, will consume over 100 MW of power. This level of energy consumption is untenable from both a cost and an environmental perspective. As a consequence, the high-performance computing community is increasingly seeking to leverage embedded architectures, which have long focused on minimizing energy consumption. Because both applications and architectures are changing dramatically, the HPC community is also attempting to use a co-design strategy similar to the methodology developed by embedded computing researchers. This presentation will discuss why co-design has previously not been used in HPC and the challenges we face in applying these techniques.
5:50-6:00: Concluding Remarks
Georgios Theodoropoulos, Exascale Systems, IBM Research & Development Lab, Dublin, Ireland

Organization

Program Chair

      Georgios Theodoropoulos, Exascale Systems, IBM Research & Development Lab, Dublin, Ireland

Steering Committee

      • Shoukat Ali, Exascale Systems, IBM Research & Development Lab, Dublin, Ireland
      • Kostas Katrinis, Exascale Systems, IBM Research, Ireland
      • Rolf Riesen, Exascale Systems, IBM Research & Development Lab, Dublin, Ireland
      • Georgios Theodoropoulos, Exascale Systems, IBM Research & Development Lab, Dublin, Ireland

Program committee

    • Deepak Ajwani, University College Cork, Ireland.
    • Dorian Arnold, University of New Mexico, USA.
    • David Bader, Georgia Institute of Technology, USA.
    • Pete Beckman, Argonne National Laboratory, USA.
    • Patrick Bridges, University of New Mexico, USA.
    • Wentong Cai, Nanyang Technological University, Singapore.
    • Christian Engelmann, Oak Ridge National Laboratory, USA.
    • Kurt Ferreira, Sandia National Laboratories, USA.
    • Simon Hammond, University of Warwick, Great Britain.
    • Torsten Hoefler, University of Illinois, USA.
    • Tahar Kechadi, University College Dublin, Ireland.
    • Vincent Keller, École Polytechnique Fédérale de Lausanne, Switzerland.
    • Stephen Kirkland, National University of Ireland, Maynooth.
    • Alexey Lastovetsky, University College Dublin, Ireland.
    • James Laros, Sandia National Laboratories, USA.
    • Edgar Leon, Lawrence Livermore National Laboratory, USA.
    • Diego Lugones, Dublin City University, Ireland.
    • Tony Maciejewski, Colorado State University, USA.
    • Muthucumaru Maheswaran, McGill University, Canada.
    • John Morrison, University College Cork, Ireland.
    • Donal O'Mahony, Trinity College Dublin, Ireland.
    • Viktor Prasanna, University of Southern California, USA.