My research interests lie in the area of high performance computing and include parallel programming languages, parallel algorithms and data structures, and high performance system design. Parallel processing machines are increasingly available, but efficiently programming these novel architectures continues to be a challenge. The focus of my research is to develop new programming languages, libraries, and methodologies that allow users to write efficient parallel applications.
1) Hybrid collective operations
Exploit the hierarchical organization of modern supercomputers to provide efficient collective operations by means of composition.
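The composition idea can be illustrated with a minimal sketch (not actual library code; the function and data layout here are hypothetical): a hybrid reduction built from two flat reductions, one within each node and one across the per-node partial results.

```cpp
#include <numeric>
#include <vector>

// Illustrative sketch of a hierarchical (hybrid) sum reduction composed
// from two flat reductions. Each inner vector models the values held by
// the cores of one node: phase 1 reduces within each node, phase 2
// reduces the per-node partial results across nodes.
long hierarchical_sum(const std::vector<std::vector<long>>& machine) {
    std::vector<long> node_partials;  // one partial result per node
    node_partials.reserve(machine.size());
    for (const auto& node : machine)  // phase 1: intra-node reduction
        node_partials.push_back(std::accumulate(node.begin(), node.end(), 0L));
    // phase 2: inter-node reduction over the node "leaders"
    return std::accumulate(node_partials.begin(), node_partials.end(), 0L);
}
```

On a real machine, phase 1 would run over shared memory and phase 2 over the network, so each phase can use the algorithm best suited to its level of the hierarchy.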
2) Efficient runtime systems for PGAS languages
Design the collective operations employed by PGAS languages, with an emphasis on UPC.
3) Proposal for new UPC collectives
I started collaborating with researchers at IBM and Lawrence Berkeley National Laboratory (LBNL) to identify a set of extensions to the collectives currently provided by UPC. The main goals are to improve expressivity and performance. As a result of this collaboration, we wrote a proposal for extending the standard of the UPC programming language.
- Gheorghe Almasi, Paul Hargrove, Gabriel Tanase, Yili Zheng, “UPC Collectives Library 2.0”, in Proc. of the Fifth Conference on Partitioned Global Address Space Programming Models (PGAS 2011), Galveston, Texas, Oct 2011.
4) STAPL
During my graduate studies at Texas A&M University I made major contributions to the development of a novel parallel programming library, the Standard Template Adaptive Parallel Library (STAPL). For my Ph.D. thesis, I designed and implemented the STAPL Parallel Container Framework (PCF). The PCF consists of a set of formally defined concepts and a methodology for developing generic parallel containers starting from sequential, STL-like containers. By implementing the appropriate interfaces, users can assemble with minimal effort a data structure that provides methods to build and access a distributed collection of elements.
- Gabriel Tanase, et al., “The STAPL Parallel Container Framework”, in Proc. of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), San Antonio, Texas, Feb 2011.
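The PCF approach of assembling a distributed container from sequential pieces can be sketched as follows. This is not the actual STAPL API; the class, the block distribution, and the notion of one partition per "location" are all illustrative assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a distributed container built from sequential,
// STL-like base containers. Each "location" owns one base container; a
// simple block distribution maps a global index to (location, local index).
template <typename BaseContainer>
class distributed_container {
    std::vector<BaseContainer> partitions_;  // one sequential container per location
    std::size_t block_;                      // block size of the distribution
public:
    distributed_container(std::size_t n, std::size_t num_locations)
        : partitions_(num_locations),
          block_((n + num_locations - 1) / num_locations) {
        for (std::size_t i = 0; i < num_locations; ++i) {
            std::size_t lo = i * block_;
            std::size_t hi = std::min(n, lo + block_);
            partitions_[i].resize(hi - lo);  // each partition holds its block
        }
    }
    // Global element access resolves to the owning partition's local access.
    typename BaseContainer::value_type& operator[](std::size_t gid) {
        return partitions_[gid / block_][gid % block_];
    }
    std::size_t num_partitions() const { return partitions_.size(); }
};
```

In a real framework the partitions would live on different address spaces and remote accesses would go through the runtime system; the point of the sketch is only that the distributed container reuses the sequential container's interface underneath.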
5) Adaptivity
Writing portable programs that perform well on multiple platforms or for varying input sizes and types can be very difficult because performance is often sensitive to the system architecture, the run-time environment, and input data characteristics. One way to address this problem is to adaptively select the best parallel algorithm for the current input data and system from a set of functionally equivalent algorithmic options.
- Nathan Thomas, Gabriel Tanase, Olga Tkachyshyn, Jack Perdue, Nancy M. Amato, Lawrence Rauchwerger, “A Framework for Adaptive Algorithm Selection in STAPL”, in Proc. of ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), Chicago, Illinois, June 2005, pp. 277-288.
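A minimal sketch of the selection step, under stated assumptions: the algorithm names and the fixed threshold below are illustrative only. In the framework described above, the decision model would instead be trained from profiling runs at installation time.

```cpp
#include <cstddef>
#include <string>

// Hypothetical adaptive selection among functionally equivalent sort
// algorithms, driven by simple input characteristics. The threshold is
// illustrative; a real framework would learn it from empirical data.
inline std::string pick_sort(std::size_t n, bool nearly_sorted) {
    if (nearly_sorted) return "insertion_sort";  // cheap on almost-sorted input
    if (n < 64)        return "insertion_sort";  // small input: low overhead wins
    return "sample_sort";                        // large input: parallel algorithm
}
```

The key property is that all candidates compute the same result, so the selector is free to choose purely on predicted performance.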