
FastPath 2018

International Workshop on Performance
Analysis of Machine Learning Systems

April 2, 2018 - Belfast, Northern Ireland, United Kingdom
In conjunction with ISPASS 2018

The FastPath 2018 program is available here.

FastPath 2018 brings together researchers and practitioners involved in cross-stack hardware/software performance analysis, modeling, and evaluation for efficient machine learning systems. Machine learning demands tremendous amounts of computing. Current machine learning systems are diverse, spanning cellphones, high-performance computing systems, database systems, self-driving cars, robotics, and in-home appliances. Many machine learning systems have customized hardware and/or software. The types and components of such systems vary, but a partial list includes traditional CPUs assisted by accelerators (ASICs, FPGAs, GPUs), memory accelerators, I/O accelerators, hybrid systems, converged infrastructure, and IT appliances. Designing efficient machine learning systems poses several challenges.

These include distributed training on big data, hyper-parameter tuning for models, emerging accelerators, fast I/O for random inputs, approximate computing for training and inference, programming models for diverse machine learning workloads, high-bandwidth interconnects, efficient mapping of processing logic onto hardware, and cross-stack performance optimization. Emerging infrastructure supporting big data analytics, cognitive computing, large-scale machine learning, mobile computing, and the internet of things exemplifies system designs optimized for machine learning at large.



FastPath seeks to facilitate the exchange of ideas on performance analysis and evaluation of machine learning/AI systems, and solicits papers on a wide range of topics including, but not limited to:

  • Workload characterization, performance modeling and profiling of machine
    learning applications
  • GPUs, FPGAs, ASIC accelerators
  • Memory, I/O, storage, network accelerators
  • Hardware/software co-design
  • Efficient machine learning algorithms
  • Approximate computing in machine learning
  • Power/energy efficiency and learning acceleration
  • Software, libraries, and runtimes for machine learning systems
  • Workload scheduling and orchestration
  • Machine learning in cloud systems
  • Large-scale machine learning systems
  • Emerging intelligent/cognitive systems
  • Converged/integrated infrastructure
  • Machine learning systems for specific domains, e.g., financial, biological, education, commerce, healthcare



The FastPath 2018 Call for Papers is available here.

Prospective authors must submit a 2-4 page extended abstract electronically at:

Authors of selected abstracts will be invited to give a 30-minute presentation at the workshop.


Key Dates

  • Submission: March 1, 2018
  • Notification: March 10, 2018
  • Final Materials / Workshop: April 2, 2018


Organizers

  • General Chair: Erik Altman (IBM)
  • Program Committee Chairs: Zehra Sura (IBM), Parijat Dube (IBM)
  • Publicity Chair: Guojing Cong (IBM)


Program Committee

  • Ana Varbanescu, University of Amsterdam
  • Andrew Mundy, ARM Research
  • Antoniu Pop, University of Manchester
  • Lieven Eeckhout, Ghent University, Belgium
  • Vijay Saraswat, Goldman Sachs, New York
  • Xavier Martorell, Universitat Politecnica de Catalunya (UPC) and Barcelona Supercomputing Center (BSC)


Invited Speakers

  • Christophe Dubach, University of Edinburgh
  • David Gregg, Trinity College Dublin
  • Peter Waggett, IBM Research UK
  • Wendy Belluomini, IBM Research Ireland


Previous Editions

FastPath 2015 was held in conjunction with ISPASS 2015 as a half-day workshop with 4 invited speakers.

FastPath 2014 was held in conjunction with ISPASS 2014 as a full-day workshop with 3 invited speakers and 4 regular speakers.

FastPath 2013 was held in conjunction with ISPASS 2013 as a full-day workshop with 1 keynote speaker, 6 invited speakers, and 1 regular speaker.

FastPath 2012 was held in conjunction with ISPASS 2012 as a half-day workshop with 1 keynote speaker and 3 invited speakers.