CogArch 2016 - The 2nd Workshop on Cognitive Architectures @ ASPLOS 2016 - Invited Speakers


James E. Smith:


Investigating the Brain’s Computational Paradigm

Understanding and implementing the brain’s computational paradigm is the grand challenge facing computer researchers. Not only does it provide computational capabilities far beyond those of conventional computers; its energy efficiency is also truly remarkable. I believe strongly that computer architects and engineers have a unique set of skills and a perspective that should be applied to meeting this grand challenge.

The brain’s neocortex is constructed of massively interconnected neurons that compute and communicate via voltage spikes, and a strong argument can be made that precise spike timing is an essential element of the paradigm. I will describe and illustrate important features of spike-based spatio-temporal computation using a spiking neural network architecture as a case study. And, although this is work in progress, it clearly illustrates the application of a computer architect’s perspective to solving the ultimate computing grand challenge.
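As a rough sketch of the kind of spike-based computation described above (a generic leaky integrate-and-fire neuron in Python, not the architecture of the talk’s case study), the fragment below shows why precise spike timing, and not just spike count, can carry information: two input patterns deliver the same number of spikes, but only the temporally coincident one drives the neuron past threshold.

    import numpy as np

    # A minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
    # integrates weighted input spikes, leaks toward zero between them, and
    # emits an output spike (then resets) when it crosses a threshold.
    def lif_neuron(spike_trains, weights, leak=0.5, threshold=1.0):
        """spike_trains: (n_inputs, n_steps) array of 0/1 spikes."""
        n_steps = spike_trains.shape[1]
        v = 0.0
        out = np.zeros(n_steps)
        for t in range(n_steps):
            v = leak * v + weights @ spike_trains[:, t]
            if v >= threshold:
                out[t] = 1
                v = 0.0
        return out

    # Same spike count, different timing: coincident spikes cross threshold,
    # dispersed spikes leak away before they can add up.
    coincident = np.array([[0, 1, 0, 0],
                           [0, 1, 0, 0]])
    dispersed  = np.array([[1, 0, 0, 0],
                           [0, 0, 0, 1]])
    w = np.array([0.6, 0.6])
    print(lif_neuron(coincident, w))  # [0. 1. 0. 0.] -- fires at t=1
    print(lif_neuron(dispersed, w))   # [0. 0. 0. 0.] -- never fires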

James E. Smith is Professor Emeritus with the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison. He received his PhD from the University of Illinois in 1976. He then joined the faculty of the University of Wisconsin-Madison, teaching and conducting research in fault-tolerant computing and computer architecture. He has been involved in a number of computer research and development projects, both as a faculty member at Wisconsin and in industry, and has made several significant contributions to the field, notably in the design of superscalar processors. He received the prestigious ACM/IEEE Eckert-Mauchly Award in 1999 for his contributions.

Currently, Prof. Smith is studying computational models of the brain at his home near Missoula, Montana.

 

Mircea Stan:

Hierarchical Temporal Memory on Micron's Automata Processor

The Hierarchical Temporal Memory (HTM) pioneered by Numenta offers unique benefits: it forms its own representations without requiring deep domain knowledge, it dynamically adapts its inference model as the input data changes, and its prediction and anomaly detection on streaming data are better than those of conventional ANNs. This presentation will explore the use of the recently introduced Automata Processor (AP) for implementing HTM in hardware, thus providing a valuable research platform for neocortical learning principles. This is possible because there are many correspondences between the HTM and AP execution models: the HTM execution model is stateful, the AP is a state-based non-von Neumann accelerator, and HTM’s core algorithms can be efficiently realized as automata.
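As a toy illustration of that last correspondence (a hypothetical Python sketch of the general idea, not Numenta’s HTM code or Micron’s AP toolchain), a learned sequence memory can be cast as a finite automaton whose missing transitions flag anomalies in a stream:

    from collections import defaultdict

    # Toy sequence memory as a finite automaton: learning records the
    # observed symbol-to-symbol transitions; recall flags any position
    # in a stream that takes a transition that was never learned.
    class SequenceAutomaton:
        def __init__(self):
            self.transitions = defaultdict(set)  # symbol -> allowed successors

        def learn(self, sequence):
            for cur, nxt in zip(sequence, sequence[1:]):
                self.transitions[cur].add(nxt)

        def anomalies(self, stream):
            return [i + 1
                    for i, (cur, nxt) in enumerate(zip(stream, stream[1:]))
                    if nxt not in self.transitions[cur]]

    sa = SequenceAutomaton()
    sa.learn("ABCDABCD")
    print(sa.anomalies("ABCDABXD"))  # [6, 7]: 'X' after 'B' was never
                                     # learned, nor was 'D' after 'X'

On the AP, the learned transitions would map onto the chip’s native state-transition elements rather than a Python dictionary; the sketch is only meant to show why a stateful sequence memory and a state-machine accelerator fit together.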

Mircea R. Stan received the Diploma in electronics and communications from the Politehnica University of Bucharest, Romania, in 1984, and the M.S. and Ph.D. degrees in ECE from UMass Amherst in 1994 and 1996, respectively. Since 1996, he has been with the Charles L. Brown ECE Department at the University of Virginia, where he is now a Professor. He has more than eight years of industrial experience, was a visiting scholar at UC Berkeley in 2004-2005, and was a visiting faculty member with IBM in 2000 and with Intel in 1999 and 2002.

Dr. Stan received the NSF CAREER Award in 1997 and coauthored papers that received Best Paper Awards at ISQED 2008, GLSVLSI 2006, ISCA 2003, and SHAMAN 2002. He was the Chair of the VSA-TC of the IEEE CAS Society for 2006-2007, the General Chair for the 2006 ISLPED and the 2004 GLSVLSI, the Technical Program Chair for the 2007 NanoNets and the 2005 ISLPED, a member of technical committees for numerous conferences, and an Associate Editor for the IEEE TCAS I from 2004 to 2007 and for the IEEE TVLSI from 2001 to 2003. He was also a distinguished lecturer for the IEEE SSCS from 2007 to 2008 and for the IEEE CAS Society from 2004 to 2005. He is a member of the IEEE, ACM, Eta Kappa Nu, Phi Kappa Phi, and Sigma Xi, and is the faculty adviser to the UVa student branch of the IEEE.

 

Jeff Burns:


Towards Cognitive Computers: Practical Considerations

Cognitive Computing is generating tremendous excitement in the industry and in the press, and is felt by many to be the next major era in information technology. As cognitive workloads grow, evolve, and mature, the question arises of whether to design new systems specifically for cognitive computing. Rather than implementing cognitive workloads on conventional platforms, could cognitive-specific platforms produce dramatically better outcomes and enable entirely new cognitive capabilities? This question is as compelling to ask as it is complex to answer. In this talk I will present some of the challenges in designing today’s high-end commercial server systems. With that as a basis, I will share some views on the research challenges and opportunities that must be addressed to define and implement cognitive computers.

Jeffrey L. Burns received his B.S. in Engineering from UCLA, and his M.S. and Ph.D. in Electrical Engineering from U.C. Berkeley. In 1988 he joined the IBM T.J. Watson Research Center, where he worked on layout automation and processor design. In 1996 he joined the IBM Austin Research Lab, where he worked on the first 1 GHz PowerPC; he then managed the Exploratory VLSI Design group. In 2003 he returned to Watson to work on IBM Research’s annual study into the future of IT. He then managed a program exploring a streaming-oriented supercomputer. From mid-2005 until mid-2009 he managed the VLSI Design department, focusing on high-end processors, SoC designs, and 3D integration. Since mid-2009 he has been Director of Systems Architecture and Design at IBM Watson, where he manages the Division’s activities in VLSI design, design automation, and microprocessor architecture.