IJCAI-17 Tutorial: Energy-based machine learning       

Introduction

In this tutorial, we will provide an overview, with concrete examples, of energy-based machine learning (EBML) methods, which capture dependencies between variables by associating a scalar energy with each configuration of the variables; the objective of learning is to minimize this energy or to maximize the likelihood of the data. Starting with an introduction to the early foundations of deep learning, such as Boltzmann machines and their deep variants, we will cover recent work on using energy-based models for unsupervised and supervised learning of temporal sequences, as well as for deep reinforcement learning.
 
Energy-based models form the basis of deep learning and provide a common theoretical framework for many learning models, including traditional discriminative and generative approaches, graph-transformer networks, conditional random fields, maximum-margin Markov networks, and several manifold learning methods. These models capture dependencies by associating a scalar energy, a measure of compatibility, with each configuration of the variables. Inference, i.e., making a prediction or decision, consists of clamping the observed variables to their values in the data and finding values of the remaining variables that minimize the energy. Learning consists of shaping the energy function so that correct configurations receive low energy and incorrect ones receive higher energy, by minimizing a suitable loss functional during training. Overall, we aim to introduce energy-based machine learning to the broad AI audience, survey the key elements of the methodology and its most recent developments, and make AI practitioners aware of the tools that have been developed for applying these models to real-world problems.
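
To make this recipe concrete, it can be summarized in generic notation; the symbols below are illustrative and not tied to any particular model covered in the tutorial:

    E_\theta(x, h) \in \mathbb{R}
        \quad \text{(scalar energy; $x$: observed variables, $h$: remaining variables, $\theta$: parameters)}
    \hat{h} = \arg\min_{h} E_\theta(x, h)
        \quad \text{(inference: clamp $x$ to the data, minimize over $h$)}
    p_\theta(x) = \frac{\sum_{h} e^{-E_\theta(x, h)}}{\sum_{x', h'} e^{-E_\theta(x', h')}}
        \quad \text{(probabilistic view: low energy corresponds to high probability)}
    \theta^{\ast} = \arg\min_{\theta} \, -\sum_{x \in \mathcal{D}} \log p_\theta(x)
        \quad \text{(learning, e.g., with the negative log-likelihood as the loss)}

In a Boltzmann machine, for instance, the energy is a quadratic function of binary units, and the intractable normalizing sum in the denominator is what makes approximate learning procedures necessary.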

 
Brief Outline
 
We will start with an introduction to the Boltzmann machine as a model for energy-based generative learning of spatial patterns or image data, along with extensions such as the restricted Boltzmann machine, which underpinned the early successes of deep learning. Although early Boltzmann machines were less suitable for modeling temporal sequences, recent variants that combine them with recurrent neural networks, such as the recurrent conditional restricted Boltzmann machine, have been successful in this domain. Although powerful, energy-based models typically require approximate inference and learning procedures, and we will discuss these drawbacks. We will then focus on a recent model, the dynamic Boltzmann machine (DyBM), which has been shown to give state-of-the-art results both for generative modeling of time series and for supervised time-series prediction; in particular, the DyBM provides methods for exact learning with limited computational resources. We will compare the DyBM with earlier energy-based models to highlight its advantages and disadvantages. Finally, we will introduce reinforcement learning with energy-based models as function approximators and discuss their use cases and successes in comparison with deep reinforcement learning.
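
For readers new to these models, the following minimal sketch in Python/numpy (illustrative only, not code from the tutorial; the class, variable names, and hyperparameters are placeholders) shows the energy function of a Bernoulli restricted Boltzmann machine and one contrastive-divergence (CD-1) learning update:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Bernoulli-Bernoulli restricted Boltzmann machine.

        Energy of a configuration: E(v, h) = -a^T v - b^T h - v^T W h,
        so low energy corresponds to high probability under the model.
        """
        def __init__(self, n_visible, n_hidden):
            self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
            self.a = np.zeros(n_visible)  # visible biases
            self.b = np.zeros(n_hidden)   # hidden biases

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.b)

        def visible_probs(self, h):
            return sigmoid(h @ self.W.T + self.a)

        def cd1_update(self, v0, lr=0.05):
            """One contrastive-divergence (CD-1) step on a mini-batch v0."""
            ph0 = self.hidden_probs(v0)                        # positive phase
            h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden units
            pv1 = self.visible_probs(h0)                       # reconstruct visibles
            ph1 = self.hidden_probs(pv1)                       # negative phase
            n = v0.shape[0]
            self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
            self.a += lr * (v0 - pv1).mean(axis=0)
            self.b += lr * (ph0 - ph1).mean(axis=0)

    # Example usage: fit a tiny RBM to random binary patterns.
    data = (rng.random((100, 16)) < 0.3).astype(float)
    rbm = RBM(n_visible=16, n_hidden=8)
    for epoch in range(50):
        rbm.cd1_update(data)

CD-1 approximates the intractable gradient of the log-likelihood with a single reconstruction step; it is one example of the approximate learning procedures discussed above.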



Tutorial slides

Box folder




Tutorial date & venue

  • Date: August 21, 2017
  • Time: afternoon sessions (PM1 and PM2)
  • Venue: Melbourne Convention and Exhibition Centre, Room 211