Geoffrey Hinton

Monday, January 26

Place: U of C - International House Assembly Hall
Time: 1:00 - 3:45pm
Parking: Street parking, or in the free lot on the corner of 60th St. and Stony Island Avenue.

Geoffrey Hinton, University of Toronto

Recent Developments in Learning Deep Networks


I will start by describing an efficient, modular, unsupervised learning procedure for deep generative models that contain millions of parameters and many layers of hidden features. The features are learned one layer at a time without any information about the final goal of the system. This approach leads to excellent generative models of handwritten digits.
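The layer-by-layer idea above can be sketched in a few lines. The following is a minimal illustration, not Hinton's actual implementation: each layer is a restricted Boltzmann machine trained with one step of contrastive divergence (CD-1), biases are omitted for brevity, and the toy data, layer sizes, and hyperparameters are illustrative assumptions.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    return 1.0 if random.random() < p else 0.0

def train_rbm(data, n_hidden, epochs=50, lr=0.1):
    """Train one RBM layer with one step of contrastive divergence (CD-1)."""
    n_visible = len(data[0])
    W = [[random.gauss(0, 0.01) for _ in range(n_hidden)]
         for _ in range(n_visible)]
    for _ in range(epochs):
        for v0 in data:
            # up-pass: hidden probabilities given the data
            h0 = [sigmoid(sum(v0[i] * W[i][j] for i in range(n_visible)))
                  for j in range(n_hidden)]
            hs = [sample(p) for p in h0]
            # down-pass: one-step reconstruction of the visible units
            v1 = [sigmoid(sum(hs[j] * W[i][j] for j in range(n_hidden)))
                  for i in range(n_visible)]
            h1 = [sigmoid(sum(v1[i] * W[i][j] for i in range(n_visible)))
                  for j in range(n_hidden)]
            # CD-1 update: data correlations minus reconstruction correlations
            for i in range(n_visible):
                for j in range(n_hidden):
                    W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
    return W

def up_pass(data, W):
    """Map data through a trained layer to get features for the next layer."""
    return [[sigmoid(sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(W[0]))] for v in data]

# Stack two layers greedily: each layer sees only the features below it,
# and neither layer ever sees a label or the final goal of the system.
toy = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
W1 = train_rbm(toy, n_hidden=3)
features = up_pass(toy, W1)
W2 = train_rbm(features, n_hidden=2)
```

The key point is that `train_rbm` is called once per layer on the output of the layer below, which is what makes the procedure modular and unsupervised.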

I will then describe three recent improvements. First, I will describe a better learning algorithm for the module that is used to learn each layer of features greedily. Then I will describe a more powerful type of generative module that contains multiplicative interactions, so that hidden units at one level can gate the pairwise interactions between hidden units at the level below.
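To make the multiplicative-interaction idea concrete, here is a toy sketch (my own illustration, not the module from the talk): three-way weights w[i][j][k] let hidden unit k at the upper level gate the pairwise product of units x_i and y_j at the level below, instead of the usual additive weighted sum. All numbers are arbitrary assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Three-way weights w[i][j][k]: upper-level hidden unit k gates the
# pairwise interaction between lower-level units x_i and y_j.
w = [[[0.5, -0.5], [0.0, 1.0]],
     [[1.0, 0.0], [-0.5, 0.5]]]

x = [1.0, 0.0]
y = [0.0, 1.0]

# Each upper-level unit sums the pairwise products it gates; its input is
# multiplicative in x and y rather than a sum over single units.
h = [sigmoid(sum(w[i][j][k] * x[i] * y[j]
                 for i in range(2) for j in range(2)))
     for k in range(2)]
```

With the values above, only the (x_0, y_1) pair is active, so h depends entirely on the weights that gate that one interaction.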

Finally I will describe an application to recognizing stereo images of 3-D objects from the NORB database. For this task, deep belief nets outperform the best published results.


Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He holds a Canada Research Chair in Machine Learning. He is the director of the program on “Neural Computation and Adaptive Perception” which is funded by the Canadian Institute for Advanced Research.

Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He received an honorary doctorate from the University of Edinburgh in 2001. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998) and the ITAC/NSERC award for contributions to information technology (1992).

A simple introduction to Geoffrey Hinton’s research can be found in his articles in Scientific American in September 1992 and October 1993. He investigates ways of using neural networks for learning, memory, perception and symbol processing and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, Helmholtz machines and products of experts. His current main interest is in unsupervised learning procedures for neural networks with rich sensory input.


Mihalis Yannakakis

Tuesday, March 3

Mihalis Yannakakis, Columbia University

Equilibria, Fixed Points, and Complexity Classes


Many models from a variety of areas involve the computation of an equilibrium or fixed point of some kind. Examples include Nash equilibria in games; market equilibria; computing optimal strategies and the values of competitive games (stochastic and other games); stable configurations of neural networks; analyzing basic stochastic models for evolution like branching processes and for language like stochastic context-free grammars; and models that incorporate the basic primitives of probability and recursion like recursive Markov chains. It is not known whether these problems can be solved in polynomial time. Despite their broad diversity, there are certain common computational principles that underlie different types of equilibria and connect many of these problems to each other. In this talk we will discuss these common principles and the corresponding complexity classes that capture them.
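One of the simplest instances of the fixed-point computations mentioned above is the extinction probability of a branching process, which is the least fixed point of a polynomial equation. The sketch below (my own illustrative example, not from the talk) finds it by Kleene iteration from 0, which converges monotonically to the least fixed point; the offspring distribution is an assumption chosen so the answer is exactly 1/2.

```python
def extinction_probability(p0, p1, p2, iters=200):
    """Least fixed point of x = p0 + p1*x + p2*x**2, computed by
    iterating from 0 (the iterates increase monotonically to the
    least fixed point)."""
    x = 0.0
    for _ in range(iters):
        x = p0 + p1 * x + p2 * x * x
    return x

# Each individual leaves 0, 1, or 2 offspring with these probabilities.
# The equation x = 1/4 + x/4 + x**2/2 has fixed points 1/2 and 1;
# iteration from 0 finds the least one, 1/2.
q = extinction_probability(0.25, 0.25, 0.5)
```

Note that the equation also has the fixed point 1, so *which* fixed point the iteration reaches matters; this distinction between a fixed point and the least fixed point is part of what makes the complexity of these problems subtle.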


Mihalis Yannakakis is the Percy K. and Vida L. W. Hudson Professor of Computer Science at Columbia University. Prior to joining Columbia, he was Director of the Computing Principles Research Department at Bell Labs and at Avaya Labs, and Professor of Computer Science at Stanford University. Dr. Yannakakis received his PhD from Princeton University. His research interests include algorithms, complexity, optimization, databases, testing and verification. He has served on the editorial boards of several journals, including as the past editor-in-chief of the SIAM Journal on Computing, and has chaired various conferences, including the IEEE Symposium on Foundations of Computer Science, the ACM Symposium on Theory of Computing and the ACM Symposium on Principles of Database Systems. Dr. Yannakakis is a Fellow of the ACM, a Bell Labs Fellow, and a recipient of the Knuth Prize.


Steve Young

Wednesday, May 13

Place: TTIC Conference Room 526
Time: 2:00pm
Parking: Street parking, or in the free lot on the corner of 60th St. and Stony Island Avenue.

Steve Young, University of Cambridge

Statistical Spoken Dialogue Systems


Current spoken dialogue systems (SDS) typically employ hand-crafted decision networks or flow-charts to determine what action to take at each point in a conversation. The result is a system that is fragile in the face of speech recognition errors and unable to adapt and learn from experience.

Modeling a dialogue as a Markov Decision Process (MDP) potentially offers a way forward. However, attempts to exploit MDPs in real systems have met with limited success, primarily because an MDP cannot model the uncertainty that is inherent in all speech-based interactions.

For the last few years, a team in the Cambridge Speech Group has been investigating the use of partially observable Markov Decision Processes (POMDPs) for use in SDS. POMDPs provide a Bayesian model of belief and a principled mathematical framework for modeling uncertainty. They can be trained from real data and they yield policies which can be optimised using reinforcement learning. However, exact belief update and policy optimisation algorithms are intractable and as a result there are many issues inherent in scaling POMDP-based systems to handle real-world tasks.
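The Bayesian belief model at the heart of a POMDP is the standard update b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). Here is a minimal sketch of that update on a toy two-goal dialogue state; the transition and observation numbers are invented assumptions, and real systems need the approximate methods the talk describes precisely because this exact update does not scale.

```python
def belief_update(b, T, O, a, o):
    """Exact Bayesian belief update for a POMDP:
    b'(s') is proportional to O[a][s'][o] * sum_s T[a][s][s'] * b[s]."""
    n = len(b)
    bp = [O[a][sp][o] * sum(T[a][s][sp] * b[s] for s in range(n))
          for sp in range(n)]
    z = sum(bp)                       # probability of observation o; normaliser
    return [x / z for x in bp]

# Toy dialogue state: the user wants a "restaurant" (0) or a "hotel" (1).
b = [0.5, 0.5]                        # uniform prior over user goals
T = [[[1.0, 0.0], [0.0, 1.0]]]        # action 0: the user's goal persists
O = [[[0.8, 0.2], [0.3, 0.7]]]        # noisy recogniser: P(obs | state, action)
b = belief_update(b, T, O, a=0, o=0)  # recogniser heard "restaurant"-like input
```

After one noisy "restaurant" observation the belief shifts toward goal 0 without committing to it, which is exactly the graceful handling of recognition errors that hand-crafted flow-charts lack. The intractability arises because the belief is a distribution over all states, so the planning problem lives in a continuous high-dimensional space.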

This talk will briefly summarise the basic mathematics of POMDPs in SDS and explain why exact optimisation is intractable. It will then outline some of the techniques which have been developed to enable real systems to be built. The talk will conclude by presenting some working systems and results from user trials.


Steve Young is Head of Information Engineering at Cambridge University, UK. He received a BA in Electrical Sciences from Cambridge University in 1973 and a PhD in Speech Processing in 1978. He held lectureships at both Manchester and Cambridge Universities before being elected to the Chair of Information Engineering at Cambridge University in 1994. He was a co-founder and Technical Director of Entropic Ltd from 1995 until 1999, when the company was taken over by Microsoft. After a short period as an Architect at Microsoft, he returned full-time to the University in January 2001.

Steve Young’s research interests include speech recognition, language modelling, spoken dialogue and multi-media applications. He is the inventor and original author of the HTK Toolkit for building hidden Markov model-based recognition systems. More recently, his prime interest has shifted to statistical dialogue systems and the use of partially observable Markov Decision Processes for modelling them.

He has written and edited books on software engineering and speech processing, and he has published, as author and co-author, more than 200 papers in these areas. He is a Fellow of the Royal Academy of Engineering, the Institution of Electrical Engineers (IEE), the Institute of Electrical and Electronics Engineers (IEEE) and the RSA. He is also a member of the British Computer Society (BCS). He served as the senior editor of Computer Speech and Language from 1993 to 2004 and is now a member of its editorial board. He is Chair of the IEEE Speech and Language Technical Committee and a member of the IEEE SPS Awards Board. He received an IEEE Signal Processing Society Technical Achievement Award in 2004, and in 2008 he was elected Fellow of the International Speech Communication Association.


Time & Place: All talks will be held at TTIC’s new facility, 6045 South Kenwood Avenue (intersection of 61st Street and Kenwood Avenue), and begin at 2:00 p.m. (unless otherwise noted).