# Seminar: Emtiyaz Khan - Bayesian Principles for Learning Machines

**Date:** September 17, 2021

**Author:** Hrvoje Stojic

## Bayesian Principles for Learning Machines

### Abstract

Humans and animals have a natural ability to autonomously learn and quickly adapt to their surroundings. How can we design machines that do the same? In this talk, I will present Bayesian principles to bridge such gaps between humans and machines. I will show that a wide variety of machine-learning algorithms are instances of a single learning rule derived from Bayesian principles. The rule unravels a dual perspective, yielding new mechanisms for knowledge transfer in learning machines. My hope is to convince the audience that Bayesian principles are indispensable for an AI that learns as efficiently as we do.
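The "single learning rule" mentioned in the abstract is the Bayesian learning rule from the Khan and Rue preprint cited below. Roughly sketched (notation here is a paraphrase of the paper, not a verbatim quote): a candidate distribution \(q_\lambda\) from an exponential family with natural parameter \(\lambda\) is updated by a natural-gradient step on the variational objective,

```latex
\lambda_{t+1} \;\leftarrow\; \lambda_t \;-\; \rho_t \,\widetilde{\nabla}_{\lambda}
\Big[\, \mathbb{E}_{q_{\lambda}}\!\left[\ell(\theta)\right] \;-\; \mathcal{H}(q_{\lambda}) \,\Big]\Big|_{\lambda=\lambda_t},
```

where \(\ell(\theta)\) is the loss, \(\mathcal{H}(q_\lambda)\) is the entropy of \(q_\lambda\), \(\rho_t\) is a step size, and \(\widetilde{\nabla}_\lambda\) denotes the natural gradient. Different choices of \(q_\lambda\) and different approximations of the expectation and the natural gradient recover familiar algorithms (e.g., variants of SGD and adaptive optimizers) as special cases, which is the claim the talk elaborates.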

### Notes

- References for the first part:
  - The Bayesian Learning Rule (preprint), M.E. Khan, H. Rue [arXiv] [Tweet]
  - Practical Deep Learning with Bayesian Principles (NeurIPS 2019), K. Osawa, S. Swaroop, A. Jain, R. Eschenhagen, R.E. Turner, R. Yokota, M.E. Khan [arXiv] [Code]
  - Conjugate-Computation Variational Inference: Converting Variational Inference in Non-Conjugate Models to Inferences in Conjugate Models (AISTATS 2017), M.E. Khan, W. Lin [Paper]

- References for the second part:
  - Knowledge-Adaptation Priors (preprint), M.E. Khan, S. Swaroop [arXiv] [Slides] [Tweet] [SlidesLive Video]
  - Continual Deep Learning by Functional Regularisation of Memorable Past (NeurIPS 2020), P. Pan*, S. Swaroop*, A. Immer, R. Eschenhagen, R.E. Turner, M.E. Khan [arXiv] [Code] [Poster]
  - Approximate Inference Turns Deep Networks into Gaussian Processes (NeurIPS 2019), M.E. Khan, A. Immer, E. Abedi, M. Korzepa [arXiv] [Code]

**Bio**: Emtiyaz Khan (also known as Emti) is a team leader at the RIKEN Center for Advanced Intelligence Project (AIP) in Tokyo, where he leads the Approximate Bayesian Inference Team. He is also an external professor at the Okinawa Institute of Science and Technology (OIST). Previously, he was a postdoc and then a scientist at École Polytechnique Fédérale de Lausanne (EPFL), where he also taught two large machine learning courses and received a teaching award. He finished his PhD in machine learning at the University of British Columbia in 2012. The main goal of Emti's research is to understand the principles of learning from data and use them to develop algorithms that can learn like living beings. For the past 10 years, his work has focused on developing Bayesian methods that could lead to such fundamental principles. The Approximate Bayesian Inference Team now continues to use these principles, as well as derive new ones, to solve real-world problems. His personal website can be found here.
