Course description
Abstract
“Deep learning” refers to a class of statistical learning algorithms that has seen unprecedented success in recent years, and is largely responsible for the technological breakthroughs popularly referred to as “artificial intelligence”. However, despite its popularity and the vast attention it receives, our formal understanding of deep learning is limited: its application in practice rests primarily on conventional wisdom, trial and error, and intuition, often leading to suboptimal results (compromising not only effectiveness, but also safety, privacy, and fairness). Consequently, deep learning is often viewed more as an art form than as a rigorous scientific discipline.
This course will present deep learning from the perspective of its mathematical foundations, focusing on the setting to which it owes its success – supervised learning. Through theoretical analyses and systematic empirical studies, we will formulate the fundamental questions underlying deep learning, review some of the known answers, and discuss problems that are still wide open, standing at the forefront of deep learning research.
Syllabus
The course will cover the following topics:
- Standard practices in supervised deep learning: neural network architectures, gradient-based training, backpropagation, regularization, dropout
- Three pillars of statistical learning theory (expressiveness / optimization / generalization) in classical machine learning vs. deep learning
- Expressiveness: universal approximation, depth separation, separation of other architectural features, inductive bias
- Optimization: properties of neural network landscapes, geometric approach for 2-layer networks, trajectory approach for networks with more than 2 layers, role of overparameterization
- Generalization: naïve complexity measures and their limitations, implicit regularization, modern complexity measures
- Additional questions: explainability / robustness / fairness
- Beyond supervised learning: semi-supervised and unsupervised learning, generative models, reinforcement learning
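To make the first topic concrete, the following is a minimal sketch of supervised training with gradient descent and manual backpropagation, on illustrative toy data (the network size, learning rate, and target function are arbitrary choices for the sketch, not part of the course material):

```python
import numpy as np

# Toy regression data: learn y = sin(x) on [-pi, pi] (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# Two-layer network: x -> ReLU(x W1 + b1) -> (. W2 + b2)
W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass
    h = X @ W1 + b1
    a = np.maximum(h, 0.0)            # ReLU activation
    pred = a @ W2 + b2
    loss = np.mean((pred - y) ** 2)   # squared loss

    # Backward pass (backpropagation: chain rule, layer by layer)
    d_pred = 2 * (pred - y) / len(X)
    dW2 = a.T @ d_pred; db2 = d_pred.sum(axis=0)
    d_a = d_pred @ W2.T
    d_h = d_a * (h > 0)               # derivative of ReLU
    dW1 = X.T @ d_h; db1 = d_h.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training loss: {loss:.4f}")
```

Frameworks used in practice automate the backward pass; writing it by hand, as above, is the standard way to see what backpropagation actually computes.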
Lectures will deliver the bulk of the course syllabus in a relatively theoretical form; recitations will recap prerequisite theoretical material and demonstrate empirical phenomena; homework assignments are expected to comprise a mix of theory and experimentation.