# diochnos/teaching/learning

## Learning Theory

Tentative version for Spring 2018.

### Course Description

Learning theory is a field that develops models for capturing learning phenomena and ultimately studies the efficiency and the limitations of learning algorithms within these frameworks. The course will start with the notions of concept learning and version spaces and proceed to decision tree learning and artificial neural networks. We will then switch to the probably approximately correct (PAC) model of learning, Occam's razor, and VC theory, and ultimately prove the fundamental theorem of learning theory. We will examine the relationship between weak and strong learning (boosting). Subsequently, we will move on to online learning and learning with expert advice in the mistake bound model, where we will see algorithms such as randomized halving, weighted majority, and winnow. We will also cover theoretical aspects of learning related to evolution, viewing evolution as a learning process and drawing on results from evolutionary algorithms and Valiant's evolvability framework. (This part of the course can also be seen as a special type of local-search optimization with PAC guarantees.) Time permitting, we will cover additional models of learning, such as active learning or the computational approach to evolution by Adi Livnat and Christos Papadimitriou.
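To give a flavor of the expert-advice setting mentioned above, here is a minimal sketch of the deterministic weighted majority algorithm of Littlestone and Warmuth. The function name, the input format, and the penalty parameter `beta` are illustrative choices, not part of any particular textbook's presentation.

```python
def weighted_majority(expert_predictions, outcomes, beta=0.5):
    """Deterministic weighted majority over a pool of experts.

    expert_predictions: list of rounds, where each round is a list of
    0/1 predictions, one per expert.  outcomes: list of true 0/1 labels,
    one per round.  beta in (0, 1) is the multiplicative penalty applied
    to each expert that errs.  Returns (learner_mistakes, final_weights).
    """
    n_experts = len(expert_predictions[0])
    weights = [1.0] * n_experts
    mistakes = 0
    for preds, y in zip(expert_predictions, outcomes):
        # Predict with the weighted majority vote of the experts.
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_one >= vote_zero else 0
        if guess != y:
            mistakes += 1
        # Multiplicatively penalize every expert that was wrong.
        weights = [w * beta if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes, weights
```

The standard analysis shows that the learner's mistake count is within a constant factor of the best expert's, plus a term logarithmic in the number of experts.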

### Prerequisites

Basic logic: and, or, not, xor, etc.

Basic combinatorics: permutations, combinations, binomial coefficients.

Basic probability theory: random variables, expected value, independence vs correlation, geometric random variable, conditional probability.

Graphs: weighted and directed variants, and the related notions of vertices, edges, degree, etc. Representation and traversal (e.g. breadth-first search, depth-first search).

Big-O notation and the ability to analyze the running time of simple (potentially recursive) algorithms. Graduate-level mathematical maturity. Tools from probability theory will be discussed (briefly) on demand as they arise. Familiarity with basic complexity classes (P, NP, etc.) helps, but is not assumed.
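As a quick self-check on the level of algorithm analysis expected, here is a small illustrative example (not course material): recursive binary search, whose recurrence T(n) = T(n/2) + O(1) solves to O(log n).

```python
def binary_search(a, target, lo=0, hi=None):
    """Recursive binary search over a sorted list.

    Each call halves the search interval, so the running time
    satisfies T(n) = T(n/2) + O(1), i.e. O(log n) overall.
    Returns an index of target in a, or -1 if it is absent.
    """
    if hi is None:
        hi = len(a) - 1
    if lo > hi:          # empty interval: target is not present
        return -1
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid
    if a[mid] < target:  # recurse on the upper half
        return binary_search(a, target, mid + 1, hi)
    return binary_search(a, target, lo, mid - 1)
```

Being comfortable writing down and solving such a recurrence is the kind of background the course assumes.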

### Syllabus

A copy of the syllabus from the Spring of 2017 is available here.

### References

• Michael J. Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
• Tom M. Mitchell. Machine Learning. McGraw Hill Series in Computer Science. McGraw-Hill, 1997.
• Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.