Our primary source of readings will be Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, Foundations of Machine Learning (MIT Press, 2012). We will also read papers and study material that has not yet made its way into textbooks.

Other recommended (but not required) books:

- Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach (ISBN 1107422221)
- Pattern Recognition and Machine Learning by Chris Bishop (ISBN 0387310738)
- Machine Learning by Tom Mitchell (ISBN 0070428077)
- Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman (ISBN 0387952845)
- Information Theory, Inference and Learning Algorithms by David MacKay (ISBN 0521642981)
- An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani (ISBN 0262111934)

For reinforcement learning (RL), here are some good books that you can consult:

- Markov Decision Processes: Discrete Stochastic Dynamic Programming, by Martin Puterman.
- Reinforcement Learning: An Introduction, by Rich Sutton and Andrew Barto. (draft available online)
- Algorithms of Reinforcement Learning, by Csaba Szepesvari. (pdf available online)
- Neuro-Dynamic Programming, by Dimitri Bertsekas and John Tsitsiklis.

Papers to be discussed will be made available ahead of time.

- Useful inequalities cheat sheet (by László Kozma)
- Concentration of measure (by John Lafferty, Han Liu, and Larry Wasserman)

- Learning theory (traditional and modern)
  - PAC learning basics
  - Boosting theory
  - PAC learning in neural nets
- Latent variable graphical models
  - Graphical model basics
  - Spectral methods: matrix/tensor decomposition
- Reinforcement learning theory
  - RL overview: algorithms and analyses
  - RL theory: sample complexity