Our primary source of readings will be Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, Foundations of Machine Learning (MIT Press, 2012). We will also read papers and study materials that are not yet in textbooks.

Other recommended (but not required) books:

- Boosting: Foundations and Algorithms by Robert E. Schapire and Yoav Freund (ISBN 0262017180)
- Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach (ISBN 1107422221)
- Pattern Recognition and Machine Learning by Chris Bishop (ISBN 0387310738)
- Machine Learning by Tom Mitchell (ISBN 0070428077)
- Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman (ISBN 0387952845)
- Information Theory, Inference and Learning Algorithms by David MacKay (ISBN 0521642981)
- An Introduction to Computational Learning Theory by Michael Kearns and Umesh Vazirani (ISBN 0262111934)

For RL, here is an ICML 2023 tutorial by John Langford and Alex Lamb.

Here are some good RL books that you can consult:

- Markov Decision Processes: Discrete Stochastic Dynamic Programming, by Martin Puterman.
- Reinforcement Learning: An Introduction, by Rich Sutton and Andrew Barto. (draft available online)
- Algorithms of Reinforcement Learning, by Csaba Szepesvari. (pdf available online)
- Neuro-Dynamic Programming, by Dimitri Bertsekas and John Tsitsiklis.

Papers to be discussed will be made available ahead of time.

Useful inequalities cheat sheet (by László Kozma)

Concentration of measure (by John Lafferty, Han Liu, and Larry Wasserman)
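As a representative example of the kind of result covered in both of these references, Hoeffding's inequality bounds the deviation of a sum of bounded independent random variables from its mean:

```latex
% Hoeffding's inequality: for independent X_1, ..., X_n with
% X_i \in [a_i, b_i] almost surely, and S_n = \sum_{i=1}^n X_i,
\Pr\left[\,\left|S_n - \mathbb{E}[S_n]\right| \ge t\,\right]
\;\le\; 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right).
```

Inequalities of this form underlie many of the generalization bounds we will derive in class.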