This course page is still under construction; I hope the current information is helpful as you plan your enrollment.
Time and location: Mon/Wed 3:30pm–4:45pm, CSI 1121
Instructor: Han Shao, hanshao@umd.edu
TA: TBD
Homework (35%), midterm exam (30%), participation (5%), final exam (30%).
This course is fully lecture-based. It focuses on foundational tools in learning theory (e.g., generalization in the offline setting and regret bounds in the online setting) and explores active research directions. Machine learning theory asks questions such as: What guarantees can we prove for practical learning methods, and can we design algorithms that achieve these guarantees? What can we say about the inherent ease or difficulty of different learning problems?
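For a rough sense of what these two kinds of guarantees look like (the notation here is illustrative, not the course's official definitions): a generalization bound controls the gap between a learned hypothesis's error on unseen data and its error on the training sample, while a regret bound compares an online learner's cumulative loss to that of the best fixed action in hindsight:

$$
L_{\mathcal{D}}(\hat{h}) \;\le\; L_S(\hat{h}) + \sqrt{\frac{\ln(|\mathcal{H}|/\delta)}{2m}}
\qquad\text{and}\qquad
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell_t(a_t) \;-\; \min_{a}\sum_{t=1}^{T} \ell_t(a).
$$

The first statement holds with probability at least $1-\delta$ for a finite hypothesis class $\mathcal{H}$ and $m$ i.i.d. training examples, and follows from Hoeffding's inequality together with a union bound, exactly the kind of tools listed in the prerequisites below.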
Mathematical maturity and comfort with theorems and proofs are required. Familiarity with probability and statistics (e.g., concentration inequalities, the union bound) and with basic algorithms is expected. No programming is required; all homework and exams consist of proof-based questions.
Han Shao: by appointment (email me), IRB 5132
TA: TBD
| Topic |
|---|
| Logistics & introduction; PAC learning & sample complexity |
| Sample complexity: upper and lower bounds |
| Agnostic learning; uniform convergence and generalization |
| Online learning: mistake bounds; Littlestone dimension |
| Multiplicative weights; regret minimization |
| Linear prediction: perceptron; smoothed analysis |
| Boosting |