CMSC 723, Fall 2025, UMD
Assignments
- HW 0 available here, due 9/17
- Project proposal template available here, due 10/6
- HW 1 available here, due 10/15
- Exam 1 scheduled for 10/20 in-class
- HW 2 available here, due 12/7
- Exam 2 scheduled for 12/10 in-class
- Project report template available here, due 12/20
Schedule
Reload this page to make sure you're seeing the latest version.
Readings should be done before watching the corresponding lecture videos. See this page for materials (videos / slides / readings) from the Spring 2024 offering.
Week 1 (9/3): Introduction, language modeling
- Course introduction // [video] // [notes]
- No associated readings!
- HW 0 released here, due 9/17
- Final projects: proposal due 10/6, report due 12/20 (see templates above)
Week 2 (9/8-10): Language models, neural networks
Week 3 (9/15-17): Neural language models, backpropagation
Week 4 (9/22-24): Transformer language models
Week 5 (9/29-10/1): Training stages, tokenization
Week 6 (10/6): Position encoding & efficient attention
- No class 10/8 (Mohit traveling)
Week 7 (10/15): Scaling laws
- No class 10/13 (Fall break)
Week 8 (10/20-22): In-class midterm, instruction tuning
- Instruction tuning / PEFT // [video] // [notes]
- [reading] Instruction tuning (Wei et al., 2022, FLAN)
- [reading] LoRA: Low-Rank Adaptation of Large Language Models (Hu et al., 2021)
- [optional reading] Power of Scale for Prompt Tuning (Lester et al., EMNLP 2021)
Week 9 (10/27-29): Post-training, RLHF
Week 10 (11/3-5): RL for LLMs continued
Week 11 (11/10-12): Reasoning models
- Reasoning models (cont'd) // [video] // [notes]
- [reading] s1: Simple test-time scaling (Muennighoff et al., 2025)
- [reading] DeepSeek-R1 Thoughtology (Marjanovic et al., 2025)
Week 12 (11/17-19): LLM evaluation / RAG
Week 13 (11/24): LLM agents
Week 14 (12/1-3): LLM security & memorization
- LLM security // [video] // [notes]
- [reading] Misalignment from reward hacking (MacDiarmid et al., 2025)
- [codebase] "God-mode" jailbreak prompts for LLMs
- [reading] Defending against universal jailbreaks (Sharma et al., 2025)
- Memorization & factuality // [video] // [notes]
- [reading] How much do language models memorize? (Morris et al., 2025)
- [reading] FactScore (Min et al., EMNLP 2023)
Week 15 (12/8): Exam review