Learning Decision Making Systems under (Adversarial) Distribution Shifts
IRB 0318
Also on zoom - https://umd.zoom.us/j/96718034173?pwd=clNJRks5SzNUcGVxYmxkcVJGNDB4dz09

Reinforcement learning is an effective way to model interactive real-time decision-making processes, with applications in robotics, personalized healthcare, autonomous driving, and market making. In reinforcement learning, the agent must explore an unknown environment in order to make informed decisions. However, the lengthy exploration required during learning can expose agents to danger, as they often take suboptimal or even random actions. In addition, the agent can fail catastrophically under perturbations or even adversarial attacks. In this talk, I will outline methods for learning to learn, learning to adapt, and learning to generalize in RL. I will also present a robust learning paradigm for RL decision-making systems under adversarial perturbations.