PhD Defense: Multi-Agent Autonomous Decision Making in Artificial Intelligence

Talk
Saptarashmi Bandyopadhyay
Time: 
05.27.2025 14:00 to 16:00
Location: 

Multi-Agent Autonomous Decision Making, especially Multi-Agent Reinforcement Learning (MARL), is an emerging area of Artificial Intelligence (AI) in which autonomous agents interact with one another in real-world problems such as Augmented Reality (AR), Recommender Systems, Supply Chains, Climate Conservation, and Self-Driving Cars. Key challenges include efficiently scaling coordination to many agents and understanding agentic behavior. This PhD thesis provides formal guarantees for reliable cooperation of socially intelligent AI Agents, defines agentic behavior with game-theoretic notions, and establishes stronger bounds than Imitation Learning on the sample complexity an AI Agent needs to select strategies in zero-shot coordination.

Collaboration of AI Agents in alignment with humans has real-world applications in AR, where multimodal AI Agents proactively assist humans with timely and helpful interventions. A novel framework gives autonomous-vehicle agents a semantic understanding of the world so they can efficiently control fine-grained actions. A new intrinsic reward improves exploration and maximizes profit by preventing unnecessary interactions among AI Agents during inventory planning in Supply Chain Orchestration. These applications motivate understanding how AI Agents make decisions, which is addressed with Explainable AI (XAI) Agents that recommend XAI methods suited to user goals.

Because MARL slows down as the number of agents grows, JAX-enabled hardware acceleration is shown to be up to 12,500x faster than existing libraries in eight popular MARL environments. The effectiveness of AI Agents can be further improved by combining the proposed MARL, Imitation Learning, and Game-Theoretic algorithms to solve problems in real-world and simulated environments.
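The speedups from JAX-enabled hardware acceleration typically come from compiling and vectorizing environment steps so that many MARL environments run in parallel on an accelerator. The following is a minimal sketch of that mechanism, not the thesis code: `env_step` is a hypothetical toy multi-agent environment, batched with `jax.vmap` and compiled with `jax.jit`.

```python
import jax
import jax.numpy as jnp


def env_step(state, actions):
    # Hypothetical toy environment: each agent moves along a line, and the
    # shared reward is the negative total distance of agents from the origin.
    new_state = state + actions
    reward = -jnp.abs(new_state).sum()
    return new_state, reward


# Vectorize one step across a batch of independent environments, then
# compile the batched step so it runs as a single accelerator kernel.
batched_step = jax.jit(jax.vmap(env_step))

n_envs, n_agents = 1024, 4
states = jnp.zeros((n_envs, n_agents))
actions = jnp.ones((n_envs, n_agents))

# One call advances all 1024 environments (4 agents each) at once.
next_states, rewards = batched_step(states, actions)
print(next_states.shape, rewards.shape)  # (1024, 4) (1024,)
```

Running thousands of environments per call like this, rather than stepping each one in a Python loop, is what makes the large end-to-end speedups over CPU-bound MARL libraries possible.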