Towards Controllable and Personalized AI Models

Talk
Tianyi Zhou
Time: 11.03.2023, 11:00 to 12:00

AI models have made impressive progress in the past few years, most notably the rise of large language models (LLMs), large multimodal models, diffusion models, and the embodied agents built upon them. However, it remains difficult for humans to precisely control these models' learning and inference, which suffer from poor generalization and efficiency, brittleness to low-quality data, hallucination, toxicity, and disinformation in their outputs, and misalignment with human intents. Moreover, aligning these general foundation models with the personal interests, styles, preferences, and tasks of individual users is still challenging, because model training is data-hungry while a single user usually cannot provide sufficient data.

Our lab at UMD is working on building more controllable and personalized AI models that bridge the gap between humans and machines. This talk will introduce our recent efforts on this front, including:

(1) Controllable AI model training via curriculum learning, which automatically designs a sequence of training data or tasks, selected or generated adaptively to the model's state at each learning stage, with applications to finetuning LLMs with data recycling, training embodied reinforcement learning (RL) agents, continual/lifelong learning, etc. (see the first sketch below);

(2) Controllable AI model inference and generation via prompt optimization, in-context learning, module-wise distillation, and dynamic alignment with environments;

(3) Personalization of AI models via structured federated/decentralized learning, multi-objective multi-solution transport, and mixture-of-experts models (see the second sketch below).
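
To make the curriculum-learning idea in item (1) concrete, here is a minimal, self-contained Python sketch. It is not the speaker's method: the toy logistic-regression task, the loss-based easy-to-hard ranking, and the pacing schedule are all assumptions chosen only to illustrate selecting training data adaptively to the current model at each stage.

```python
# A minimal curriculum-learning sketch (illustrative only, not the talk's method):
# at each stage, score every training example by the current model's loss and
# train on the easiest fraction, growing that fraction over time.
# The toy task, the logistic-regression model, and the pacing schedule
# [0.3, 0.5, 0.7, 1.0] are assumptions made for this example.

import math
import random

random.seed(0)

# Toy binary-classification data: label = 1 if x1 + x2 > 0, with label noise.
def make_data(n=200):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        y = 1 if x[0] + x[1] > 0 else 0
        if random.random() < 0.1:          # 10% noisy labels act as "hard" examples
            y = 1 - y
        data.append((x, y))
    return data

def predict(w, b, x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    p = min(max(predict(w, b, x), 1e-7), 1 - 1e-7)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def sgd_step(w, b, x, y, lr=0.1):
    g = predict(w, b, x) - y               # gradient of the logistic loss
    w[0] -= lr * g * x[0]
    w[1] -= lr * g * x[1]
    b -= lr * g
    return w, b

data = make_data()
w, b = [0.0, 0.0], 0.0

# Curriculum: in stage t, keep the fraction of examples the current model finds
# easiest (lowest loss), then take SGD steps only on that subset.
for stage, keep_frac in enumerate([0.3, 0.5, 0.7, 1.0], start=1):
    ranked = sorted(data, key=lambda ex: loss(w, b, ex[0], ex[1]))
    subset = ranked[: int(keep_frac * len(ranked))]
    for _ in range(20):                     # a few epochs per stage
        random.shuffle(subset)
        for x, y in subset:
            w, b = sgd_step(w, b, x, y)
    avg = sum(loss(w, b, x, y) for x, y in data) / len(data)
    print(f"stage {stage}: kept {len(subset)} examples, mean loss {avg:.3f}")
```

The property this sketch highlights is that the training subset is re-ranked with the current model before every stage, so the curriculum adapts as the model improves rather than following a fixed data order.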
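
Similarly, here is a hedged sketch of personalization via a mixture of experts, as in item (3): a few shared experts stay fixed, and each user fits only a lightweight gate on their own small dataset. The linear experts, the synthetic user data, and the softmax gate are illustrative assumptions, not the actual models discussed in the talk.

```python
# A minimal personalization-via-mixture-of-experts sketch (illustrative only):
# shared "expert" regressors are fixed, and each user learns only a small
# per-user gate that mixes the experts' outputs on that user's limited data.
# The experts, the user data, and the softmax gate are assumptions for this example.

import math
import random

random.seed(1)

# Shared experts: simple 1-D linear functions y = a*x + c (imagine these were
# pre-trained on pooled data from many users).
experts = [(1.0, 0.0), (-1.0, 0.5), (0.2, -0.3)]

def expert_outputs(x):
    return [a * x + c for a, c in experts]

def softmax(v):
    m = max(v)
    e = [math.exp(t - m) for t in v]
    s = sum(e)
    return [t / s for t in e]

def personalize(user_data, steps=500, lr=0.5):
    """Fit per-user gate logits by SGD on squared error over the user's data."""
    logits = [0.0] * len(experts)
    for _ in range(steps):
        x, y = random.choice(user_data)
        outs = expert_outputs(x)
        g = softmax(logits)
        pred = sum(gi * oi for gi, oi in zip(g, outs))
        err = pred - y
        # d(pred)/d(logit_k) = g_k * (out_k - pred) for a softmax gate
        for k in range(len(logits)):
            logits[k] -= lr * err * g[k] * (outs[k] - pred)
    return softmax(logits)

# A user whose behavior matches expert 1 (slope -1, intercept 0.5) plus noise;
# only ~20 examples are available, so only the gate is fit, not the experts.
user_data = [(x, -1.0 * x + 0.5 + random.gauss(0, 0.05))
             for x in [random.uniform(-1, 1) for _ in range(20)]]

gate = personalize(user_data)
print("learned gate weights:", [round(g, 2) for g in gate])
```

The design point illustrated here is the division of parameters: the data-hungry experts are shared across users, while each user only has to estimate a handful of gate weights, which a small personal dataset can support.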