CS Ph.D. Student Ming Li Named 2026 Apple Scholar
Ming Li, a Ph.D. student in the University of Maryland’s Department of Computer Science, has been named a 2026 recipient of the Apple Scholars in AI/ML PhD Fellowship, a competitive international program that supports doctoral students conducting research in artificial intelligence and machine learning.
Li is among a small group of doctoral students worldwide selected for the fellowship this year. The program recognizes emerging researchers whose work shows strong potential to advance academic research in AI and machine learning, while also supporting their development through funding, mentorship and industry engagement.
The fellowship provides two years of financial support, including coverage of tuition and fees and an annual stipend of up to $40,000. Fellows also receive a $5,000 annual travel stipend, along with opportunities to connect with Apple research mentors and pursue internships with the company. According to the program, Li’s nomination stood out in a highly competitive pool reviewed by Apple’s selection committee.
“I am deeply honored to receive the Apple Fellowship,” Li said. “This recognition is incredibly meaningful to me, as it supports research that bridges theoretical understanding and practical impact in large language models. I am grateful for the mentorship and support from my advisor, Professor Tianyi Zhou, and the research community at UMD. This fellowship motivates me to continue pursuing research that advances trustworthy and effective AI systems.”
Li’s research centers on data-centric AI, with a particular focus on post-training methods and the interpretability of large language models. His work examines how training data is generated, evaluated and analyzed, with the goal of improving how models learn, reason and behave after deployment. By studying the relationships among data, training procedures and model outputs, Li aims to improve the reliability and controllability of modern language models.
Li noted that his research carries broader societal implications beyond the lab.
“My work aims to make large language models more reliable, interpretable and aligned with human cognition,” he said. “By improving how data is generated, evaluated and used in training, this research can help reduce unexpected model behaviors and enhance reasoning capabilities. In the long term, these advances can support the responsible deployment of AI systems in high-stakes domains such as education, science and decision-making.”
Looking ahead, Li plans to build on his current research agenda by further exploring post-training methods and interpretability techniques for large language models.
“I plan to continue working on post-training and interpretability for improving large language models,” Li said. “I am interested in developing methods that enable more controllable and transparent AI systems, and in exploring how these techniques can be applied to real-world applications.”
—Story by Samuel Malede Zewdu, CS Communications
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.
