Furong Huang on Building Trustworthy AI Systems
University of Maryland Associate Professor of Computer Science Furong Huang focuses her research on developing robust and trustworthy artificial intelligence systems, with work spanning machine learning, reinforcement learning and robotics. Her research aims to ensure that AI systems, both digital and physical, operate reliably and remain aligned with human intentions and social values. Her lab recently expanded to include a dedicated physical space with robot arms, quadrupeds and mobile manipulators, allowing her team to explore real-world applications of AI.
In this interview, Huang discusses her career path, current research directions and her perspective on responsible AI development and student preparation in a fast-changing field.
Was there a defining moment that shaped your career path into computer science?
In 2010, I came to the United States from China for my Ph.D., and my background was in electrical engineering. I was working on topics like cognitive radio, wireless communication and resource allocation. In my first year, my advisor suggested that I take a machine learning course. At that time, I had never heard of machine learning and did not expect to move into computer science.
I took a course at the University of California, Irvine, and that experience changed my direction. I became interested in machine learning, which was also connected to statistics and what we now call AI. That was an important moment, but my path has included several pivots. I have moved across areas such as unsupervised learning, reinforcement learning and more recently robotics.
Can you tell me about your current research focus?
My research has several directions, but overall I focus on building robust and trustworthy AI agents. These agents can be digital or physical, and the goal is to ensure they are aligned with human intentions and social values.
As AI systems become more capable, it is important to understand how they behave and to ensure they do not produce unintended outcomes. That includes making sure models are interpretable and that their actions can be anticipated.
What are you currently working on?
We are working on both digital and physical agents, but one area I am particularly interested in is what we call world models. There is a lot of discussion about world models, often related to generating images or videos. My approach is different.
I think of a world model as a way to anticipate the consequences of actions. It is about understanding how an agent interacts with its environment and predicting what will happen next. It is similar to imagining future outcomes based on current decisions.
We aim to build these models efficiently so they can work with limited data. This has applications in both digital systems and robotics, where predicting outcomes is essential.
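To make that idea concrete: a world model in this sense is, at minimum, a learned transition function that maps a state and an action to a predicted next state, and that can be queried repeatedly to "imagine" a trajectory without touching the real environment. The Python sketch below is purely illustrative and is not code from Huang's lab; the toy dynamics, the linear least-squares model, and helper names such as `true_step` and `imagine_rollout` are assumptions chosen for brevity.

```python
import numpy as np

# Hypothetical toy setup: a 2-D point mass whose true (hidden) dynamics are
# s' = s + 0.1 * a. The world model only ever sees transition tuples.
rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 2, 2

def true_step(state, action):
    """Ground-truth environment dynamics, unknown to the model."""
    return state + 0.1 * action

# Collect a small transition dataset, mimicking the limited-data setting.
states = rng.normal(size=(200, STATE_DIM))
actions = rng.normal(size=(200, ACTION_DIM))
next_states = true_step(states, actions)

# Fit a linear one-step dynamics model: s' ~ [s, a] @ W (least squares).
inputs = np.hstack([states, actions])
W, *_ = np.linalg.lstsq(inputs, next_states, rcond=None)

def imagine_rollout(state, planned_actions):
    """'Imagine' future states under a sequence of actions using only
    the learned model -- the core use of a world model."""
    trajectory = [state]
    for action in planned_actions:
        state = np.hstack([state, action]) @ W
        trajectory.append(state)
    return np.array(trajectory)

plan = [np.array([1.0, 0.0])] * 5  # push right five times
print(imagine_rollout(np.zeros(STATE_DIM), plan))
```

In practice the learned model is typically a neural network trained on real interaction data, but the loop is the same: fit a transition function from experience, then plan or evaluate decisions against the model rather than the real world.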
Can you tell me more about your lab and the work taking place there?
Recently, I established a physical lab space. Before that, my group operated more virtually, with students working across different locations. Now we have a shared space with robotic platforms, including robot arms, quadrupeds and mobile manipulators.
One project involves using robots to explore unknown environments. For example, if a building has no map, a robot can navigate using sensors and cameras, collecting information and building a representation of the environment as it goes.

This work has practical implications: in environments that may be unsafe for humans, robots can be sent in to gather information. The challenge is enabling the robot to navigate efficiently and interpret what it observes, which connects back to world models.
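The interview does not specify a method, but a standard formulation of this task is frontier-based exploration: the robot maintains an occupancy grid and repeatedly drives toward the boundary between mapped free space and still-unknown space. A minimal sketch, with an illustrative cell encoding and helper name that are not drawn from the lab's code:

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1  # illustrative occupancy-grid encoding

def frontier_cells(grid):
    """Return the (row, col) free cells that border unknown space.
    These are the candidate goals in frontier-based exploration."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr, nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

# Tiny example: the robot has observed one free corridor (row 1);
# everything else is still unknown.
grid = np.full((4, 4), UNKNOWN)
grid[1, :3] = FREE
print(frontier_cells(grid))  # free cells touching unknown space
```

A full system would couple this with a path planner that routes the robot to a chosen frontier and with sensor updates that mark cells free or occupied as new observations arrive.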
How does your work connect to the broader computer science community or society?
While robotics is one application, my work is more broadly about ensuring AI systems are used responsibly. There are concerns about how advanced AI models could be misused, whether by gaining access to sensitive systems or by being applied in harmful ways.
My goal is to develop systems that are trustworthy and aligned with intended use. This includes preventing misuse and designing systems that behave in predictable ways. These questions are important not only within computer science but also for society, as AI becomes more integrated into daily life.
What inspired you to join the University of Maryland?
UMD has a strong computer science department, particularly in AI-related areas such as computer vision, natural language processing and robotics. It offers opportunities to collaborate across different research areas.
The environment is collaborative, with many researchers working on related problems. That makes it possible to build projects that draw on different perspectives.
What advice would you give to students interested in AI research?
The field is moving very quickly, and many students feel pressure because of that. There is a perception that academia is moving more slowly than industry, which can create uncertainty.
I think the most important skill is the ability to adapt and continue learning. New research is published every day, and it is not possible to follow everything. Students should focus on building strong foundations and learning how to adjust.
It is also important to approach research with flexibility. Sometimes others may publish similar ideas or directions may shift. Being able to rethink your approach and find new angles is part of the process.
—Story by Samuel Malede Zewdu, CS Communications
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.
