Hal Daumé III Navigates the Risks and Ethical Challenges of Artificial Intelligence
Hal Daumé III, a professor in the University of Maryland's Department of Computer Science, leads the new $20 million National Science Foundation (NSF) Institute for Trustworthy AI in Law and Society (TRAILS). With a personal passion for artificial intelligence (AI) that spans decades, Daumé is driven by the opportunity to explore the complexities of the human mind and the profound impact of AI on individuals.
The institute he leads aims to establish ethical practices and responsible use of AI through a comprehensive approach, which includes broader participation in AI design, technological advancements and informed governance of AI-infused systems.
Growing concerns surround the widespread adoption of AI as its potential misuse and negative consequences come into focus. As AI technologies become more prevalent, questions arise over issues such as bias, fairness, privacy and the technology's overall impact on society.
Addressing the growing concerns about AI's potential misuse and negative consequences, TRAILS aims to prioritize trustworthiness and ethical considerations in developing and deploying AI. Daumé, known for his expertise in the field, emphasizes the importance of mitigating the risks associated with AI sooner rather than later.
"To create a safer society with AI, we must first understand the risks by engaging with the people most impacted by the technology," says Daumé, who holds a joint appointment in the University of Maryland Institute for Advanced Computer Studies and a Volpi-Cupal Family Endowed Professorship. “Enhancing people’s ability to make informed decisions about when and how to use AI systems based on a reasonable assessment of the risks is crucial. This involves societal literacy and AI development.”
Taking a multidisciplinary approach, TRAILS will collaborate with experts from UMD, George Washington University and Morgan State University in various fields, including computer science, engineering, social sciences, humanities, education, journalism and law. By integrating diverse perspectives, the institute aims to develop frameworks prioritizing fairness, justice, privacy and transparency.
"AI provides an opportunity to leverage advanced technologies and computational power to tackle complex challenges,” Daumé says. “It has the potential to transform industries, optimize decision-making processes and facilitate breakthroughs that were previously unimaginable."
However, as the scope of AI expands, concerns have arisen regarding its potential misuse and unintended consequences. Daumé emphasizes the need for responsible AI development and usage to prevent adverse outcomes.
"The potential misuse of AI is a significant concern," Daumé says. "We must proactively address issues such as bias, fairness and privacy to ensure responsible development and deployment of AI, safeguarding against unintended consequences."
Daumé highlights two specific misuse cases that require immediate attention as AI becomes more adept at gathering information and making inferences: election interference and hacking. These same capabilities also lower the barrier to spear-phishing campaigns, putting them within reach of less skilled attackers, a prospect Daumé considers dangerous.
Such concerns underscore the importance of the Institute for Trustworthy AI in Law and Society, which spans one of the broadest ranges of disciplines among AI institutes. Daumé believes this interdisciplinary work is crucial for making progress.
By placing ethics and inclusivity at the forefront of AI design, technology development and governance, TRAILS strives to shape a future where artificial intelligence fosters a fair, accountable and transparent society. Through visionary leadership and a commitment to a just and equitable future, Daumé and TRAILS aim to set a powerful example for the responsible integration of AI into the lives of millions.
—Story by Samuel Malede Zewdu, CS Communications
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.