Large Language Models: Helpful Assistants, Romantic Partners or Con Artists?

In a Q&A session at this year's AAAS Annual Meeting, Hal Daumé III spoke about AI's role in society.

The Computing Community Consortium (CCC) supported three scientific sessions at this year's AAAS Annual Meeting, and a highlight of the event was the Q&A portion of the session, "Large Language Models: Helpful Assistants, Romantic Partners or Con Artists?" The panel, moderated by Maria Gini, CCC Council Member and Computer Science & Engineering professor at the University of Minnesota, featured Ece Kamar, Managing Director of AI Frontiers at Microsoft Research; Hal Daumé III, Computer Science professor at the University of Maryland; and Jonathan May, professor at the University of Southern California.

Is AI capable of love? What kinds of impacts might these models have on kids? How do the United States' AI capabilities stack up? Find out below:

Question: Regarding overrepresentation bias in answers: do we know where it comes from? I'm a math guy, and my thought is that it's a compounding of rounding errors that adds bias. If the training data had equal representation, I'd imagine the model would output equal representation, or would the bias still be there?

Daumé: One of the challenges with these models is that there are no narrow AI models anymore. They say they can do anything, so it’s hard to test everything.

Question: You mentioned AI being either a tool or a replacement. Which way do you see it going?

Daumé: There is more money going into replacement.

Question: I want to hear advice for people not in the field of AI. How can we engage with these tools? What should we know?

Daumé: At the University of Maryland, we are having these conversations a lot. It's easy for me to say that journalism will be different in 5 years, and other fields too. It's uncomfortable to say that the role of professor will be different in 5 years, but it will. I have colleagues who use different LLM plug-ins for proposals and papers; it is already happening. I regularly have exam questions written by these tools, but I have to check them for accuracy. Writing exam questions doesn't bring me joy, so AI can take it off my plate. In higher education, we have to think about it more. How is it transforming our jobs? There are a lot of discussions going on at universities, but not a lot of pooling of resources.

