This AI combo could unlock human-level intelligence
Will computers ever match or surpass human-level intelligence — and, if so, how? When the Association for the Advancement of Artificial Intelligence (AAAI), based in Washington DC, asked its members earlier this year whether neural networks — the current star of artificial-intelligence systems — alone will be enough to hit this goal, the vast majority said no. Instead, most said, a heavy dose of an older kind of AI will be needed to get these systems up to par: symbolic AI.
Sometimes called ‘good old-fashioned AI’, symbolic AI is based on formal rules and an encoding of the logical relationships between concepts. Mathematics is symbolic, for example, as are ‘if–then’ statements and computer coding languages such as Python, along with flow charts or Venn diagrams that map how, say, cats, mammals and animals are conceptually related. Decades ago, symbolic systems were an early front-runner in the AI effort. However, in the early 2010s, they were vastly outpaced by more-flexible neural networks. These machine-learning models excel at learning from vast amounts of data, and they underlie large language models (LLMs) and the chatbots, such as ChatGPT, that are built on them.
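As a toy illustration of that ‘if–then’ style of reasoning (the facts and the single rule below are invented for this example, not taken from any particular system), a symbolic program can encode the cat–mammal–animal hierarchy as explicit facts and chain one transitivity rule forward until nothing new can be derived:

    # Toy symbolic knowledge base: explicit 'is_a' facts over named concepts.
    facts = {("is_a", "cat", "mammal"), ("is_a", "mammal", "animal")}

    def infer(facts):
        """Forward-chain one if-then rule to a fixed point:
        if X is_a Y and Y is_a Z, then X is_a Z."""
        derived = set(facts)
        while True:
            new = {("is_a", x, z)
                   for (_, x, y1) in derived
                   for (_, y2, z) in derived
                   if y1 == y2 and ("is_a", x, z) not in derived}
            if not new:
                return derived
            derived |= new

    # Every conclusion traces back to explicit facts and rules, which is what
    # makes this style of reasoning easy for a human to follow.
    print(("is_a", "cat", "animal") in infer(facts))  # True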
Now, however, the computer-science community is pushing hard for a better and bolder melding of the old and the new. ‘Neurosymbolic AI’ has become the hottest buzzword in town. Brandon Colelough, a Ph.D. student in the Department of Computer Science at the University of Maryland, has documented the meteoric rise of the concept in academic papers (see ‘Going up and up’). These reveal a spike of interest in neurosymbolic AI that started in around 2021 and shows no sign of slowing down.
Plenty of researchers are heralding the trend as an escape from what they see as an unhealthy monopoly of neural networks in AI research, and expect the shift to deliver smarter and more reliable AI.
A better melding of these two strategies could lead to artificial general intelligence (AGI): AI that can reason and generalize its knowledge from one situation to another as well as humans do. Neurosymbolic AI might also be useful for high-risk applications, such as military or medical decision-making, says Colelough. Because symbolic AI is transparent and understandable to humans, he says, it doesn’t suffer from the ‘black box’ syndrome that can make neural networks hard to trust.
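One common way to picture that division of labour (the classifier stub, rule names and thresholds below are invented for illustration, not drawn from any real medical or military system) is to let a learned model supply a perception while explicit rules make, and explain, the final decision:

    # Illustrative neurosymbolic split: a (stubbed) neural model supplies a
    # perception; transparent symbolic rules make and explain the decision.

    def neural_classifier(image_path):
        # Stand-in for a trained neural network, whose inner workings would be
        # opaque; here it simply returns a fixed label and confidence score.
        return "tumour", 0.87

    RULES = [
        ("flag_for_review",     lambda label, conf: label == "tumour" and conf >= 0.8),
        ("request_second_scan", lambda label, conf: label == "tumour" and conf < 0.8),
        ("no_action",           lambda label, conf: label != "tumour"),
    ]

    def decide(image_path):
        label, conf = neural_classifier(image_path)   # learned, black-box part
        for action, applies in RULES:                 # explicit, inspectable part
            if applies(label, conf):
                # The explanation names the values the rule fired on, so the
                # decision can be audited even when the classifier cannot be.
                return action, f"label={label!r}, confidence={conf:.2f}"

    print(decide("scan_001.png"))  # ('flag_for_review', "label='tumour', confidence=0.87")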
Click HERE to read the full article
The Department welcomes comments, suggestions and corrections. Send email to editor [-at-] cs [dot] umd [dot] edu.
