- AI can excel at specific narrow tasks, such as playing chess, but it struggles to do more than one thing well.
- While AI still has a long way to go before anything like human-level intelligence is achieved, that hasn't stopped the likes of Google, Facebook and Amazon from investing billions of dollars.
Machines are getting smarter and smarter every year, but artificial intelligence is yet to live up to the hype that's been generated by some of the world's largest technology companies.
AI can excel at specific narrow tasks, such as playing chess, but it struggles to do more than one thing well. A seven-year-old has far broader intelligence than any of today's AI systems, for example.
"AI algorithms are good at approaching individual tasks, or tasks that include a small degree of variability," Edward Grefenstette, a research scientist at Meta AI, formerly Facebook AI Research, told CNBC.
"However, the real world encompasses significant potential for change, a dynamic which we are bad at capturing within our training algorithms, yielding brittle intelligence," he added.
AI researchers have started to show that there are ways to efficiently adapt AI training methods to changing environments or tasks, resulting in more robust agents, Grefenstette said. He believes there will be more industrial and scientific applications of such methods this year that will produce "noticeable leaps."
While AI still has a long way to go before anything like human-level intelligence is achieved, that hasn't stopped the likes of Google, Facebook (now Meta) and Amazon from investing billions of dollars in hiring talented AI researchers who could potentially improve everything from search engines and voice assistants to aspects of the so-called "metaverse."
Anthropologist Beth Singler, who studies AI and robots at the University of Cambridge, told CNBC that claims about the effectiveness and reality of AI in spaces now being labeled as the metaverse will become more commonplace in 2022, as more money is invested in the area and the public starts to recognize the "metaverse" as a term and a concept.
Singler also warned that there could be "too little discussion" in 2022 of the effect of the metaverse on people's "identities, communities, and rights."
Gary Marcus, a scientist who sold an AI start-up to Uber and is currently executive chairman of another firm called Robust AI, told CNBC that the most important AI breakthrough in 2022 will likely be one that the world doesn't immediately see.
"The cycle from lab discovery to practicality can take years," he said, adding that the field of deep learning still has a long way to go. Deep learning is an area of AI that attempts to mimic the activity in layers of neurons in the brain to learn how to recognize complex patterns in data.
Marcus believes the most important challenge for AI right now is to "find a good way of combining all the world's immense knowledge of science and technology" with deep learning. At the moment "deep learning can't leverage all that knowledge and instead is stuck again and again trying to learn everything from scratch," he said.
"I predict there will be progress on this problem this year that will ultimately be transformational, towards what I called hybrid systems, but that it'll be another few years before we see major dividends," Marcus added. "The thing that we probably will see this year or next is the first medicine in which AI played a substantial role in the discovery process."
DeepMind's next steps
One of the biggest AI breakthroughs in the last couple of years has come from London-headquartered research lab DeepMind, which is owned by Alphabet.
The company has successfully created AI software that can accurately predict the structure that proteins will fold into in a matter of days, solving a 50-year-old "grand challenge" that could pave the way for better understanding of diseases and drug discovery.
Neil Lawrence, a professor of machine learning at the University of Cambridge, told CNBC that he expects to see DeepMind target more big science questions in 2022.
Language models (AI systems that can generate convincing text, converse with humans, respond to questions and more) are also set to improve in 2022.
The best-known language model is OpenAI's GPT-3 but DeepMind said in December that its new "RETRO" language model can beat others 25 times its size.
Catherine Breslin, a machine learning scientist who used to work on Amazon Alexa, thinks Big Tech will race toward larger and larger language models next year.
Breslin, who now runs AI consultancy firm Kingfisher Labs, told CNBC that there will also be a move toward models that combine vision, speech and language capabilities, rather than treating them as separate tasks.
Nathan Benaich, a venture capitalist with Air Street Capital and the co-author of the annual State of AI report, told CNBC that a new breed of companies will likely use language models to predict the most effective RNA (ribonucleic acid) sequences.
"Last year we witnessed the impact of RNA technologies as novel covid vaccines, many of them built on this technology, brought an end to nation-wide lockdowns," he said. "This year, I believe we will see a new crop of AI-first RNA therapeutic companies. Using language models to predict the most effective RNA sequences to target a disease of interest, these new companies could dramatically speed up the time it takes to discover new drugs and vaccines."
Ethical concerns
While a number of advancements could be around the corner, there are major concerns around the ethics of AI, which can be highly discriminatory and biased when trained on certain datasets. AI systems are also being used to power autonomous weapons and to generate fake pornography.
Verena Rieser, a professor of conversational AI at Heriot-Watt University in Edinburgh, told CNBC that there will be a stronger focus on ethical questions around AI in 2022.
"I don't know whether AI will be able to do much 'new' stuff by the end of 2022 but hopefully it will do it better," she said, adding that this means it would be fairer, less biased and more inclusive.
Samim Winiger, an independent AI researcher who used to work for a Big Tech firm, added that he believes there will be revelations around the use of machine learning models in financial markets, spying, and health care.
"It will raise major questions about privacy, legality, ethics and economics," he told CNBC.