More than 13,000 artificial intelligence mavens flocked to Vancouver this week for the world’s leading academic AI conference, NeurIPS. The venue included a maze of corporate booths aiming to lure recruits for projects like software that plays doctor. Google handed out free luggage scales and socks depicting the colorful bikes employees ride on its campus, while IBM offered hats emblazoned with “I ❤️A👁.”
Tuesday night, Google and Uber hosted well-lubricated, over-subscribed parties. At a bleary 8:30 the next morning, one of Google’s top researchers gave a keynote with a sobering message about AI’s future.
Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that improve at a specific task through experience or by seeing labeled examples of correct answers.
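For readers curious what "learning from labeled examples" looks like in practice, here is a minimal, hedged sketch (not drawn from Aguera y Arcas's work): a tiny two-layer neural network, written in plain NumPy, nudged toward the correct answers for the XOR problem by gradient descent. All names and numbers here are illustrative choices, not anything described in the article.

```python
# Toy supervised learning: a tiny neural network fit to labeled XOR examples.
import numpy as np

rng = np.random.default_rng(0)

# Labeled examples: inputs X and the "correct answers" y the network should learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: a 4-unit hidden layer and a 1-unit output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Compare guesses with the labels, then backpropagate the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust the weights slightly in the direction that reduces the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # after training, typically close to [[0], [1], [1], [0]]
```

The point of the sketch is the shape of the loop: guess, compare against labeled answers, adjust, repeat. It is exactly this kind of narrow, score-driven training that the keynotes at NeurIPS were questioning.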
“We’re kind of like the dog who caught the car,” Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but it doesn’t immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. “All of the models that we have learned how to train are about passing a test or winning a game with a score, [but] so many things that intelligences do aren’t covered by that rubric at all,” he said.
Hours later, one of the three researchers seen as the godfathers of deep learning also pointed to the limitations of the technology he had helped bring into the world. Yoshua Bengio, director of Mila, an AI institute in Montreal, recently shared the highest prize in computing with two other researchers for starting the deep learning revolution.
But he noted that the technique yields highly specialized results; a system trained to show superhuman performance at one videogame is incapable of playing any other. “We have machines that learn in a very narrow way,” Bengio said. “They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”
Bengio and Aguera y Arcas both urged NeurIPS attendees to think more about the biological roots of natural intelligence. Aguera y Arcas showed results from experiments in which simulated bacteria adapted to seek food and communicate through a form of artificial evolution. Bengio discussed early work on making deep learning systems flexible enough to handle situations very different from those they were trained on, and made an analogy to how humans can handle new scenarios like driving in a different city or country.
The cautionary keynotes at NeurIPS come at a time when investment in AI has never been higher. Venture capitalists sank nearly $40 billion into AI and machine learning companies in 2018, according to Pitchbook, roughly twice the figure in 2017.
Discussion of the limitations of existing AI technology is growing too. Optimism from Google and others that self-driving taxi fleets could be deployed relatively quickly has been replaced by fuzzier and more restrained expectations. Facebook’s director of AI said recently that his company and others should not expect to keep making progress in AI just by making bigger deep learning systems with more computing power and data. “At some point we’re going to hit the wall,” he said. “In many ways we already have.”
Some people at NeurIPS are working to climb or burrow under that wall. Jeff Clune, a researcher at Uber who will join the nonprofit institute OpenAI next year, welcomed Bengio’s high-profile call to think beyond the recent, narrow successes of deep learning.
There are practical as well as scientific reasons to do so, he says. More general and flexible AI will help autonomous robots or other systems be more reliable and safe. “There’s a great business case for it,” he says.
Clune was due to present Friday on the idea of making smarter AI by turning the technology in on itself. He’s part of an emerging field called metalearning, concerned with crafting learning algorithms that can devise their own learning algorithms. He has also created systems that generate constantly changing environments to challenge AI systems and prod them to extend themselves.
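To give a rough flavor of the metalearning idea, here is a hedged toy sketch, not Clune's actual method: an outer "meta" loop searches over settings of the inner learning algorithm itself (here, just its step size), judging each candidate by how well the inner learner does on a simple task. Every detail is an illustrative assumption.

```python
# Cartoon of metalearning: an outer loop that searches over learning algorithms,
# here reduced to searching over a single knob (the inner learner's step size).
import numpy as np

rng = np.random.default_rng(1)

def inner_learner(lr, steps=50):
    """Inner learning algorithm: gradient descent on f(w) = (w - 3)^2."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3.0)
        w -= lr * grad
    return (w - 3.0) ** 2  # final loss: lower means a better learner

# Outer "meta" loop: propose candidate inner learners, run each one,
# and keep whichever configuration learned the task best.
best_lr, best_loss = None, float("inf")
for _ in range(20):
    candidate = 10 ** rng.uniform(-4, 0)   # random step size between 1e-4 and 1
    loss = inner_learner(candidate)
    if loss < best_loss:
        best_lr, best_loss = candidate, loss

print(f"meta-search picked lr={best_lr:.4f} with final loss {best_loss:.6f}")
```

Real metalearning research replaces the single knob with whole learned update rules or architectures, but the nesting, a learner whose job is to improve other learners, is the same.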
Like Aguera y Arcas, Clune says AI researchers should see the way nature generates endless new variety as an inspiration and a benchmark. “We as computer scientists don’t know any algorithms that you would want to run for a billion years and would still do something interesting,” Clune says.
As thousands of AI experts shuffled away from Bengio’s packed talk Wednesday, Irina Rish, an associate professor at the University of Montreal who is also affiliated with Mila, was hopeful his words would help create space and support for new ideas at a conference that has become dominated by the success of deep learning. “Deep learning is great, but we need a toolbox of different algorithms,” she says.
Rish recalls attending an unofficial workshop on deep learning at the 2006 edition of the conference, when it was less than one-sixth its current size and organizers rejected the idea of admitting the then-fringe technique into the official program. “It was a bit of a religious meeting—believers gathered in a room,” Rish recalls. She hopes that somewhere at NeurIPS this year are early devotees of ideas that can take AI to broader new heights.