AI Researcher Anca Dragan on Helping Robots Understand Humans

When humans and robots cross paths, the results aren’t just frustrating (the autonomous car, say, that’s too shy to turn left); they can also be fatal. Consider last year’s Uber crash, in which the self-driving system wasn’t designed to account for a jaywalking pedestrian.

At the WIRED25 conference Friday, Anca Dragan, a professor who studies human-robot interaction at UC Berkeley, spoke about what it takes to avoid those kinds of problems. Her interest is in what happens when robots graduate beyond virtual worlds and wide-open test tracks, and start dealing with unpredictable humans.

“It turns out that really complicates matters,” she says.

The issues go beyond simply teaching robots to treat humans as obstacles to be avoided. Instead, robots need to be given a predictive model of how humans behave. That isn’t easy; even to each other, humans are basically black boxes. But the work done in Dragan’s lab revolves around a fundamental insight: “Humans are not arbitrary, because we’re actually intentional beings,” she says. Her group designs algorithms that help robots figure out our goals: that we’re trying to reach that door or pass on the freeway or take that turn. From there, a robot can begin to infer what actions we’ll take to get there and how best to avoid cutting us off.
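One common way to operationalize that insight is probabilistic goal inference: watch a person’s movements, score how consistent each movement is with each candidate goal, and update a belief over goals. Purely as an illustration of the idea, not Dragan’s group’s actual code, here is a minimal Python sketch; the goal positions, the BETA rationality constant, and the Boltzmann-style likelihood are all assumptions invented for the example.

```python
import numpy as np

# Hypothetical candidate goals a person might be heading toward.
GOALS = {"door": np.array([10.0, 0.0]), "crosswalk": np.array([0.0, 10.0])}
BETA = 2.0  # assumed "rationality": higher = human treated as more goal-directed

def step_likelihood(pos, step, goal, beta=BETA):
    """Boltzmann-style likelihood of one observed step given a goal:
    steps that make more progress toward the goal are exponentially more likely."""
    progress = np.linalg.norm(pos - goal) - np.linalg.norm(pos + step - goal)
    return np.exp(beta * progress)

def infer_goal(positions):
    """Belief over goals after watching a sequence of observed positions."""
    belief = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior
    for pos, nxt in zip(positions[:-1], positions[1:]):
        step = nxt - pos
        for name, goal in GOALS.items():
            belief[name] *= step_likelihood(pos, step, goal)
        total = sum(belief.values())
        belief = {name: b / total for name, b in belief.items()}
    return belief

# Each observed step moves mostly toward the door, so belief shifts there.
trajectory = [np.array([0.0, 0.0]), np.array([1.0, 0.2]), np.array([2.0, 0.3])]
print(infer_goal(trajectory))  # belief should heavily favor "door"
```

Once the belief concentrates on one goal, the robot can forward-simulate the paths a person is likely to take toward it and plan its own motion around them, which is the inference step the article describes.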

It’s like that song, Dragan says: “Every step you take, every move you make” reveals your desires and intentions, and hints at the moves you’ll make next to get there.

Still, sometimes it’s impossible for robots and humans to figure out what the other will do next. Dragan gives the example of a robot driver and a human one pulling up to an intersection at exactly the same moment. How do you avoid a stalemate or crash? One potential fix is to teach robots social cues. Dragan might have the robocar inch back a bit, a signal to the human driver that it’s OK for them to go first. It’s one step toward getting us all to play a bit nicer.
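To make that gesture concrete, here is a toy sketch of how yield-by-inching could look as a control policy. Everything here (the phase names, the arrived_together and human_is_moving flags, the velocity values) is invented for illustration and is not the behavior of any real autonomous-driving stack.

```python
import enum

class Phase(enum.Enum):
    APPROACH = "approach"
    SIGNAL_YIELD = "signal_yield"
    WAIT = "wait"
    PROCEED = "proceed"

def intersection_tick(phase, arrived_together, human_is_moving):
    """One control tick of a toy yield-by-inching policy.
    Returns (next_phase, velocity_command); negative velocity means inch back."""
    if phase is Phase.APPROACH:
        if arrived_together:
            return Phase.SIGNAL_YIELD, -0.2  # inch back: a legible "after you"
        return Phase.PROCEED, 1.0            # no conflict, just go
    if phase is Phase.SIGNAL_YIELD:
        return Phase.WAIT, 0.0               # hold still so the gesture registers
    if phase is Phase.WAIT:
        if human_is_moving:
            return Phase.WAIT, 0.0           # keep yielding until they clear
        return Phase.PROCEED, 1.0            # human cleared or declined; take the turn
    return Phase.PROCEED, 1.0                # already committed

# Example: both cars arrive at once; the robot backs off, waits, then goes.
phase = Phase.APPROACH
for sensors in [(True, False), (True, True), (True, True), (True, False)]:
    phase, v = intersection_tick(phase, *sensors)
    print(phase.name, v)
```

The point of the backward nudge is legibility rather than efficiency: the motion itself communicates the robot’s intent, much as a human driver’s wave or hesitation does.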

