Google’s AI Chief Wants to Do More With Less (Data)

Whatever the future role of computers in society, Jeff Dean will have a powerful hand in the outcome. As the leader of Google’s sprawling artificial intelligence research group, he steers work that contributes to everything from self-driving cars to domestic robots to Google’s juggernaut online ad business.

WIRED talked with Dean in Vancouver at the world’s leading AI conference, NeurIPS, about his team’s latest explorations—and how Google is trying to put ethical limits on them.

WIRED: You gave a research talk about building new kinds of computers to power machine learning. What new ideas is Google testing?

Jeff Dean: One is using machine learning for the placement and routing of circuits on chips. After you’ve designed a bunch of new circuitry, you have to put it on the chip in an efficient way to optimize for area and power usage and lots of other parameters. Normally human experts do that over many weeks.

You can have a machine learning model essentially learn to play the game of chip placement, and do so pretty effectively. We can get results on par with or better than human experts. We’ve been playing with a bunch of different internal Google chips, things like TPUs [Google’s custom machine learning chips].
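To make the “game” framing concrete, here is a minimal, hypothetical sketch in Python: a handful of blocks are dropped onto a grid, and each finished layout is scored by total wirelength between connected blocks. The grid size, block list, net list, and brute-force random search are all invented for illustration and stand in for the learned placement model Dean describes; this is not Google’s code or method.

```python
import random

# Toy illustration of chip placement as a "game": choose a grid cell for
# each block, then score the finished layout by total wirelength between
# connected blocks. Everything here (grid size, blocks, nets, random
# search) is a made-up stand-in for a learned placement model.

GRID = 8                                  # 8x8 placement grid
BLOCKS = list(range(6))                   # six blocks to place
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]  # block-to-block wires

def wirelength(placement):
    """Sum of Manhattan distances over all connected block pairs."""
    total = 0
    for a, b in NETS:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total

def random_placement():
    """Play one 'episode': drop every block onto a distinct grid cell."""
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                          len(BLOCKS))
    return dict(zip(BLOCKS, cells))

# Brute-force search stands in for learning: keep the best of many episodes.
best = min((random_placement() for _ in range(5000)), key=wirelength)
print("best wirelength:", wirelength(best))
```

In the real setting, the score would also account for area, power, and routing congestion, and a trained model would propose placements instead of random sampling.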

W: More powerful chips have been central to much recent progress in AI. But Facebook’s head of AI recently said this strategy will soon hit a wall. And one of your top researchers this week urged the field to explore new ideas.

JD: There’s still a lot of potential to build more efficient and larger scale computing systems, particularly ones tailored for machine learning. And I think the basic research that has been done in the last five or six years still has a lot of room to be applied in all the ways that it should be. We’ll collaborate with our Google product colleagues to get a lot of these things out into real-world uses.

But we’re also looking at the next major problems on the horizon, given what we can do today and what we can’t. We want to build systems that can generalize to a new task. Being able to do things with much less data and with much less computation is going to be interesting and important.

W: Another challenge getting attention at NeurIPS is ethical questions raised by some AI applications. Google announced a set of AI ethics principles 18 months ago, after protests over a Pentagon AI project called Maven. How has AI work at Google changed since?

JD: I think there’s much better understanding across all of Google about how we go about putting these principles into effect. We have a process by which product teams thinking of using machine learning in some way can get early opinions before they’ve designed the entire system, like how you should go about collecting data to ensure that it’s not biased, or things like that.

We’ve also obviously continued to push on the research directions that are embodied in the principles. We’ve done quite a lot of work on bias, fairness, and privacy in machine learning.

W: The principles rule out work on weapons but allow for government business—including defense projects. Has Google started any new military projects since Maven?

JD: We’re happy to work with military or other government agencies in ways that are consistent with our principles. So if we want to help improve the safety of Coast Guard personnel, that’s the kind of thing we’d be happy to work on. The cloud teams tend to engage in that, because that’s really their line of business.

W: Mustafa Suleyman, a cofounder of DeepMind, the London AI startup that’s part of Alphabet and a major player in machine learning research, recently moved over to Google. He said he’ll be working with you and Kent Walker, Google’s top legal and policy executive. What will you work on with Suleyman?

JD: Mustafa has a broad perspective on AI policy-related issues. He’s been pretty involved in Google’s AI principles and review process as well, so I think he’s going to focus most of his time on that: AI ethics and policy-related work. I’d really rather Mustafa comment on what he’s going to be doing specifically.

One area Kent’s group is working on is how we should refine the AI principles to give a bit more guidance to teams that are thinking about using something, say facial recognition, in a Google product.

W: You gave a keynote this week on how machine learning can help society respond to climate change. What are the opportunities? What about the sometimes large energy use of machine learning projects themselves?

JD: There are lots of opportunities to apply machine learning to different aspects of this problem. My colleague John Platt was one of more than 20 authors on a recent paper, more than 100 pages long, that explores those opportunities. Machine learning could help improve efficiency in transportation, for example, or make climate modeling more accurate, because conventional models are very computationally intensive, which limits their spatial resolution.

I am concerned in general about carbon emissions and machine learning. But it is a relatively modest part of total emissions [and] some of the papers on machine learning energy use I’ve seen don’t consider the source of the energy. In Google data centers, our energy usage throughout the year for all our computing needs is 100 percent renewable.

W: Outside of climate change, what research areas will your team expand into next year?

JD: One is multimodal learning: tasks that involve different kinds of modalities, such as video and text or video and audio. We haven’t, as a community, done all that much there, and it’s likely to be more important in the future.

Machine learning research for health care is also something that we’re doing a fair amount of work in. Another is making on-device machine learning models better so that we can get more interesting features into phones and other kinds of devices that our hardware colleagues build.

