AI Is Biased. Here’s How Scientists Are Trying to Fix It

Computers have learned to see the world more clearly in recent years, thanks to some impressive leaps in artificial intelligence. But you might be surprised—and upset—to know what these AI algorithms really think of you. As a recent experiment demonstrated, the best AI vision system might see a picture of your face and spit out a racial slur, a gender stereotype, or a term that impugns your good character.

Now the scientists who helped teach machines to see have removed some of the human prejudice lurking in the data they used during the lessons. The changes can help AI to see things more fairly, they say. But the effort shows that removing bias from AI systems remains difficult, partly because they still rely on humans to train them. “When you dig deeper, there are a lot of things that need to be considered,” says Olga Russakovsky, an assistant professor at Princeton involved in the effort.

The project is part of a broader effort to cure automated systems of hidden biases and prejudices. It is a crucial problem because AI is being deployed so rapidly, and in ways that can have serious impacts. Bias has been identified in facial recognition systems, hiring programs, and the algorithms behind web searches. Vision systems are being adopted in critical areas such as policing, where bias can make surveillance systems more likely to misidentify minorities as criminals.

In 2012, a project called ImageNet played a key role in unlocking the potential of AI by giving developers a vast library for training computers to recognize visual concepts, everything from flowers to snowboarders. Scientists from Stanford, Princeton, and the University of North Carolina paid Mechanical Turkers small sums to label more than 14 million images, gradually amassing a vast data set that they released for free.

When this data set was fed to a large neural network, it created an image-recognition system capable of identifying things with surprising accuracy. The algorithm learned from many examples to identify the patterns that reveal high-level concepts, such as the pixels that constitute the texture and shape of puppies. A contest launched to test algorithms developed using ImageNet showed that the best deep learning algorithms correctly classify images about as well as a person. The success of systems built on ImageNet helped trigger a wave of excitement and investment in AI, and, along with progress in other areas, ushered in such new technologies as advanced smartphone cameras and automated vehicles.
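To make that pipeline concrete, here is a minimal sketch of how a developer might use a network pretrained on ImageNet to label a photo. It is illustrative only: it assumes the PyTorch and torchvision libraries (version 0.13 or later) and a placeholder image file, dog.jpg, none of which come from the research described here.

```python
# Minimal sketch: classify a photo with an ImageNet-pretrained network.
# Assumes torchvision >= 0.13; "dog.jpg" is a placeholder path.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V1  # trained on ImageNet's 1,000 labels
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()  # same preprocessing the training images received
batch = preprocess(Image.open("dog.jpg")).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
best = probs.argmax(dim=1).item()
print(weights.meta["categories"][best], round(probs[0, best].item(), 3))
```

The model simply returns whichever of its training labels best matches the pixels, which is exactly why skews in those labels matter.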

But in the years since, other researchers have found problems lurking in the ImageNet data. An algorithm trained with the data might, for example, assume that programmers are white men because the pool of images labeled “programmer” was skewed that way. A recent viral web project called ImageNet Roulette also highlighted prejudices in the labels added to ImageNet, ranging from loaded terms such as “radiologist” and “puppeteer” to racial slurs like “negro” and “gook.” Through the project website (now taken offline), people could submit a photo and see which of those terms an AI model trained on the data set would attach to it. The terms exist because the person adding labels might have attached a derogatory or loaded word in addition to a label like “teacher” or “woman.”

The ImageNet team analyzed their data set to uncover these and other sources of bias, and then took steps to address them. They used crowdsourcing to identify and remove derogatory words. They also identified terms that project meaning onto an image, for example “philanthropist,” and recommended excluding the terms from AI training.
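The cleanup can be pictured as a pass over the label vocabulary that drops anything crowd workers flagged. The snippet below is only an illustration: the labels, the flags, and their categories are invented for the example, not drawn from the team’s actual pipeline.

```python
# Hypothetical sketch of pruning a label vocabulary using crowdsourced flags.
labels = ["teacher", "programmer", "snowboarder", "philanthropist"]

# In the real project these judgments came from crowd workers; here they are
# hard-coded. Slurs would be flagged as "offensive"; words like "philanthropist"
# judge a person's character rather than describe what an image actually shows.
flags = {"philanthropist": "projects_meaning"}

safe_labels = [label for label in labels if label not in flags]
print(safe_labels)  # ['teacher', 'programmer', 'snowboarder']
```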

The team also assessed the demographic and geographic diversity of the ImageNet photos and developed a tool to surface more diverse images. Ordinarily, for instance, the term “programmer” might produce lots of photos of white men in front of computers. But the new tool, which the group plans to release in coming months, can generate a subset of images with greater diversity of gender, race, and age, and that subset can then be used to train an AI algorithm.
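Here is one way to picture what such a rebalancing tool could do, sketched in a few lines of Python. This is a guess at the general approach rather than the team’s actual software, and the metadata fields (such as a per-image gender annotation) are hypothetical.

```python
# Hypothetical sketch: draw a demographically balanced subset for one label.
import random
from collections import defaultdict

def balanced_subset(images, attribute, per_group, seed=0):
    """Return up to per_group images from each value of the given attribute."""
    random.seed(seed)
    groups = defaultdict(list)
    for img in images:
        groups[img[attribute]].append(img)
    subset = []
    for members in groups.values():
        subset.extend(random.sample(members, min(per_group, len(members))))
    return subset

# Toy metadata for images labeled "programmer"; the fields are invented.
programmer_images = [
    {"file": "img1.jpg", "gender": "male"},
    {"file": "img2.jpg", "gender": "male"},
    {"file": "img3.jpg", "gender": "female"},
    {"file": "img4.jpg", "gender": "nonbinary"},
]
print(balanced_subset(programmer_images, "gender", per_group=1))
```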

The effort shows how AI can be reengineered from the ground up to produce fairer results. But it also highlights how dependent AI is on human training and shows how challenging and complex the problem of bias often is.

“I think this is an admirable effort,” says Andrei Barbu, a research scientist at MIT who has studied ImageNet. But Barbu notes that the number of images in a data set affects how much bias can be removed, because there may be too few examples to balance things out. Stripping out bias could make a data set less useful, he says, especially when you are trying to account for multiple types of bias, such as race, gender, and age. “Creating a data set that lacks certain biases very quickly slices up your data into such small pieces that hardly anything is left,” he says.
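Barbu’s “small pieces” worry comes down to simple arithmetic: a jointly balanced subset can be no larger than its rarest combination of attributes allows. The counts below are made up purely to show the effect.

```python
# Made-up counts of images per (gender, age) group for a single label.
counts = {
    ("male", "young"): 900, ("male", "old"): 350,
    ("female", "young"): 120, ("female", "old"): 30,
}

# Perfect balance across both attributes at once means every combination can
# contribute only as many images as the rarest one has.
per_cell = min(counts.values())
print(per_cell * len(counts), "balanced images out of", sum(counts.values()))
# -> 120 balanced images out of 1400
```

In this toy case, demanding joint balance over just two attributes shrinks a 1,400-image pool to 120 usable images; adding a third attribute would carve it smaller still.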

Russakovsky agrees that the issue is complex. She says it isn’t even clear what a truly diverse image data set would look like, given how different cultures view the world. Ultimately, though, she reckons the effort to make AI fairer will pay off. “I am optimistic that automated decision making will become fairer,” she says. “Debiasing humans is harder than debiasing AI systems.”

