What Sci-Fi Can Teach Computer Science About Ethics

This story is part of a series on how we learn—from augmented reality to music-training devices.

The protagonist of Rebecca Roanhorse’s short story “Welcome to Your Authentic Indian Experience™” is a bit of a sad sack. A guide for a VR tourism company in Sedona, Arizona, he leads “vision quests” in a digital guise taken straight from Little Big Man. He’s Native American in our corporeal realm as well, just not the sort tourists wish to commune with, he argues—until one does, stealing his job and his life story. Heartbreaking yet ambiguous, the story won a bunch of top sci-fi honors, including a Nebula and a Hugo.

For the students in Emanuelle Burton’s ethics class, the story is tricky to grok. “They’re like, you gotta grow a spine, man!” Burton says. Then, maybe, the conversation turns to Instagram. They talk about the fraught relationship between influencers and authenticity. They wander further afield, into the design choices people make when they build cyber worlds and how those worlds affect the bodies who labor within them. By the time class is up, Burton, a scholar of religion by training, hopes to have made progress toward something intangible: defining the emotional stakes of technology.

That’s crucial, Burton says, because most of her students are programmers. At the University of Illinois-Chicago, where Burton teaches, every student in the computer science major is required to take her course, whose syllabus is packed with science fiction. The idea is to let students take a step back from their 24-hour hackathons and start to think, through narrative and character, about the products they’ll someday build and sell. “Stories are a good way to slow people down,” Burton says. Perhaps they can even help produce a more ethical engineer.

There’s a long, tangled debate over how to teach engineers ethics—and whether it’s even worth doing. In 1996, a group of researchers published a call in the prominent journal Communications of the ACM for teaching ethics in comp-sci courses. In the next issue, a letter to the editor appeared from a pair of computer scientists arguing the opposite. “Ethical and social concerns may be important, but as debating the morality of nuclear weapons is not doing physics, discussing the social and ethical impact of computing is not doing computer science,” they wrote. This was the position that, in the main, took hold.

But Team Ethics is making a comeback. With the morality of Big Tech again called into question, schools like MIT, Carnegie Mellon, and Stanford have launched new ethics courses with fanfare. In some cases, students are even demanding such an education, says Casey Fiesler, a professor at the University of Colorado who teaches computer ethics and studies how it’s taught. An internship at Facebook, once regarded as plum, is now just as likely to raise eyebrows. Students are looking for a little moral guidance.

Those who teach ethics don’t need to look far for lessons. Every day there’s fresh scandal: Google is in hot water for how it handles political bias; Amazon listens in as you shout at Alexa. There’s also the growing canon of case studies on which even your totally offline grandfather could deftly hold court: ProPublica’s investigation of bias in recidivism algorithms that kept black men in jail longer, or the scraping of Facebook user data by Cambridge Analytica. To make sense of all this, many believe engineering students need a classic humanities education, grounded in philosophy. (Just don’t replicate your biases in the classroom—Ethics Twitter recently bristled over an MIT course on AI bias built around the works of dead white guys.)

There’s a case to be made for stepping out of real life, says Judy Goldsmith, a computer science professor at the University of Kentucky. Goldsmith started teaching science fiction a decade ago, after students complained about an exam assignment. She gave them the option of analyzing a work of science fiction instead. That gave rise to a course of its own, “Science Fiction and Computer Ethics.” Her own background was in the abstract mathematics of algorithms, not philosophy. “I had remarkably little clue what I was doing,” she says. Feeling somewhat adrift, she eventually wrangled Burton, who had completed a dissertation on ethics and The Chronicles of Narnia, into helping her revamp the course.


The mathematician and the religion scholar soon realized they approached sci-fi differently. Goldsmith saw in fiction a way to spool today’s technology forward, to have her students imagine dilemmas that would arise out of coming advances in things like killer drones and carebots that tend to the elderly. Sci-fi, in other words, as an exercise in prediction, a way to prepare us for what soon may come. Many people find that a useful frame. Just ask the army of futurist consultants who trot into corporate boardrooms to engage executives in world-building exercises.

Burton argues there’s not much point in trying to be predictive—especially if engineers aren’t equipped to handle the quandaries right in front of them. “There’s all kinds of hideous shit that Facebook has gotten up to that even they probably realized was a little out of balance,” she says. “But I think it’s easy to mistake how easy it is, when you are in that place, to talk yourself into the idea that what you’re doing is normal.” Sci-fi, yes, offers some distance from the headlines, as well as a sustained interest in the pitfalls of innovation. But the point of fiction, Burton says, is to crack open existing human problems. Basically, it boils down to empathy. Ken Liu’s short story “Here-and-Now” might launch a debate about digital privacy; Martin Shoemaker’s “Today I Am Paul” speaks to robot-human relations.

Fiesler, the University of Colorado professor, strives for a middle road. She favors sci-fi with a close tether to the real world—like Black Mirror. “You can still see the thread from here to there,” she says. And she pairs it with real-life case studies, believing the blend of real and speculative guides her students to actionable insights about the nature and risks of working in tech. Even better, she’d have them learn ethics in the same courses where they learn programming, so that they learn to spot moral questions, and potential solutions, in the context of code.

The ultimate question, of course, is whether any of this sticks. Will students instructed in ethics get better at both recognizing technological bias and deploying the tools of code to fix it? Do squishy notions of empathy and conflict in narrative fiction make comp-sci students more sensitive programmers? Burton says it’s not just about identifying a specific coding problem; ethics touches on what it means to be a person at the mercy of a large company and the forces of technological progress. Perhaps exposure to something outside the goal-driven mentality of code—immersion, for one semester at least, in a mode of thinking that’s enriched and complicated by human substance—might do an intangible good and make us more engaged, critical employees. As Liu wrote about the genre in Nature in 2017, “although science fiction isn’t much use for knowing the future, it’s underrated as a way of remaining human in the face of ceaseless change.”

When I studied computer science not so long ago (but long enough ago that Google still said “Don’t be evil”), my college relied on Philosophy 101 to do the job of ethical training. There was merit, I’m sure, in having us learn to write and argue, in our exposure to students from other departments. Perhaps I still have Plato and Descartes filed away in some neural Siberia. But it never occurred to me that the classics would be useful as a programmer.

Instead, I think back to a course in which I was the only computer science student in the room: Introduction to Media Theory. We read McLuhan and Foucault. There were case studies, now long forgotten. I do recall a film. It imagined cross-border labor in the age of VR, a haunting meditation on what happens when technology erases the need for a physical body. I was working in a lab that depended on Mechanical Turk, a service that labels data for researchers. For the first time, I considered what it was—not a service, but workers.
