Hackers Can Use Lasers to ‘Speak’ to Your Amazon Echo or Google Home

In the spring of last year, cybersecurity researcher Takeshi Sugawara walked into the lab of Kevin Fu, a professor he was visiting at the University of Michigan. He wanted to show off a strange trick he’d discovered. Sugawara pointed a high-powered laser at the microphone of his iPad—all inside of a black metal box, to avoid burning or blinding anyone—and had Fu put on a pair of earbuds to listen to the sound the iPad’s mic picked up. As Sugawara varied the laser’s intensity over time in the shape of a sine wave, fluctuating at about 1,000 times a second, Fu picked up a distinct high-pitched tone. The iPad’s microphone had inexplicably converted the laser’s light into an electrical signal, just as it would with sound.

Six months later, Sugawara—visiting from the Tokyo-based University of Electro-Communications—along with Fu and a group of University of Michigan researchers, has honed that curious photoacoustic quirk into something far more disturbing. They can now use lasers to silently “speak” to any computer that receives voice commands—including smartphones, Amazon Echo speakers, Google Homes, and Facebook’s Portal video chat devices. That spy trick lets them send “light commands” from hundreds of feet away; they can open garages, make online purchases, and cause all manner of mischief or malevolence. The attack can easily pass through a window when the device’s owner isn’t home to notice the telltale flashing speck of light or the target device’s responses.

“It’s possible to make microphones respond to light as if it were sound,” says Sugawara. “This means that anything that acts on sound commands will act on light commands.”

In months of experimentation that followed Sugawara’s initial findings, the researchers found that when they pointed a laser at a microphone and changed the intensity at a precise frequency, the light would somehow perturb the microphone’s membrane at that same frequency. The positioning didn’t need to be especially precise; in some cases they simply flooded the device with light. Otherwise, they used a telephoto lens and a geared tripod to hit their mark.

As a result, the microphone converted the incoming light into an electrical signal, just as it would sound. The researchers then tried changing the intensity of the laser over time to match the frequency of a human voice, aiming the beam at the microphones of a collection of consumer devices that accept voice commands.
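To make that modulation concrete, here is a minimal sketch of the idea the researchers describe: an audio waveform is rescaled into a laser-intensity envelope around a constant bias, so the beam's brightness traces the shape of the voice. The function names, sample rate, and parameters are illustrative assumptions, not the team's actual tooling or laser driver.

```python
import numpy as np

# Illustrative sketch: turn a voice waveform into a laser-intensity
# envelope. Names and parameters are assumptions for demonstration only.

SAMPLE_RATE = 44100  # samples per second


def audio_to_intensity(audio, bias=0.5, depth=0.5):
    """Map an audio waveform to a normalized laser drive level in [0, 1].
    A DC bias keeps the diode lit; the voice signal modulates the beam's
    intensity around that bias (simple amplitude modulation)."""
    audio = audio / np.max(np.abs(audio))            # normalize to [-1, 1]
    return np.clip(bias + depth * audio, 0.0, 1.0)   # stay in valid drive range


# Example: the 1 kHz test tone Sugawara first played into the iPad's mic.
t = np.linspace(0, 1.0, SAMPLE_RATE, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
drive = audio_to_intensity(tone)
# Fed to a laser driver, 'drive' would make the beam's brightness fluctuate
# 1,000 times per second, the fluctuation the microphone picks up as a tone.
```

In the same way, substituting a recorded voice command for the test tone would make the beam's brightness trace the command itself, which is the essence of a "light command."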

When they used a 60 milliwatt laser to “speak” commands to 16 different smart speakers, smartphones, and other voice-activated devices, they found that almost all of the smart speakers registered the commands from 164 feet away, the maximum distance they tested. Smartphones proved trickier: An iPhone was susceptible only from a range of around 33 feet, and two Android phones could be controlled only from within around 16 feet.

In a second experiment, the researchers tested the power and range limits of their technique, downgrading to a 5 milliwatt laser—equivalent to a cheap laser pointer—and moving 361 feet away from their targets in a hallway. While their tests mostly failed at that range, they nonetheless found that they could still control a Google Home and a first-generation Echo Plus. In another test, they successfully transmitted their laser commands through a window onto a Google Home’s microphone inside a nearby building nearly 250 feet away.

The “voice” commands carried on that laser beam, the researchers point out, would be entirely silent. An observer might notice a flashing blue spot on their microphone—if they were even home to see it. “Your assumptions about blocking sound aren’t true about blocking light,” says Daniel Genkin, a professor at the University of Michigan who co-led the team. “This security problem manifests as a laser through the window to your voice activated system.”

For even more stealth, the researchers suggest that a voice-assistant hacker could use an infrared laser, which would be invisible to the naked eye. (They tested an infrared laser and found that it worked to control Echo and Google Home speakers at close range, but didn’t try longer ranges for fear of burning or blinding someone.) And while voice assistants typically give an audible response, a hacker could send an initial command that turns the volume down to zero. While they haven’t tested this specifically, the researchers also suggest that an attacker could use light commands to trigger Amazon’s “whisper mode,” which allows a user to speak commands and receive answers in a hushed tone.

When it comes to the actual physics of a microphone interpreting light as sound, the researchers had a surprising answer: They don’t know. In fact, in the interest of scientific rigor, they refused to even speculate about what photoacoustic mechanics caused their light-as-speech effect.

But at least two different physical mechanisms might be producing the vibrations that make the light commands possible, says Paul Horowitz, a professor emeritus of physics and electrical engineering at Harvard and the co-author of The Art of Electronics. First, a pulse of laser light would heat up the microphone’s diaphragm, which would expand the air around it and create a bump in pressure just as sound would. Alternatively, Horowitz posits that if the components of the target devices aren’t fully opaque, the laser’s light may get past the microphone and directly hit the electronic chip that converts its vibrations into an electrical signal. Horowitz says this could result in the same photovoltaic effect that occurs in diodes in solar cells and at the ends of fiberoptic cables, turning light into electricity or electrical signals. He says this could easily cause the laser to be processed as a voice command.

“There’s no dearth of theories, one or more of which is happening here,” Horowitz says.

The potential havoc encompasses everything from triggering smart home controls like door locks and thermostats to remotely unlocking cars. “It’s the same threat model as any voice system, but with an unusual distance effect,” says Fu. Or as University of Michigan researcher Sara Rampazzi puts it: “You can hijack voice commands. Now the question is just how powerful your voice is, and what you’ve linked it to.”

A Google spokesperson told WIRED in a statement that it was “closely reviewing this research paper. Protecting our users is paramount, and we’re always looking at ways to improve the security of our devices.” Apple declined to comment, and Facebook didn’t immediately respond. An Amazon spokesperson wrote in a statement that “we are reviewing this research and continue to engage with the authors to understand more about their work.”

Some devices do offer authentication protections that might foil a laser-wielding hacker. iPhones and iPads require a user to prove their identity with TouchID or FaceID before, say, making a purchase. And the researchers acknowledge that for most smartphone voice assistants, the “wake words” that begin a voice command must be spoken in the voice of the phone’s owner, which makes their laser attack far more difficult to pull off. But they note that an attacker who obtains or reconstructs just those words—like “hey Siri” or “OK Google”—could then “speak” them in the target user’s own voice as the preamble to their voice commands.

Smart speakers like the Echo and Google Home, however, have none of that voice authentication. And given the physical nature of the vulnerability, no software update may be able to fix it. But the researchers do suggest some less-than-ideal patches, such as requiring a spoken PIN before voice assistants carry out the most sensitive commands. They also suggest future tweaks to the devices’ designs to protect them from the attack, such as building light shielding around the microphone, or listening for voice commands from two different microphones on opposite sides of the device, which might be tough to hit simultaneously with a laser.
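As a rough illustration of that second idea, here is a hypothetical sketch of how a device might compare two microphone channels before trusting a command: a spoken phrase reaches microphones on opposite sides at similar levels, while a laser spot drives only the one it lands on. The threshold and function names are assumptions for demonstration, not anything the researchers or vendors have actually implemented.

```python
import numpy as np

# Hypothetical cross-check between two microphones on opposite sides of a
# device. Threshold and names are illustrative assumptions only.


def likely_laser_injection(mic_a, mic_b, ratio_threshold=10.0):
    """Return True when one channel carries far more energy than the other,
    which would be suspicious for microphones facing opposite directions."""
    energy_a = float(np.sum(np.square(mic_a)))
    energy_b = float(np.sum(np.square(mic_b)))
    weaker = max(min(energy_a, energy_b), 1e-12)   # avoid division by zero
    stronger = max(energy_a, energy_b)
    return (stronger / weaker) > ratio_threshold


# A wake-word handler might simply refuse lopsided input:
# if likely_laser_injection(frame_a, frame_b):
#     ignore_the_command()
```

A real defense would need to handle devices sitting against walls or other acoustic asymmetries, but the basic check captures why a single beam striking one microphone is physically different from a voice filling a room.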

Until those fixes or design changes arrive, Michigan’s Genkin suggests a simple if counterintuitive rule of thumb for anyone concerned by the attack’s implications: “Don’t put a voice-activated device in line of sight of your adversary,” he says. If they can so much as see your Echo or Google Home through a window, they can talk to it too.

