Did you know that the painter Rockwell Kent, whose splendorous Afternoon on the Sea, Monhegan hangs in San Francisco’s de Young Museum, worked on murals and advertisements for General Electric and Rolls-Royce? I did not, until I visited Gallery 29 on a recent Tuesday afternoon, phone in hand.
The de Young’s curators worked with Google to turn some of the informational placards that hang next to paintings into virtual launchpads. Any placard that includes an icon for Google Lens, the company’s visual search software, is now a cue: point your camera at the icon and a search result pops up with more information about the work. (You can access Google Lens on the iPhone within the Google search app for iOS, or within the native camera app on Android phones.)
The de Young’s augmented-reality add-ons extend beyond the informational. Aim your camera at a dot drawing of a bee in the Osher Sculpture Garden and a quirky video created by artist Ana Prvacki plays—she attempts to pollinate flowers herself with a bizarrely decorated gardening glove.
It wasn’t so long ago that many museums banned photography and classrooms frowned on smartphones and tablets. But technology is winning, and institutions of learning and discovery are embracing screens. AR, with its ability to layer digital information on top of real-world objects, makes that learning more engaging.
Of course, these ARtistic addenda don’t pop out in the space in front of you; they’re not volumetric, to borrow a term from VR. They appear as boring, flat web pages in your phone’s browser. Using Google Lens in its current form in a museum, I discovered, requires a lot of looking up, looking down, looking up, looking down. AR isn’t superimposing information atop the painting yet.
Then again, Lens isn’t just for museums; you can use it anywhere. Google’s AR features span maps, menus, and foreign languages. And Google’s object-recognition technology is so advanced that the thing you’re scanning doesn’t need a tag or QR code; in effect, it is the QR code. Your camera simply ingests the image, and Google searches its own database to identify it.
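Lens itself doesn’t expose a public developer API, but Google’s Cloud Vision REST endpoint offers a rough sketch of the same image-as-query idea: send the raw pixels, get back whatever Google’s index associates with them. Here’s a minimal Swift sketch under that assumption; the function name and the API-key parameter are illustrative placeholders, and WEB_DETECTION is the Cloud Vision feature that most closely mirrors what Lens does.

```swift
import Foundation

// A hedged sketch, not Lens itself: Google offers no public Lens API,
// but Cloud Vision's WEB_DETECTION feature performs a similar
// image-as-query lookup against Google's web index.
// "identifyArtwork" and the apiKey parameter are placeholders.
func identifyArtwork(imageData: Data, apiKey: String) {
    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // The image travels as base64; no tag or QR code required.
    let body: [String: Any] = [
        "requests": [[
            "image": ["content": imageData.base64EncodedString()],
            "features": [["type": "WEB_DETECTION", "maxResults": 5]]
        ]]
    ]
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) else { return }
        // The response's webDetection field lists entities Google links
        // to the image: for a painting, typically its title and artist.
        print(json)
    }.resume()
}
```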
Apple, loath to be outdone by Google, has been hyping AR capabilities via the iPhone and iPad, though not directly in its camera. Instead, Apple has created ARKit, an augmented-reality platform for app makers who want to plug camera-powered intelligence into their own creations. The platform has turned into an early-stage playground for educational apps. Take Froggipedia, which lets teachers lead students through a frog dissection without having to explain the senseless death of the amphibian. Or Plantale, which allows a student to explore the vascular system of a plant by pointing their iPad camera at one.
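Whatever the subject matter, the pattern behind such apps is compact: start a world-tracking session, let ARKit detect a surface, then attach virtual content to the anchor it reports. Here’s a minimal sketch, assuming a plain UIKit app; the class name and the stand-in box are illustrative, not anything from Froggipedia or Plantale.

```swift
import UIKit
import ARKit
import SceneKit

// A minimal sketch of the basic ARKit pattern (not any app's actual
// source): run world tracking, wait for a horizontal surface, then
// hang virtual content on the anchor ARKit reports.
class ARLessonViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]  // find tabletops and floors
        sceneView.session.run(config)
    }

    // ARKit calls this when it adds an anchor; for a detected plane,
    // we place a stand-in object (a 10 cm box) on the surface.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor else { return }
        let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0.01)
        let boxNode = SCNNode(geometry: box)
        boxNode.position = SCNVector3(x: 0, y: 0.05, z: 0)  // rest on the plane
        node.addChildNode(boxNode)
    }
}
```

Everything app-specific, whether a frog’s organs or a plant’s vascular system, is just a fancier node hung at that same anchor point.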
Katie Gardner, who teaches English as a second language at Knollwood Elementary in Salisbury, North Carolina, says her kindergarten students “just scream with excitement” when they see their drawings come to life in the iPad app AR Makr. It takes a 2D drawing and renders it as a 3D object that can be placed in the physical world, as viewed through the iPad’s camera. Gardner uses the app for story-retelling exercises: The kids listen to a tale like Sneezy the Snowman and then use AR Makr on their iPads to illustrate a snippet of the narrative. In the real classroom, there is nothing on the table in the corner. But when the kids point their iPads at the table, their creations appear on it.
It’s too early to say how well we learn things through augmented reality. AR lacks totality by definition—unlike VR, it enhances the real world but doesn’t replace it—and it’s hard to say what that means for memory retention, says Michael Tarr, a cognitive science researcher at Carnegie Mellon University. “There is a difference between the emotional and visceral responses that happen when something is experienced as a real event or thing and when something is experienced as a digital or pictorial implementation of a thing,” he says.