Another day in the laboratory, and all was quiet apart from the scritch-scratch of styluses on graphics tablets. On dozens of screens, glowing in the low light, were the various components of a human body: the dislocated sphere of an eyeball, the strange topography of skin in extreme close-up, a thicket of sprouting hair follicles. At one particular coffee-cup-strewn workstation, an artist studied the essence, the key to the team’s efforts—a clip of the ’90s sitcom The Fresh Prince of Bel-Air.
It was 2018, and the crew at the New Zealand-based visual effects studio Weta Digital was hard at work manufacturing Hollywood’s hottest new talent, ahead of his big-screen debut a year later. He’s a new species of actor, with unswerving focus, superhuman strength, and total commitment to the role. He doesn’t take breaks or require the services of hair and makeup. And he doesn’t need a trailer, since he lives on a hard drive. They call him Junior or, sometimes, “the asset”: the most ambitious computer-generated human ever made for a movie. He’s also the spitting image of a 23-year-old Will Smith.
In June of this year, in a postproduction facility in Manhattan, a crew member shows off the nearly complete asset. Up onscreen is a shot of the real-life, 49-year-old Will Smith as he looked on the set of Gemini Man, wearing a facial-capture headset, his face and neck specked with tracking dots. The film, set to be released in October, is a sci-fi action thriller directed by Ang Lee that follows a retired assassin, Henry (Will Smith), who finds himself in the sniper-scope sights of another, younger assassin (digital Will Smith), who has been forged out of Henry’s own DNA. It’s a story of a man trying to outwit himself, of weather-worn wisdom pitted against cocky youth. It’s also a cautionary tale about humans hubristically meddling with awesome tech.
A click of the mouse and suddenly Smith is transformed … well, “beyond recognition” isn’t the right phrase. The squinting eyes, the jut of the chin, the precise tessellation of the lower lip and upper lip stay the same. But the eyebrows thin, the cheekbones and jawline firm up, the brow unfurrows—headset and facial hair vanishing along with more than a quarter-century of wear and tear. It’s one hell of a face-lift.
The Gemini Man screenplay, originally written by Darren Lemke, has been kicking around Hollywood in various forms for around 20 years, waiting for the tech to catch up to its central conceit. At different times, Harrison Ford, Jon Voight, and Mel Gibson were expected to star, and Disney did some promising experiments with digital face technology in the early 2000s before abruptly aborting the project. In 2012, the director Joe Carnahan shared a trailer he mocked up as a pitch for the job, of Gran Torino-era Clint Eastwood facing off against Dirty Harry-era Clint Eastwood. (It’s still available to view on YouTube.) Even then, the feeling in the visual effects industry was: Not yet.
Is it time now? Bill Westenhofer thinks so. As Gemini Man's lead visual effects supervisor, he oversaw the combined efforts of around 500 artists at six visual effects studios, including Weta Digital, who were charged with conjuring Junior. Westenhofer, who is 51 years old, has the physical presence of a set builder rather than a pixel pusher, with the easygoing air of someone who's made a living out of realizing impossible things. "My job," he says, "is like coming to the edge of a cliff with a sheet and some rope, jumping off, and building a parachute on the way down."
In the mid-’90s, after completing a master’s degree in computer science at George Washington University, Westenhofer began his career at the Los Angeles-based visual effects studio Rhythm & Hues. He carved out a niche working with animals: the chatty critters of Babe: Pig in the City and Stuart Little, which led to Cats & Dogs and the beastly besties of The Golden Compass. Rhythm & Hues’ work on the majestic lion Aslan, from The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, got the studio the gig for Life of Pi, which would prominently star a digital Bengal tiger. “Our process was just to believe it was possible,” Westenhofer says, “to get from something that’s a little pathetic-looking to being completely convincing and real.”
Life of Pi earned Westenhofer and the Rhythm & Hues team an Academy Award in 2013. The recognition was bittersweet: Eleven days before the ceremony, the studio had filed for bankruptcy. Accepting his award, Westenhofer was in the middle of calling for more recognition of visual effects workers as artists when he was drowned out by the menacing strains of the Jaws theme. Accepting his own Best Director trophy later that evening for Life of Pi, Ang Lee neglected to mention the crucial role visual effects played in the film—but the director didn’t forget about Westenhofer. In May 2017, Lee came aboard the Gemini Man project; Westenhofer, a roving freelancer, signed on three months later.
For years, there have been considerable advances in the software, the hardware, and the effects industry’s underlying understanding of human physiology. But a speaking, interacting digital human in a starring role is still a huge leap from the relatively fleeting digital human cameos of Rogue One and other films—and a far more fearsome undertaking than a 500-pound cat. To pull it off would take a multiyear, brute-force effort. “I was confident we could push the technology the rest of the way there,” Westenhofer says. Plus, there was a gigantic dare involved. “A digital person is something that’s been spoken about since I started visual effects 26 years ago,” he says. “It’s the holy grail.”
When the Gemini Man teaser trailer dropped in April, set to a moody rendition of “Forever Young,” many assumed Smith had simply been “de-aged” in the same fashion as several performers in recent Marvel movies (most notably Samuel L. Jackson’s Nick Fury in Captain Marvel). Those radically rejuvenated visages were achieved through after-the-fact photographic manipulation—extreme airbrushing of the actors as they appeared on set.
But Lee shot Gemini Man at ultrahigh resolution and an ultrahigh frame rate, part of his mission to create cinema that looks and feels more like reality. At that crispness of image, even an actor’s stubble can be distracting onscreen; a digitally dermabrased Junior would have looked as if he were wearing kabuki makeup. Instead, Junior would have to be pure data, manufactured from the ground up, driven by the real Will Smith through motion capture.
The young Will Smith of Gemini Man is not the Fresh Prince you probably remember. He is physically bulkier, having been raised in a militaristic household by a shadowy character played by Clive Owen. Also, he lacks the signature mustache—the discussions over that decision were "intense," Westenhofer says. And Smith is now a better, more serious actor than the kid in the fluorescent cap who jeered "Smell ya later!"
In an interview, the human Will Smith seems to be, if anything, invigorated by the passage of time. “Man,” he tells me, “life is delicious.” He was intrigued that the behind-the-scenes process of making Gemini Man mirrored the cloning plotline of the film itself. “I’ve always loved science fiction, and what’s really interesting on this one is that there’s a certain amount of science fiction within the film becoming science fact outside of it,” Smith says. “The guys are actually doing it.”
Back in early 2018, Weta scanned Smith with an array of high-resolution cameras while he performed a predetermined set of facial calisthenics, compiling a vast database of the full expressive repertoire of Smith’s face. A model of Smith’s body was fitted with a core digital skeleton and muscle system, the biomechanical chassis Weta uses for nearly all of its computer-generated humanoids, creating a basic digital replica—or clone, if you will—of modern-day Will Smith. In the primordial nothingness of Weta’s hard drives, Junior was born. The next task: reverse-engineer by hand that digital 49-year-old into the shape and form of a 23-year-old.
For visual reference, the team amassed stills and clips of early- to mid-’90s Will Smith. Bad Boys, Six Degrees of Separation, and, of course, later episodes of Fresh Prince of Bel-Air were pored over like sacred texts. The team unsentimentally scrutinized old childhood photographs and even sourced a relatively crude facial cast of the actor that came from Independence Day.
Around mid-2018, the team tried an experiment they called the Pepsi Challenge, dropping the Junior-in-progress into the middle of a scene from Bad Boys, which was made in 1995, and animating it manually. It was a little wobbly, but it convinced them they were on the right track.
Throughout the rest of 2018, Junior continued to evolve into something resembling fleshy, organic life, one facial and bodily component at a time. One team modeled Smith’s teeth—not only the enamel on the outside but the dentin on the inside—while another worked on the way the lips compress. They mapped out the pores, so that the skin creased and wrinkled along its natural fault lines. They studied the balance of the pigments of the skin and the particular way black skin refracts light at a subsurface level. They even chiseled away at the skull. And they spent a lot of time gazing deeply into Smith’s eyes: the sclera, the cornea, the inky film on the inside of the eye called the choroid. They modeled how the cornea interacts with the iris and how the iris interacts with the pupil. They modeled the conjunctiva, the thin transparent membrane that envelops the surface of the eye. Then there was an ocular breakthrough. “We’ve always done a relatively spherical eye,” says Guy Williams, one of the heads of the Junior team at Weta, who reports to Westenhofer. “We realized that a real eye, when it’s sitting inside the socket, is actually squished into shape.” So, yes, they proceeded to squish Will Smith’s eyes.
The team at Weta was building on its own extensive research on the dynamics of skin, blood flow, breathing, and more, which the effects artists largely taught themselves. “God knows we’re not saving lives here or anything like that,” Williams says. “But you have to treat it almost like medicine. You have to be clinical and objective about it.” Cold dispassion meant representing Smith’s face honestly in its human imperfection, however slight. “If he has a quirk about his face—and I won’t go into those out of respect—you have to have that in there.” (Look out for Smith’s one misaligned mandibular lateral incisor, dutifully replicated in CG.) As the months wore on, Williams says, the refinements and adjustments became comically pedantic. “There are times in this job where you have a roomful of mature, middle-aged people sitting around a monitor, looking at a screen, talking about the amount of shine a pimple should have.”
The team eventually got Junior to a point where he looked eerily authentic, at least in still images, but he remained an empty vessel in want of a performer to animate him—a “soulless corpse,” as Williams put it. Ultimately it was up to Smith to make Junior move. For scenes featuring both Henry and Junior, Smith shot what the crew called the A side (playing Henry) on a real-world set, and then shot the B side (playing Junior) months later on a motion-capture stage in Budapest, wearing bulky headgear while an array of cameras recorded his every movement, large and small. “It’s really difficult with that amount of technology,” Smith says. “Rigs on my head. The camera has to be really close. You can’t move a lot.”
Still, Junior wasn’t ready for his close-up. As sophisticated as motion capture is, and despite the massive trove of measurements taken of Smith’s every gesture and movement, it still cannot record the full richness and depth of human behavior—the subcutaneous subtleties and minute movements, the microexpressions, the difficult-to-pinpoint qualities that comprise humanness. So, right up until August of this year, Weta’s animators studied Smith’s emoting frame by frame, beat for beat, and then tried to manually massage and coax the same out of Junior, while striving to maintain the illusion of unposed spontaneity. “What we’re finding is that to get past this uncanny valley it’s not really one single thing,” Westenhofer says. “It’s the symphony of all the different things happening at the same time. If any one of them fails, you’re left with something that doesn’t look real.”
With the difference in features and proportions, there's an element of creative interpretation in the work, but the idea is that, viewed side by side, the digital model of Junior would replicate Smith's original performance in all its physiognomic nuance. For some sequences, that fine-tuning took as long as 12 months; rendering a Junior shot alone took hours or days. The stakes were high. This was, as Westenhofer said, the holy grail, and a failure to produce a believable Junior would invite mockery. "I can't go into the finances of it," Williams says, "but it was as expensive as any other asset we've ever done." Lee has been less circumspect, happily divulging at a press event earlier this year that the digital Will Smith cost twice as much as the real one (though how much the real one costs, the studio wouldn't say).
By early 2019, Westenhofer felt they had produced sequences that finally felt in sync. Then, the dawning realization—it’s alive. “It’s an exciting feeling,” he says. “When that first render comes out, there’s a little bit of this sense of ‘I’ve created life.’”
Double Take
Synthespian, virtual actor, vactor, cyberstar, de-aged, youthified: Whatever you want to call them, a new breed of performer is on the rise. They're increasingly digital, and increasingly lifelike. –Caitlin Harrington
2001 Final Fantasy: The Spirits Within (Entire Cast)
For years, digital humans were little more than disposable alternatives to flesh-and-blood actors, mercilessly dispatched to cop a mauling from a T. rex in Jurassic Park, be drowned in Titanic, or get blown up in Pearl Harbor. The 2001 film Final Fantasy: The Spirits Within was notable for attempting to have digital humans in starring roles. But it wasn’t until 2008’s The Curious Case of Benjamin Button, making use of new photographic techniques to study and simulate human skin, that a digital human pulled off a sustained and convincing performance: an aged Brad Pitt, computer-generated from the neck up. Still, the movie’s director, David Fincher, is said to have complained at the time that the tech “sandblasts the edges off of the performance.”
Tron: Legacy followed, with a young, somewhat glassy-eyed Jeff Bridges; a passable digital Arnold Schwarzenegger stomped into Terminator Genisys. Some of the most important technical developments, though, were made in the service of nonhuman characters. Weta’s work with Caesar and the simian cast of the new Planet of the Apes films, as well as the orcs of 2016’s Warcraft (overseen by Westenhofer), achieved expressive performances, thanks in part to “path tracing,” a more sophisticated simulation of how light bounces, penetrates, and reflects off surfaces and materials. When the actor Paul Walker died during production of Furious 7, Weta was able to complete his performance. In the film, more than 250 shots of a digital Paul Walker were integrated with existing footage. Gemini Man represents the next step in the evolution of the technology. (Martin Scorsese’s The Irishman, slated for release by Netflix in the fall, is expected to feature Robert De Niro, Al Pacino, and Joe Pesci wearing their digital—not merely de-aged—younger mugs like hockey masks.)
The crux of the challenge: Humans are tricky to dupe when it comes to human faces, our neurons firing alarm signals at the slightest indication that something is off. “We’ve evolved to be experts in the most subtle things in the face that tell you this person’s bullshitting you, or this person’s sick,” Westenhofer says. “That’s where the uncanny valley term comes from. If you’re not quite there, it’s actually disturbing to look at. We’re averse to impostors.” For that reason, a digital doppelgänger of a familiar actor will be regarded with particular scrutiny. Also, some faces are simply harder to replicate. In Rogue One, the craggy countenance of Peter Cushing proved less of a problem for the artists at Industrial Light & Magic than the pure-as-snow visage of a young Carrie Fisher. But even more fundamentally, moviegoers have a tendency to regard any CG actor warily, waiting for the anomalous jolt, the telltale glitch. As Smith himself says, “You’re always trying to see some part of it that could break the suspension of disbelief.”
Smith’s mind, at least, is sufficiently blown by Weta’s achievement in Gemini Man. “There’s a completely digital 23-year-old version of myself that I can make movies with now,” he said at a screening of the movie in July. “I’m gonna get really fat and really overweight. ‘Use Gemini Junior!’” The audience laughed. But really, credible human performances will get easier, cheaper, and more efficient to counterfeit, and when that happens, there ought to be no shortage of job opportunities for a perennially ageless Will Smith. Not just for more stories involving clones, time travel, identical twins, and time-traveling identical clone twins, but also for dramatic roles that mostly eluded him in his twenties (or, you know, a Fresh Prince reboot).
“If there’s a role in a movie that I would have been perfect for 25 years ago, and if we have a photorealistic 25-, 27-year-old version of myself—it’s like, why not?” Smith tells me later. He would personally love to see Junior in a romantic comedy, he says, perhaps alongside a digitally resurrected performer from a bygone era. “I had this weird dream about five years ago. I saw a movie with me and Audrey Hepburn. I don’t know where that came from, but I was like, oh my God. And now the technology exists!”
Smith predicts, too, that he could more easily assume different body types; instead of physically transforming for roles, as he did for 2001’s Ali, he could wear a beefier digital body. Or the likeness of a historical figure. And who’s to say that the Will Smith avatar always needs to be piloted by the actual Will Smith? He could authorize another actor to do the job or entrust the performance entirely to the director and animators. Already, there are discussions in the industry about whether machine-learning software—an evolution of the AI software that was developed to create convincing “intelligent” crowds in the Lord of the Rings films—could be programmed to analyze past Will Smith performances and simulate a new one.
By unsettling coincidence, ersatz humans are on the rise outside of the movies. In the past couple of years, “deepfake” technology, inexpensive and accessible to anyone, has been used to create bogus videos of everyone from Scarlett Johansson and Tom Cruise to Barack Obama. In June, Smith unwittingly starred in a deepfake of his own, a viral video of his face transplanted onto the body of Cardi B being interviewed on The Tonight Show.
Compared with the two years’ worth of work done by hundreds of professionals, the quality of the Cardi B video, along with other deepfakes, is awfully crude. Still, the drive to fool the viewer is the same, as is the worry: Weta’s breakthroughs give filmmakers the ability to depict performers, living or deceased, saying or doing things they never said or did. That might not seem like such a problem in the film industry, where creating illusions is the whole point. But it does suggest that performers will have to start being vigilant about safeguarding the rights to their image if they submit to a scanner.
For the rest of us, these technologies continue to stretch the boundaries of believability—onscreen and off. “The public needs to be educated as to the kinds of things they can expect,” says Darren Hendler, director of the Digital Human Group, a special department inside the visual effects studio Digital Domain. Decades ago, people were alarmed to realize that photographs could be retouched or manipulated and could no longer be seen as an unbiased record of reality. As Hendler puts it, “We’re all going to have to change our understanding of what we believe is real.”
On my last visit to the postproduction facility in Manhattan, Ang Lee stops by to check how the shots are coming along. In a theater equipped with projectors originally built for military flight simulators, we watch the scene where Henry first comes face to face with Junior. “You don’t know shit,” Junior says, his digital eyes welling with digital tears. “Kid,” Henry replies, “I know you inside and out and backwards.”
Lee is a restless filmmaker who has moved from genre to genre. He also invests digital filmmaking and visual effects with special significance. “People call it technology,” he says. “Fuck no. It’s art.” It’s an art that he hopes to see at the service of better, more human stories. “What we do here is basically imitating God’s work,” he says—echoing the divine theme of the quest for the holy grail. “The creation of something that looks alive, that looks like it has a mind of its own.”
With only a couple of months left for postproduction, Lee isn’t entirely satisfied with the scene we just saw; it’s humbling, he says, to spend two years puzzling over why a one-second shot of a digital human just isn’t jelling. “If you think about it, being here, being healthy, just being alive—that’s a miracle. God’s work,” he says again.
“That isn’t really fair,” Westenhofer chimes in. “We only had two years. God had 13 billion years.”
The paradox is that Junior isn’t intended to be a distractingly dazzling effect but, rather, an invisible one. The goal is for average moviegoers to forget, at least temporarily, that they’re watching a CG actor and be immersed in the storytelling. “The ultimate success,” Westenhofer says, “is to work ourselves out of any recognition.”
One way or another, there’ll be considerable talk about Junior—debates about his realness and hushed predictions about his future. Paramount, the studio behind Gemini Man, declined to clarify Smith’s rights regarding his digital counterpart. Similarly, Weta was not able to disclose what would happen to the amassed petabytes of precious Junior data. The actor, for his part, confessed he didn’t even get to keep a copy of Junior on a thumb drive. “Unfortunately, the digital companies still control the assets,” Smith says. “Legally I control my likeness; they control the actual information.” For now, Junior will lie in a patient, dreamless sleep, entombed in some vast antipodean storage system until his next big role. When he’s inevitably plugged in and powered up again, good as new and fresh as ever, maybe for a Gemini Man sequel or Breakfast at Tiffany’s 2, perhaps he’ll think to himself: Life is delicious.