
The ‘Avatar’ Architect: A Conversation with the Performance Capture Guru Joe Letteri

The VFX supervisor, who has led Wētā’s cutting-edge advances since “Lord of the Rings,” sits down with IndieWire for a far-ranging conversation about his 10-year journey through “The Way of Water.”
"Avatar: The Way of Water"
Courtesy of 20th Century Studios

Director James Cameron and Wētā FX’s senior visual effects supervisor Joe Letteri didn’t waste any time laying the groundwork for “The Way of Water.” After the game-changing “Avatar” revolutionized virtual production and performance capture in 2010, Cameron and Letteri got together at a retreat to assess what they could do better. Fortunately, Wētā was well underway in refining its performance capture and facial animation capabilities, along with writing the software for its physically based rendering engine, Manuka, and real-time pre-lighting tool, Gazebo, innovations that greatly benefited the “Planet of the Apes” and “Hobbit” franchises, as well as “Alita: Battle Angel” and “Gemini Man.”

What Cameron had in mind for “The Way of Water” required even greater innovative tech from Wētā and his own Lightstorm studio. The director wanted the look of a NatGeo doc for his sci-fi sequel, which would expand the world of Pandora to include the Metkayina reef clan and their magnificent underwater culture. For this, Cameron and Letteri partnered on their most ambitious undertaking to date. While the director spearheaded the design of a 180-pound 3D Sony Venice camera to shoot underwater performance capture for the first time, the VFX team at Wētā expanded its workflow solutions to ingest the massive amount of assets to be animated back in New Zealand. At the same time, Letteri realized that he needed to overhaul Wētā’s facial animation software to meet the realistic demands of the actors’ performances. So the VFX studio created a revolutionary new muscle-based facial system called APFS (Anatomically Plausible Facial System). This replaced the outmoded blend-shape solver (FACS) with something more animator-friendly by manipulating the muscles directly on the models to get more nuanced performances.

Letteri’s leadership didn’t stop there. In addition, Wētā rebuilt its entire simulation approach using a global methodology that enabled a new level of realism and interaction for hair, cloth, skin, and hard surfaces. This also encompassed a new FX simulation tool called Loki for water and fire. Cameron had very specific demands for the look and behavior of the water, both on the surface and deeper, and wanted the CG fire to look as close as possible to practical fire. This required a greater understanding of combustion on the part of the tech team, and Letteri had them begin with cracking a candlelight test before tackling the more complex destruction caused by flame throwers.

The five-time Oscar-winning Letteri (“Avatar,” “King Kong,” “The Lord of the Rings: The Return of the King” and “The Two Towers,” plus a shared Wētā Sci-Tech award for skin rendering) discussed this and more in a wide-ranging conversation with IndieWire shortly before the release of “The Way of Water.”

IndieWire: When did you technically start work on “Avatar: The Way of Water”?

Joe Letteri: Once we started looking at the script and developing the assets, I think it was about 2013. The heavy work with the turnovers coming in and everything was probably 2017. Although, after [“Avatar”], we did this whole postmortem — Jim [Cameron] wanted to know, ‘What could we do better next time?’ So we’ve really been working since then, developing the whole virtual production system, writing the software.

We wrote Manuka for the rendering engine that we’ve used since the third “Hobbit” and “Apes” — that’s been out there for a while. But, also, at the same time, we wrote Gazebo, which was the real-time counterpart to it, that could be used onstage and give you visual parity, and the shaders would work the same way. All that’s been underway since 2010 — trying to build a system that would integrate what Lightstorm was doing onstage with what we’re doing.

How helpful was the work you continued to do for movies like “Apes,” “Alita: Battle Angel,” and “Gemini Man”?

Always really helpful because it allowed us to just progress and progress and progress with what we were doing.

After “Alita,” I realized the whole facial system we had been doing was end-of-line, so we needed to change that. So we wrote a new facial system for [“The Way of Water”: APFS]. We used a neural network to figure out a basis for the muscles — that is the engine now that’s driving everything. We can try to solve for the actor’s muscles, transfer it to the character, and then the animators have direct control of the muscles.

They also have higher-level pose controls. It let us work at any level the animator needs to work at, which the old FACS system couldn’t. You could only work with the blend shapes. So if you’re having to modify a performance — because the solvers are never 100 percent correct anyway; animators always have to do something — or put two performances together, this is a much better tool.
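
To make the blend-shape versus muscle distinction concrete, here is a minimal, hypothetical sketch (not Wētā’s actual APFS code; the array names and the linear muscle model are illustrative) of the two ways of driving a face mesh, plus a least-squares stand-in for solving a captured mesh back to muscle activations that an animator could then edit directly:

```python
# Conceptual sketch only -- not Weta's APFS. Shapes and names are hypothetical;
# it contrasts blend-shape mixing with driving a mesh from muscle activations.
import numpy as np

def blendshape_face(neutral, shape_deltas, weights):
    """Classic FACS-style rig: the face is a weighted sum of sculpted shapes."""
    # neutral: (V, 3) vertices; shape_deltas: (S, V, 3); weights: (S,)
    return neutral + np.tensordot(weights, shape_deltas, axes=1)

def muscle_face(neutral, muscle_basis, activations):
    """Muscle-style rig: activations of facial muscles drive vertex offsets.
    Here the muscle-to-vertex mapping is a simple linear basis; in practice it
    would be a learned, anatomically constrained model."""
    # muscle_basis: (M, V, 3) per-muscle deformation fields; activations: (M,)
    return neutral + np.tensordot(activations, muscle_basis, axes=1)

def solve_activations(muscle_basis, neutral, captured):
    """Least-squares 'solve' from a captured mesh back to muscle activations,
    which the animator can then adjust per muscle."""
    M = muscle_basis.shape[0]
    A = muscle_basis.reshape(M, -1).T          # (V*3, M)
    b = (captured - neutral).reshape(-1)       # (V*3,)
    activations, *_ = np.linalg.lstsq(A, b, rcond=None)
    return activations
```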

It’s always an iterative process. You’re always building on what you had before, but then you’re building the new things you don’t have, and those become the tools you use the next time.

“Avatar: The Way of Water”
Courtesy of 20th Century Studios

Would you say this is as much a game-changer as the first “Avatar”?

The first one really felt more like a game-changer, only because, for me, coming off “Lord of the Rings” and “King Kong,” it was a chance to say, ‘OK, now we need to build the whole world. What do we do?’ We took stock of everything we did and integrated it with Jim’s ideas of the virtual camera. It was sort of like, ‘OK, this is a new platform.’

Here, we’re building on that. If you need to shoot underwater or integrate characters with live-action, a lot of the techniques we came up with — like the depth compositing — would be really helpful. Some directors will use that, some won’t. But, for us, it’s completely what we need to do these films.

The main thing in “The Way of Water” is that you introduce the Metkayina and the underwater world. 

The characters were interesting because we tried to treat them like they’re another species of the same creature. Maybe they’re not even a different species: They’re Na’vi, but they have their own differences because they’ve adapted to water. They’ve got those bigger, stronger tails, the nictitating membranes, the extra membranes on the skin to make them more hydrodynamic — we call them ‘strakes.’ We wanted to differentiate them to show that they were really adapted to this environment, but also because it enhances the differences between the Sully family [and the Metkayina]. [They’re] basically immigrating to this place and seeking refuge, knowing that they’re [laughs] fish out of water.

What were some of the boxes Cameron wanted to tick off for this, based on the script? 

I think for Jim, the most important thing was doing the water and the water capture because he did not want to do this one in a way where you had a bunch of pantomime, and then we just animated on top of it. He really wanted the performance to feel natural in the water.

Behind the scenes of “Avatar: The Way of Water”
Mark Fellman/Walt Disney Studios Motion Pictures/Courtesy of Everett Collection

What was involved with that? 

That was all handled on the Lightstorm side because they now own Giant Studios — so you had Ryan Champney [Lightstorm virtual production supervisor] and those guys there working on it. And I think a lot of it would’ve come down to the fact that we needed two volumes: One below and one above the water. They had to work simultaneously. And just working out things like, “What spectrum of light do we use for the cameras and for the reflectors?” Back on “Apes,” we came up with this idea of using infrared lights so we could record outdoors in daytime. And that turned out to be a very useful technique for capture in general because it’s less prone to stray reflections and things like that. You can’t use infrared light underwater because the red light gets absorbed almost instantly. So they switched to using an ultra-blue light for the underwater but infrared above, and then they had to put the two together in real time so that you’re seeing the virtual performance come through.
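
As a rough illustration of why infrared had to be abandoned below the surface: water absorbs long wavelengths orders of magnitude faster than blue. The attenuation coefficients below are approximate values for clear water, not production measurements:

```python
# Very rough Beer-Lambert falloff for a few wavelength bands in clear water.
# Coefficients (1/m) are approximate, order-of-magnitude values.
import math

attenuation_per_m = {
    "near-infrared (~850 nm)": 4.0,   # effectively gone within a metre or two
    "red (~650 nm)": 0.35,
    "blue (~450 nm)": 0.02,
}

depth_m = 5.0
for band, k in attenuation_per_m.items():
    remaining = math.exp(-k * depth_m)          # fraction of light surviving
    print(f"{band}: {remaining:.4%} of light left at {depth_m} m")
```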

How did integrating the live-action underwater footage into the simulation work?

Some of that was underwater — like with Spider [the human boy played by Jack Champion] swimming — but a lot of that happened at the surface. They built these rocks on the edge of the tank and had diggers making the waves. So when Spider’s going in and out of the shore with the other characters, we had all of that. But this depth-compositing system that we built turned out to actually work pretty well for water.

On the first film, we created this concept of depth-compositing — because it was a stereo film, working in layers was impossible. Plus moving through the jungle, trying to break that stuff down into traditional layer-based compositing, was just too hard. So we had this idea of depth compositing where every pixel carries its own depth information, so the compositing is done pixel by pixel. And that became a standard technique.
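
The core of that idea can be sketched in a few lines. This is a minimal stand-in (not Wētā’s compositing code), assuming each element arrives as an RGB image with a matching per-pixel depth map:

```python
# Minimal per-pixel depth compositing: for every pixel, the nearer sample wins.
import numpy as np

def depth_composite(rgb_a, depth_a, rgb_b, depth_b):
    """Composite two elements pixel by pixel using their depth maps."""
    # rgb_*: (H, W, 3); depth_*: (H, W), smaller value = closer to camera
    a_in_front = (depth_a <= depth_b)[..., None]   # (H, W, 1) mask
    rgb = np.where(a_in_front, rgb_a, rgb_b)
    depth = np.minimum(depth_a, depth_b)
    return rgb, depth
```

Because the decision is made independently at every pixel, elements can interleave in depth (a character walking around another) without any hand-drawn holdout layers.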

For this film, we used deep learning and built a neural network to analyze the scene as it was being shot. We added a Basler stereo [camera] pair on top of the main camera, and that allowed us to solve for depth in real time, pixel by pixel. So for anything we composited — like if Edie Falco was talking to Quaritch [Stephen Lang], we had her pixel depth, so Quaritch could walk around her, she could walk around him, and Jim or whoever was operating could see that in the video feed. And that actually worked for water as well. When we had characters in the water while we were shooting live, we could actually track the depth of the water surface. It really helped us integrate the water when we had to blend between the live-action foreground water and all the surrounding water and the performance-capture characters.
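
The geometric principle behind the stereo pair is standard: depth is inversely proportional to the disparity between the two views. A toy sketch with made-up focal length and baseline (the production system used a neural network to do the matching itself):

```python
# Stereo depth from disparity: depth = focal_length * baseline / disparity.
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert per-pixel disparity (in pixels) to depth (in metres)."""
    d = np.maximum(disparity_px, 1e-6)   # guard against zero disparity
    return focal_px * baseline_m / d

# Illustrative numbers only: a 1600 px focal length and a 10 cm baseline.
depth_m = disparity_to_depth(np.array([80.0, 16.0, 8.0]), 1600.0, 0.10)
print(depth_m)   # [ 2. 10. 20.] metres
```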

I understand you overhauled simulation as part of the process, including water and fire. And Jim, who understands the underwater world as well as anyone, had ideas on how he wanted the water to perform. Tell me about that collaboration and turning that into simulated water — I think it’s called Loki?

[Loki is] a suite of solvers that all work together. Because water kind of exists at all these different scales: You’ve got the big bulk water, but then if you’ve got a splash, it starts to break up — then it breaks up further into spray, and then it breaks up into mist. Physically, they all have different ways of interacting with air and drag.

What Loki does is we have a thing called the state machine that analyzes the states that the water’s in and handles the transition, and then basically passes off, based on the state, to a sub-solver that is optimized for each of those states and then reintegrates them so that you get one pass that has everything in it. But it’s all taking into account the physics of what each scale of water should be doing.
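
As a rough sketch of that routing idea (the solver names, thresholds, and the parcel abstraction below are invented for illustration, not Loki’s actual architecture):

```python
# Toy 'state machine + sub-solvers' dispatch: classify each parcel of water by
# scale, hand it to a solver tuned for that state, then merge the results.
from dataclasses import dataclass

@dataclass
class Parcel:
    position: tuple
    velocity: tuple
    length_scale_m: float   # characteristic size of this bit of water

def classify(parcel):
    """Crude state classification by length scale (thresholds are made up)."""
    if parcel.length_scale_m > 0.5:
        return "bulk"
    if parcel.length_scale_m > 0.05:
        return "splash"
    if parcel.length_scale_m > 0.005:
        return "spray"
    return "mist"

def solve_bulk(parcels, dt):   return parcels   # stand-in: bulk liquid solver
def solve_splash(parcels, dt): return parcels   # stand-in: ballistic droplets
def solve_spray(parcels, dt):  return parcels   # stand-in: heavy air drag
def solve_mist(parcels, dt):   return parcels   # stand-in: advected volume

SUB_SOLVERS = {"bulk": solve_bulk, "splash": solve_splash,
               "spray": solve_spray, "mist": solve_mist}

def step(parcels, dt):
    """One combined pass: route by state, solve each group, reintegrate."""
    buckets = {state: [] for state in SUB_SOLVERS}
    for p in parcels:
        buckets[classify(p)].append(p)
    merged = []
    for state, group in buckets.items():
        merged.extend(SUB_SOLVERS[state](group, dt))
    return merged
```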

“Avatar: The Way of Water”
Courtesy of 20th Century Studios

Tell me about the simulated environments for the jungles and the whole undersea world.

They both had a similar approach because you’ve got to build all the pieces. Jim art-directed a lot of it on the capture stage, putting things where he wanted when he was viewing it through the virtual camera.

A lot of it comes down to the interaction: You’ve got wind or you’ve got characters moving through the jungle, interacting with things. So you’re doing a lot of collisions and dynamics to make that work. But underwater, you have a current — the waves induce a current under the water that goes from top to bottom. So not a big current like, say, the Gulf Stream, not something on a global scale, but locally the waves will affect what’s going on underneath it. So we simulated that as well.

We generated a current field from the wave pattern, and that would cause the movement of the undersea corals. But we also used that to affect the performers because they had to be reacting the same way: If you’re swimming into the current, you’re going to have a different motion than if you’re swimming outside of the current. And that current didn’t always match up with what was happening in the tank. So we would take the motion and modify it to make it fit the proper current flow. And the same thing with the camera. If you’ve got a camera operator underwater, they’re getting carried along by the same current — they’re not detached from it. So we needed to make sure the camera did the same thing so that everything was integrated.
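
A simplified sketch of that kind of wave-driven current, using a single deep-water wave whose velocity decays with depth; the wave model and the advection step are stand-ins for whatever the production actually used:

```python
# Illustrative wave-induced current applied uniformly to corals, performers,
# and the camera, so everything drifts with the same local flow.
import math

def wave_current(x, z, t, amplitude=0.3, wavelength=8.0, period=5.0):
    """Horizontal water velocity under a single deep-water wave.
    z is depth below the surface (positive down); velocity decays with depth."""
    k = 2.0 * math.pi / wavelength
    omega = 2.0 * math.pi / period
    return amplitude * omega * math.exp(-k * z) * math.cos(k * x - omega * t)

def apply_current(position, t, dt):
    """Advect a point (swimmer, camera, coral tip) by the local current."""
    x, y, z = position
    u = wave_current(x, z, t)
    return (x + u * dt, y, z)
```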

On the animation side, being able to grow the fauna and flora underneath the water — did you use an existing system? On “Apes,” you had the organic tree tool, Totara.

We adapted that system to do the coral. New things were written for how coral grows. But we were able to use the same basic components because when we built it, we already had all that in mind.

And the same for the jungle and the village?

It was a combination. The village was different. The jungle, you have a combination of art-directed trees that are basically sculpted, like they would be on the stage, and you had ones that were grown. But then in the village, other than the trees, it was all built out of this woven material. [Wētā] Workshop did these big miniatures that they wove using traditional weaving techniques, and they built the whole thing and built a frame, and we could use that not only for lighting studies but to understand how to create these woven materials. Same went for the costumes, and then we wrote new software that would mimic the weaving patterns for the baskets, for what they were wearing, for the walkway that they’re on, for the walls. That’s all woven out of thousands and thousands of strands of a hemp fiber kind of thing.
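
As a toy illustration of procedurally generating a weave, here is the simplest over-under (plain weave) interleaving pattern; Wētā’s actual weaving software is not public, so this only shows the basic logic such a tool would build on:

```python
# Plain-weave pattern: each warp strand alternates passing over and under the
# weft, offset by one on each row.
def plain_weave(n_warp, n_weft):
    """Return a grid where True means the warp strand passes over the weft."""
    return [[(i + j) % 2 == 0 for j in range(n_weft)] for i in range(n_warp)]

for row in plain_weave(4, 8):
    print("".join("^" if over else "_" for over in row))
# ^_^_^_^_
# _^_^_^_^
# ...
```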

Lighting, too, is important. For Russell Carpenter’s cinematography, he had to spend a lot of time with the virtual lighting and matching. And then you’ve got all the bioluminescent lighting underwater. I know you had the Phys-Light system for simulating on-set lighting — did you build on that?

It was pretty much using that. I think the real trick was, because Jim was doing the virtual camera, working out the lighting as it went, we were all following what he laid down, whether it was live-action with Russell or us for all CG or if it was a mix.

But the advantage there is we had written Gazebo, which meant that whatever lights they were using onstage were physically accurate to a first approximation. Obviously, you take a lot of shortcuts to get real-time, but at least we knew that there was parity there. So that helped us, and it also helped Russell because the lights weren’t doing anything so weird that he couldn’t get a similar look by studying that and lighting it for real. I think that’s where most of that came from: The fact that we had built this integrated platform that everyone was working from.
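
The “parity” idea can be sketched abstractly: one physically defined light feeds both a cheap real-time estimate and a more heavily sampled evaluation, so the two paths agree to a first approximation. Everything below is illustrative; Gazebo and Manuka internals are not public:

```python
# One shared light description, two evaluation paths of different fidelity.
from dataclasses import dataclass
import math

@dataclass
class AreaLight:
    intensity_w: float     # radiant intensity, shared by both paths
    radius_m: float
    distance_m: float

def irradiance_realtime(light: AreaLight) -> float:
    """Cheap point-light approximation: inverse-square falloff only."""
    return light.intensity_w / (4 * math.pi * light.distance_m ** 2)

def irradiance_offline(light: AreaLight, samples: int = 256) -> float:
    """Higher-fidelity stand-in: average the falloff over the light's extent
    (still simplified, but driven by the same physical inputs)."""
    total = 0.0
    for i in range(samples):
        offset = light.radius_m * (i / samples - 0.5)   # jitter across the disc
        d = max(light.distance_m + offset, 1e-3)
        total += light.intensity_w / (4 * math.pi * d ** 2)
    return total / samples
```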

Talk more about the integrated platform for cinematography because it can be confusing: What is considered traditional cinematography, what is virtual cinematography, and how do they merge?

The way the process works is Jim starts designing as he’s writing. He’s got an art department that’s doing sketches and building CG models and trying to get something ready for the virtual shoot. Jim does what we call a template, where he shoots a virtual version of the movie to begin with. That starts off with something called a scout, which is very much like a traditional scout where you stand up the stage for the first time, you get all the elements in the environment that should be there, and there’s a troupe of stand-in actors that are doing the performance capture, and it’s all done for blocking. So you do a run-through to make sure that ‘Yes, these are the right assets, this is the way the set should look,’ call out any changes, and it goes back for more detail.

Once that’s all done, they build a physical version of it using these Lego-like building blocks on the stage, and the performance capture happens on that. So the actors are working on a set that mimics what is going to be in the final virtual world. And then Jim takes all of that and starts selecting the performances he likes from the actors. There’s no camera at that point other than a general kind of blocking camera, so you know which side of the line you’re supposed to be on, so editorial’s got something to start working with.

Then Richie Baneham [Lightstorm’s visual effects creative supervisor] takes all that from Jim’s selects and pieces them all together and does what we call an RCP, which is a rough camera pass. That is now what Jim thinks he’s going to shoot, like the angles that they’ve talked about. The editors are right there: The stuff gets fed into the Avid as it’s happening, and the editors are calling out, “Yeah, this works,” or “We might need another angle from here.” So they’re working with Richie to block it out to make sure they get all the coverage. Then that goes to Jim, and if it’s all good, then Jim knows what he’s got to start shooting. And he’ll go in and put his cameras on and do the final touches — any adjustments he wants to make, selects a take or art direction or lighting or anything. He does that in that final step, giving us the template. He works out the whole movie that way.

Before then, we start to break down what’s going to be live-action, what’s going to be all CG, and where the mix is going to happen. It gives everyone a common platform to work from. We all have something that’s more than a storyboard that gives us what this movie is going to need.

And this integration, which is part of the virtual production, encompasses all the departments.

It does. And the software that we’ve been writing with and for Lightstorm ever since 2010, we call it Atlas, which is sort of a combination scene description and an evaluation engine. So you can build your scenes, your environment, your layout, but you can also have animation integrated with it, which is obviously important when you’re doing anything live because you’re capturing animation as well as where things are.
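
A minimal, hypothetical sketch of a scene description that also carries animation, in the spirit of what Letteri describes (this is not the Atlas format; the node structure and sampling scheme are invented):

```python
# A scene node with static attributes plus time-sampled transforms, so layout
# and captured animation live in the same description.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    attributes: dict = field(default_factory=dict)          # static data
    transform_samples: dict = field(default_factory=dict)   # time -> transform
    children: list = field(default_factory=list)

    def transform_at(self, t):
        """Evaluate the node's transform at time t (nearest earlier sample)."""
        if not self.transform_samples:
            return None
        times = sorted(self.transform_samples)
        best = max((s for s in times if s <= t), default=times[0])
        return self.transform_samples[best]

# Usage: a captured character whose root moves over the take.
root = Node("jake_root",
            attributes={"asset": "navi_male_v12"},
            transform_samples={0.0: "T0", 1.0 / 24: "T1", 2.0 / 24: "T2"})
scene = Node("shot_042", children=[root])
print(root.transform_at(1.5 / 24))   # -> "T1"
```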

How would you describe what Jim was like during all of this? 

We had a good grounding from the first film, and we’ve been working on this for almost 10 years together, so I think we all had a good understanding, going into it, of what the strengths would be, what was yet to be cracked. And you know, the thing with Jim is, he likes visual effects and he knows visual effects. So he is happy to get into the weeds with you if there’s a problem and try to figure out what’s going on and how we might solve it. It’s great because he understands when we hit a wall, why we hit a wall.

Were the biggest issues about achieving greater realism?

No, the biggest issue was probably the water. When we’re shooting with the virtual camera, we’re trying to do everything as fast as possible because we want it to feel like live-action filmmaking. So there’s a fast representation of the water’s surface. But if we know something like a creature’s going to splash in the water, they’ll just take a card with a splash movie on it and drop that in to indicate, “OK, splash happens here. Jim thinks it’s going to be about this big and this wide,” and that’s the coverage in the frame. Then when you do the actual simulation, it likely will come out different. You can’t really know how a big creature is going to splash into the water. So you suddenly find out that wave is changing your composition because it’s covering the action that you needed to see for the shot. All this is so well planned that if something like that breaks the shot, then the sequence breaks.

That was probably the hardest thing to figure out: How do we tame the simulation and data to do what we want? You’re doing things like cheating velocity, you’re cheating mass, but sometimes it really took changing the camera. We just had to say to Jim, “This is not going to work.” Because to push it as far as we need to, to keep all the shot elements in there, the water’s going to look fake.

Behind the scenes of “Avatar: The Way of Water”
Mark Fellman/Walt Disney Studios Motion Pictures/Courtesy of Everett Collection

I understand Jim really put the team through its paces with wanting to emulate real fire, saying, “I’m not sure you can do this. Let’s take baby steps from a candle through flame throwers.”

Well, the candle was my idea, not Jim’s, because that was more of an, “OK, how do we really understand fire?” Well, you’ve got to go down to the smallest level. And it turns out even a candle is amazingly complex. There’s like thousands of chemical reactions that happen within a nanosecond. It’s like, “All right, we’re never gonna simulate that. So how do we still get the right look that we want?”

But it was very instructive to do that because that helped us then build the bigger system. We did do a flame thrower test on a big chunk of a woven marui, because we wanted to see what that would look like. But those tests that we did with the candle helped inform even something that big. There was a lot of talk about trying to do that live-action, but it just wasn’t practical. I thought we could crack it. Having that one shoot as really good reference did the job for us.

Where are you at with the other movies?

Three has been shot. And really for us, we’re just getting started on three. There was some plan that we were going to start earlier on three, but the reality is you always finish the one you’re on first.

And four?

A little bit of four was shot. Because story-wise, there was some overlap with the sets that needed to be built. But we’re going to focus on three first.

Are you still developing software?

Always.

Are there going to be new areas of Pandora to be explored besides underwater?

You know who to ask that question of. [Laughs.] It isn’t me.

Jim teased that the last one could take place on Earth.

All I can say is I’ve read them all. They’re good stories.
