Zootopia 2

Disney Animation’s movie for 2025 is Zootopia 2, which is the studio’s 64th animated feature film. Zootopia 2 picks up where the first film left off, taking us deeper into the wonderful and wild animal world of the city. One of the really fun things about Zootopia projects is that each one expands the world further. The first film introduced the setting, the Zootopia+ series on Disney+ offered fun character vignettes to expand that world, and Zootopia 2 now takes us deep into the city’s history and to places both familiar and brand new. I’ve had a great time working on Zootopia 2 for the past two years!

From a technology perspective, sequels are always interesting to work on because they let us evaluate where our filmmaking capabilities stand against a known benchmark from the past; we already know roughly what it takes to make a Zootopia movie, so we can see how much better we have gotten at it in the intervening years. I think Zootopia 2 is an especially interesting case because of how important the first Zootopia (2016) was in the history of Disney Animation’s technology development. For a bit of context: the decade of Disney Animation films leading up to Zootopia (2016) was a time when the studio was rapidly climbing a steep learning curve for making CG movies. Every film presented unprecedented technical obstacles for the studio to overcome. Zootopia (2016) similarly presented an enormous list of challenges, but upon completing the film I felt there was a stronger sense of confidence in what the studio could achieve together. A small anecdote about Zootopia (2016) that I am very proud of: at SIGGRAPH 2017, I heard from a friend at a major peer feature animation studio that they were blown away and had absolutely no idea how we had made Zootopia.

Ever since then, the sense in the studio has always been “this movie will be hard to make, but we know how to make it.” This isn’t to say that we don’t have interesting and difficult challenges to overcome in each movie we make; we always do! But, ever since Zootopia (2016)’s completion, I think we’ve been able to approach the challenges in each movie with greater confidence that we will be able to find solutions.

The major technology challenges on Zootopia 2 ultimately were pretty similar to the challenges on Zootopia (2016): everything is about detail and scale [Burkard et al. 2016]. The world of Zootopia is incredibly detailed and visually rich, and that detail has to hold up at scales ranging from a tiny shrew to the tallest giraffe. Most characters are covered in detailed fur and hair, and because the setting is a modern city, shots can have hundreds or even thousands of characters on screen all at the same time, surrounded by all of the vehicles and lights and zillions of other props and details one expects in a city. Almost every shot in the movie has some form of complex simulation or FX work, and the nature of the story takes us through every environment and lighting scenario imaginable, all of which we have to be able to render cohesively and efficiently. Going back and rewatching Zootopia (2016), I still notice how much incredible geometry and shading detail is packed into every frame, and in the nine years since, our artists have only pushed things even further.

To give an example of the amazing amount of detail in Zootopia 2: at one point during production, our rendering team noticed some shots that had incredibly detailed snow with tons of tiny glints, so out of curiosity we opened up the shots to see how the artists had shaded the snow, and we found that they had constructed the snow out of zillions upon zillions of individual ice crystals. We were completely blown away; constructing snow this way was an idea that Disney Research had explored shortly after the first Frozen movie was made [Müller et al. 2016], but at the time it was purely a theoretical research idea, and a decade later our artists were just going ahead and actually doing it. The result in the final film looks absolutely amazing, and on top of that, instead of needing a specialized technology solution to make this approach feasible, in the past decade both our renderer and computers in general have gotten so much faster and our artists have improved their workflows so much that a brute-force solution was good enough to achieve this effect without much trouble at all.

One of the largest rendering advancements we made on Zootopia (2016) was the development of the Chiang hair shading model, which has since become the de-facto industry standard for fur/hair shading and is implemented in most major production renderers. For Zootopia 2, we kept the Chiang hair shading model [Chiang et al. 2016] as-is, but instead put a lot of effort into improving the accuracy and performance of our hair ray-geometry intersection algorithms. Making improvements to our ray-curve intersector actually took a large amount of close iteration with our Look Development artists. This may sound surprising since we didn’t change the fur shader at all, but the final look of our fur is an effect that arises from extensive multiple scattering between fur strands, in which small energy differences introduced by inaccuracies in ray-curve intersection can compound over many bounces into pretty significant overall look differences. In an original film, if the look of a character’s hair drifts slightly during early preproduction due to underlying renderer changes, generally these small visual changes can be tolerated and factored in as the look of the film evolves, but in a sequel with established characters that have a known target look that we must meet, we have to be a lot more careful.

I’ve been lucky enough to work on a wide variety of types and scales of projects over the past decade at Disney Animation, and for Zootopia 2 I got to work on two of my absolute favorite types of projects. The first type is projects where we get to build a custom solution for a very specific visual need in the film; these are the projects where I can point out a specific thing in final frames that is there because I wrote the code for it. The second type is projects where we get to take something super bleeding edge from pure research all the way through to practical, wide production usage. Getting to do both of these types of projects on the same film was a real treat! On Zootopia 2, working on the water tubes sequence was the first project type, and working closely with DisneyResearch|Studios to widely deploy our next-generation path guiding system was the second project type. Hopefully we’ll have a lot more to present on both of these at SIGGRAPH/DigiPro 2026, but in the meantime here’s a quick summary.

One of the big projects I worked on for Moana 2 was a total, from-scratch rethink of our entire approach to rendering water. For the most part the same system we used on Moana 2 proved to be equally successful on Zootopia 2, but for the sequence where Nick, Judy, and Gary De’Snake zoom across the city in a water tube transport system, we had to extend the water rendering system from Moana 2 a little bit further. During this sequence, our characters are inside of glass tubes filled with water moving at something like a hundred miles per hour, with the surrounding environment visible through the tubes and whizzing by. In order to achieve the desired art direction, the tubes had to be modeled with actual water geometry inside, since things like bubbles and sloshes and murk had to be visible; going from the inside out, the geometry we had to render was characters inside of water inside of double-sided glass tubes, all set in huge, complex forest and city environments. To give artists the ability to model this setup efficiently and to render these shots efficiently, we wound up building out a customized version of the standard nested dielectrics solution [Schmidt and Budge 2002]. Nested dielectrics is pretty straightforward to implement in a simple academic renderer (I’ve written about implementing nested dielectrics in my hobby renderer before), but implementing nested dielectrics so that it works correctly with the myriad of other advanced features in a production renderer, while also remaining performant and robust within the context of a wavefront path tracing architecture, proved to require quite a bit more work than in a toy renderer.
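
To make the nested dielectrics idea a bit more concrete, here is a minimal sketch of the classic priority-based interior-list bookkeeping that Schmidt and Budge describe (all of the types and names below are hypothetical illustrations, not Hyperion code): each ray carries a list of the dielectric media it is currently inside, and a hit only counts as a real optical boundary if the medium being entered or exited outranks everything else on that list.

```cpp
#include <algorithm>
#include <vector>

// One dielectric medium. Higher priority wins where media overlap, so the
// glass of a tube outranks the water it contains, and the water outranks
// the surrounding air.
struct Dielectric {
    int id;
    int priority;
    float ior;
};

// Media the ray is currently travelling inside.
using InteriorList = std::vector<Dielectric>;

// Result of classifying a hit against a dielectric boundary.
struct BoundaryEvent {
    bool trueIntersection;  // false => no shading, the ray just passes through
    float iorIncident;      // IOR on the side the ray arrives from
    float iorTransmitted;   // IOR on the side the ray would refract into
};

static const Dielectric kAir{ -1, 0, 1.0f };

// Highest-priority medium currently enclosing the ray (air if none).
inline Dielectric dominantMedium(const InteriorList& list) {
    if (list.empty()) return kAir;
    return *std::max_element(list.begin(), list.end(),
        [](const Dielectric& a, const Dielectric& b) { return a.priority < b.priority; });
}

// Classify a hit on the boundary of `hit` without mutating the list.
// `entering` is true when the ray crosses from outside `hit` to inside it.
BoundaryEvent classifyBoundary(const InteriorList& list, const Dielectric& hit, bool entering) {
    const Dielectric current = dominantMedium(list);
    if (entering) {
        // Entering a lower-priority medium while inside a higher-priority one
        // is a "false" intersection: the boundary is optically invisible.
        const bool isTrue = hit.priority > current.priority;
        return { isTrue, current.ior, isTrue ? hit.ior : current.ior };
    }
    // Exiting `hit`: only a visible boundary if `hit` is the dominant medium;
    // otherwise the exit is hidden inside something with higher priority.
    const bool isTrue = (hit.id == current.id);
    InteriorList remaining = list;
    remaining.erase(std::remove_if(remaining.begin(), remaining.end(),
        [&](const Dielectric& d) { return d.id == hit.id; }), remaining.end());
    return { isTrue, current.ior, isTrue ? dominantMedium(remaining).ior : current.ior };
}

// List maintenance applied by the integrator: false intersections always pass
// through (enter/exit unconditionally), while true intersections enter or exit
// only when the sampled lobe actually transmits.
void enterMedium(InteriorList& list, const Dielectric& d) { list.push_back(d); }
void exitMedium(InteriorList& list, const Dielectric& d) {
    list.erase(std::remove_if(list.begin(), list.end(),
        [&](const Dielectric& e) { return e.id == d.id; }), list.end());
}
```

Our production version layers much more on top of this, but this is roughly the kind of classification that lets a character inside water inside a double-walled glass tube resolve into the correct sequence of interfaces without artists having to model every boundary with airtight precision.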

During Moana 2’s production, we started work with DisneyResearch|Studios on a next-generation path guiding system in Hyperion that supports both volumes and surfaces (unlike our previous path guiding system, which only supported surfaces); this new system is built on top of the excellent and state-of-the-art Open Path Guiding Library (OpenPGL) [Herholz and Dittebrandt 2022]. Zootopia 2 is the first film where we’ve been able to deploy our next-generation path guiding on a wide scale, rendering about 12% of the entire movie using this system. We presented a lot of the technical details of this new system in our course on path guiding [Reichardt et al. 2025] at SIGGRAPH 2025, but a lot more work beyond what we presented in that course had to go into making path guiding a truly production-scalable renderer feature. This effort required deep collaboration between a handful of developers on the Hyperion team and a bunch of folks at DisneyResearch|Studios, to the point where over the past few years DisneyResearch|Studios has been using Hyperion essentially as one of their primary in-house research renderer platforms and DisneyResearch|Studios staff have been working directly with us on the same codebase. Having come from a more academic rendering background, I think this is one of the coolest things that being part of the larger Walt Disney Company enables our team and studio to do. Our next-generation path guiding system proved to be a really valuable tool on Zootopia 2; in several parts of the movie, entire sequences that we had anticipated to be extraordinarily difficult to render saw enormous efficiency and workflow improvements thanks to path guiding and wound up going through with relative ease!

One particularly fun thing about working on Zootopia 2 was that my wife, Harmony Li, was one of the movie’s Associate Technical Supervisors; this title means she was one of the leads for Zootopia 2’s TD department. Harmony being a supervisor on the show meant I got to work closely with her on a few things! She oversaw character look, simulation, technical animation, crowds, and something that Disney Animation calls “Tactics”, which is essentially optimization across the entire show ranging from pipeline and workflows all the way to render efficiency. As part of Zootopia 2’s Tactics strategy, the rendering team was folded more closely into the asset building process than in previous shows. Having huge crowds of thousands of characters on screen meant that every single individual character needed to be as optimized as possible, and to that end the rendering team helped provide guidance and best practices early in the character modeling and look development process to try to keep everything optimized while not compromising on final look. However, render optimization was only a small part of making the huge crowds in Zootopia 2 possible; various production technology teams and the TD department put enormous groundbreaking work into developing new ways to efficiently author and represent crowd rigs in USD and to interactively visualize huge crowds covered in fur inside of our 3D software packages. All of this also had to be done while, for the first time on a feature film project, Disney Animation switched from Maya to Presto for animation, and all on a movie which by necessity contains by far the greatest variety of different rig types and characters of any of our films (possibly of any animated film, period). Again, more on all of this at SIGGRAPH 2026, hopefully.

All of the things I’ve written about in this post are just a few examples of why I think having a dedicated in-house technology development team is so valuable to the way we make films- Disney Animation’s charter is to always be making animated films that push the limits of the art form, and making sure our films are the best-looking films we can possibly make is a huge part of that goal. As an example, while Hyperion has a lot of cool features and unique technologies that are custom tailored to support Disney Animation’s needs and workflows, in my opinion the real value Hyperion brings at the end of the day is that our rendering team partners extremely closely with our artists and TDs to build exactly the tools that are needed for each of our movies, with maximum flexibility and customization since we know and develop the renderer from top to bottom. This is true of every technology team at Disney Animation, and it’s a big part of why I love working on our movies. I’ve written only about the projects I worked directly on in this post, which is a tiny subset of the whole of what went into making this movie. Making Zootopia 2 took dozens and dozens of these types of projects to achieve, and I’m so glad to have gotten to be a part of it!

On another small personal note, my wife and I had our first kid during the production of Zootopia 2, and our baby’s name is in the credits in the production babies section. What a cool tradition, and what a cool thing that our baby will forever be a part of!

Below are some beautiful frames from Zootopia 2. Every last detail in this movie was hand-crafted by hundreds of artists and TDs and engineers out of a dedication to and love for animation as an art form, and I promise this movie is worth seeing on the biggest theater screen you can find!

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

References

Nicholas Burkard, Hans Keim, Brian Leach, Sean Palmer, Ernest J. Petti, and Michelle Robinson. 2016. From Armadillo to Zebra: Creating the Diverse Characters and World of Zootopia. In ACM SIGGRAPH 2016 Production Sessions. Article 24.

Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum (Proc. of Eurographics) 35, 2 (May 2016), 275-283.

Sebastian Herholz and Addis Dittebrandt. 2022. Intel® Open Path Guiding Library.

Thomas Müller, Marios Papas, Markus Gross, Wojciech Jarosz, and Jan Novák. 2016. Efficient Rendering of Heterogeneous Polydisperse Granular Media. ACM Transactions on Graphics (Proc. of SIGGRAPH Asia) 35, 6 (Nov. 2016), Article 168.

Lea Reichardt, Brian Green, Yining Karl Li, and Marco Manzi. 2025. Path Guiding Surfaces and Volumes in Disney’s Hyperion Renderer- A Case Study. In ACM SIGGRAPH 2025 Course Notes: Path Guiding in Production and Recent Advancements. 30-66.

Charles M. Schmidt and Brian Budge. 2002. Simple Nested Dielectrics in Ray Traced Images. Journal of Graphics Tools 7, 2 (Jan. 2002), 1–8.

SIGGRAPH 2025 Course Notes- Path Guiding Surfaces and Volumes in Disney's Hyperion Renderer- A Case Study

This year at SIGGRAPH 2025, Sebastian Herholz from Intel organized a follow-up to 2019’s Path Guiding in Production course [Vorba et al. 2019]. This year’s edition of the course includes presentations by Sebastian on Intel’s Open Path Guiding Library and on general advice for integrating path guiding techniques into a unidirectional path tracing renderer, a presentation by Martin Šik on how Chaos’s Corona Renderer uses advanced photon guiding techniques in its caustics solver, and a presentation by Lea Reichardt and Marco Manzi on the work Disney Animation and DisneyResearch|Studios have put into Hyperion’s second-generation path guiding system for surfaces and volumes. I strongly encourage checking out the whole course, but wanted to highlight Lea and Marco’s presentation in particular; they put a ton of care and effort into what I think is a really cool and unique look at what it takes to bring cutting-edge research into a production rendering environment. The course notes were written by the four presenters above, in addition to Brian Green and myself from the Hyperion development team.

The course will be presented on Tuesday, August 12th, starting at 3:45 PM.

Figure 1 from the paper. A production scene from Moana 2, rendered using path guiding in Disney’s Hyperion Renderer. From left to right: reference baseline, 64 SPP without path guiding, 64 SPP with path guiding, and visualization of the path guiding spatio-directional field at 256 SPP. © 2025 Disney.

Here is the abstract:

We present our approach to implementing a second-generation path guiding system in Disney’s Hyperion Renderer, which draws upon many lessons learned from our earlier first-generation path guiding system. We start by focusing on the technical challenges associated with implementing path guiding in a wavefront style path tracer and present our novel solutions to these challenges. We will then present some powerful visualization and debugging tools that we developed along the way to both help us validate our implementation’s correctness and help us gain deeper insight into how path guiding performs in a complex production setting. Deploying path guiding in a complex production setting raises various interesting challenges that are not present in purely academic settings; we will explore what we learned from solving many of these challenges. Finally, we will look at some concrete production test results and discuss how these results inform our large scale deployment of path guiding in production. By providing a comprehensive review of what it took for us to achieve this deployment on a large scale in our production environment, we hope that we can provide useful lessons and inspiration for anyone else looking to similarly deploy path guiding in production, and also provide motivation for interesting future research directions.

The paper and related materials can be found at:

All of the technical details are in the paper and presentation (and with 80 pages of course notes, 36 of which make up the Disney Animation / DisneyResearch|Studios chapter, there are technical details for days!), so this blog post is just some personal thoughts on this project.

As mentioned in the abstract, what we’re presenting in this course is our second-generation path guiding system. Disney Animation and DisneyResearch|Studios have a long history of working on path guiding; one of the landmark papers in modern path guiding was Practical Path Guiding (PPG) [Müller et al. 2017], which came out of DisneyResearch|Studios, and Hyperion was one of the first production renderers to implement PPG [Müller 2019]. We’ve used path guiding on a limited number of shots on most movies starting with Frozen 2, but as the course notes go into more detail on, for a variety of reasons our first-generation path guiding system never gained widespread adoption. Several years ago, based on a research proposal drafted by Wei-Feng Wayne Huang while he was still at Disney Animation, we kicked off a large-scale project to further improve path guiding and bring it to a point where we could get widespread adoption and provide significant benefits to production. This project is a collaboration between DisneyResearch|Studios, Disney Animation, Pixar, ILM, and Sebastian Herholz from Intel; the second-generation path guiding system in Hyperion is a product of this project.

Personally, this project is one of my all-time favorite projects that I’ve gotten to be a part of, and I think this project really highlights what an incredible research organization DisneyResearch|Studios is. Getting to collaborate with rendering colleagues from across multiple labs and studios is always fun and interesting, and I think for projects like this, the huge amount of production and engineering expertise that the various Disney studios bring combined with the world-class research talent at DisneyResearch|Studios and ETH Zürich (which DisneyResearch|Studios is academically partnered with) gives unique perspectives and capabilities for tackling difficult problems. On top of the production experience from three cutting edge studios, DisneyResearch|Studios also has deep access to both source code and the engineering teams for not one but two extremely mature production rendering systems, Hyperion [Burley et al. 2018] and RenderMan [Christensen et al. 2018]; I don’t think there’s anything else quite like this setup in our industry, and I think it’s insanely cool that we get to work together like this!

One of the largest focus points for our second-generation path guiding efforts was to find a way to guide jointly for both surfaces and volumes; our first-generation PPG-based system only supported surfaces. Over the past decade our artists have made heavier and heavier use of volumetrics in each successive project, to the point where now almost every shot in our movies contains some form of volumetrics, ranging from subtle atmospherics all the way to enormously complex setups like the storm battle at the end of Moana 2. We already knew from past experience that extending PPG to volumes wasn’t as easy as it might look, and a second-generation path guiding system would likely need to be a significant departure from PPG. Towards the start of this project we learned that Sebastian Herholz at Intel was working in a similar direction and had incorporated a wide swath of recent path guiding research [Müller et al. 2017, Herholz et al. 2019, Ruppert et al. 2020, Xu et al. 2024] into Intel’s open source OpenPGL library; at this point the project was expanded to include a collaboration with Sebastian. This collaboration has been extraordinarily fruitful, with work from the Disney side of things helping inform development for OpenPGL and expertise from Sebastian helping us build on top of OpenPGL.
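
For readers less familiar with path guiding, the basic idea at the integrator level is to combine BSDF sampling with sampling from a learned directional distribution of where light actually comes from. The sketch below shows the standard one-sample mixture formulation of that combination; the interfaces here are hypothetical stand-ins for illustration only, not OpenPGL's actual API or Hyperion's implementation.

```cpp
#include <random>

struct Vec3 { float x, y, z; };

// Hypothetical interfaces: a real system queries a learned spatio-directional
// radiance structure at the shading point rather than these toy classes.
struct Bsdf {
    virtual Vec3 sample(float u1, float u2) const = 0;  // draw a direction
    virtual float pdf(const Vec3& wi) const = 0;        // density of that strategy
    virtual Vec3 eval(const Vec3& wi) const = 0;        // BSDF value times cos(theta)
    virtual ~Bsdf() = default;
};
struct GuidingDistribution {
    virtual Vec3 sample(float u1, float u2) const = 0;  // draw from learned radiance
    virtual float pdf(const Vec3& wi) const = 0;
    virtual ~GuidingDistribution() = default;
};

struct DirectionSample {
    Vec3 wi;
    float pdf;     // mixture density: this is what the estimator divides by
    Vec3 weight;   // eval(wi) / pdf, or zero if the sample is invalid
};

// One-sample mixture of BSDF sampling and guided sampling: with probability
// `alpha` the direction comes from the guiding distribution, otherwise from
// the BSDF, and either way the estimator uses the combined mixture PDF.
DirectionSample sampleGuided(const Bsdf& bsdf, const GuidingDistribution& guide,
                             float alpha, std::mt19937& rng) {
    std::uniform_real_distribution<float> U(0.0f, 1.0f);
    const float u1 = U(rng);
    const float u2 = U(rng);
    const Vec3 wi = (U(rng) < alpha) ? guide.sample(u1, u2) : bsdf.sample(u1, u2);

    const float pdf = alpha * guide.pdf(wi) + (1.0f - alpha) * bsdf.pdf(wi);
    DirectionSample s{ wi, pdf, { 0.0f, 0.0f, 0.0f } };
    if (pdf > 0.0f) {
        const Vec3 f = bsdf.eval(wi);
        s.weight = { f.x / pdf, f.y / pdf, f.z / pdf };
    }
    return s;
}
```

The key property is that the estimator divides by the mixture PDF regardless of which strategy produced the direction, so adding guiding does not introduce bias; the quality of the learned distribution only affects variance.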

An interesting aspect of our next-generation path guiding project is that this project has been both an academic research project and a production engineering project; over the past several years, this project has spawned a series of cool research papers, but has also included a huge effort to get all of this research implemented in Hyperion and RenderMan and into artists’ hands to actually make movies with, which means solving tons of practical production problems that sit outside of the usual research focus. On the research side, so far the project has spawned three papers. Dodik et al. [2022] built upon PPG by using spatio-directional mixture models to improve guiding’s ability to learn and product-sample arbitrarily oriented BSDFs. Xu et al. [2024] introduced a way to guide volume scattering probability (which is indirectly related to distance sampling) in volume rendering, which historically has been a major missing piece in path guiding in volumes. The most recent paper, Rath et al. [2025], looks at how to incorporate GPU-based neural forms of path guiding into existing CPU-based renderers. Each of these research papers tackles a major challenge we’ve found while working towards making path guiding practical in production.

To bridge between the research work and building a practical production system, we’ve put a lot of work into solving both architectural technical challenges and artist-facing user experience challenges. One of the largest architectural challenges has been fitting path guiding, which learns from full path histories, into Hyperion’s wavefront rendering architecture [Eisenacher et al. 2013], in which path histories are not kept beyond the current bounce for each path (RenderMan XPU [Christensen et al. 2025] is also a wavefront system, so the challenges there are similar). The artist-facing user experience challenges stem from the reality that production renderers include many features that break physics to allow for better artistic control and more predictable results, which are difficult to account for when developing path guiding techniques in a purely academic renderer. Solving these engineering and user experience challenges in order to build a practical production system is the focus of our part of the course this year. What we’re presenting in the course is really a snapshot of where we were at the beginning of the year; the material in the course represents enormous progress towards a robust practical system, but this project is very much still in progress, and we’ve made additional advancements since we finished writing the course materials. Hopefully we can present even more next year!
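
Purely as an illustration of the bookkeeping problem (and explicitly not the solution described in the course), one generic way to reconcile learning-from-full-paths with a wavefront architecture is to append path vertices to a persistent side buffer indexed by path ID as the wavefront advances, and only hand completed paths to the guiding structure for training when they terminate. Everything in the sketch below is hypothetical.

```cpp
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// One recorded vertex of a light path: enough information for a guiding
// structure to later learn "from this position, this direction carried this
// much radiance". The fields are illustrative, not an actual schema.
struct PathSegment {
    Vec3 position;
    Vec3 direction;
    Vec3 contribution;  // radiance found at this vertex (e.g. emission or
                        // next-event estimation), before back-propagation
};

// In a wavefront renderer, rays from many paths advance in large batches and
// per-ray state is kept minimal, so segments go into a persistent side buffer
// indexed by path ID and are only consumed when the path terminates.
class PathSegmentBuffer {
public:
    explicit PathSegmentBuffer(uint32_t numPaths) : mSegments(numPaths) {}

    void record(uint32_t pathId, const PathSegment& segment) {
        mSegments[pathId].push_back(segment);
    }

    // Called when path `pathId` terminates: accumulate contributions from the
    // back of the path forward so each segment knows the total radiance that
    // flowed through it (a real system would also fold in per-bounce
    // throughput; omitted for brevity), hand the segments to the training
    // step, and reset the slot for reuse by the next path.
    template <typename TrainFn>
    void finalize(uint32_t pathId, TrainFn&& train) {
        std::vector<PathSegment>& segs = mSegments[pathId];
        Vec3 carried{ 0.0f, 0.0f, 0.0f };
        for (auto it = segs.rbegin(); it != segs.rend(); ++it) {
            carried.x += it->contribution.x;
            carried.y += it->contribution.y;
            carried.z += it->contribution.z;
            it->contribution = carried;
        }
        train(segs);
        segs.clear();
    }

private:
    std::vector<std::vector<PathSegment>> mSegments;
};
```

The course notes go into how this kind of state management actually plays out at production scale; the sketch is only meant to show why "the renderer forgets the path after each bounce" and "the guiding structure learns from whole paths" are in tension.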

My favorite part of working on these projects is always getting to work with and learn from really cool people. On the DisneyResearch|Studios side, this project has been led by Marco Manzi and Marios Papas, with significant contributions by Alexander Rath, Tiziano Portenier, and engineering support from Lento Manickathan. On the Disney Animation side, Lea Reichardt, Brian Green, and I have been working closely with Marco to build out the production system. Of course we have a long list of artists and TDs to thank on the production side for supporting and trying out this project; a full list of acknowledgements can be found on my project page and in the course notes. Pushing path guiding forward in research and production together has been the type of project where everyone on the project learns a lot from each other, the studios learn a lot and gain incredibly powerful and useful new tools, and the wider research community benefits as a whole. Investing in these types of large scale, longer term research projects is always a daunting task, and the fact that our studio leadership has given this project so much support and given us the time and resources to really make it a big success is just another testament to the commitment Disney as a whole has towards making the best movies we can possibly make!

References

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Hank Driskill, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2018. The Design and Evolution of Disney’s Hyperion Renderer. ACM Transactions on Graphics. 37, 3 (2018), Article 33.

Per H. Christensen, Julian Fong, Jonathan Shade, Wayne L Wooten, Brenden Schubert, Andrew Kensler, Stephen Friedman, Charlie Kilpatrick, Cliff Ramshaw, Marc Bannister, Brenton Rayner, Jonathan Brouillat, and Max Liani. 2018. RenderMan: An Advanced Path Tracing Architecture for Movie Rendering. ACM Transactions on Graphics. 37, 3 (2018), Article 30.

Per H. Christensen, Julian Fong, Charlie Kilpatrick, Francisco Gonzalez Garcia, Srinath Ravichandran, Akshay Shah, Ethan Jaszewski, Stephen Friedman, James Burgess, Trina M. Roy, Tom Nettleship, Meghana Seshadri, and Susan Salituro. 2025. RenderMan XPU: A Hybrid CPU+GPU Renderer for Interactive and Final-Frame Rendering. Computer Graphics Forum (Proc. of High Performance Graphics) 44, 8 (Jun. 2025), Article e70218.

Ana Dodik, Marios Papas, Cengiz Öztireli, and Thomas Müller. 2022. Path Guiding Using Spatio-Directional Mixture Models. Computer Graphics Forum (Proc. of Eurographics) 41, 1 (Feb. 2022), 172-189.

Christian Eisenacher, Gregory Nichols, Andrew Selle, and Brent Burley. 2013. Sorted Deferred Shading for Production Path Tracing. Computer Graphics Forum. 32, 4 (2013), 125-132.

Sebastian Herholz, Yangyang Zhao, Oskar Elek, Derek Nowrouzezahrai, Hendrik P. A. Lensch, and Jaroslav Křivánek. 2019. Volume Path Guiding Based on Zero-Variance Random Walk Theory. ACM Transactions on Graphics (Proc. of SIGGRAPH) 38, 3 (Jun 2019), Article 25.

Thomas Müller, Markus Gross, and Jan Novák. 2017. Practical Path Guiding for Efficient Light-Transport Simulation. Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering) 36, 4 (Jun. 2017), 91-100.

Thomas Müller. 2019. Practical Path Guiding in Production. In ACM SIGGRAPH 2019 Course Notes: Path Guiding in Production. 37-50.

Alexander Rath, Marco Manzi, Farnood Salehi, Sebastian Weiss, Tiziano Portenier, Saeed Hadadan, and Marios Papas. 2025. Neural Resampling with Optimized Candidate Allocation. In Proc. of Eurographics Symposium on Rendering (EGSR 2025). Article 20251181.

Lukas Ruppert, Sebastian Herholz, and Hendrik P. A. Lensch. 2020. Robust Fitting of Parallax-Aware Mixtures for Path Guiding. ACM Transactions on Graphics (Proc. of SIGGRAPH) 39, 4 (Aug 2020), Article 147.

Jiří Vorba, Johannes Hanika, Sebastian Herholz, Thomas Müller, Jaroslav Křivánek, and Alexander Keller. 2019. Path Guiding in Production. In ACM SIGGRAPH 2019 Courses. Article 18.

Kehan Xu, Sebastian Herholz, Marco Manzi, Marios Papas, and Markus Gross. 2024. Volume Scattering Probability Guiding. ACM Transactions on Graphics (Proc. of SIGGRAPH Asia) 43, 6 (Nov. 2024), Article 184.

SIGGRAPH 2025 Talk- A Texture Streaming Pipeline for Real-Time GPU Ray Tracing

This year at SIGGRAPH 2025, Mark Lee, Nathan Zeichner, and I have a talk about a GPU texture streaming system we’ve been working on for Disney Animation’s in-house real-time GPU ray tracing previsualization renderer. Of course, GPU texture streaming systems are not exactly something novel; pretty much every game engine and every GPU-based production renderer out there has one. However, because Disney Animation’s texturing workflow is 100% based on Ptex, our texture streaming system has to be built to support Ptex really well, and this imposes some interesting design requirements and constraints on the problem. We thought that these design choices would make for an interesting talk!

Nathan will be presenting the talk at SIGGRAPH 2025 in Vancouver as part of the “Real-Time and Mobile Techniques” session on Sunday August 10th, starting at 9am.

A higher-res version of Figure 1 from the paper. A scene from Moana 2, rendered using our real-time GPU ray tracer (B, C, D) and compared with the final frame (A) from Disney’s Hyperion Renderer. Rendering without textures (D) is not a useful previsualization for (A). Without streaming, only the lowest resolution MIP tile per Ptex face can fit on the GPU (C). With our texture streaming, we handle 1.5 TB of Ptex files on disk using only 2 GB of GPU VRAM to achieve a result (B) that matches the texture detail of (A) while maintaining >95% of the average performance of (D), without stalls.

Here is the paper abstract:

Disney Animation makes heavy use of Ptex [Burley and Lacewell 2008] across our assets [Burley et al. 2018], which required a new texture streaming pipeline for our new real-time ray tracer. Our goal was to create a scalable system which could provide a real-time, zero-stall experience to users at all times, even as the number of Ptex files expands into the tens of thousands. We cap the maximum size of the GPU cache to a relatively small footprint, and employ a fast LRU eviction scheme when we hit the limit.

The paper and related materials can be found at:

As usual, all of the technical details are in the paper and presentation, so this blog post is just my personal notes on this project.

We’ve been working on this project for a pretty long while, and what we’re presenting in this talk is actually the second generation of GPU texture streaming system with Ptex support that our team has built. The earlier first prototype of our GPU Ptex system was largely written by Joe Schutte, to whom we are very indebted for paving the way and proving out various ideas, such as the use of cuckoo hashing [Erlingsson et al. 2006] for storing keys. We learned a ton of lessons from that first prototype, which informed the modern incarnation of the system. The core of the modern system was primarily written by Mark, with a lot of additional work from Nathan to generalize the system to support both our CUDA/OptiX based GPU ray tracing previsualization renderer and our in-house fork of Hydra’s Storm rasterizer. My role on the project was pretty small; I was essentially just a consultant contributing some ideas and brainstorming, so I’m very grateful to Mark and Nathan for having allowed me to contribute to this talk!
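
To illustrate the general shape of a capped, LRU-evicting tile cache like the one the abstract describes, here is a small CPU-side sketch; the names, keying scheme, and data layout are hypothetical, and the real system does its lookups on the GPU (using cuckoo hashing) rather than with a host-side hash map like this one.

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>

// Illustrative cache key: one MIP level of one Ptex face.
struct TileKey {
    uint32_t textureId;
    uint32_t faceId;
    uint32_t mipLevel;
    bool operator==(const TileKey& o) const {
        return textureId == o.textureId && faceId == o.faceId && mipLevel == o.mipLevel;
    }
};

struct TileKeyHash {
    size_t operator()(const TileKey& k) const {
        size_t h = k.textureId;
        h = h * 0x9E3779B97F4A7C15ull ^ k.faceId;
        h = h * 0x9E3779B97F4A7C15ull ^ k.mipLevel;
        return h;
    }
};

struct TileData {
    uint64_t deviceAddress = 0;  // where the tile's texels live in GPU memory
    size_t sizeInBytes = 0;
};

// Fixed-budget tile cache with least-recently-used eviction: lookups promote
// tiles to the front of a recency list, and inserts evict from the back until
// the new tile fits under the byte budget.
class TileCache {
public:
    explicit TileCache(size_t budgetBytes) : mBudget(budgetBytes) {}

    // Returns the resident tile, or nullptr on a miss (the caller would then
    // schedule an asynchronous upload and retry on a later frame).
    const TileData* lookup(const TileKey& key) {
        auto it = mIndex.find(key);
        if (it == mIndex.end()) return nullptr;
        mRecency.splice(mRecency.begin(), mRecency, it->second);  // promote
        return &it->second->second;
    }

    void insert(const TileKey& key, const TileData& tile) {
        if (mIndex.count(key) > 0) return;  // already resident
        while (!mRecency.empty() && mUsed + tile.sizeInBytes > mBudget) {
            const Entry& victim = mRecency.back();  // least recently used
            mUsed -= victim.second.sizeInBytes;
            // (a real system would also free or recycle the victim's GPU memory)
            mIndex.erase(victim.first);
            mRecency.pop_back();
        }
        mRecency.emplace_front(key, tile);
        mIndex[key] = mRecency.begin();
        mUsed += tile.sizeInBytes;
    }

private:
    using Entry = std::pair<TileKey, TileData>;
    size_t mBudget;
    size_t mUsed = 0;
    std::list<Entry> mRecency;  // front = most recently used
    std::unordered_map<TileKey, std::list<Entry>::iterator, TileKeyHash> mIndex;
};
```

The important properties are the hard byte budget and the fact that eviction is driven purely by recency, so a burst of new tiles coming into view pushes out whatever has gone unused the longest.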

One of the biggest lessons I’ve learned during my professional career has been the value of building systems twice- Chapter 11 of the famous Mythical Man-Month book by Frederick Brooks is all about the value of building a first version to throw away, because much of what is required to build an actually good solution can only be learned through the process of attempting to build a solution in the first place. A lot of the design choices that went into the system described in our talk draw from both the earlier prototype and past experience building texture streaming systems in other renderers. For example, one big lesson that Mark and I both learned independently is that texture filtering is extremely hard (and it’s famously even harder in Ptex due to the need to filter across faces with potentially very different resolutions), and in a stochastic ray tracing renderer, a better solution is often to just point sample and linearly interpolate between the two closest MIP levels. Mark learned this on Moonray [Lee et al. 2017], and I’ve written about learning this on the blog before. I think this project is a great example of learning from previous attempts in the same general problem domain while also avoiding the second-system effect; what we have today is really fast, really robust, and given how hard texture streaming generally is as a problem domain, I think Mark and Nathan did an impressive job in keeping the actual implementation compact, elegant, and easy to work with.
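
Here is a minimal sketch of that point-sample-and-lerp idea (the function names and parameters are hypothetical and just for illustration; how the footprint is computed and how differing face resolutions are handled is its own topic): the ray footprint selects a continuous level of detail, and the result is a blend of a single nearest-neighbor lookup from each of the two bracketing MIP levels. Any noise from skipping proper filtering gets absorbed by the many samples a stochastic renderer already averages per pixel.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Rgb { float r, g, b; };

// Stochastic-friendly MIP filtering: instead of anisotropic filtering across
// Ptex face boundaries, take one nearest-neighbor ("point") sample from each
// of the two MIP levels bracketing the ray footprint and blend them.
// `pointSample(faceId, mipLevel, u, v)` is a hypothetical callable that does
// an unfiltered lookup into one resident MIP level of one face.
template <typename PointSampleFn>
Rgb sampleFaceTexture(PointSampleFn&& pointSample, uint32_t faceId,
                      float u, float v,
                      float footprintInTexels,  // projected footprint at MIP 0
                      uint32_t coarsestMip)     // last level in the MIP chain
{
    // Continuous level of detail from the footprint, clamped to valid levels.
    const float lod = std::clamp(std::log2(std::max(footprintInTexels, 1.0f)),
                                 0.0f, float(coarsestMip));
    const uint32_t level0 = static_cast<uint32_t>(lod);
    const uint32_t level1 = std::min(level0 + 1, coarsestMip);
    const float t = lod - float(level0);

    const Rgb a = pointSample(faceId, level0, u, v);
    const Rgb b = pointSample(faceId, level1, u, v);
    return { a.r + t * (b.r - a.r),
             a.g + t * (b.g - a.g),
             a.b + t * (b.b - a.b) };
}
```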

Of course we need to acknowledge that there have in fact been previous attempts at implementing Ptex on the GPU, with varying degrees of success. McDonald and Burley [2011] was the first demonstrated implementation of Ptex on the GPU, but it required a preprocessing step and had to deal with various complications imposed by using OpenGL/DirectX hardware texturing; this early implementation also didn’t support texture streaming. Our implementation is built primarily in CUDA and bypasses all of the traditional graphics API stuff for texturing; we deal with the per-face textures at the raw memory block level, which allows us to have zero preprocessing steps and robust, fast streaming from the CPU to the GPU. Kim et al. [2011]’s solution was to pack all of the individual per-face textures into a single giant atlased texture; back when I worked on Pixar’s OptiX-based preview path tracer (called RTP), this was essentially the solution that we used. However, this solution faces major problems with MIP mapping, since faces that are next to each other in the atlas but non-adjacent in the mesh topology can bleed into each other while filtering to generate each level in the MIP chain for the single giant atlas. By streaming the original per-face textures to the GPU and using exactly the same data as what’s in the CPU Ptex implementation, we avoid all of the issues with atlasing.

An interesting thing that I think our system demonstrates is that some of the preexisting assumptions about Ptex that float around in the wider industry aren’t necessarily true. For some time now there’s been an assumption that Ptex cannot be fast for incoherent access; while it is true that Hyperion gains performance advantages from coherent shading [Eisenacher et al. 2013] and therefore coherent Ptex reads, I don’t think this is really a property of Ptex itself (as hinted at by PBRT’s integration of Ptex). One notable thing about our interactive GPU path tracing use case is that the Ptex access pattern is totally incoherent for secondary bounces- we use depth-first integrators in our previz path tracer. The demo video we included with the talk doesn’t really show this since in the demo video we just show a headlight-shaded view for the sake of clearly illustrating the texture streaming behavior, but in actual production usage our texture streaming system serves multi-bounce depth-first path tracing at interactive rates without a problem.

A final note on the demo video- unfortunately I had to capture the demo video over remote desktop at the time, so there are a few frame hitches and stalls in the video. Those hitches and stalls come entirely from recording over remote desktop, not from the texture streaming system; in Nathan’s presentation, we have some better demo videos that were recorded via direct video capture off of HDMI, and in those videos there are zero frame drops or stalls even when we force-evict the entire contents of the on-GPU texture cache.

I want to thank both the Hyperion development and Interactive Visualization development teams at Disney Animation for supporting this project, and of course we thank Brent Burley and Daniel Teece for their feedback and assistance with various Ptex topics. Finally, thanks again to Mark and Nathan for being such great collaborators. I’ve had the pleasure of working closely with Mark on a number of super cool projects over the years and I’ve learned vast amounts from him. Nathan and I go back a very long way; we first met in school and we’ve been friends for around 15 years now, but this was the first time we’ve actually gotten to do a talk together, which was great fun!

That’s all of my personal notes for this talk. If this is interesting to you, please go check out the paper and catch the presentation either live at the conference or recorded afterwards!

References

Frederick P. Brooks, Jr. 1975. The Mythical Man-Month: Essays on Software Engineering, 1st ed. Addison-Wesley.

Brent Burley and Dylan Lacewell. 2008. Ptex: Per-face Texture Mapping for Production Rendering. Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering) 27, 4 (Jun. 2008), 1155-1164.

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Hank Driskill, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2018. The Design and Evolution of Disney’s Hyperion Renderer. ACM Transactions on Graphics. 37, 3 (2018), Article 33.

Christian Eisenacher, Gregory Nichols, Andrew Selle, and Brent Burley. 2013. Sorted Deferred Shading for Production Path Tracing. Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering) 32, 4 (Jul. 2013), 125-132.

Ulfar Erlingsson, Mark Manasse, and Frank McSherry. 2006. A Cool and Practical Alternative to Traditional Hash Tables. Microsoft Research Tech Report.

Sujeong Kim, Karl Hillesland, and Justin Hensley. 2011. A Space-efficient and Hardware-friendly Implementation of Ptex. In ACM SIGGRAPH Asia 2011 Sketches. Article 31.

Mark Lee, Brian Green, Feng Xie, and Eric Tabellion. 2017. Vectorized Production Path Tracing. In Proc. of High Performance Graphics (HPG 2017). Article 10.

John McDonald and Brent Burley. 2011. Per-Face Texture Mapping for Real-time Rendering. In ACM SIGGRAPH 2011 Talks. Article 10.

Photography Show at Disney Animation

The inside of Disney Animation’s Burbank building is basically one gigantic museum-quality art gallery that happens to have an animation studio embedded within, and one really cool thing that the studio does from time to time is to put on an internal art show with work from various Disney Animation employees. The latest show is a photography show, and I got to be a part of it and show some of my photos! The show, titled HAVE CAMERA, WILL TRAVEL, was coordinated and designed by the amazing Justin Hilden from Disney Animation’s legendary Animation Research Library, and features work from seven Disney Animation photographers: Alisha Andrews, Rehan Butt, Joel Dagang, Brian Gaugler, Ashley Lam, Madison Kennaugh, and myself. My peers in the show are all incredible photographers whose work I find really inspiring; I encourage checking out their photography work online! The show will be up inside of Disney Animation’s Burbank studio for several months.

Ever since my dad gave my brother and me a camera when I was in high school, photography has been a major hobby of mine. Today I have several cameras, a bunch of weird and fun and interesting lenses that I have collected over the years, and I take a lot of photos every year (which has only ramped up even more after I became a dad myself). However, I rarely, if ever, post or share my photos publicly; for me, my photography hobby is purely for myself and my close friends and family. Participating in a photography show was a bit of a leap of faith for me, even within the restricted domain of my workplace rather than the general public. I think I’m a passable photographer at this point, but certainly nowhere near amazing. However, one advantage of having taken tens of thousands of photos over the past 15 years is that even if only a tiny percentage of my photos are good enough to show, a tiny percentage of tens of thousands is still enough to pull together a small collection to show.

I thought I’d share the photos I have in the show here on my blog as well. There isn’t really a coherent theme; these are just photos I’ve taken that I liked from the past several years. Some are travel photos, some are of my family, and others are just interesting moments that I noticed. I won’t go into my photography and editing process and whatnot here; I’ll save that for a future post.

I color grade my photos for both SDR and HDR; if you are using a device/browser that supports HDR1, a toggle will appear below giving the ability to enable HDR on this page. Give it a try!


I wrote a small artist’s statement for the show:

To me, a camera is actually a time machine. Taking photos gives me a way to connect back to moments and places in the past; for this reason I take a lot of photos mostly for my own memory, and every once in a rare while one of them is actually good enough to show other people!

I shoot with whatever camera I happen to have on me at the moment. Sometimes it’s a big fancy DSLR, sometimes it’s the phone in my pocket, sometimes it’s something in between. I learned a long time ago that the best camera is just whatever one is in reach at the moment.

Thanks to Harmony for her patience every time I fumbled a lens in my backpack.

Here are my photos from the show, presented in no particular order:

Los Angeles, California | Nikon Z8 | Smena Lomo T-43 40mm ƒ/4

Denver, Colorado | Nikon Z8 | Zeiss Planar T* 50mm ƒ/1.4 C/Y

Mammoth, California | iPhone 14 Pro | Telephoto Lens 77mm ƒ/2.8

Burbank, California | Nikon Z8 | Zeiss Kipronar 105mm ƒ/1.9

Shanghai, China | Nikon Z8 | Nikon Nikkor Z 24-120mm ƒ/4 S

Burbank, California | Nikon Z8 | Asahi Pentax Super-Takumar 50mm ƒ/1.4

Philadelphia, Pennsylvania | iPhone 5s | Main Lens 29mm ƒ/2.2

Shanghai, China | Nikon D5100 | Nikon AF-S DX Nikkor 18-55mm ƒ/3.5-5.6

Burbank, California | Nikon Z8 | Nikon Nikkor Z 24-120mm ƒ/4 S

Additionally, there were a few photos that I had originally picked out for the show but didn’t make the cut in the end due to limited wall space. I thought I’d include them here as well:

Hualien, Taiwan | Nikon Z8 | Nikon Nikkor Z 24-120mm ƒ/4 S

Burbank, California | Fujifilm X-M1 | Fujifilm Fujinon XF 27mm ƒ/2.8

Los Angeles, California | iPhone 5s | Main Lens 29mm ƒ/2.2

Here’s some additional commentary for each of the photos, presented in the same order that the photos are in:

  1. The south hall of the Los Angeles Convention Center, taken while walking between sessions at a past SIGGRAPH.
  2. My wife, Harmony Li, at Meow Wolf’s Convergence Station art installation. The lens flares were a total happy accident.
  3. The Panorama Gondola disappearing into a quickly descending blizzard near the top of Mammoth, taken while we were getting off of the mountain as quickly as we could. It doesn’t look like it, but this is actually a color photograph.
  4. Our then-four-month-old daughter hanging out with her grandparents in our backyard. This was the day she held a flower for the first time.
  5. Someone taking a photo from inside of Shanghai’s Museum of Art Pudong. I wonder if I’m in his photo too.
  6. Our half border collie / half golden retriever, Tux, in a Santa hat for a Christmas shoot. I think my wife actually took this one, but she insisted that I include it in the show.
  7. My then-girlfriend now-wife shooting a video project when we were in university. This was in Penn’s Singh Center for Nanotechnology building.
  8. A worker hanging a chandelier in Shanghai’s 1933 Laoyangfang complex. This place used to be a municipal slaughterhouse but now contains creative spaces.
  9. The Los Angeles skyline, as seen from the Stough Canyon trail above Burbank. The tiny dot in the center of the frame is actually a plane on landing approach to LAX.
  10. My friend Alex stopping to take in the waves as a storm was approaching the eastern coast of Taiwan.
  11. Looking past the Roy O. Disney building towards the Team Disney headquarters building on Disney’s Burbank studio lot.
  12. A past SIGGRAPH party somewhere in the fashion district in downtown Los Angeles.

Finally, here are a few snapshots of what the show looked like towards the end of the show’s opening. The opening had a great turnout; thanks to everyone who came by!

Justin's awesome logo for the show.

Crowds dying down towards the end of the show's opening.

The gallery hallway looking in the other direction.

My pieces framed and on the wall.


Footnotes

1 At time of posting, this post’s HDR mode makes use of browser HDR video support to display HDR pictures as single-frame HDR videos, since no browser has HDR image support enabled by default yet. The following devices/browsers are known to support HDR videos by default:

  • Safari on iOS 14 or newer, running on the iPhone 12 generation or newer, and on iPhone 11 Pro.
  • Safari on iPadOS 14 or newer, running on the 12.9 inch iPad Pros with M1 or M2 chip, and on all iPad Pros with M4 chip or newer.
  • Safari or Chrome 87 or newer on macOS Big Sur or newer, running on the 2021 14 and 16 inch MacBook Pros or newer, or any Mac using a Pro Display XDR or other compatible HDR monitor.
  • Chrome 87 or newer, or Edge 87 or newer, on Windows 10 or newer, running on any PC with a compatible DisplayHDR-1000 or higher display (examples: any monitor on this list). You may also need to adjust HDR settings in Windows.
  • Chrome 87 or newer on Android 14 or newer, running on devices with an Android Ultra HDR compatible display (examples: Google Pixel 8 generation or newer, Samsung Galaxy S21 generation or newer, OnePlus 12 or newer, and various others).

On Apple devices without HDR-capable displays, iOS and macOS’s EDR system may still allow HDR imagery to look correct under specific circumstances.

New Unified Site Design

Over the past month or so, I’ve undertaken another overhaul of my blog and website, this time to address a bunch of niggling things that have annoyed me for a long time. In terms of pure technical change, this round’s changes are not as extensive as the ones I had to make to implement a responsive layout a few years ago. Most of this round was polishing and tweaking and refining things, but enough things were touched that in the aggregate this set of changes represents the largest number of visual updates to the site in a long time. Broadly things still look similar to before, but everything is a little bit tighter and more coherent and the details are sweated over just a little bit more. The biggest change this round of updates brings is that the blog and portfolio halves of my site now have a completely unified design, and both halves are now stitched together into one cohesive site instead of feeling and working like two separate sites. So, in the grand tradition of writing about making one’s website on one’s own website, here’s an overview of what’s changed and how I approached building the new unified design.

One unusual quirk of my site is that the portfolio half of the site and the blog half of the site run on completely different tech stacks. Both halves of the site are fundamentally based on static site generators, but pretty much everything under the hood is different, down to the very servers they are hosted on. The blog is built using Jekyll and served from Github Pages, fronted using Cloudflare. The portfolio, meanwhile, is built using a custom minimal static site generator called OmjiiCMS. When I say minimal, I really do mean minimal- OmjiiCMS is essentially just a fancy script that takes in hand-written HTML files containing the raw content of each page and simply glues on the sitewide header, footer, and nav menu. Calling it a CMS is a misnomer because it really doesn’t do any content management at all- the name is a holdover from back when my personal site and blog both ran on a custom PHP-based content management and publishing system that I wrote in high school. I eventually moved my blog to Wordpress briefly, which I found far too complicated for what I needed, and then landed on Blogger for a few years, and then in 2013 I moved to Ghost for approximately one week because Ghost had good Markdown support before I realized that if I wanted to write Markdown files, I should just use Jekyll. The blog has been powered by Jekyll ever since. As a bonus, moving to a static site generator made everything both way faster and way easier. Meanwhile, the portfolio part of the site has always been a completely custom thing because the portfolio site has a lot of specific custom layouts and I always found that building those layouts by hand was easier and simpler than trying to hammer some pre-existing framework into the shape I wanted. Over time I stripped away more and more of the underlying CMS until I realized I didn’t need one at all, at which point I gutted the entire CMS and made the portfolio site just a bunch of hand-written HTML files with a simple script to apply the site’s theming to every page before uploading to my web server. This dual-stack setup has stuck for a long time now because at least for me it allows me to run a blog and personal website with a minimal amount of fuss; the goal is to spend far more time actually writing posts than mucking around with the site’s underlying tech stack.

However, one unfortunate net result of these two different evolutionary paths is that while I have always aimed to make the blog and portfolio sites look similar, they’ve always looked kind of different from each other, sometimes in strange ways. The blog and portfolio have always had different header bars and navigation menus, even if the overall style of the header was similar. Both parts of the site always used the same typefaces, but in different places for different things, with completely inconsistent letter spacing, sizing, line heights, and more. Captions have always worked differently between the two parts of the site as well. Even the responsive layout system worked differently between the blog and portfolio, with layout changes happening at different window widths and different margins and paddings taking effect at the same window widths between the two. These differences have always bothered me, and about a month ago they finally bothered me enough for me to do something about it and finally undertake the effort of visually aligning and unifying both sites, down to the smallest details. Before breaking things down, here’s some before and afters:

Figure 1: Main site home page, before (left) and after (right) applying the new unified theme. For a full screen comparison, click here.

Figure 2: Blog front page, before (left) and after (right) applying the new unified theme. For a full screen comparison, click here.

The process I took to unify the designs for the two halves was to start from near scratch on new CSS files and rebuild the original look of both halves as closely as possible, while resolving differences one by one. The end result is that the blog didn’t just wholesale take on the details of the portfolio, or vice versa- instead, wherever differences arose, I thought about what I wanted the design to accomplish and decided on what to do from there. All of this was pretty easy to do because despite running on different tech stacks, both parts of the site were built using as much adherence to semantic HTML as possible, with all styling provided by two CSS files; one for each half. To me, a single CSS file containing all styling separate from the HTML is the obvious way to build web stuff and is how I learned to do CSS over a decade ago from the CSS Zen Garden, but apparently a bunch of popular alternative methods exist today such as Tailwind, which directly embeds CSS snippets in the HTML markup. I don’t know a whole lot about what the cool web kids do today, but Tailwind seems completely insane to me; if I had built my site with CSS snippets scattered throughout the HTML markup, then this unifying project would have taken absolute ages to complete instead of just a few hours spread over a weekend or two. Instead, this project was easy to do because all I had to do was make new CSS files for both parts of the site and I barely had to touch the HTML at all, aside from an extra wrapper div or two.

The general philosophy of this site’s design has always been to put content first and keep things information dense, all with a modern look and presentation. The last big revision of the site added responsive design as a major element and also pared back some unneeded flourishes with the goal of keeping the site lightweight. For the new unified design, I wanted to keep all of the above and also lean more into a lightweight site and improve general readability and clarity, all while keeping the site true to its preexisting design.

Here’s the list of what went into the new unified design:

  • Previously the blog’s body text was fairly dense and had very little spacing between lines, while the portfolio’s body text was slightly too large and too spaced out. The unified design now defines a single body text style with a font size somewhere in between what the two halves previously had, and with line spacing that grants the text a bit more room to breathe visually for improved readability while still maintaining relatively high density.
  • Page titles, section headings, and so on now use the same font size, color, letter spacing, margins, etc. between both halves.
  • I experimented with some different typefaces, but in the end I still like what I had before, which is Proxima Nova for easy-to-read body text and Futura for titles, section headings, etc; previously how these two typefaces were applied was inconsistent though, and the new unified design makes all of this uniform.
  • Code and monospaced text is now typeset in Berkeley Mono by US Graphics Company.
  • Image caption styles are now the same across the entire site and now do a neat trick where if they fit on a single line, they are center aligned, but as soon as the caption spills over onto more than one line, the caption becomes left aligned. While the image caption system does use some simple Javascript to set up, the line count dependent alignment trick is pure CSS. Here is a comparison:
Figure 3: Image caption, before (left) and after (right) applying the new unified theme. Before, captions were always center aligned, whereas now, captions are center aligned if they fit on one line but automatically become left aligned if they spill onto more than one line. For a full screen comparison, click here.

  • The blog now uses red as its accent color, to match the portfolio site. The old blue accent color was a holdover from when the blog’s theme was first derived from what is now a super old variant of Ghost’s Casper theme.
  • Links now are underlined on hover for better visibility.
  • Both sites now share an identical header and navigation bar. Previously the portfolio and blog had different wordmarks and had different navigation items; they now share the same “Code & Visuals” wordmark and same navigation items.
  • As part of unifying the top navigation bars, the blog’s Atom feed is no longer part of the top navigation but instead is linked to from the blog archive and is in the site’s new footer.
  • The site now has a footer, which I found useful for delineating the end of pages. The new footer has a minimal amount of information in it: just copyright notice, a link to the site’s colophon, and the Atom feed. The footer always stays at the bottom of the page, unless the page is smaller than the current browser window size, in which case the footer floats at the bottom of the browser window, and the neat thing is that this is implemented entirely using CSS with no Javascript.
  • Responsive layouts now kick in at the same window widths for both parts of the site, and the margins and text size changes applied in responsive layouts are also the same between both halves. As a result, the two halves now look identical at every responsive layout width, on every device.
  • All analytics and tracking code has been completely removed from both halves of the site.
  • The “About” section of the site has been reorganized with several informational slash pages. Navigation between the various subpages of the About section is integrated into the page headings.
  • The “Projects” section of the site used to be one giant list of projects; this list is now reorganized into subpages for easier navigation, and navigation between them is likewise integrated into the Projects section’s page headings.
  • Footnotes and full screen image comparison pages now include backlinks to where they were referenced in the main body text.
  • Long posts with multiple subsections now include a table of contents at the beginning.
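As promised above, here’s a minimal sketch of one pure-CSS way to get the line-count-dependent caption alignment. The selectors and the inner wrapper element here are illustrative rather than my exact markup; treat this as a sketch of the technique, not a copy of my stylesheet:

```css
/* The caption container centers its inline content. */
figure .caption {
    text-align: center;
}

/* The caption text lives in an inline-block that shrink-wraps to its
   content: a single short line is centered by the parent, but once the
   text is long enough to wrap, the block expands to the full available
   width and its lines are left aligned inside it. */
figure .caption .caption-text {
    display: inline-block;
    text-align: left;
}
```

The footer behavior described above is also nothing exotic; it’s the standard flexbox sticky-footer pattern. A minimal sketch (again with illustrative element names, not my exact markup) looks something like this:

```css
/* Make the page a flex column that is at least as tall as the viewport;
   main and footer are assumed to be direct children of body. */
body {
    margin: 0;
    min-height: 100vh;
    display: flex;
    flex-direction: column;
}

/* The main content area absorbs any leftover vertical space, which pushes
   the footer to the bottom of the window on short pages; on pages taller
   than the window, the footer simply follows the content as usual. */
main {
    flex: 1 0 auto;
}

footer {
    flex-shrink: 0;
}
```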

Two big influences on how I’ve approached building and designing my site over the past few years have been Tom MacWright’s site and Craig Mod’s site. From Tom MacWright’s site, I love the ultra-brutalist and super lightweight design, and I also like his site navigation, choice of sections, and slash pages. From Craig Mod’s site, I really admire the typography and how the site presents his various extensive writings with excellent readability and beautiful layouts. My site doesn’t really resemble either of those sites visually (and I wouldn’t want it to; I like my own thing!), but I drew a lot from both of them in how I thought about my overall approach to design. In addition to the two sites mentioned above, I regularly draw inspiration from a whole bunch of other sites and collections of online work; I keep an ongoing list on my about page if you’re curious.

Here’s a brief overview of how the portfolio half of the site has changed over the years. The earliest 2011 version was just a temporary site I threw together while I was applying to the Pixar Undergraduate Program internship (and it worked!); in some ways I kind of miss the ultra-brutalist, utilitarian design of this version. I actually still keep that old version around for fun. The 2013 version was the first version of the overall design that continues to this day, but it was really heavy-handed, with both a header and a footer that hovered in place when scrolling. The 2014 version consolidated the header and footer into a single header that still hovered in place but shrunk down when scrolling. The 2017 version added dual-column layouts to the home page and project pages, and the 2018 version cleaned up a bunch of details. The 2021 version was a complete rebuild that introduced responsive design, and the 2022 version was a minor iteration that added things like an image carousel to the home page. The latest version rounds out the evolutionary history up to now:

Figure 4: Evolution of the portfolio half of the site from 2011 to today.

Meanwhile, the blog has actually seen less change overall. Unfortunately I don’t have any screenshots or a working version of the codebase for the pre-2011 version of the blog anymore, but by the 2011 version the blog was on Blogger with a custom theme that I spent forever fighting Blogger’s theming system to implement; that custom theme is actually the origin of my site’s entire look. The 2013 version was a wholesale port to Jekyll, and as part of the port I built a new Jekyll theme that carried over much of the previous design. The 2014 version of the blog added an archive page and an Atom feed, and then the blog more or less stayed untouched until the 2021 version’s responsive design overhaul. This latest version is the largest overhaul the blog has seen in a very long time:

Figure 5: Evolution of the blog half of the site from 2011 to today.

I’m pretty happy with how the new unified design turned out; both halves of the site now feel like one integrated, cohesive whole, and the fact that the two halves run on different tech stacks on different webservers is no longer obvious to visitors and readers. I named the new unified site theme Einheitsgrafik, which translates roughly to “uniform graphic” or “standard graphic”; I think the name is fitting. With this iteration, there are no longer any major things that annoy me every time I visit the site to double check something; hopefully that means the site is also a better experience for visitors and readers now. I think this particular iteration of the site is going to last a very long time!

Moana 2

## Table of Contents

This fall marked the release of Moana 2, Walt Disney Animation’s 63rd animated feature and the 10th feature film rendered entirely using Disney’s Hyperion Renderer. Moana 2 brings us back to the beautiful world of Moana, but this time on a larger adventure with a larger canoe, a crew to join our heroine, bigger songs, and greater stakes. The first Moana was, at the time of its release, one of the most beautiful animated films ever made, and Moana 2 lives up to that visual legacy with frames that match or often surpass what we did in the original movie. I got to join Moana 2 about two years ago, and this film proved to be an incredibly interesting project!

While we’ve now used Disney’s Hyperion Renderer to make several sequels to previous Disney Animation films, Moana 2 is the first time we’ve used Hyperion to make a sequel to a previous film that also used Hyperion. From a technical perspective, the time between the first and second Moana films is filled with almost a decade of continual advancement in our rendering technology and in our wider production pipeline. At the time that we made the first Moana, Hyperion was only a few years old and we spent a lot of time on the first Moana fleshing out various still-underdeveloped features and systems in the renderer. Going into the second Moana, Hyperion is now an extremely mature, extremely feature rich, battle-tested production renderer with which we can make essentially anything we can imagine. Almost every single feature and system in Hyperion today has seen enormous advancement and improvement over what we had on the first Moana; many of these advancements were in fact driven by hard lessons that we learned on the first Moana! Compared with the first Moana, here’s a short, very incomplete laundry list of improvements made over the past decade that we were able to leverage on Moana 2:

  • Moana 2 uses a completely new water rendering system that represents an enormous leap in both render-time efficiency and easier artist workflows compared with what we used on the first Moana; more on this later in this post.
  • After the first Moana, we completely rewrote Hyperion’s previous volume rendering subsystem [Habel 2017] from scratch; the modern system is a state-of-the-art delta-tracking system that required us to make foundational research advancements in order to implement [Kutz et al. 2017, Huang et al. 2021].
  • Our traversal system was completely rewritten to better handle thread scalability and to incorporate a form of rebraiding to efficiently handle gigantic world-spanning geometry; this was inspired directly by problems we had rendering the huge ocean surfaces and huge islands in the first Moana [Burley et al. 2018].
  • On the original Moana, ray self-intersection with things like Maui’s feathers presented a major challenge; Moana 2 is the first film using our latest ray self-intersection prevention system that notably does away with any form of ray bias values.
  • We introduced a limited form of photon mapping on the first Moana that only worked between the sun and water surfaces [Burley et al. 2018]; Moana 2 uses an evolved version of our photon mapper that supports all of our light types and many of our standard lighting features, and even has advanced capabilities like a form of spectral dispersion.
  • We’ve made a number of advancements [Burley et al. 2017, Chiang et al. 2016, Chiang et al. 2019, Zeltner et al. 2022] to various elements of the Disney BSDF shading model.
  • Subsurface scattering on the first Moana was done using normalized diffusion; since then we’ve moved all subsurface scattering to use a state-of-the-art brute force path tracing approach [Chiang et al. 2016].
  • Eyes on the first Moana used our old ad-hoc eye shader; eyes on Moana 2 use our modern physically plausible eye shader that includes state-of-the-art iris caustics calculated using manifold next event estimation [Chiang & Burley 2018].
  • The emissive mesh importance sampling system that we implemented on the first Moana and our overall many-lights sampling system have both seen many efficiency improvements [Li et al. 2024].
  • Hyperion has gained many more powerful features granting artists an enormous degree of artistic control both in the renderer and post-render in compositing [Burley 2019, Burley et al. 2024].
  • Since the first Moana, Hyperion’s subdivision/tessellation system has gained an advanced fractured mesh system that makes many of the huge-scale effects in the first Moana movie much easier for us to create today [Burley & Rodriguez 2022].
  • We’ve introduced path guiding into Hyperion to handle particularly difficult light transport cases [Müller et al. 2017, Müller 2019].
  • The original Moana used our somewhat ad-hoc first-generation denoiser, while Moana 2 uses our best-in-industry, Academy Award winning1 second-generation deep learning denoiser jointly developed by Disney Research Studios, Disney Animation, Pixar, and ILM [Vogels et al. 2018, Dahlberg et al. 2019].
  • Even Hyperion’s internal architecture has changed enormously; Hyperion originally was famous for being a batched wavefront renderer, but this has evolved significantly since then and continues to evolve.

There are many, many more changes to Hyperion than there is room to list here. To give you some sense of how far Hyperion has evolved between Moana and Moana 2: the Hyperion used on Moana was internally versioned as Hyperion 3.x, while the Hyperion used on Moana 2 is internally versioned as Hyperion 16.x, with each version number in between representing major changes. In addition to improvements in Hyperion, our rendering team has also been working for the past few years on a next-generation interactive lighting system that extensively leverages hardware GPU ray tracing; Moana 2 saw the widest deployment yet of this system. I can’t say much more on this topic yet, but we’ve started to publish bits and pieces of work from this project, such as how we’ve created a new realtime ray tracing GPU Ptex implementation [Lee et al. 2025].

Of course, there are also still parts of Hyperion that have more or less remained exactly as they were during the original Moana; these parts of the renderer have stood the test of time and proven to be reliable foundational pieces of the Disney Animation filmmaking process. A great example is the fur/hair shading model [Chiang et al. 2016] that was originally developed for Zootopia and used on human characters for the first time in Moana (2016). Even though our hair simulation continues to advance with every movie [Kaur et al. 2025], the Chiang fur/hair model has turned out to be so reliable that we haven’t really had to change it since, and in fact it has since become a de facto standard across the entire graphics industry!

Outside of the rendering group, literally everything else about our entire studio production pipeline has changed as well; the first Moana was made mostly on proprietary internal data formats, while Moana 2 was made using the latest iteration [Zhuang 2025] of our cutting-edge modern USD pipeline [Miller et al. 2022, Vo et al. 2023, Li et al. 2024]. The modern USD pipeline has granted our pipeline many amazing new capabilities and far more flexibility, to the point where it became possible to move our entire lighting workflow to a new DCC [Endo et al. 2025, Joseph and Butt 2025] for Moana 2 without needing to blow up the entire pipeline. Our next-generation interactive lighting system is similarly made possible by our modern USD pipeline. The modern pipeline has also allowed us to continue to push the scale of our films ever larger, with the ever-growing complexity of our crowds [Ros and Berriz 2025, Ros and Sullivan 2025] being a particular standout.

While I get to work on every one of our feature films and get to do fun and interesting things every time, Moana 2 is the most directly and deeply I’ve worked on one of our films probably since the original Moana. There are two specific projects I worked on for Moana 2 that I am particularly proud of: a completely new water rendering system that is part of Moana 2’s overall new water FX workflow, and the volume rendering work that was done for the storm battle in the movie’s third act.

On the original Moana, we had to develop a lot of custom water simulation and rendering technology because commercial tools at the time couldn’t quite handle what the movie required. On the simulation side, the original Moana required Disney Animation to invent new techniques such as the APIC (affine particle-in-cell) fluid simulation model [Jiang et al. 2015] and the FAB (fluxed animated boundary) method for integrating procedural and simulated water dynamics [Stomakhin and Selle 2017]. Disney Animation’s general philosophy towards R&D is that we will develop and invent new methods when needed, but will then aim to publish our work with the goal of allowing anything useful we invent to find its way into the wider graphics field and industry; a great outcome is when our publications are adopted by the commercial tools and packages that we build on top of. APIC and FAB were both published and have since become a part of the stock toolset in Houdini, which in turn allowed us to build more on top of Houdini’s built-in SOPs for Moana 2’s water FX workflow.

On the rendering side, the main challenge on the original Moana for rendering water was the need to combine water surfaces from many different sources (procedural, manually animated, and simulated) into a single seamless surface that could be rendered with proper refraction, internal volumetric effects, caustics, and so on. Our solution to combine different water surfaces on the original Moana was to convert all input water elements into signed distance fields, composite all of the signed distance fields together into a single world-spanning levelset, and then mesh that levelset into a triangle mesh for ray intersection [Palmer et al. 2017]. While this process produced great visual results, running this entire world-spanning levelset compositing and meshing operation at renderer startup for each frame proved to be completely untenable due to how slow it made interaction for artists, so an extensive system for pre-caching ocean surfaces overnight to disk had to be built out. All in all, the water rendering and caching system on the first Moana required a dedicated team of over half a dozen developers and TDs to maintain, and setting up the levelset compositing system correctly proved to be challenging for artists.
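As an aside for readers unfamiliar with levelset compositing: the simplest way to combine signed distance fields into a union is just to take, at every point, the minimum distance over all of the inputs. The production system described in [Palmer et al. 2017] involves considerably more than this, so treat the following purely as an illustration of the core operation:

$$\phi_{\text{composite}}(\mathbf{x}) = \min_i \phi_i(\mathbf{x}),$$

where each $\phi_i$ is the signed distance field of one input water element (negative inside the water, positive outside) and the zero isosurface of $\phi_{\text{composite}}$ is the combined water surface that then gets meshed for ray intersection.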

For Moana 2, we decided to revisit water rendering with the goal of coming up with something much easier for artists to use, much faster to render, and much easier to maintain by a smaller group of engineers and TDs. Over the course of about half a year, we completely replaced the old levelset compositing and meshing system with a new ray-intersection-time CSG system. Our new system requires almost zero work for artists to set up, requires zero preprocessing time before renderer startup and zero on-disk caching, renders with negligible impact on ray tracing speed, and required zero dedicated TDs and only part of my time as an engineer to support once primary development was completed. In addition to all of the above, the new system also allows for both better looking and more memory efficient water because the level of detail that water meshes have to exist at is no longer constrained by the resolution of a world-size meshed levelset, even with an adaptive levelset meshing. I think this was a great example where by revisiting a world that we already knew how to make, we were given an opportunity to reevaluate what we learned on Moana in order to come up with something better by every metric for Moana 2.

We knew that returning to the world of Moana was likely going to require a heavy lift from a volume rendering perspective. With a mind towards this, we worked closely with Disney Research|Studios in Zürich to implement next-generation volume path guiding techniques in Hyperion [Reichardt et al. 2025], which wound up not seeing wide deployment this time but nonetheless proved to be a fun and interesting project from which we learned a lot. We also realized that the third act’s storm battle was going to be incredibly challenging from both an FX and rendering perspective; creating the storm battle required FX to invent whole new techniques [Rice 2025]! My last few months on Moana 2 were spent helping get the storm battle sequences finished; one extremely unusual thing we wound up doing was providing custom builds of Hyperion with specific optimizations tailored to the specific requirements of the storm sequence, sometimes going as far as to provide specific builds and settings tailored on a per-shot basis. Normally this is something any production rendering team tries to avoid if possible, but one of the benefits of having our own in-house team and our own in-house renderer is that we are still able to do this when the need arises. From a personal perspective, being able to point at specific shots and say “I wrote code for that specific thing” is pretty neat!

From both a story and a technical perspective, Moana 2 is everything we loved from Moana brought back, plus a lot of fun, big, bold new stuff [Bhatawadekar et al. 2025]. Making Moana 2 both gave us new challenges to solve and allowed us to revisit and come up with better solutions to old challenges from Moana. I’m incredibly proud of the work that my teammates and I were able to do on Moana 2; I’m sure we’ll have a lot more to share at SIGGRAPH 2025, and in the meantime I strongly encourage you to see Moana 2 on the biggest screen you can find!

To give you a taste of how beautiful this film looks, here are some frames from Moana 2, taken from the Blu-ray and 100% created using Disney’s Hyperion Renderer by our amazing artists. These are presented in no particular order:

Here is the credits frame for the Hyperion team, along with several of the other teams that we work closely with to make rendering happen at Disney Animation. Specifically, the Lighting & Materials team develops our render translation pipeline and much of the artist-facing user interfaces in our lighting and shading tools, and the Interactive Visualization team is our sibling team that develops our in-house realtime rasterizer viewports.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

References

Sucheta Bhatawadekar, Behzad Mansoori-Dara, Adolph Lusinsky, and Rob Dressel. 2025. The Cinematography of Songs in Disney’s “Moana 2”. In ACM SIGGRAPH 2025 Talks. Article 64.

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2017. Recent Advancements in Disney’s Hyperion Renderer. In ACM SIGGRAPH 2017 Course Notes: Path Tracing in Production Part 1. 26-34.

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Hank Driskill, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2018. The Design and Evolution of Disney’s Hyperion Renderer. ACM Transactions on Graphics 37, 3 (Jul. 2018), Article 33.

Brent Burley. 2019. On Histogram-Preserving Blending for Randomized Texture Tiling. Journal of Computer Graphics Techniques 8, 4 (Nov. 2019), 31-53.

Brent Burley and Francisco Rodriguez. 2022. Fracture-Aware Tessellation of Subdivision Surfaces. In ACM SIGGRAPH 2022 Talks. Article 10.

Brent Burley, Brian Green, and Daniel Teece. 2024. Dynamic Screen Space Textures for Coherent Stylization. In ACM SIGGRAPH 2024 Talks. Article 50.

Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum (Proc. of Eurographics) 35, 2 (May 2016), 275-283.

Matt Jen-Yuan Chiang, Peter Kutz, and Brent Burley. 2016. Practical and Controllable Subsurface Scattering for Production Path Tracing. In ACM SIGGRAPH 2016 Talks. Article 49.

Matt Jen-Yuan Chiang and Brent Burley. 2018. Plausible Iris Caustics and Limbal Arc Rendering. In ACM SIGGRAPH 2018 Talks. Article 15.

Matt Jen-Yuan Chiang, Yining Karl Li, and Brent Burley. 2019. Taming the Shadow Terminator. In ACM SIGGRAPH 2019 Talks. Article 71.

Henrik Dahlberg, David Adler, and Jeremy Newlin. 2019. Machine-Learning Denoising in Feature Film Production. In ACM SIGGRAPH 2019 Talks. Article 21.

Colvin Kenji Endo, Norman Moses Joseph, Alex Nijmeh, and Todd Scopio. 2025. Prototype to Production: Building a Lighting Workflow in Houdini for Animation. In Proc. of Digital Production Symposium (DigiPro 2025). 3:1-3:5.

Ralf Habel. 2017. Volume Rendering in Hyperion. In ACM SIGGRAPH 2017 Course Notes: Production Volume Rendering. 91-96.

Wei-Feng Wayne Huang, Peter Kutz, Yining Karl Li, and Matt Jen-Yuan Chiang. 2021. Unbiased Emission and Scattering Importance Sampling for Heterogeneous Volumes. In ACM SIGGRAPH 2021 Talks. Article 3.

Chenfafu Jiang, Craig Schroeder, Andrew Selle, Joseph Teran, and Alexey Stomakhin. 2015. The Affine Particle-in-Cell Method. ACM Transactions on Graphics (Proc. of SIGGRAPH) 34, 4 (Aug. 2015), Article 51.

Norman Moses Joseph and Rehan Butt. 2025. The Design Opportunities of Moving to Houdini for Lighting with the world of Animation. In ACM SIGGRAPH 2025 Talks. Article 52.

Avneet Kaur, Hubert Leo, Nachiket Pujari, and Bret Bays. 2025. Choreography of Hair and Cloth in Disney’s “Moana 2”. In ACM SIGGRAPH 2025 Talks. Article 13.

Peter Kutz, Ralf Habel, Yining Karl Li, and Jan Novák. 2017. Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes. ACM Transactions on Graphics (Proc. of SIGGRAPH) 36, 4 (Aug. 2017), Article 111.

Mark S. Lee, Nathan Zeichner, and Yining Karl Li. 2025. A Texture Streaming Pipeline for Real-Time GPU Ray Tracing. In ACM SIGGRAPH 2025 Talks. Article 12.

Harmony M. Li, George Rieckenberg, Neelima Karanam, Emily Vo, and Kelsey Hurley. 2024. Optimizing Assets for Authoring and Consumption in USD. In ACM SIGGRAPH 2024 Talks. Article 30.

Yining Karl Li, Charlotte Zhu, Gregory Nichols, Peter Kutz, Wei-Feng Wayne Huang, David Adler, Brent Burley, and Daniel Teece. 2024. Cache Points for Production-Scale Occlusion-Aware Many-Lights Sampling and Volumetric Scattering. In Proc. of Digital Production Symposium (DigiPro 2024). 6:1-6:19.

Tad Miller, Harmony M. Li, Neelima Karanam, Nadim Sinno, and Todd Scopio. 2022. Making Encanto with USD: Rebuilding a Production Pipeline Working from Home. In ACM SIGGRAPH 2022 Talks. Article 12.

Thomas Müller. 2019. Practical Path Guiding in Production. In ACM SIGGRAPH 2019 Course Notes: Path Guiding in Production. 37-50.

Thomas Müller, Markus Gross, and Jan Novák. 2017. Practical Path Guiding for Efficient Light-Transport Simulation. Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering) 36, 4 (Jun. 2017), 91-100.

Sean Palmer, Jonathan Garcia, Sara Drakeley, Patrick Kelly, and Ralf Habel. 2017. The Ocean and Water Pipeline of Disney’s Moana. In ACM SIGGRAPH 2017 Talks. Article 29.

Lea Reichardt, Brian Green, Yining Karl Li, and Marco Manzi. 2025. Path Guiding Surfaces and Volumes in Disney’s Hyperion Renderer - A Case Study. In ACM SIGGRAPH 2025 Course Notes: Path Guiding in Production and Recent Advancements. 30-66.

Jacob Rice. 2025. Steerable Perlin Noise. In ACM SIGGRAPH 2025 Talks. Article 1.

Alberto J Luceño Ros and Cecilia Berriz. 2025. Creating the Mudskipper Pile in Disney’s “Moana 2”: A Slippery Problem Space. In ACM SIGGRAPH 2025 Talks. Article 63.

Alberto J Luceño Ros and Jeff Sullivan. 2025. The Art of Crowds Animation. In ACM SIGGRAPH 2025 Talks. Article 62.

Alexey Stomakhin and Andy Selle. 2017. Fluxed Animated Boundary Method. ACM Transactions on Graphics (Proc. of SIGGRAPH) 36, 4 (Aug. 2017), Article 68.

Emily Vo, George Rieckenberg, and Ernest Petti. 2023. Honing USD: Lessons Learned and Workflow Enhancements at Walt Disney Animation Studios. In ACM SIGGRAPH 2023 Talks. Article 13.

Thijs Vogels, Fabrice Rousselle, Brian McWilliams, Gerhard Röthlin, Alex Harvill, David Adler, Mark Meyer, and Jan Novák. 2018. Denoising with Kernel Prediction and Asymmetric Loss Functions. ACM Transactions on Graphics (Proc. of SIGGRAPH) 37, 4 (Aug. 2018), Article 124.

Tizian Zeltner, Brent Burley, and Matt Jen-Yuan Chiang. 2022. Practical Multiple-Scattering Sheen Using Linearly Transformed Cosines. In ACM SIGGRAPH 2022 Talks. Article 7.

Rikki Zhuang. 2025. Transitioning to an Explicit Dependency USD Asset Resolver. In ACM SIGGRAPH 2025 Course Notes: USD in Production. 73-106.


Footnotes

1 Our deep learning denoiser technology is one of the 2025 Academy of Motion Picture Arts and Sciences Scientific and Engineering Award winners.