Aventador Renders Revisited

A long time ago, I made some posts that featured a cool Lamborghini Aventador model. Recently, I revisited that model and made some new renders using the current version of Takua, mostly just for fun. To me, one of the most important parts of writing a renderer has always been being able to actually use the renderer to make fun images. The last time I rendered this model was something like four years ago, and back then Takua was still in a very basic state; the renders in those old posts don’t even have any shading beyond 50% grey lambertian surfaces! The renders in this post utilize a lot of advanced features that I’ve added since then, such as a proper complex layered Bsdf and texturing system, advanced bidirectional light transport techniques, huge speed improvements to ray traversal, advanced motion blur and generalized time capabilities, and more. I’m way behind in writing up many of these features and capabilities, but in the meantime, I thought I’d post some for-fun rendering projects I’ve done with Takua.

All of the renders in this post are directly from Takua, with a basic white balance and conversion from HDR EXR to LDR PNG being the only post-processing steps. Each image took about half a day to render (except for the wireframe render, which was much faster) on a 12-core workstation at 2560x1440 resolution.

Figure 1: An orange-red Lamborghini Aventador, rendered in Takua a0.7 using VCM.

Shading the Aventador model was a fun, interesting exercise. I went for an orange-red paint scheme since, well, Lamborghinis are supposed to look outrageous and orange-red is a fairly exotic paint scheme (I suppose I could have picked green or yellow or something instead, but I like orange-red). I ended up making a triple-lobe shader with a metallic base, a dielectric lobe, and a clear-coat lobe on top of that. The base lobe uses a GGX microfacet metallic Brdf. Takua’s shading system implements a proper metallic Fresnel model for conductors, where the Fresnel model includes both an Nd component representing the refractive index and a k component representing the extinction coefficient, which describes how an electromagnetic wave is attenuated as it propagates through the material. For conductors, the final Fresnel index of refraction for each wavelength of light is defined by a complex combination of Nd and k. For the base metallic lobe, most of the color wound up coming from the k component. The dielectric lobe is meant to simulate paint on top of a car’s metal body; the dielectric lobe is where most of the orange-red color comes from. The dielectric lobe is again a GGX microfacet Brdf, but with a dielectric Fresnel model, which has a much simpler index of refraction calculation than the metallic Fresnel model does. I should note that Takua’s current standard material implementation actually only supports a single primary specular lobe and an additional single clear-coat lobe, so for shaders authored with both a metallic and dielectric component, Takua takes a blend weight between the two components and for each shading evaluation stochastically selects between the two lobes according to the blend weight. The clear-coat layer on top has just a slight amount of extinction to provide a bit more of the final orange look, but is otherwise mostly clear.
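To make the Nd and k discussion a bit more concrete, below is a minimal sketch of the commonly used approximate unpolarized Fresnel equations for conductors, evaluated per color channel; this is just the textbook approximation for illustration, not necessarily Takua’s exact implementation.

#include <glm/glm.hpp>

// Approximate unpolarized Fresnel reflectance for a conductor, evaluated per
// color channel. eta holds the per-channel refractive index (the Nd part) and
// k holds the per-channel extinction coefficient; cosTheta is the cosine of
// the angle between the incident direction and the surface normal.
glm::vec3 fresnelConductor(float cosTheta, const glm::vec3& eta, const glm::vec3& k) {
    const float c = glm::clamp(cosTheta, 0.0f, 1.0f);
    const float c2 = c * c;
    const glm::vec3 n2k2 = eta * eta + k * k;
    const glm::vec3 twoEtaC = 2.0f * eta * c;
    // Reflectance for the perpendicular and parallel polarizations.
    const glm::vec3 rPerp = (n2k2 - twoEtaC + glm::vec3(c2)) / (n2k2 + twoEtaC + glm::vec3(c2));
    const glm::vec3 rParl = (n2k2 * c2 - twoEtaC + glm::vec3(1.0f)) / (n2k2 * c2 + twoEtaC + glm::vec3(1.0f));
    // Unpolarized reflectance is the average of the two polarizations.
    return 0.5f * (rPerp + rParl);
}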

All of the window glass in the render is tinted slightly dark through extinction instead of through a fixed refraction color. Using proper extinction to tint glass is more realistic than using a fixed refraction color. Similarly, the red and yellow glass used in the headlights and taillights is colored through extinction. The brake disks use an extremely high resolution bump map to get the brushed metal look. The branding and markings on the tire walls are done through a combination of bump mapping and adjusting the roughness of the microfacet Brdf; the tire treads are made using a high resolution normal map. There’s no displacement mapping at all, although in retrospect the tire treads probably should be displacement mapped if I want to put the camera closer to them. Also, I actually didn’t really shade the interior of the car much, since I knew I was going for exterior shots only.
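Tinting through extinction means the tint depends on how far light actually travels inside the glass; the following is a minimal Beer-Lambert sketch of the idea, with purely illustrative absorption values rather than anything from Takua’s actual shaders.

#include <glm/glm.hpp>

// Beer-Lambert attenuation: the transmittance through a medium falls off
// exponentially with the distance traveled inside it, so thick regions of a
// glass object come out darker than thin regions. sigmaA is the absorption
// coefficient per unit distance, per color channel.
glm::vec3 beerLambertTransmittance(const glm::vec3& sigmaA, float distanceInside) {
    return glm::exp(-sigmaA * distanceInside);
}

// Example usage with a made-up, slightly dark neutral tint: multiply the path
// throughput by the transmittance for the distance the ray spent inside the
// glass before exiting.
// throughput *= beerLambertTransmittance(glm::vec3(0.2f), segmentLength);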

Eventually I’ll get around to implementing a proper car paint Bsdf in Takua, but until then, the approach I took here seems to hold up reasonably well as long as the camera doesn’t get super close up to the car.

I lit the scene using two lights: an HDR skydome from HDRI-Skies, and a single long, thin rectangular area light above the car. The skydome provides the overall soft-ish lighting that illuminates the entire scene, and the rectangular area light provides the long, interesting highlights on the car body that help with bringing out the car’s shape.

For all of the renders in this post, I used my VCM integrator, since the scene contains a lot of subtle caustics and since the inside of the car is lit entirely through glass. I also wound up modifying my adaptive sampler; it’s still the same adaptive sampler that I’ve had for a few years now, but with an important extension. Instead of simply reducing the total number of paths per iteration as areas reach convergence, the adaptive sampler now keeps the number of paths the same and instead reallocates paths from completed pixels to high-variance pixels. The end result is that the adaptive sampler is now much more effective at eliminating fireflies and targeting caustics and other noisy areas. In the above render, some pixels wound up with as few as 512 samples, while a few particularly difficult pixels finished with as many as 20000 samples. Here is the adaptive sampling heatmap for Figure 1 above; brighter areas indicate more samples. Note how the adaptive sampler found a number of areas that we’d expect to be challenging, such as the interior through the car’s glass windows, and parts of the body with specular inter-reflections.

Figure 2: Adaptive sampling heatmap for Figure 1. Brighter areas indicate more samples.
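A minimal sketch of the reallocation idea, just for illustration (not Takua’s actual implementation): zero out the budget for converged pixels and distribute the fixed per-iteration path budget across the remaining pixels in proportion to their variance estimates.

#include <cstddef>
#include <vector>

// Distribute a fixed per-iteration path budget across the image: pixels that
// have already converged get zero additional samples, and the remaining
// pixels split the budget in proportion to their variance estimates.
std::vector<int> allocateSamples(const std::vector<float>& variance,
                                 const std::vector<bool>& converged,
                                 const int totalBudget) {
    double varianceSum = 0.0;
    for (std::size_t i = 0; i < variance.size(); i++) {
        if (!converged[i]) {
            varianceSum += variance[i];
        }
    }
    std::vector<int> samples(variance.size(), 0);
    if (varianceSum <= 0.0) {
        return samples;  // everything has converged
    }
    for (std::size_t i = 0; i < variance.size(); i++) {
        if (!converged[i]) {
            samples[i] = static_cast<int>(totalBudget * (variance[i] / varianceSum));
        }
    }
    return samples;
}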

I recently implemented support for arbitrary camera shutter curves, so I thought doing a motion blurred render would be fun. After all, Lamborghinis are supposed to go fast! I animated the Lamborghini driving forward in Maya; the animation was very basic, with the main body just translating forward and the wheels both translating and rotating. Of course Takua has proper rotational motion blur. The motion blur here is effectively multi-segment motion blur; generating multi-segment motion blur from an animated sequence in Takua is very easy due to how Takua handles and understands time. I actually think that Takua’s concept of time is one of the most unique things in Takua; it’s very different from how every other renderer I’ve used and seen handles time. I intend to write more about this later. Instead of an instantaneous shutter, I used a custom cosine-based shutter curve that places many more time samples near the center of the shutter interval than towards the shutter open and close. Using a shutter shape like this wound up being important to getting the right look to the motion blur; even though the car is moving extremely quickly, the overall form of the car is still clearly distinguishable and the front and back of the car appear more motion-blurred than the main body.

Figure 3: Motion blurred render, using multi-segment motion blur with a cosine-based shutter curve.
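As a rough illustration of the kind of shutter curve described above (a sketch of one way to do it, not necessarily how Takua implements shutter curves): treat the curve as a probability density over the normalized shutter interval and sample it by inverting its CDF, which concentrates time samples near the center of the shutter.

#include <cmath>

// Shutter curve w(t) = sin(pi * t) over normalized shutter time t in [0, 1]:
// zero weight at shutter open and close, maximum weight at the center.
// Inverting the normalized CDF u = (1 - cos(pi * t)) / 2 gives
// t = acos(1 - 2u) / pi, so uniform random numbers u map to time samples that
// cluster around t = 0.5.
float sampleCosineShutterTime(float u) {
    const float pi = 3.14159265358979323846f;
    return std::acos(1.0f - 2.0f * u) / pi;
}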

Since Takua has a procedural wireframe texture now, I also did a wireframe render. I mentioned my procedural wireframe texture in a previous post, but I didn’t write about how it actually works. For triangles and quads, the wireframe texture is simply based on the distance from the hitpoint to the nearest edge. If the distance to the nearest edge is smaller than some threshold, draw one color; otherwise, draw some other color. The nearest edge calculation can be done as follows (the variable names should be self-explanatory):

float calculateMinDistance(const Poly& p, const Intersection& hit) const {
    float md = std::numeric_limits<float>::infinity();
    const int verts = p.isQuad() ? 4 : 3;
    for (int i = 0; i < verts; i++) {
        // The current edge runs from cur to next; wrap around for the last edge.
        const glm::vec3& cur = p[i].m_position;
        const glm::vec3& next = p[(i + 1) % verts].m_position;
        // Project the hit point onto the edge's line and measure the
        // perpendicular distance from the hit point to that projection.
        const glm::vec3 d1 = glm::normalize(next - cur);
        const glm::vec3 d2 = hit.m_point - cur;
        const float l = glm::length((cur + d1 * glm::dot(d1, d2)) - hit.m_point);
        // Track the smallest (squared) distance over all edges.
        md = glm::min(md, l * l);
    }
    return md;
}
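For completeness, a usage sketch might look something like the following (everything here other than calculateMinDistance is a hypothetical name). Since the function above returns a squared distance, the comparison is against the squared line width:

// Pick the wire color when the hit point lies within the line width of the
// nearest edge, and the fill color otherwise.
glm::vec3 evaluateWireframe(const Poly& p, const Intersection& hit,
                            const float wireWidth, const glm::vec3& wireColor,
                            const glm::vec3& fillColor) const {
    const float minDistSquared = calculateMinDistance(p, hit);
    return (minDistSquared < wireWidth * wireWidth) ? wireColor : fillColor;
}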

The topology of the meshes is pretty strange, since the car model came as a triangle mesh, which I then subdivided:

Figure 4: Procedural wireframe texture.

The material in the wireframe render only uses the lambertian diffuse lobe in Takua’s standard material; as such, the adaptive sampling heatmap for the wireframe render is interesting to compare to Figure 2. Overall the sample distribution is much more even, and areas where diffuse inter-reflections are present got more samples:

Figure 5: Adaptive sampling heatmap for Figure 4. Brighter areas indicate more samples. Compare with Figure 2.

Takua’s shading model supports layering different materials through parameter blending, similar to how the Disney Brdf (and, at this point, most other shading systems) handles material layering. I wanted to make an even more outrageous looking version of the Aventador than the orange-red version, so I used the procedural wireframe texture as a layer mask to drive parameter blending between a black paint and a metallic gold paint:

Figure 6: An outrageous Aventador paint scheme using a procedural wireframe texture to blend between black and metallic gold car paint.
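A minimal sketch of what masked parameter blending boils down to, with a deliberately tiny hypothetical parameter set (Takua’s standard material obviously has many more parameters): sample the mask at the shading point and lerp each parameter between the two layers before evaluating a single Bsdf with the blended values. For the render above, the mask is just the procedural wireframe texture, so the gold parameters only win along the wireframe lines.

#include <glm/glm.hpp>

// Hypothetical, pared-down parameter set for a car-paint-style material.
struct PaintParams {
    glm::vec3 baseColor;
    float metallic;
    float roughness;
};

// Blend two parameter sets using a mask value in [0, 1]; the blended
// parameters are then fed to a single Bsdf evaluation.
PaintParams blendParams(const PaintParams& a, const PaintParams& b, const float mask) {
    PaintParams blended;
    blended.baseColor = glm::mix(a.baseColor, b.baseColor, mask);
    blended.metallic = glm::mix(a.metallic, b.metallic, mask);
    blended.roughness = glm::mix(a.roughness, b.roughness, mask);
    return blended;
}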

Olaf's Frozen Adventure

After an amazing 2016, Walt Disney Animation Studios is having a bit of a break year this year. Disney Animation doesn’t have a feature film this year; instead, we made a half-hour featurette called Olaf’s Frozen Adventure, which will be released in front of Pixar’s Coco during Thanksgiving. I think this is the first time a Disney Animation short/featurette has accompanied a Pixar film. Olaf’s Frozen Adventure is a fun little holiday story set in the world of Frozen, and I had the privilege of getting to play a small role in making Olaf’s Frozen Adventure! I got an official credit as one of a handful of engineers who did some specific, interesting technology development for Olaf’s Frozen Adventure.

Olaf’s Frozen Adventure is really, really funny; because Olaf is the main character, the entire story takes on much more of a self-aware, at times somewhat absurdist tone. The featurette also has six new songs, which is somehow pretty close to the original film’s count of eight songs, but in a third of the runtime. Olaf’s Frozen Adventure was originally announced as a TV special, but the wider Walt Disney Company was so happy with the result that they decided to give Olaf’s Frozen Adventure a theatrical release instead!

Something I personally find fascinating about Olaf’s Frozen Adventure is comparing it visually with the original Frozen. Olaf’s Frozen Adventure is rendered entirely with Disney’s Hyperion Renderer, compared with Frozen, which was rendered using pre-RIS Renderman. While both films used our Disney BRDF [Burley 2012] and Ptex [Burley and Lacewell 2008], Olaf’s Frozen Adventure benefits from all of the improvements and advancements that have been made during Big Hero 6, Zootopia, and Moana. The original Frozen used dipole subsurface scattering, radiosity caching, and generally had fairly low geometric complexity relative to Hyperion-era films. In comparison, Olaf’s Frozen Adventure uses brute force subsurface scattering, uses path-traced global illumination, uses the full Disney BSDF (which is significantly extended from the Disney BRDF) [Burley 2015], uses our advanced fur/hair shader developed during Zootopia [Chiang et al. 2016], and has much greater geometric complexity. A great example of the greater geometric complexity is the knitted scarf sequence [Staub et al. 2018], where 2D animation was brought into Hyperion as a texture map to drive the colors on a knitted scarf that was modeled and rendered down to the fiber level. Some shots even utilize an extended version of the photon mapped caustics we developed during Moana; the photon mapped caustics system on Moana only supported distant lights as a photon source, but for Olaf’s Frozen Adventure, the photon mapping system was extended to support all of Hyperion’s existing light types as photon sources. These extensions to our photon mapping system were among the things I worked on for Olaf’s Frozen Adventure, and were used for lighting the ice crystal tree that Elsa creates at the end of the film. Even the water in Arendelle Harbor looks way better than in Frozen, since the FX artists were able to make use of the incredible water systems developed for Moana [Palmer et al. 2017]. Many of these advancements are discussed in our SIGGRAPH 2017 Course Notes [Burley et al. 2017].

One of the huge advantages to working on an in-house production rendering team in a vertically integrated studio is being able to collaborate and partner closely with productions on executing long-term technical visions. Because the show’s leadership had confidence in our long-term development efforts, the artists on Olaf’s Frozen Adventure were willing to take on and try out early versions of a number of new Hyperion features that were originally targeted at later shows. Some of these “preview” features wound up making a big difference on Olaf’s Frozen Adventure, and lessons learned on Olaf’s Frozen Adventure were instrumental in making these features much more robust and complete on Ralph Breaks the Internet.

One major feature was brute force path-traced subsurface scattering; Peter Kutz, Matt Chiang, and Brent Burley had originally started development on brute force path-traced subsurface scattering [Chiang 2016] during Moana’s production, as a replacement for Hyperion’s existing normalized diffusion based subsurface scattering [Burley 2015]. This feature wasn’t completed in time for use on Moana (although some initial testing was done using Moana assets), but was far enough along by the time Olaf’s Frozen Adventure was in production that artists started to experiment with it. If I remember correctly, the characters in Olaf’s Frozen Adventure are still using normalized diffusion, but path-traced subsurface wound up finding extensive use in rendering all of the snow in the show, since the additional detail that path-traced subsurface brings out helped highlight the small granular details in the snow. A lot of lessons learned from using path-traced subsurface scattering on the snow were then applied to making path-traced subsurface scattering more robust and easier to use and control. These experiences gave us the confidence to go ahead with full-scale deployment on Ralph Breaks the Internet, which uses path-traced subsurface scattering for everything including characters.

Another major development effort that found experimental use on Olaf’s Frozen Adventure was a set of large overhauls to Hyperion’s ray traversal system. During the production of Moana, we started running into problems with how large instance groups are structured in Hyperion. Moana’s island environments featured vast quantities of instanced vegetation geometry, and because of how the instancing was authored, Hyperion’s old strategy for grouping instances in the top-level BVH wound up producing heavily overlapping BVH leaves, which in extreme cases could severely degrade traversal performance. On Moana, the solution to this problem was to change how instances were authored upstream in the pipeline, but the way that the renderer wanted instances organized was fairly different from how artists and our pipeline like to think about instances, which made authoring more difficult. This problem motivated Peter Kutz and me to develop a new traversal system that would be less sensitive to how instance groups were authored; the system we came up with allows Hyperion to internally break up top-level BVH nodes with large overlapping bounds into smaller, tighter subbounds based on the topology of the lower-level BVHs. It turns out this system is conceptually essentially identical to BVH rebraiding [Benthin et al. 2017], but we developed and deployed this system independently before that paper was published. As part of this effort, we also wound up revisiting Hyperion’s original cone-based packet traversal strategy [Eisenacher et al. 2013] and, motivated by extensive testing and statistical performance analysis, developed a new, simpler, higher performance multithreading strategy for handling Hyperion’s ultra-wide batched ray traversal. Olaf’s Frozen Adventure has a sequence where Olaf and Sven are being pulled down a mountainside through a forest by a burning sled; the enormous scale of the groundplane and large quantities of instanced trees proved to be challenging for Hyperion’s old traversal system. We were able to partner with the artists to deploy a mid-development prototype of our new traversal system on this sequence, and were able to cut traversal times by close to an order of magnitude in some cases. As a result, the artists were able to render this sequence with reasonable render times, and we were able to field-test the new traversal system prior to studio-wide deployment and iron out various kinks that were found along the way.

The last major mid-development Hyperion feature that saw early experimental use on Olaf’s Frozen Adventure was our new, next-generation spectral and decomposition tracking [Kutz et al. 2017] based null-collision volume rendering system, which was written with the intention of eventually completely replacing Hyperion’s existing residual ratio tracking [Novák et al. 2014] based volume rendering system [Fong et al. 2017]. Artists on Olaf’s Frozen Adventure ran into some difficulties with rendering loose, fluffy white snow, where the bright white appearance is the result of high-order scattering requiring large numbers of bounces. We realized that this problem is essentially identical to the problem of rendering white puffy clouds, which also have an appearance dominated by energy from high-order scattering. Since null-collision volume integration is specifically very efficient at handling high-order scattering, we gave the artists an early prototype version of Hyperion’s new volume rendering system to experiment with rendering loose fluffy snow as a volume. The initial results looked great; I’m not sure if this approach wound up being used in the final film, but this experiment gave both us and the artists a lot of confidence in the new volume rendering system and provided valuable feedback.

As usual with Disney Animation projects I get to work on, here are some stills from the film, in no particular order. Even though Olaf’s Frozen Adventure was originally meant for TV, the whole studio still put the same level of effort into it that goes into full theatrical features, and I think it shows!

Here is a credits frame with my name! I wasn’t actually expecting to get a credit on Olaf’s Frozen Adventure, but because I had spent a lot of time supporting the show and working with artists on deploying experimental Hyperion features to solve particularly difficult shots, the show decided to give me a credit! I was very pleasantly surprised by that; my teammate Matt Chiang got a credit as well for similar reasons.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

References

Carsten Benthin, Sven Woop, Ingo Wald, and Attila T. Áfra. 2017. Improved Two-Level BVHs using Partial Re-Braiding. In HPG ‘17 (Proceedings of High Performance Graphics). 7:1-7:8.

Brent Burley. 2012. Physically Based Shading at Disney. In ACM SIGGRAPH 2012 Course Notes: Practical Physically-Based Shading in Film and Game Production.

Brent Burley. 2015. Extending the Disney BRDF to a BSDF with Integrated Subsurface Scattering. In ACM SIGGRAPH 2015 Course Notes: Physically Based Shading in Theory and Practice.

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2017. Recent Advances in Disney’s Hyperion Renderer. In ACM SIGGRAPH 2017 Course Notes: Path Tracing in Production, Part 1.

Brent Burley and Dylan Lacewell. 2008. Ptex: Per-face Texture Mapping for Production Rendering. Computer Graphics Forum. 27, 4 (2008), 1155-1164.

Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum. 35, 2 (2016), 275-283.

Matt Jen-Yuan Chiang, Peter Kutz, and Brent Burley. 2016. Practical and Controllable Subsurface Scattering for Production Path Tracing. In ACM SIGGRAPH 2016 Talks. 49:1-49:2.

Christian Eisenacher, Gregory Nichols, Andrew Selle, and Brent Burley. 2013. Sorted Deferred Shading for Production Path Tracing. Computer Graphics Forum. 32, 4 (2013), 125-132.

Julian Fong, Magnus Wrenninge, Christopher Kulla, and Ralf Habel. 2017. Production Volume Rendering. In ACM SIGGRAPH 2017 Courses.

Peter Kutz, Ralf Habel, Yining Karl Li, and Jan Novák. 2017. Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes. ACM Transactions on Graphics. 36, 4 (2017), 111:1-111:16.

Jan Novák, Andrew Selle, and Wojciech Jarosz. 2014. Residual Ratio Tracking for Estimating Attenuation in Participating Media. ACM Transactions on Graphics. 33, 6 (2014), 179:1-179:11.

Sean Palmer, Jonathan Garcia, Sara Drakeley, Patrick Kelly, and Ralf Habel. 2017. The Ocean and Water Pipeline of Disney’s Moana. In ACM SIGGRAPH 2017 Talks. 29:1-29:2.

Josh Staub, Alessandro Jacomini, and Dan Lund. 2018. The Handiwork Behind “Olaf’s Frozen Adventure”. In ACM SIGGRAPH 2018 Talks. 26:1-26:2.

SIGGRAPH 2017 Course Notes: Recent Advances in Disney's Hyperion Renderer

This year at SIGGRAPH 2017, Luca Fascione and Johannes Hanika from Weta Digital organized a Path Tracing in Production course. The course was split into two halves: a first half about production renderers, and a second half about using production renderers to make movies. Brent Burley presented our recent work on Disney’s Hyperion Renderer as part of the first half of the course. To support Brent’s section of the course, the entire Hyperion team worked together to write course notes describing recent work in Hyperion done for Zootopia, Moana, and upcoming films.

Image from course notes Figure 8: a production frame from Zootopia, rendered using Disney's Hyperion Renderer.

Here is the abstract for the course notes:

Path tracing at Walt Disney Animation Studios began with the Hyperion renderer, first used in production on Big Hero 6. Hyperion is a custom, modern path tracer using a unique architecture designed to efficiently handle complexity, while also providing artistic controllability and efficiency. The concept of physically based shading at Disney Animation predates the Hyperion renderer. Our history with physically based shading significantly influenced the development of Hyperion, and since then, the development of Hyperion has in turn influenced our philosophy towards physically based shading.

The course notes and related materials can be found at:

The course wasn’t recorded due to proprietary content from various studios, but the overall course notes for the entire course cover everything that was presented. The major theme of our part of the course notes (and Brent’s presentation) is replacing multiple scattering approximations with accurate brute-force path-traced solutions. Interestingly, the main motivator for this move is a desire for better, more predictable and intuitive controls for artists, as opposed to simply wanting better visual quality. In the course notes, we specifically discuss fur/hair, path-traced subsurface scattering, and volume rendering.

The Hyperion team also had two other presentations at SIGGRAPH 2017:

SIGGRAPH 2017 Paper: Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes

Some recent work I was part of at Walt Disney Animation Studios has been published in the July 2017 issue of ACM Transactions on Graphics as part of SIGGRAPH 2017! The paper is titled “Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes”, and the project was a collaboration between the Hyperion development team at Walt Disney Animation Studios (WDAS) and the rendering group at Disney Research Zürich (DRZ). From the WDAS side, the authors are Peter Kutz (who was at Penn at the same time as me), Ralf Habel, and myself. On the DRZ side, our collaborator was Jan Novák, the head of DRZ’s rendering research group.

Image from paper Figure 12: a colorful explosion with chromatic extinction rendered using spectral tracking.

Here is the paper abstract:

We present two novel unbiased techniques for sampling free paths in heterogeneous participating media. Our decomposition tracking accelerates free-path construction by splitting the medium into a control component and a residual component and sampling each of them separately. To minimize expensive evaluations of spatially varying collision coefficients, we define the control component to allow constructing free paths in closed form. The residual heterogeneous component is then homogenized by adding a fictitious medium and handled using weighted delta tracking, which removes the need for computing strict bounds of the extinction function. Our second contribution, spectral tracking, enables efficient light transport simulation in chromatic media. We modify free-path distributions to minimize the fluctuation of path throughputs and thereby reduce the estimation variance. To demonstrate the correctness of our algorithms, we derive them directly from the radiative transfer equation by extending the integral formulation of null-collision algorithms recently developed in reactor physics. This mathematical framework, which we thoroughly review, encompasses existing trackers and postulates an entire family of new estimators for solving transport problems; our algorithms are examples of such. We analyze the proposed methods in canonical settings and on production scenes, and compare to the current state of the art in simulating light transport in heterogeneous participating media.
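For context, here is a minimal sketch of classic delta (Woodcock) tracking, the baseline null-collision free-path sampling technique that decomposition and spectral tracking build on and generalize; this is the standard textbook algorithm, not one of the paper’s new estimators.

#include <cmath>
#include <functional>
#include <random>

// Classic delta tracking along a ray through a heterogeneous medium. sigmaT
// evaluates the extinction coefficient at a distance t along the ray, and
// sigmaMax is a majorant that bounds sigmaT everywhere. Returns the sampled
// free-path distance, or a negative value if the ray escapes past tMax.
float deltaTracking(const std::function<float(float)>& sigmaT, const float sigmaMax,
                    const float tMax, std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float t = 0.0f;
    while (true) {
        // Take a tentative step sampled from the homogenized majorant medium.
        t -= std::log(1.0f - uniform(rng)) / sigmaMax;
        if (t > tMax) {
            return -1.0f;  // escaped the medium without a real collision
        }
        // Accept the collision as real with probability sigmaT(t) / sigmaMax;
        // otherwise treat it as a null collision and keep marching.
        if (uniform(rng) < sigmaT(t) / sigmaMax) {
            return t;
        }
    }
}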

The paper and related materials can be found at:

Peter Kutz will be presenting the paper at SIGGRAPH 2017 in Los Angeles as part of the Rendering Volumes Technical Papers session.

Instead of repeating the contents of the paper here (which is pointless since the paper already says everything we want to say), I thought I’d use this blog post to talk about some of the process we went through while writing this paper. Please note that everything stated in this post reflects my own opinions and thoughts, not Disney’s.

This project started over a year ago, when we began an effort to significantly overhaul and improve Hyperion’s volume rendering system. Around the same time that we began to revisit volume rendering, we heard a lecture from a visiting professor on multilevel Monte Carlo (MLMC) methods. Although the final paper has nothing to do with MLMC methods, the genesis of this project was in initial conversations we had about how MLMC methods might be applied to volume rendering. We concluded that MLMC could be applicable, but weren’t entirely sure how. However, these conversations eventually gave Peter the idea to develop the technique that would eventually become decomposition tracking (importantly, decomposition tracking does not actually use MLMC though). Further conversations about weighted delta tracking then led to Peter developing the core ideas behind what would become spectral tracking. After testing some initial implementations of these prototype versions of decomposition and spectral tracking, Peter, Ralf, and I shared the techniques with Jan. Around the same time, we also shared the techniques with our sister teams, Pixar’s RenderMan development group in Seattle and the Pixar Research Group in Emeryville, who were able to independently implement and verify our techniques. Being able to share research between Walt Disney Animation Studios, Disney Research, the Renderman group, Pixar Animation Studios, Industrial Light & Magic, and Imagineering is one of the reasons why Disney is such an amazing place to be for computer graphics folks.

At this point we had initial rudimentary proofs for why decomposition and spectral tracking worked separately, but we still didn’t have a unified framework that could be used to explain and combine the two techniques. Together with Jan, we began by deep-diving into the origins of delta/Woodcock tracking in neutron transport and reactor physics papers from the 1950s and 1960s and working our way forward to the present. All of the key papers we dug up during this deep-dive are cited in our paper. Some of these early papers were fairly difficult to find. For example, the original delta tracking paper, “Techniques used in the GEM code for Monte Carlo neutronics calculations in reactors and other systems of complex geometry” (Woodcock et al. 1965), is often cited in graphics literature, but a cursory Google search doesn’t provide any links to the actual paper itself. We eventually managed to track down a copy of the original paper in the archives of the United States Department of Commerce, which for some reason hosts a lot of archive material from Argonne National Laboratory. Since the original Woodcock paper has been in the public domain for some time now but is fairly difficult to find, I’m hosting a copy here for any researchers that may be interested.

Several other papers we were only able to obtain by requesting archival microfilm scans from several university libraries. I won’t host copies here, since the public domain status for several of them isn’t clear, but if you are a researcher looking for any of the papers that we cited and can’t find it, feel free to contact me. One particularly cool find was “The Relativistic Doppler Problem” (Zerby et al. 1961), which Peter obtained by writing to the Oak Ridge National Laboratory’s research library. Their staff were eventually able to find the paper in their records/archives, and subsequently scanned and uploaded the paper online. The paper is now publicly available here, on the United States Department of Energy’s Office of Scientific and Technical Information website.

Eventually, through significant effort from Jan, we came to understand Galtier et al.’s 2013 paper, “Integral Formulation of Null-Collision Monte Carlo Algorithms”, and were able to import the integral formulation into computer graphics and demonstrate how to derive both decomposition and spectral tracking directly from the radiative transfer equation using the integral formulation. This step also allowed Peter to figure out how to combine spectral and decomposition tracking into a single technique. With all of these pieces in place, we had the framework for our SIGGRAPH paper. We then put significant effort into working out remaining details, such as finding a good mechanism for bounding the free-path-sampling coefficient in spectral tracking. Producing all of the renders, results, charts, and plots in the paper also took an enormous amount of time; it turns out that producing all of this stuff can take significantly longer than the amount of time originally spent coming up with and implementing the techniques in the first place!

One major challenge we faced in writing the final paper was finding the best order in which to present the three main pieces of the paper: decomposition tracking, spectral tracking, and the integral formulation of null-collision algorithms. At one point, we considered first presenting decomposition tracking, since on a general level decomposition tracking is the easiest of the three contributions to understand. Then, we planned to use the proof of decomposition tracking to expand out into the integral formulation of the RTE with null collisions, and finally derive spectral tracking from the integral formulation. The idea was essentially to introduce the easiest technique first, expand out to the general mathematical framework, and then demonstrate the flexibility of the framework by deriving the second technique. However, this approach in practice felt disjointed, especially with respect to the body of prior work we wanted to present, which underpinned the integral framework but wound up being separated by the decomposition tracking section. So instead, we arrived at the final presentation order, where we first present the integral framework and derive out prior techniques such as delta tracking, and then demonstrate how to derive out new decomposition tracking and spectral tracking techniques. We hope that presenting the paper in this way will encourage other researchers to adopt the integral framework and derive other, new techniques from the framework. For Peter’s presentation at SIGGRAPH, however, Peter chose to go with the original order since it made for a better presentation.

Since our final paper was already quite long, we had to move some content into a separate supplemental document. Although the supplemental content isn’t necessary for implementing the core algorithms presented, I think the supplemental content is very useful for gaining a better understanding of the techniques. The supplemental content contains, among other things, an extended proof of the minimum-of-exponents mechanism that decomposition tracking is built on, various proofs related to choosing bounds for the local collision weight in spectral tracking, and various additional results and further analysis. We also provide a nifty interactive viewer for comparing our techniques against vanilla delta tracking; the interactive viewer framework was originally developed by Fabrice Rousselle, Jan Novák and Benedikt Bitterli at Disney Research Zürich.

One of the major advantages of doing rendering research at a major animation or VFX studio is the availability of hundreds of extremely talented artists, who are always eager to try out new techniques and software. Peter, Ralf, and I worked closely with a number of artists at WDAS to test our techniques and produce interesting scenes with which to generate results and data for the paper. Henrik Falt and Alex Nijmeh had created a number of interesting clouds in the process of testing our general volume rendering improvements, and worked with us to adapt a cloud dataset for use in Figure 11 of our paper. The following is one of the renders from Figure 11:

Image from paper Figure 11: an optically thick cloud rendered using decomposition tracking.

Henrik and Alex also constructed the cloudscape scene used as the banner image on the first page of the paper. After we submitted the paper, Henrik and Alex continued iterating on this scene, which eventually resulted in the more detailed version seen in our SIGGRAPH Fast Forward video. The version of the cloudscape used in our paper is reproduced below:

Image from paper Figure 1: a cloudscape rendered using spectral and decomposition tracking.

To test out spectral tracking, we wanted an interesting, dynamic, colorful dataset. After describing spectral tracking to Jesse Erickson, we arrived at the idea of a color explosion similar in spirit to certain visuals used in recent Apple and Microsoft ads, which in turn were inspired by the Holi festival celebrated in India and Nepal. Jesse authored the color explosion in Houdini and provided a set of VDBs for each color section, which we were then able to shade, light, and render using Hyperion’s implementation of spectral tracking. The final result was the color explosion from Figure 12 of the paper, seen at the top of this post. We were honored to learn that the color explosion figure was chosen to be one of the pictures on the back cover of this year’s conference proceedings!

At one point we also remembered that brute force path-traced subsurface scattering is just volume rendering inside of a bounded surface, which led to the translucent heterogeneous Stanford dragon used in Figure 15 of the paper:

Image from paper Figure 15: a subsurface scattering heterogeneous Stanford dragon rendered using spectral and decomposition tracking.

For our video for the SIGGRAPH 2017 Fast Forward, we were able to get a lot of help from a number of artists. Alex and Henrik and a number of other artists significantly expanded and improved the cloudscape scene, and we also rendered out several more color explosion variants. The final fast forward video contains work from Alex Nijmeh, Henrik Falt, Jesse Erickson, Thom Wickes, Michael Kaschalk, Dale Mayeda, Ben Frost, Marc Bryant, John Kosnik, Mir Ali, Vijoy Gaddipati, and Dimitre Berberov. The awesome title effect was thought up by and created by Henrik. The final video is a bit noisy since we were severely constrained on available renderfarm resources (we were basically squeezing our renders in between actual production renders), but I think the end result is still really great:

Here are a couple of cool stills from the fast forward video:

An improved cloudscape from our SIGGRAPH Fast Forward video.

An orange-purple color explosion from our SIGGRAPH Fast Forward video.

A green-yellow color explosion from our SIGGRAPH Fast Forward video.

We owe an enormous amount of thanks to fellow Hyperion teammate Patrick Kelly, who played an instrumental role in designing and implementing our overall new volume rendering system, and who discussed with us extensively throughout the project. Hyperion teammate David Adler also helped out a lot in profiling and instrumenting our code. We also must thank Thomas Müller, Marios Papas, Géraldine Conti, and David Adler for proofreading, and Brent Burley, Michael Kaschalk, and Rajesh Sharma for providing support, encouragement, and resources for this project.

I’ve worked on a SIGGRAPH Asia paper before, but working on a large scale publication in the context of a major animation studio instead of in school was a very different experience. The support and resources we were given and the amount of talent and help that we were able to tap into made this project possible. This project is also an example of the incredible value that comes from companies maintaining in-house industrial research labs; this project absolutely would not have been possible without all of the collaboration from DRZ, in both the form of direct collaboration from Jan and indirect collaboration from all of the DRZ researchers that provided discussions and feedback. Everyone worked really hard, but overall the whole process was immensely intellectually satisfying and fun, and seeing our new techniques in use by talented, excited artists makes all of the work absolutely worthwhile!

Subdivision Surfaces and Displacement Mapping

Two standard features that every modern production renderer supports are subdivision surfaces and some form of displacement mapping. As we’ll discuss a bit later in this post, these two features are usually very closely linked to each other in both usage and implementation. Subdivision and displacement are crucial tools for representing detail in computer graphics; from both a technical and authorship point of view, being able to represent more detail than is actually present in a mesh is advantageous. Applying detail at runtime allows for geometry to take up less disk space and memory than would be required if all detail was baked into the geometry, and artists often like the ability to separate broad features from high frequency detail.

I recently added support for subdivision surfaces and for both scalar and vector displacement to Takua; Figure 1 shows an ocean wave rendered in Takua using vector displacement. The ocean surface is entirely displaced from just a single plane!

Figure 1: An ocean surface modeled as a flat plane and rendered using vector displacement mapping.

Both subdivision and displacement originally came from the world of rasterization rendering, where on-the-fly geometry generation was historically both easier to implement and more practical/plausible to use. In rasterization, geometry is streamed through the renderer and drawn to screen, so each individual piece of geometry could be subdivided, tessellated, displaced, splatted to the framebuffer, and then discarded to free up memory. Old REYES Renderman was famously efficient at rendering subdivision surfaces and displacement surfaces for precisely this reason. However, in naive ray tracing, rays can intersect geometry at any moment in any order. Subdividing and displacing geometry on the fly for each ray and then discarding the geometry is insanely expensive compared to processing geometry once across an entire framebuffer. The simplest solution to this problem is to just subdivide and displace everything up front and keep it all around in memory during ray tracing. Historically though, just caching everything was never a practical solution since computers simply didn’t have enough memory to keep that much data around. As a result, past research work put significant effort into more intelligent ray tracing architectures that made on-the-fly subdivision/displacement affordable again; notable advancements include geometry caching for ray tracing [Pharr and Hanrahan 1996], direct ray tracing of displacement mapped triangles [Smits et al. 2000], reordered ray tracing [Hanika et al. 2010], and GPU ray traced vector displacement [Harada 2015].

In the past five years or so though, the story on ray traced displacement has changed. We now have machines with gobs and gobs of memory (at a number of studios, renderfarm nodes with 256 GB of memory or more are not unusual anymore). As a result, ray traced renderers don’t need to be nearly as clever anymore about managing displaced geometry; a combination of camera-adaptive tessellation and a simple geometry cache with a least-recently-used eviction strategy is often enough to make ray traced displacement practical. Heavy displacement is now common in the workflows for a number of production pathtracers, including Arnold, Renderman/RIS, Vray, Corona, Hyperion, Manuka, etc. With the above in mind, I tried to implement subdivision and displacement in Takua as simply as I possibly could.

Takua doesn’t have any concept of an eviction strategy for cached tessellated geometry; the hope is to just fit in memory and be as efficient as possible with what memory is available. Admittedly, since Takua is just my hobby renderer instead of a fully in-use production renderer, and I have personal machines with 48 GB of memory, I didn’t think particularly hard about cases where things don’t fit in memory. Instead of tessellating on-the-fly per ray or anything like that, I simply pre-subdivide and pre-displace everything upfront during the initial scene load. Meshes are loaded, subdivided, and displaced in parallel with each other. If Takua discovers that all of the subdivided and displaced geometry isn’t going to fit in the allocated memory budget, the renderer simply quits.

I should note that Takua’s scene format distinguishes between a mesh and a geom; a mesh is the raw vertex/face/primvar data that makes up a surface, while a geom is an object containing a reference to a mesh along with transformation matrices, shader bindings, and so on and so forth. This separation between the mesh data and the geometric object allows for some useful features in the subdivision/displacement system. Takua’s scene file format allows for binding subdivision and displacement modifiers either on the shader level, or per each geom. Bindings at the geom level override bindings on the shader level, which is useful for authoring since a whole bunch of objects can share the same shader but then have individual specializations for different subdivision rates and different displacement maps and displacement settings. During scene loading, Takua analyzes what subdivisions/displacements are required for which meshes by which geoms, and then de-duplicates and aggregates any cases where different geoms want the same subdivision/displacement for the same mesh. This de-duplication even works for instances (I should write a separate post about Takua’s approach to instancing someday…).

Once Takua has put together a list of all meshes that require subdivision, meshes are subdivided in parallel. For Catmull-Clark subdivision [Catmull and Clark 1978], I rely on OpenSubdiv for calculating subdivision stencil tables [Halstead et al. 1993] for feature adaptive subdivision [Nießner et al. 2012], evaluating the stencils, and final tessellation. As far as I can tell, stencil calculation in OpenSubdiv is single threaded, so it can get fairly slow on really heavy meshes. Stencil evaluation and final tessellation is super fast though, since OpenSubdiv provides a number of parallel evaluators that can run using a variety of backends ranging from TBB on the CPU to CUDA or OpenGL compute shaders on the GPU. Takua currently relies on OpenSubdiv’s TBB evaluator. One really neat thing about the stencil implementation in OpenSubdiv is that the stencil calculation is dependent on only the topology of the mesh and not individual primvars, so a single stencil calculation can then be reused multiple times to interpolate many different primvars, such as positions, normals, uvs, and more. Currently Takua doesn’t support creases; I’m planning on adding crease support later.
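To make the stencil reuse point concrete, here is a heavily simplified sketch of what stencil evaluation boils down to (this is not OpenSubdiv’s actual API): a stencil is just a list of control vertex indices and weights determined purely by the mesh topology, so applying the same stencil table to positions, normals, uvs, or any other primvar is just a weighted sum over a different value array.

#include <cstddef>
#include <vector>

// A stencil for one refined vertex: indices into the control mesh plus the
// corresponding interpolation weights. Depends only on topology.
struct Stencil {
    std::vector<int> controlIndices;
    std::vector<float> weights;
};

// Evaluate the same stencil table against any primvar type that supports
// scaling by a float and accumulation (e.g. float, glm::vec2, glm::vec3).
template <typename Primvar>
std::vector<Primvar> evaluateStencils(const std::vector<Stencil>& stencils,
                                      const std::vector<Primvar>& controlValues) {
    std::vector<Primvar> refined(stencils.size(), Primvar(0.0f));
    for (std::size_t i = 0; i < stencils.size(); i++) {
        const Stencil& s = stencils[i];
        for (std::size_t j = 0; j < s.controlIndices.size(); j++) {
            refined[i] += controlValues[s.controlIndices[j]] * s.weights[j];
        }
    }
    return refined;
}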

No writing about subdivision surfaces is complete without a picture of a cube being subdivided into a sphere, so Figure 2 shows a render of a cube with subdivision levels 0, 1, 2, and 3, going from left to right. Each subdivided cube is rendered with a procedural wireframe texture that I implemented to help visualize what was going on with subdivision.

Figure 2: A cube with 0, 1, 2, and 3 subdivision levels, going from left to right.

Each subdivided mesh is placed into a new mesh; base meshes that require multiple subdivision levels for multiple different geoms get one new subdivided mesh per subdivision level. After all subdivided meshes are ready, Takua then runs displacement. Displacement is parallelized both by mesh and within each mesh. Also, Takua supports both on-the-fly displacement and fully cached displacement, which can be specified per shader or per geom. If a mesh is marked for full caching, the mesh is fully displaced, stored as a separate mesh from the undisplaced subdivision mesh, and then a BVH is built for the displaced mesh. If a mesh is marked for on-the-fly displacement, the displacement system calculates each displaced face, then calculates the bounds for that face, and then discards the face. The displaced bounds are then used to build a tight BVH for the displaced mesh without actually having to store the displaced mesh itself; instead, just a reference to the undisplaced subdivision mesh has to be kept around. When a ray traverses the BVH for an on-the-fly displacement mesh, each BVH leaf node specifies which triangles on the undisplaced mesh need to be displaced to produce final polys for intersection and then the displaced polys are intersected and discarded again. For the scenes in this post, on-the-fly displacement seems to be about twice as slow as fully cached displacement, which is to be expected, but if the same mesh is displaced multiple different ways, then there are correspondingly large memory savings. After all displacement has been calculated, Takua goes back and analyzes which base meshes and undisplaced subdivision meshes are no longer needed, and frees those meshes to reclaim memory.
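A minimal sketch of the bounds-only pass used for on-the-fly displacement, purely for illustration (not Takua’s actual code): displace each face’s vertices, grow an axis-aligned bounding box over the displaced positions, and keep only the box for BVH construction while discarding the displaced positions themselves.

#include <glm/glm.hpp>
#include <cstddef>
#include <functional>
#include <limits>
#include <vector>

// Axis-aligned bounding box grown one point at a time.
struct Bounds {
    glm::vec3 lower = glm::vec3(std::numeric_limits<float>::infinity());
    glm::vec3 upper = glm::vec3(-std::numeric_limits<float>::infinity());
    void expand(const glm::vec3& p) {
        lower = glm::min(lower, p);
        upper = glm::max(upper, p);
    }
};

// For each face (represented here as a list of vertex indices into the
// undisplaced subdivision mesh), displace its vertices and record only the
// bounds of the displaced positions. displaceVertex stands in for whatever
// evaluates the displacement map for a given vertex index.
std::vector<Bounds> computeDisplacedFaceBounds(
        const std::vector<std::vector<int>>& faces,
        const std::function<glm::vec3(int)>& displaceVertex) {
    std::vector<Bounds> faceBounds(faces.size());
    for (std::size_t f = 0; f < faces.size(); f++) {
        for (const int vertexIndex : faces[f]) {
            faceBounds[f].expand(displaceVertex(vertexIndex));
        }
    }
    return faceBounds;
}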

I implemented support for both scalar displacement via regular grayscale texture maps, and vector displacement from OpenEXR textures. The ocean render from the start of this post uses vector displacement applied to a single plane. Figure 3 shows another angle of the same vector displaced ocean:

Figure 3: Another view of the vector displaced ocean surface from Figure 1. The ocean surface has a dielectric refractive material complete with colored attenuated transmission. A shallow depth of field is used to lend added realism.

For both ocean renders, the vector displacement OpenEXR texture is borrowed from Autodesk, who generously provide it as part of an article about vector displacement in Arnold. The renders are lit with a skydome using hdri-skies.com’s HDRI Sky 193 texture.

For both scalar and vector displacement, the displacement amount from the displacement texture can be controlled by a single scalar value. Vector displacement maps are assumed to be in a local tangent space; which axis is used as the basis of the tangent space can be specified per displacement map. Figure 4 shows three dirt shaderballs with varying displacement scaling values. The leftmost shaderball has a displacement scale of 0, which effectively disables displacement. The middle shaderball has a displacement scale of 0.5 of the native displacement values in the vector displacement map. The rightmost shaderball has a displacement scale of 1.0, which means just use the native displacement values from the vector displacement map.

Figure 4: Dirt shaderballs with displacement scales of 0.0, 0.5, and 1.0, going from left to right.
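Conceptually, the per-vertex displacement itself looks something like the following sketch (illustrative only; in particular, the tangent-frame convention here is an assumption rather than Takua’s actual convention): scalar displacement pushes a vertex along its normal by a height value, while vector displacement applies a full offset expressed in a local tangent space, with the single scale factor applied in both cases.

#include <glm/glm.hpp>

// Scalar displacement: offset the vertex along its normal by the height
// sampled from a grayscale displacement map, times a user-specified scale.
glm::vec3 displaceScalar(const glm::vec3& position, const glm::vec3& normal,
                         const float height, const float scale) {
    return position + normal * (height * scale);
}

// Vector displacement: the map stores a full offset in a local tangent space,
// so the offset is transformed by the tangent frame before being applied.
// Which map axis corresponds to the normal is a per-map convention.
glm::vec3 displaceVector(const glm::vec3& position, const glm::vec3& tangent,
                         const glm::vec3& bitangent, const glm::vec3& normal,
                         const glm::vec3& mapValue, const float scale) {
    const glm::vec3 offset =
        tangent * mapValue.x + bitangent * mapValue.y + normal * mapValue.z;
    return position + offset * scale;
}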

Figure 5 shows a closeup of the rightmost dirt shaderball from Figure 4. The base mesh for the shaderball is relatively low resolution, but through subdivision and displacement, a huge amount of geometric detail can be added in-render. In this case, the shaderball is tessellated to a point where each individual micropolygon is at a subpixel size. The model for the shaderball is based on Bertrand Benoit’s shaderball. The displacement map and other textures for the dirt shaderball are from Quixel’s Megascans library.

Figure 5: Closeup of the dirt shaderball from Figure 4. In this render, the shaderball is tessellated and displaced to a subpixel resolution.

One major challenge with displacement mapping is cracking. Cracking occurs when adjacent polygons displace the same shared vertices different ways for each polygon. This can happen when the normals across a surface aren’t continuous, or if there is a discontinuity in either how the displacement texture is mapped to the surface, or in the displacement texture itself. I implemented an optional, somewhat brute-force solution to displacement cracking. If crack removal is enabled, Takua analyzes the mesh at displacement time and records how many different ways each vertex in the mesh has been displaced by different faces, along with which faces want to displace that vertex. After an initial displacement pass, the crack remover then goes back and for every vertex that is displaced more than one way, all of the displacements are averaged into a single displacement, and all faces that use that vertex are updated to share the same averaged result. This approach requires a fair amount of bookkeeping and pre-analysis of the displaced mesh, but it seems to work well. Figure 6 is a render of two cubes with geometric normals assigned per face. The two cubes are displaced using the same checkerboard displacement pattern, but the cube on the left has crack removal disabled, while the cube on the right has crack removal enabled:

Figure 6: Displaced cubes with and without crack elimination.
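A minimal sketch of the averaging step in the crack removal pass (illustrative only, not the actual implementation): gather every displaced position that any face proposes for a shared vertex, average them, and have every face that references that vertex read back the same averaged position.

#include <glm/glm.hpp>
#include <unordered_map>
#include <utility>
#include <vector>

// Accumulates all displaced positions proposed for a single shared vertex.
struct DisplacementAccumulator {
    glm::vec3 sum = glm::vec3(0.0f);
    int count = 0;
};

// proposedDisplacements pairs a vertex index with a displaced position that
// some face wanted for that vertex. The result maps each vertex index to the
// single averaged position that all faces sharing the vertex will use, which
// closes the crack (at the cost of some stretching, discussed below).
std::unordered_map<int, glm::vec3> averageDisplacements(
        const std::vector<std::pair<int, glm::vec3>>& proposedDisplacements) {
    std::unordered_map<int, DisplacementAccumulator> accumulators;
    for (const auto& entry : proposedDisplacements) {
        DisplacementAccumulator& acc = accumulators[entry.first];
        acc.sum += entry.second;
        acc.count += 1;
    }
    std::unordered_map<int, glm::vec3> averaged;
    for (const auto& entry : accumulators) {
        averaged[entry.first] = entry.second.sum / static_cast<float>(entry.second.count);
    }
    return averaged;
}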

In most cases, the crack removal system seems to work pretty well. However, the system isn’t perfect; sometimes, stretching artifacts can appear, especially with surfaces with a textured base color. This stretching happens because the crack removal system basically stretches micropolygons to cover the crack. This texture stretching can be seen in some parts of the shaderballs in Figures 5, 7, and 8 in this post.

Takua automatically recalculates normals for subdivided/displaced polygons. By default, Takua simply uses the geometric normal as the shading normal for displaced polygons; however, an option exists to calculate smooth normals for the shading normals as well. I chose to use geometric normals as the default with the hope that for subpixel subdivision and displacement, a different shading normal wouldn’t be as necessary.

In the future, I may choose to implement my own subdivision library, and I should probably also put more thought into some kind of proper combined tessellation cache and eviction strategy for better memory efficiency. For now though, everything seems to work well and renders relatively efficiently; the non-ocean renders in this post all have sub-pixel subdivision with millions of polygons and each took several hours to render at 4K (3840x2160) resolution on a machine with dual Intel Xeon X5675 CPUs (12 cores total). The two ocean renders I let run overnight at 1080p resolution; they took longer to converge mostly due to the depth of field. All renders in this post were shaded using a new, vastly improved shading system that I’ll write about at a later point. Takua can now render a lot more complexity than before!

In closing, I rendered a few more shaderballs using various displacement maps from the Megascans library, seen in Figures 7 and 8.

Figure 7: A pebble sphere and a leafy sphere. Note the overhangs on the leafy sphere, which are only possible using vector displacement.

Figure 8: A compacted sand sphere and a stone sphere. Unfortunately, there is some noticeable texture stretching on the compacted sand sphere where crack removal occurred.

References

Edwin E. Catmull and James H. Clark. 1978. Recursively Generated B-spline Surfaces on Arbitrary Topological Meshes. Computer-Aided Design. 10, 6 (1978), 350-355.

Mark Halstead, Michael Kass, and Tony DeRose. 1993. Efficient, Fair Interpolation using Catmull-Clark Surfaces. In SIGGRAPH 1993: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. 35-44.

Johannes Hanika, Alexander Keller, and Hendrik P. A. Lensch. 2010. Two-Level Ray Tracing with Reordering for Highly Complex Scenes. In GI 2010 (Proceedings of the 2010 Conference on Graphics Interfaces). 145-152.

Takahiro Harada. 2015. Rendering Vector Displacement Mapped Surfaces in a GPU Ray Tracer. In GPU Pro 6. 459-474.

Matthias Nießner, Charles Loop, Mark Meyer, and Tony DeRose. 2012. Feature Adaptive GPU Rendering of Catmull-Clark Subdivision Surfaces. ACM Transactions on Graphics. 31, 1 (2012), 6:1-6:11.

Matt Pharr and Pat Hanrahan. 1996. Geometry Caching for Ray-Tracing Displacement Maps. In Rendering Techniques 1996 (Proceedings of the 7th Eurographics Workshop on Rendering). 31-40.

Brian Smits, Peter Shirley, and Michael M. Stark. 2000. Direct Ray Tracing of Displacement Mapped Triangles. In Rendering Techniques 2000 (Proceedings of the 11th Eurographics Workshop on Rendering). 307-318.

Moana

2016 is the first year ever that Walt Disney Animation Studios is releasing two CG animated films. We released Zootopia back in March, and next week, we will be releasing our newest film, Moana. I’ve spent the bulk of the last year and a half working as part of Disney’s Hyperion Renderer team on a long list of improvements and new features for Moana. Moana is the first film I have an official credit on, and I couldn’t be more excited for the world to see what we have made!

We’re all incredibly proud of Moana; the story is fantastic, the characters are fresh and deep and incredibly appealing, and the music is an instant classic. Most important for a rendering guy though, I think Moana is flat out the best looking animated film anyone has ever made. Every single department on this film really outdid themselves. The technology that we had to develop for this film was staggering; we have a whole new distributed fluid simulation package for the endless oceans in the film, we added advanced new lighting capabilities to Hyperion that have never been used in an animated film before to this extent (to the best of my knowledge), we made huge advances in our animation technology for characters such as Maui; the list goes on and on and on. Something like 85% or more of the shots in this movie have significant FX work in them, which is unheard of for animated features.

Hyperion gained a number of major new capabilities in support of making Moana. Rendering the ocean was a major concern on Moana, so much of Hyperion’s development during Moana revolved around features related to rendering water. Our lighters wanted caustics in all shots with shallow water, such as shots set at the beach or near the shoreline; faking caustics was quickly ruled out as an option since setting up lighting rigs with fake caustics that looked plausible and visually pleasing proved to be difficult and laborious. We found that providing real caustics was vastly preferable to faking things, both from a visual quality standpoint and an artist workflow standpoint, so we wound up adding a photon mapping system to Hyperion. The design of the photon mapping system is highly optimized around handling sun-water caustics, which allows for some major performance optimizations, such as an adaptive photon distribution system that makes sure that photons are not wasted on off-camera parts of the scene. Most of the photon mapping system was written by Peter Kutz; I also got to work on the photon mapping system a bit.

Water is in almost every shot in the film in some form, and the water effects were extremely varied, ranging from the ocean surface extending for dozens of miles in every direction, to splashes and boat wakes [Stomakhin and Selle 2017] and other finely detailed effects. Water had to be created using a host of different techniques, from relatively simple procedural wave functions [Garcia et al. 2016], to hand-animatable rigged wave systems [Byun and Stomakhin 2017], all the way to huge, complex fluid simulations using Splash, a custom in-house APIC-based fluid simulator [Jiang et al. 2015]. We even had to support water as a straight-up rigged character [Frost et al. 2017]! In order to bring the results of all of these techniques together into a single renderable water surface, an enormous amount of effort was put into building a level-set compositing system, in which all water simulation results would be converted into signed distance fields that could then be combined and converted into a watertight mesh. Having a single watertight mesh was important, since the ocean often also contained a homogeneous volume to produce physically correct scattering; this scattering is where all of the blues and greens in ocean water come from. This entire system could be run by Hyperion at rendertime, or could be run offline beforehand to generate a cached result that Hyperion could load; a whole complex pipeline had to be built to support this capability [Palmer et al. 2017]. Building this level-set compositing and meshing system involved a large number of TDs and engineers; on the Hyperion side, the project was led by Ralf Habel, Patrick Kelly, and Andy Selle. Peter and I also helped out at various points.
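
The real level-set compositing system is far more involved than this (see Palmer et al. [2017]), but the core compositing operation is easy to illustrate: resample each water source onto a common grid as a signed distance field, combine the fields with a min() union, and hand the result to a meshing step (for example, marching cubes) to extract a watertight surface. The grid layout, `SignedDistanceFn` type, and function names below are all made up for illustration.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// A signed distance function: negative inside the water, positive outside.
using SignedDistanceFn = std::function<float(float x, float y, float z)>;

// A dense grid of composited signed distances over an axis-aligned box.
struct LevelSetGrid {
    int nx, ny, nz;
    float minX, minY, minZ, voxelSize;
    std::vector<float> values;  // nx * ny * nz samples
};

// Composite several water sources (procedural waves, sim caches, rigged
// waves, ...) into one level set by taking the minimum signed distance at
// every grid point -- the union of all the water volumes. A watertight
// surface mesh can then be extracted from the zero crossing of this grid
// (e.g. with marching cubes) and paired with a homogeneous interior volume.
LevelSetGrid compositeWaterSources(const std::vector<SignedDistanceFn>& sources,
                                   int nx, int ny, int nz,
                                   float minX, float minY, float minZ,
                                   float voxelSize) {
    LevelSetGrid grid{nx, ny, nz, minX, minY, minZ, voxelSize,
                      std::vector<float>(size_t(nx) * ny * nz, 1e30f)};
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                float x = minX + i * voxelSize;
                float y = minY + j * voxelSize;
                float z = minZ + k * voxelSize;
                float d = 1e30f;
                for (const auto& sdf : sources)
                    d = std::min(d, sdf(x, y, z));  // union of all sources
                grid.values[(size_t(k) * ny + j) * nx + i] = d;
            }
    return grid;
}
```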

At one point early in the film’s production, we noticed that our lighters were having a difficult time getting specular glints off of the ocean surface to look right. For artistic controllability reasons, our lighters prefer to keep the sun and the skydome as two separate lights; the skydome is usually an image-based light that is either painted or comes from photography with the sun painted out, and the sun is usually a distant infinite light that subtends some solid angle. After a lot of testing, we found that the look of specular glints on the ocean surface comes partially from the sun itself, but also partially from the atmospheric scattering that makes the sun look hazy and larger in the sky than it actually is. To get this look, I added a system to analytically add a Mie-scattering halo around our distant lights; we called the result the “halo light”.
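
I can’t share the actual halo light model, but the general idea is straightforward to sketch: evaluate the sun as a small, very bright disc, then add an analytic, forward-peaked scattering term that falls off with angular distance from the sun direction. In the sketch below, a Henyey-Greenstein lobe stands in for a real Mie phase function, and every constant and function name is made up for illustration, not a production value.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Henyey-Greenstein lobe, used here as a cheap stand-in for a real Mie
// phase function; g close to 1 gives a tight forward-scattering peak.
inline float henyeyGreenstein(float cosTheta, float g) {
    float denom = 1.0f + g * g - 2.0f * g * cosTheta;
    return (1.0f - g * g) / (4.0f * 3.14159265f * denom * std::sqrt(denom));
}

// Radiance of a hypothetical "halo light" in direction w: a hard sun disc
// plus an analytic atmospheric halo around it. sunDir and w are assumed to
// be normalized; all constants are illustrative.
inline float haloLightRadiance(const Vec3& w, const Vec3& sunDir) {
    const float sunRadiance  = 50000.0f;   // radiance inside the sun disc
    const float sunCosAngle  = 0.99996f;   // cos of the sun's angular radius
    const float haloStrength = 20.0f;      // scale of the scattered halo
    const float haloG        = 0.97f;      // forward-scattering sharpness

    float cosTheta = dot(w, sunDir);
    float disc = (cosTheta >= sunCosAngle) ? sunRadiance : 0.0f;
    float halo = haloStrength * henyeyGreenstein(cosTheta, haloG);
    return disc + halo;
}
```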

Up until Moana, Hyperion actually never had proper importance sampling for emissive meshes; we just relied on paths randomly finding their way to emissive meshes and only worried about importance sampling analytical area lights and distant infinite lights. For shots with the big lava monster Te-Ka [Bryant et al. 2017], however, most of the light in the frame came from emissive lava meshes, and most of what was being lit were complex, dense smoke volumes. Peter added a highly efficient system for importance sampling emissive meshes into the renderer, which made Te-Ka shots go from basically un-renderable to not a problem at all. David Adler also made some huge improvements to our denoiser’s ability to handle volumes to help with those shots.
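
Peter’s system handles far more than this (textured emission, many emitters, volumes, and so on), but the basic shape of importance sampling an emissive mesh can be sketched simply: build a discrete distribution over the mesh’s triangles weighted by emitted power (area times emission), draw a triangle from that distribution, and then sample a point uniformly on it. The structures and names below are hypothetical, not Hyperion’s implementation.

```cpp
#include <cmath>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
inline Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
inline Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
inline float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

struct EmissiveTriangle { Vec3 p0, p1, p2; float emission; };

struct EmissiveMeshSampler {
    std::vector<EmissiveTriangle> tris;
    std::discrete_distribution<int> powerDist;  // weighted by area * emission

    explicit EmissiveMeshSampler(std::vector<EmissiveTriangle> triangles)
        : tris(std::move(triangles)) {
        std::vector<float> weights;
        for (const auto& t : tris) {
            float area = 0.5f * length(cross(t.p1 - t.p0, t.p2 - t.p0));
            weights.push_back(area * t.emission);  // proportional to emitted power
        }
        powerDist = std::discrete_distribution<int>(weights.begin(), weights.end());
    }

    // Pick a triangle proportional to its emitted power, then pick a uniform
    // point on that triangle via the standard sqrt warp of barycentrics.
    Vec3 samplePoint(std::mt19937& rng) {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        const EmissiveTriangle& t = tris[powerDist(rng)];
        float su = std::sqrt(u(rng));
        float b0 = 1.0f - su;
        float b1 = u(rng) * su;
        return t.p0 * b0 + t.p1 * b1 + t.p2 * (1.0f - b0 - b1);
    }
};
```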

Hyperion also saw a huge number of other improvements during Moana; Dan Teece and Matt Chiang made numerous improvements to the shading system, I reworked the ribbon curve intersection system to robustly handle Heihei’s and hawk-Maui’s feathers, Greg Nichols made our camera-adaptive tessellation more robust, and the team in general made many speed and memory optimizations. Throughout the whole production cycle, Hyperion partnered really closely with production to make Moana the most beautiful animated film we’ve ever made. This close partnership is what makes working at Disney Animation such an amazing, fun, and interesting experience.

The first section of the credits sequence in Moana showcases a number of the props that our artists made for the film. I highly recommend staying and staring at all of the eye candy; our look and modeling departments are filled with some of the most dedicated and talented folks I’ve ever met. The props in the credits have simply preposterous amounts of detail on them; every single prop has stuff like tiny little flyaway fibers or microscratches or imperfections or whatnot on it. In some of the international posters, one can see that all of the human characters are covered with fine peach fuzz (an important part of making their skin catch the sunlight correctly), which we rendered in every frame! Something that we’re really proud of is the fact that none of the credit props were specially modeled for the credits! Those are all the exact props we used in every frame that they show up in, which really is a testament to both how amazing our artists are and how much work we’ve put into every part of our technology. The vast majority of production for Moana happened in essentially the nine months between Zootopia’s release in March and October; this timeline becomes even more astonishing given the sheer beauty and craftsmanship in Moana.

Below are a number of stills (in no particular order) from the movie, 100% rendered using Hyperion. These stills give just a hint at how beautiful this movie looks; definitely go see it on the biggest screen you can find!

Here is a credits frame with my name that Disney kindly provided! Most of the Hyperion team is grouped under the Rendering/Pipeline/Engineering Services (three separate teams under the same manager) category this time around, although a handful of Hyperion guys show up in an earlier part of the credits instead.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

Addendum 2018-08-18: A lot more detailed information about the photon mapping system, the level-set compositing system, and the halo light is now available as part of our recent TOG paper on Hyperion [Burley et al. 2018].

References

Marc Bryant, Ian Coony, and Jonathan Garcia. 2017. Moana: Foundation of a Lava Monster. In ACM SIGGRAPH 2017, Talks. 10:1-10:2.

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Hank Driskill, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2018. The Design and Evolution of Disney’s Hyperion Renderer. ACM Transactions on Graphics. 37, 3 (2018), 33:1-33:22.

Dong Joo Byun and Alexey Stomakhin. 2017. Moana: Crashing Waves. In ACM SIGGRAPH 2017, Talks. 41:1-41:2.

Ben Frost, Alexey Stomakhin, and Hiroaki Narita. 2017. Moana: Performing Water. In ACM SIGGRAPH 2017, Talks. 30:1-30:2.

Jonathan Garcia, Sara Drakeley, Sean Palmer, Erin Ramos, David Hutchins, Ralf Habel, and Alexey Stomakhin. 2016. Rigging the Oceans of Disney’s Moana. In ACM SIGGRAPH Asia 2016, Technical Briefs. 30:1-30:4.

Chenfanfu Jiang, Craig Schroeder, Andrew Selle, Joseph Teran, and Alexey Stomakhin. 2015. The Affine Particle-in-Cell Method. ACM Transactions on Graphics. 34, 4 (2015), 51:1-51:10.

Sean Palmer, Jonathan Garcia, Sara Drakeley, Patrick Kelly, and Ralf Habel. 2017. The Ocean and Water Pipeline of Disney’s Moana. In ACM SIGGRAPH 2017, Talks. 29:1-29:2.

Alexey Stomakhin and Andy Selle. 2017. Fluxed Animated Boundary Method. ACM Transactions on Graphics. 36, 4 (2017), 68:1-68:8.