SIGGRAPH 2017 Paper: Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes

Some recent work I was part of at Walt Disney Animation Studios has been published in the July 2017 issue of ACM Transactions on Graphics as part of SIGGRAPH 2017! The paper is titled “Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes”, and the project was a collaboration between the Hyperion development team at Walt Disney Animation Studios (WDAS) and the rendering group at Disney Research Zürich (DRZ). From the WDAS side, the authors are Peter Kutz (who was at Penn at the same time as me), Ralf Habel, and myself. On the DRZ side, our collaborator was Jan Novák, the head of DRZ’s rendering research group.

Image from paper Figure 12: a colorful explosion with chromatic extinction rendered using spectral tracking.

Here is the paper abstract:

We present two novel unbiased techniques for sampling free paths in heterogeneous participating media. Our decomposition tracking accelerates free-path construction by splitting the medium into a control component and a residual component and sampling each of them separately. To minimize expensive evaluations of spatially varying collision coefficients, we define the control component to allow constructing free paths in closed form. The residual heterogeneous component is then homogenized by adding a fictitious medium and handled using weighted delta tracking, which removes the need for computing strict bounds of the extinction function. Our second contribution, spectral tracking, enables efficient light transport simulation in chromatic media. We modify free-path distributions to minimize the fluctuation of path throughputs and thereby reduce the estimation variance. To demonstrate the correctness of our algorithms, we derive them directly from the radiative transfer equation by extending the integral formulation of null-collision algorithms recently developed in reactor physics. This mathematical framework, which we thoroughly review, encompasses existing trackers and postulates an entire family of new estimators for solving transport problems; our algorithms are examples of such. We analyze the proposed methods in canonical settings and on production scenes, and compare to the current state of the art in simulating light transport in heterogeneous participating media.
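For readers who haven’t bumped into null-collision methods before, below is a minimal sketch of plain, unweighted delta tracking for a single extinction channel, just to give a flavor of the free-path sampling loop that both of our techniques build on. This is only the classic baseline estimator, not the algorithms from the paper, and the function names and callback signature here are purely illustrative:

```cpp
#include <cmath>
#include <functional>
#include <random>

// Classic delta/Woodcock tracking: sample a free-path distance through a
// heterogeneous medium whose extinction sigma_t(x) is bounded above by a known
// majorant sigma_maj. Returns the sampled collision distance, or a negative
// value if the path escapes past maxDistance without a real collision.
// The extinction callback and parameter names are illustrative stand-ins.
double deltaTrack(const std::function<double(double)>& sigmaT, // extinction along the ray
                  double sigmaMaj,                             // majorant, >= sigmaT everywhere
                  double maxDistance,
                  std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double t = 0.0;
    while (true) {
        // Sample a tentative collision against the homogenized (majorant) medium.
        t -= std::log(1.0 - uniform(rng)) / sigmaMaj;
        if (t >= maxDistance) {
            return -1.0; // escaped the medium segment without a real collision
        }
        // Accept as a real collision with probability sigma_t(t) / sigma_maj;
        // otherwise it is a null collision and we keep marching.
        if (uniform(rng) < sigmaT(t) / sigmaMaj) {
            return t;
        }
    }
}
```

Roughly speaking, decomposition tracking replaces much of this work with a closed-form free-path sample against a homogeneous control component, and spectral tracking replaces the binary accept/reject step with weighted probabilities chosen to keep per-channel path throughputs from fluctuating; see the paper for the actual algorithms.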

The paper and related materials can be found at:

Peter Kutz will be presenting the paper at SIGGRAPH 2017 in Los Angeles as part of the Rendering Volumes Technical Papers session.

Instead of repeating the contents of the paper here (which is pointless since the paper already says everything we want to say), I thought I’d use this blog post to talk about some of the process we went through while writing this paper. Please note that everything stated in this post reflects my own opinions and thoughts, not Disney’s.

This project started over a year ago, when we began an effort to significantly overhaul and improve Hyperion’s volume rendering system. Around the same time that we began to revisit volume rendering, we heard a lecture from a visiting professor on multilevel Monte Carlo (MLMC) methods. Although the final paper has nothing to do with MLMC methods, the genesis of this project was in initial conversations we had about how MLMC methods might be applied to volume rendering. We concluded that MLMC could be applicable, but weren’t entirely sure how. However, these conversations eventually gave Peter the idea to develop the technique that would become decomposition tracking (importantly, decomposition tracking does not actually use MLMC though). Further conversations about weighted delta tracking then led to Peter developing the core ideas behind what would become spectral tracking. After testing some initial implementations of these prototype versions of decomposition and spectral tracking, Peter, Ralf, and I shared the techniques with Jan. Around the same time, we also shared the techniques with our sister teams, Pixar’s RenderMan development group in Seattle and the Pixar Research Group in Emeryville, who were able to independently implement and verify our techniques. Being able to share research between Walt Disney Animation Studios, Disney Research, the Renderman group, Pixar Animation Studios, Industrial Light & Magic, and Imagineering is one of the reasons why Disney is such an amazing place to be for computer graphics folks.

At this point we had initial rudimentary proofs for why decomposition and spectral tracking worked separately, but we still didn’t have a unified framework that could be used to explain and combine the two techniques. Together with Jan, we began by deep-diving into the origins of delta/Woodcock tracking in neutron transport and reactor physics papers from the 1950s and 1960s and working our way forward to the present. All of the key papers we dug up during this deep-dive are cited in our paper. Some of these early papers were fairly difficult to find. For example, the original delta tracking paper, “Techniques used in the GEM code for Monte Carlo neutronics calculations in reactors and other systems of complex geometry” (Woodcock et al. 1965), is often cited in graphics literature, but a cursory Google search doesn’t provide any links to the actual paper itself. We eventually managed to track down a copy of the original paper in the archives of the United States Department of Commerce, which for some reason hosts a lot of archive material from Argonne National Laboratory. Since the original Woodcock paper has been in the public domain for some time now but is fairly difficult to find, I’m hosting a copy here for any researchers that may be interested.

Several other papers we were only able to obtain by requesting archival microfilm scans from various university libraries. I won’t host copies here, since the public domain status for several of them isn’t clear, but if you are a researcher looking for any of the papers that we cited and can’t find it, feel free to contact me. One particularly cool find was “The Relativistic Doppler Problem” (Zerby et al. 1961), which Peter obtained by writing to the Oak Ridge National Laboratory’s research library. Their staff were eventually able to find the paper in their records/archives, and subsequently scanned and uploaded the paper online. The paper is now publicly available here, on the United States Department of Energy’s Office of Scientific and Technical Information website.

Eventually, through significant effort from Jan, we came to understand Galtier et al.’s 2013 paper, “Integral Formulation of Null-Collision Monte Carlo Algorithms”, and were able to import the integral formulation into computer graphics and demonstrate how to derive both decomposition and spectral tracking directly from the radiative transfer equation using the integral formulation. This step also allowed Peter to figure out how to combine spectral and decomposition tracking into a single technique. With all of these pieces in place, we had the framework for our SIGGRAPH paper. We then put significant effort into working out remaining details, such as finding a good mechanism for bounding the free-path-sampling coefficient in spectral tracking. Producing all of the renders, results, charts, and plots in the paper also took an enormous amount of time; it turns out that producing all of this stuff can take significantly longer than the amount of time originally spent coming up with and implementing the techniques in the first place!

One major challenge we faced in writing the final paper was finding the best order in which to present the three main pieces of the paper: decomposition tracking, spectral tracking, and the integral formulation of null-collision algorithms. At one point, we considered first presenting decomposition tracking, since on a general level decomposition tracking is the easiest of the three contributions to understand. Then, we planned to use the proof of decomposition tracking to expand out into the integral formulation of the RTE with null collisions, and finally derive spectral tracking from the integral formulation. The idea was essentially to introduce the easiest technique first, expand out to the general mathematical framework, and then demonstrate the flexibility of the framework by deriving the second technique. However, this approach in practice felt disjointed, especially with respect to the body of prior work we wanted to present, which underpinned the integral framework but wound up being separated by the decomposition tracking section. So instead, we arrived at the final presentation order, where we first present the integral framework and derive prior techniques such as delta tracking from it, and then demonstrate how to derive our new decomposition tracking and spectral tracking techniques. We hope that presenting the paper in this way will encourage other researchers to adopt the integral framework and derive other, new techniques from the framework. For his presentation at SIGGRAPH, however, Peter chose to go with the original order since it made for a better presentation.

Since our final paper was already quite long, we had to move some content into a separate supplemental document. Although the supplemental content isn’t necessary for implementing the core algorithms presented, I think the supplemental content is very useful for gaining a better understanding of the techniques. The supplemental content contains, among other things, an extended proof of the minimum-of-exponents mechanism that decomposition tracking is built on, various proofs related to choosing bounds for the local collision weight in spectral tracking, and various additional results and further analysis. We also provide a nifty interactive viewer for comparing our techniques against vanilla delta tracking; the interactive viewer framework was originally developed by Fabrice Rousselle, Jan Novák and Benedikt Bitterli at Disney Research Zürich.

One of the major advantages of doing rendering research at a major animation or VFX studio is the availability of hundreds of extremely talented artists, who are always eager to try out new techniques and software. Peter, Ralf, and I worked closely with a number of artists at WDAS to test our techniques and produce interesting scenes with which to generate results and data for the paper. Henrik Falt and Alex Nijmeh had created a number of interesting clouds in the process of testing our general volume rendering improvements, and worked with us to adapt a cloud dataset for use in Figure 11 of our paper. The following is one of the renders from Figure 11:

Image from paper Figure 11: an optically thick cloud rendered using decomposition tracking.

Henrik and Alex also constructed the cloudscape scene used as the banner image on the first page of the paper. After we submitted the paper, Henrik and Alex continued iterating on this scene, which eventually resulted in the more detailed version seen in our SIGGRAPH Fast Forward video. The version of the cloudscape used in our paper is reproduced below:

Image from paper Figure 1: a cloudscape rendered using spectral and decomposition tracking.

To test out spectral tracking, we wanted an interesting, dynamic, colorful dataset. After describing spectral tracking to Jesse Erickson, we arrived at the idea of a color explosion similar in spirit to certain visuals used in recent Apple and Microsoft ads, which in turn were inspired by the Holi festival celebrated in India and Nepal. Jesse authored the color explosion in Houdini and provided a set of VDBs for each color section, which we were then able to shade, light, and render using Hyperion’s implementation of spectral tracking. The final result was the color explosion from Figure 12 of the paper, seen at the top of this post. We were honored to learn that the color explosion figure was chosen to be one of the pictures on the back cover of this year’s conference proceedings!

At one point we also remembered that brute force path-traced subsurface scattering is just volume rendering inside of a bounded surface, which led to the translucent heterogeneous Stanford dragon used in Figure 15 of the paper:

Image from paper Figure 15: a subsurface scattering heterogeneous Stanford dragon rendered using spectral and decomposition tracking.

For our video for the SIGGRAPH 2017 Fast Forward, we were able to get a lot of help from a number of artists. Alex and Henrik and a number of other artists significantly expanded and improved the cloudscape scene, and we also rendered out several more color explosion variants. The final fast forward video contains work from Alex Nijmeh, Henrik Falt, Jesse Erickson, Thom Wickes, Michael Kaschalk, Dale Mayeda, Ben Frost, Marc Bryant, John Kosnik, Mir Ali, Vijoy Gaddipati, and Dimitre Berberov. The awesome title effect was thought up and created by Henrik. The final video is a bit noisy since we were severely constrained on available renderfarm resources (we were basically squeezing our renders in between actual production renders), but I think the end result is still really great:

Here are a couple of cool stills from the fast forward video:

An improved cloudscape from our SIGGRAPH Fast Forward video.

An orange-purple color explosion from our SIGGRAPH Fast Forward video.

A green-yellow color explosion from our SIGGRAPH Fast Forward video.

We owe an enormous amount of thanks to fellow Hyperion teammate Patrick Kelly, who played an instrumental role in designing and implementing our overall new volume rendering system, and who discussed ideas with us extensively throughout the project. Hyperion teammate David Adler also helped out a lot in profiling and instrumenting our code. We also must thank Thomas Müller, Marios Papas, Géraldine Conti, and David Adler for proofreading, and Brent Burley, Michael Kaschalk, and Rajesh Sharma for providing support, encouragement, and resources for this project.

I’ve worked on a SIGGRAPH Asia paper before, but working on a large-scale publication in the context of a major animation studio instead of in school was a very different experience. The support and resources we were given and the amount of talent and help that we were able to tap into made this project possible. This project is also an example of the incredible value that comes from companies maintaining in-house industrial research labs; this project absolutely would not have been possible without all of the collaboration from DRZ, in both the form of direct collaboration from Jan and indirect collaboration from all of the DRZ researchers that provided discussions and feedback. Everyone worked really hard, but overall the whole process was immensely intellectually satisfying and fun, and seeing our new techniques in use by talented, excited artists makes all of the work absolutely worthwhile!

Subdivision Surfaces and Displacement Mapping

Two standard features that every modern production renderer supports are subdivision surfaces and some form of displacement mapping. As we’ll discuss a bit later in this post, these two features are usually very closely linked to each other in both usage and implementation. Subdivision and displacement are crucial tools for representing detail in computer graphics; from both a technical and authorship point of view, being able to represent more detail than is actually present in a mesh is advantageous. Applying detail at runtime allows for geometry to take up less disk space and memory than would be required if all detail was baked into the geometry, and artists often like the ability to separate broad features from high frequency detail.

I recently added support for subdivision surfaces and for both scalar and vector displacement to Takua; Figure 1 shows an ocean wave rendered using vector displacement in Takua. The ocean surface is entirely displaced from just a single plane!

Figure 1: An ocean surface modeled as a flat plane and rendered using vector displacement mapping.

Both subdivision and displacement originally came from the world of rasterization rendering, where on-the-fly geometry generation was historically both easier to implement and more practical/plausible to use. In rasterization, geometry is streamed through the renderer and drawn to screen, so each individual piece of geometry could be subdivided, tessellated, displaced, splatted to the framebuffer, and then discarded to free up memory. Old REYES Renderman was famously efficient at rendering subdivision surfaces and displaced surfaces for precisely this reason. However, in naive ray tracing, rays can intersect geometry at any moment in any order. Subdividing and displacing geometry on the fly for each ray and then discarding the geometry is insanely expensive compared to processing geometry once across an entire framebuffer. The simplest solution to this problem is to just subdivide and displace everything up front and keep it all around in memory during ray tracing. Historically though, just caching everything was never a practical solution since computers simply didn’t have enough memory to keep that much data around. As a result, past research work put significant effort into more intelligent ray tracing architectures that made on-the-fly subdivision/displacement affordable again; notable advancements include geometry caching for ray tracing [Pharr and Hanrahan 1996], direct ray tracing of displacement mapped triangles [Smits et al. 2000], reordered ray tracing [Hanika et al. 2010], and GPU ray traced vector displacement [Harada 2015].

In the past five years or so though, the story on ray traced displacement has changed. We now have machines with gobs and gobs of memory (at a number of studios, renderfarm nodes with 256 GB of memory or more are not unusual anymore). As a result, ray traced renderers don’t need to be nearly as clever anymore about managing displaced geometry; a combination of camera-adaptive tessellation and a simple geometry cache with a least-recently-used eviction strategy is often enough to make ray traced displacement practical. Heavy displacement is now common in the workflows for a number of production pathtracers, including Arnold, Renderman/RIS, Vray, Corona, Hyperion, Manuka, etc. With the above in mind, I tried to implement subdivision and displacement in Takua as simply as I possibly could.

Takua doesn’t have any concept of an eviction strategy for cached tessellated geometry; the hope is that everything just fits in memory, with the renderer being as efficient as possible with whatever memory is available. Admittedly, since Takua is just my hobby renderer instead of a fully in-use production renderer, and I have personal machines with 48 GB of memory, I didn’t think particularly hard about cases where things don’t fit in memory. Instead of tessellating on-the-fly per ray or anything like that, I simply pre-subdivide and pre-displace everything upfront during the initial scene load. Meshes are loaded, subdivided, and displaced in parallel with each other. If Takua discovers that all of the subdivided and displaced geometry isn’t going to fit in the allocated memory budget, the renderer simply quits.
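To make that policy concrete, here is an illustrative-only sketch of the upfront tessellation pass and the memory budget check; the types and callbacks are made up for this post, and the real code runs the loop in parallel across meshes:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <functional>
#include <vector>

// Hypothetical "tessellate everything up front" pass: subdivide and displace
// every mesh at scene load time, track the memory the tessellated geometry
// uses, and bail out if the budget is exceeded. Mesh and the two callbacks are
// stand-ins; nothing here is actual Takua code.
struct Mesh { /* vertex/face/primvar data lives here */ };

void tessellateSceneUpfront(std::vector<Mesh>& meshes,
                            std::size_t memoryBudgetBytes,
                            const std::function<void(Mesh&)>& subdivideAndDisplace,
                            const std::function<std::size_t(const Mesh&)>& memoryFootprint) {
    std::size_t usedBytes = 0;
    for (Mesh& mesh : meshes) {
        subdivideAndDisplace(mesh);         // pre-subdivide and pre-displace now...
        usedBytes += memoryFootprint(mesh);
        if (usedBytes > memoryBudgetBytes) {
            // ...and with no eviction strategy, going over budget is fatal.
            std::fprintf(stderr, "Tessellated geometry exceeds the memory budget; quitting.\n");
            std::exit(EXIT_FAILURE);
        }
    }
}
```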

I should note that Takua’s scene format distinguishes between a mesh and a geom; a mesh is the raw vertex/face/primvar data that makes up a surface, while a geom is an object containing a reference to a mesh along with transformation matrices, shader bindings, and so on and so forth. This separation between the mesh data and the geometric object allows for some useful features in the subdivision/displacement system. Takua’s scene file format allows for binding subdivision and displacement modifiers either at the shader level or per geom. Bindings at the geom level override bindings at the shader level, which is useful for authoring since a whole bunch of objects can share the same shader but then have individual specializations for different subdivision rates and different displacement maps and displacement settings. During scene loading, Takua analyzes what subdivisions/displacements are required for which meshes by which geoms, and then de-duplicates and aggregates any cases where different geoms want the same subdivision/displacement for the same mesh. This de-duplication even works for instances (I should write a separate post about Takua’s approach to instancing someday…).
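As a rough illustration of that de-duplication step, here is a hedged sketch that collapses per-geom tessellation requests by keying on the mesh plus its subdivision/displacement settings. The field names are invented for this post and aren’t Takua’s actual scene schema:

```cpp
#include <map>
#include <string>
#include <tuple>
#include <vector>

// Many geoms can reference the same mesh with the same subdivision and
// displacement settings; those should all share a single tessellated result.
struct TessellationRequest {
    std::string meshName;
    int subdivisionLevel;
    std::string displacementMap; // empty if no displacement is bound
    float displacementScale;
};

using TessellationKey = std::tuple<std::string, int, std::string, float>;

// Returns, for each geom (by index), the index of the unique tessellation job
// it should reference once duplicates have been collapsed.
std::vector<int> deduplicate(const std::vector<TessellationRequest>& perGeomRequests,
                             std::vector<TessellationRequest>& uniqueJobs) {
    std::map<TessellationKey, int> seen;
    std::vector<int> geomToJob;
    geomToJob.reserve(perGeomRequests.size());
    for (const TessellationRequest& req : perGeomRequests) {
        TessellationKey key(req.meshName, req.subdivisionLevel,
                            req.displacementMap, req.displacementScale);
        auto it = seen.find(key);
        if (it == seen.end()) {
            // First time this (mesh, settings) combination has been requested.
            it = seen.emplace(key, static_cast<int>(uniqueJobs.size())).first;
            uniqueJobs.push_back(req);
        }
        geomToJob.push_back(it->second);
    }
    return geomToJob;
}
```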

Once Takua has put together a list of all meshes that require subdivision, meshes are subdivided in parallel. For Catmull-Clark subdivision [Catmull and Clark 1978], I rely on OpenSubdiv for calculating subdivision stencil tables [Halstead et al. 1993] for feature adaptive subdivision [Nießner et al. 2012], evaluating the stencils, and final tessellation. As far as I can tell, stencil calculation in OpenSubdiv is single threaded, so it can get fairly slow on really heavy meshes. Stencil evaluation and final tessellation are super fast though, since OpenSubdiv provides a number of parallel evaluators that can run using a variety of backends ranging from TBB on the CPU to CUDA or OpenGL compute shaders on the GPU. Takua currently relies on OpenSubdiv’s TBB evaluator. One really neat thing about the stencil implementation in OpenSubdiv is that the stencil calculation is dependent on only the topology of the mesh and not individual primvars, so a single stencil calculation can then be reused multiple times to interpolate many different primvars, such as positions, normals, uvs, and more. Currently Takua doesn’t support creases; I’m planning on adding crease support later.
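To show why that reuse works, here is a tiny conceptual example, deliberately written against made-up types rather than the actual OpenSubdiv API: a stencil is just a list of control-vertex indices and weights derived purely from topology, so the same table can be evaluated against positions, normals, uvs, or any other primvar:

```cpp
#include <cstddef>
#include <vector>

// Conceptual illustration (not the OpenSubdiv API): each refined vertex is a
// fixed weighted sum of control vertices, and those weights depend only on
// the mesh topology, not on any particular primvar.
struct Stencil {
    std::vector<int> controlIndices;
    std::vector<float> weights; // same length as controlIndices
};

// Apply one stencil table to any primvar of dimension `dim` (3 for positions
// or normals, 2 for uvs, etc.). controlData holds dim floats per control vertex.
std::vector<float> evaluateStencils(const std::vector<Stencil>& stencils,
                                    const std::vector<float>& controlData,
                                    std::size_t dim) {
    std::vector<float> refined(stencils.size() * dim, 0.0f);
    for (std::size_t i = 0; i < stencils.size(); ++i) {
        const Stencil& s = stencils[i];
        for (std::size_t j = 0; j < s.controlIndices.size(); ++j) {
            for (std::size_t d = 0; d < dim; ++d) {
                refined[i * dim + d] += s.weights[j] * controlData[s.controlIndices[j] * dim + d];
            }
        }
    }
    return refined;
}
```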

No writing about subdivision surfaces is complete without a picture of a cube being subdivided into a sphere, so Figure 2 shows a render of a cube with subdivision levels 0, 1, 2, and 3, going from left to right. Each subdivided cube is rendered with a procedural wireframe texture that I implemented to help visualize what was going on with subdivision.

Figure 2: A cube with 0, 1, 2, and 3 subdivision levels, going from left to right.

Each subdivision result is stored as a new mesh; base meshes that require multiple subdivision levels for multiple different geoms get one new subdivided mesh per subdivision level. After all subdivided meshes are ready, Takua then runs displacement. Displacement is parallelized both across meshes and within each mesh. Also, Takua supports both on-the-fly displacement and fully cached displacement, which can be specified per shader or per geom. If a mesh is marked for full caching, the mesh is fully displaced, stored as a separate mesh from the undisplaced subdivision mesh, and then a BVH is built for the displaced mesh. If a mesh is marked for on-the-fly displacement, the displacement system calculates each displaced face, then calculates the bounds for that face, and then discards the face. The displaced bounds are then used to build a tight BVH for the displaced mesh without actually having to store the displaced mesh itself; instead, just a reference to the undisplaced subdivision mesh has to be kept around. When a ray traverses the BVH for an on-the-fly displacement mesh, each BVH leaf node specifies which triangles on the undisplaced mesh need to be displaced to produce final polys for intersection, and then the displaced polys are intersected and discarded again. For the scenes in this post, on-the-fly displacement seems to be about twice as slow as fully cached displacement, which is to be expected, but if the same mesh is displaced multiple different ways, then there are correspondingly large memory savings. After all displacement has been calculated, Takua goes back and analyzes which base meshes and undisplaced subdivision meshes are no longer needed, and frees those meshes to reclaim memory.
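The key observation behind the on-the-fly mode is that building a tight BVH only requires displaced bounds, not the displaced geometry itself. A rough sketch of that bounding pass, with simplified stand-in types and a placeholder displaceVertex(), might look something like this:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <limits>
#include <vector>

// Illustrative types only; Takua's actual mesh and BVH structures differ.
struct Vec3 { float x, y, z; };
struct Face { std::array<int, 3> indices; };

struct Bounds {
    Vec3 min{ std::numeric_limits<float>::max(),  std::numeric_limits<float>::max(),
              std::numeric_limits<float>::max() };
    Vec3 max{ -std::numeric_limits<float>::max(), -std::numeric_limits<float>::max(),
              -std::numeric_limits<float>::max() };
    void extend(const Vec3& p) {
        min = { std::min(min.x, p.x), std::min(min.y, p.y), std::min(min.z, p.z) };
        max = { std::max(max.x, p.x), std::max(max.y, p.y), std::max(max.z, p.z) };
    }
};

// Placeholder: a real implementation would evaluate the displacement map here.
Vec3 displaceVertex(const Vec3& p) { return p; }

std::vector<Bounds> boundDisplacedFaces(const std::vector<Vec3>& undisplacedVerts,
                                        const std::vector<Face>& faces) {
    std::vector<Bounds> faceBounds(faces.size());
    for (std::size_t f = 0; f < faces.size(); ++f) {
        for (int idx : faces[f].indices) {
            faceBounds[f].extend(displaceVertex(undisplacedVerts[idx]));
        }
        // The displaced positions are discarded here; at intersection time the
        // same displacement is re-run for just the faces in the hit BVH leaf.
    }
    return faceBounds; // used to build a tight BVH over the displaced surface
}
```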

I implemented support for both scalar displacement via regular grayscale texture maps, and vector displacement from OpenEXR textures. The ocean render from the start of this post uses vector displacement applied to a single plane. Figure 3 shows another angle of the same vector displaced ocean:

Figure 3: Another view of the vector displaced ocean surface from Figure 1. The ocean surface has a dielectric refractive material complete with colored attenuated transmission. A shallow depth of field is used to lend added realism.

For both ocean renders, the vector displacement OpenEXR texture is borrowed from Autodesk, who generously provide it as part of an article about vector displacement in Arnold. The renders are lit with a skydome using hdri-skies.com’s HDRI Sky 193 texture.

For both scalar and vector displacement, the displacement amount from the displacement texture can be controlled by a single scalar value. Vector displacement maps are assumed to be in a local tangent space; which axis is used as the basis of the tangent space can be specified per displacement map. Figure 4 shows three dirt shaderballs with varying displacement scaling values. The leftmost shaderball has a displacement scale of 0, which effectively disables displacement. The middle shaderball has a displacement scale of 0.5, which halves the native displacement values in the vector displacement map. The rightmost shaderball has a displacement scale of 1.0, which means the native displacement values from the vector displacement map are used as-is.

Figure 4: Dirt shaderballs with displacement scales of 0.0, 0.5, and 1.0, going from left to right.
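The math for applying a tangent-space vector displacement with the scale values shown in Figure 4 is tiny; here is a hedged sketch with a made-up Vec3 type, assuming the map’s three components correspond to the tangent, bitangent, and normal axes (as mentioned above, the axis convention is selectable per map):

```cpp
// Minimal illustration: the sampled displacement vector is interpreted in the
// local (tangent, bitangent, normal) frame, scaled by a user scalar, and added
// to the vertex position. A scale of 0 disables displacement; a scale of 1
// uses the map's native magnitude.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
};

Vec3 applyVectorDisplacement(const Vec3& position,
                             const Vec3& tangent,
                             const Vec3& bitangent,
                             const Vec3& normal,
                             const Vec3& displacementSample, // sampled from the EXR map
                             float displacementScale) {
    // Rotate the tangent-space displacement into world space, then scale it.
    Vec3 worldOffset = tangent * displacementSample.x +
                       bitangent * displacementSample.y +
                       normal * displacementSample.z;
    return position + worldOffset * displacementScale;
}
```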

Figure 5 shows a closeup of the rightmost dirt shaderball from Figure 4. The base mesh for the shaderball is relatively low resolution, but through subdivision and displacement, a huge amount of geometric detail can be added in-render. In this case, the shaderball is tessellated to a point where each individual micropolygon is at a subpixel size. The model for the shaderball is based on Bertrand Benoit’s shaderball. The displacement map and other textures for the dirt shaderball are from Quixel’s Megascans library.

Figure 5: Closeup of the dirt shaderball from Figure 4. In this render, the shaderball is tessellated and displaced to a subpixel resolution.

One major challenge with displacement mapping is cracking. Cracking occurs when adjacent polygons displace the same shared vertices in different ways. This can happen when the normals across a surface aren’t continuous, or if there is a discontinuity either in how the displacement texture is mapped to the surface, or in the displacement texture itself. I implemented an optional, somewhat brute-force solution to displacement cracking. If crack removal is enabled, Takua analyzes the mesh at displacement time and records how many different ways each vertex in the mesh has been displaced by different faces, along with which faces want to displace that vertex. After an initial displacement pass, the crack remover then goes back, and for every vertex that is displaced more than one way, all of the displacements are averaged into a single displacement, and all faces that use that vertex are updated to share the same averaged result. This approach requires a fair amount of bookkeeping and pre-analysis of the displaced mesh, but it seems to work well. Figure 6 is a render of two cubes with geometric normals assigned per face. The two cubes are displaced using the same checkerboard displacement pattern, but the cube on the left has crack removal disabled, while the cube on the right has crack removal enabled:

Figure 6: Displaced cubes with and without crack elimination.
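A minimal sketch of the averaging step that produces the crack-free cube on the right, using illustrative types rather than Takua’s actual bookkeeping structures, might look like this:

```cpp
#include <cstddef>
#include <vector>

// For each shared vertex, collect every displaced position proposed by the
// faces that touch it, then replace them all with their average so adjacent
// faces agree on a single displaced position and no gap can open up.
struct Vec3 {
    float x = 0.0f, y = 0.0f, z = 0.0f;
};

void removeCracks(const std::vector<std::vector<Vec3>>& proposedPositionsPerVertex,
                  std::vector<Vec3>& finalPositions) {
    finalPositions.resize(proposedPositionsPerVertex.size());
    for (std::size_t v = 0; v < proposedPositionsPerVertex.size(); ++v) {
        const std::vector<Vec3>& proposals = proposedPositionsPerVertex[v];
        if (proposals.empty()) continue;
        Vec3 average;
        for (const Vec3& p : proposals) {
            average.x += p.x; average.y += p.y; average.z += p.z;
        }
        float inv = 1.0f / static_cast<float>(proposals.size());
        average.x *= inv; average.y *= inv; average.z *= inv;
        // Every face that references vertex v now reads this single averaged
        // position instead of its own version.
        finalPositions[v] = average;
    }
}
```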

In most cases, the crack removal system seems to work pretty well. However, the system isn’t perfect; sometimes, stretching artifacts can appear, especially on surfaces with a textured base color. This stretching happens because the crack removal system basically stretches micropolygons to cover the crack. This texture stretching can be seen in some parts of the shaderballs in Figures 5, 7, and 8 in this post.

Takua automatically recalculates normals for subdivided/displaced polygons. By default, Takua simply uses the geometric normal as the shading normal for displaced polygons; however, an option exists to calculate smooth normals for the shading normals as well. I chose to use geometric normals as the default with the hope that for subpixel subdivision and displacement, a different shading normal wouldn’t be as necessary.

In the future, I may choose to implement my own subdivision library, and I should probably also put more thought into some kind of proper combined tessellation cache and eviction strategy for better memory efficiency. For now though, everything seems to work well and renders relatively efficiently; the non-ocean renders in this post all have sub-pixel subdivision with millions of polygons and each took several hours to render at 4K (3840x2160) resolution on a machine with dual Intel Xeon X5675 CPUs (12 cores total). The two ocean renders I let run overnight at 1080p resolution; they took longer to converge mostly due to the depth of field. All renders in this post were shaded using a new, vastly improved shading system that I’ll write about at a later point. Takua can now render a lot more complexity than before!

In closing, I rendered a few more shaderballs using various displacement maps from the Megascans library, seen in Figures 7 and 8.

Figure 7: A pebble sphere and a leafy sphere. Note the overhangs on the leafy sphere, which are only possible using vector displacement.

Figure 8: A compacted sand sphere and a stone sphere. Unfortunately, there is some noticeable texture stretching on the compacted sand sphere where crack removal occurred.

References

Edwin E. Catmull and James H. Clark. 1978. Recursively Generated B-spline Surfaces on Arbitrary Topological Meshes. Computer-Aided Design. 10, 6 (1978), 350-355.

Mark Halstead, Michael Kass, and Tony DeRose. 1993. Efficient, Fair Interpolation using Catmull-Clark Surfaces. In SIGGRAPH 1993: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. 35-44.

Johannes Hanika, Alexander Keller, and Hendrik P. A. Lensch. 2010. Two-Level Ray Tracing with Reordering for Highly Complex Scenes. In GI 2010 (Proceedings of Graphics Interface 2010). 145-152.

Takahiro Harada. 2015. Rendering Vector Displacement Mapped Surfaces in a GPU Ray Tracer. In GPU Pro 6. 459-474.

Matthias Nießner, Charles Loop, Mark Meyer, and Tony DeRose. 2012. Feature Adaptive GPU Rendering of Catmull-Clark Subdivision Surfaces. ACM Transactions on Graphics. 31, 1 (2012), 6:1-6:11.

Matt Pharr and Pat Hanrahan. 1996. Geometry Caching for Ray-Tracing Displacement Maps. In Rendering Techniques 1996 (Proceedings of the 7th Eurographics Workshop on Rendering). 31-40.

Brian Smits, Peter Shirley, and Michael M. Stark. 2000. Direct Ray Tracing of Displacement Mapped Triangles. In Rendering Techniques 2000 (Proceedings of the 11th Eurographics Workshop on Rendering). 307-318.

Moana

2016 is the first year ever that Walt Disney Animation Studios is releasing two CG animated films. We released Zootopia back in March, and next week, we will be releasing our newest film, Moana. I’ve spent the bulk of the last year and a half working as part of Disney’s Hyperion Renderer team on a long list of improvements and new features for Moana. Moana is the first film I have an official credit on, and I couldn’t be more excited for the world to see what we have made!

We’re all incredibly proud of Moana; the story is fantastic, the characters are fresh and deep and incredibly appealing, and the music is an instant classic. Most important for a rendering guy though, I think Moana is flat out the best looking animated film anyone has ever made. Every single department on this film really outdid themselves. The technology that we had to develop for this film was staggering; we have a whole new distributed fluid simulation package for the endless oceans in the film, we added advanced new lighting capabilities to Hyperion that have never been used in an animated film before to this extent (to the best of my knowledge), we made huge advances in our animation technology for characters such as Maui; the list goes on and on and on. Something like over 85% of the shots in this movie have significant FX work in them, which is unheard of for animated features.

Hyperion gained a number of major new capabilities in support of making Moana. Rendering the ocean was a major concern on Moana, so much of Hyperion’s development during Moana revolved around features related to rendering water. Our lighters wanted caustics in all shots with shallow water, such as shots set at the beach or near the shoreline; faking caustics was quickly ruled out as an option since setting up lighting rigs with fake caustics that looked plausible and visually pleasing proved to be difficult and laborious. We found that providing real caustics was vastly preferable to faking things, both from a visual quality standpoint and an artist workflow standpoint, so we wound up adding a photon mapping system to Hyperion. The design of the photon mapping system is highly optimized around handling sun-water caustics, which allows for some major performance optimizations, such as an adaptive photon distribution system that makes sure that photons are not wasted on off-camera parts of the scene. Most of the photon mapping system was written by Peter Kutz; I also got to work on the photon mapping system a bit.

Water is in almost every shot in the film in some form, and the number of water effects was extremely varied, ranging from the ocean surface going out for dozens of miles in every direction, to splashes and boat wakes [Stomakhin and Selle 2017] and other finely detailed effects. Water had to be created using a host of different techniques, from relatively simple procedural wave functions [Garcia et al. 2016], to hand-animatable rigged wave systems [Byun and Stomakhin 2017], all the way to huge complex fluid simulations using Splash, a custom in-house APIC-based fluid simulator [Jiang et al. 2015]. We even had to support water as a straight up rigged character [Frost et al. 2017]! In order to bring the results of all of these techniques together into a single renderable water surface, an enormous amount of effort was put into building a level-set compositing system, in which all water simulation results would be converted into signed distance fields that could then be combined and converted into a watertight mesh. Having a single watertight mesh was important, since the ocean often also contained a homogeneous volume to produce physically correct scattering; this is where all of the blues and greens in ocean water come from. This entire system could be run by Hyperion at rendertime, or could be run offline beforehand to generate a cached result that Hyperion could load; a whole complex pipeline had to be built to support this capability [Palmer et al. 2017]. Building this level-set compositing and meshing system involved a large number of TDs and engineers; on the Hyperion side, this project was led by Ralf Habel, Patrick Kelly, and Andy Selle. Peter and I also helped out at various points.
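The production system is obviously far more involved than anything I can show here, but the core compositing operation on signed distance fields is conceptually just a pointwise minimum. A toy dense-grid version of that union, purely for illustration (the real pipeline operated on sparse VDB grids and handled many more details), looks like this:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Toy illustration of level-set compositing: both inputs are signed distance
// fields sampled on the same grid, with negative values inside the water.
// Taking the pointwise minimum keeps a point "inside" if it is inside either
// input, i.e. a CSG union of the two level sets. (The resulting surface is an
// exact union; distances away from the surface are only approximate.)
std::vector<float> unionSignedDistanceFields(const std::vector<float>& sdfA,
                                             const std::vector<float>& sdfB) {
    std::vector<float> result(sdfA.size());
    for (std::size_t i = 0; i < sdfA.size(); ++i) {
        result[i] = std::min(sdfA[i], sdfB[i]);
    }
    return result; // this combined level set is what gets meshed into a single watertight surface
}
```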

At one point early in the film’s production, we noticed that our lighters were having a difficult time getting specular glints off of the ocean surface to look right. For artistic controllability reasons, our lighters prefer to keep the sun and the skydome as two separate lights; the skydome is usually an image-based light that is either painted or is from photography with the sun painted out, and the sun is usually a distant infinite light that subtends some solid angle. After a lot of testing, we found that the look of specular glints on the ocean surface comes partially from the sun itself, but also partially from the atmospheric scattering that makes the sun look hazy and larger in the sky than it actually is. To get this look, I added a system to analytically add a Mie-scattering halo around our distant lights; we called the result the “halo light”.

Up until Moana, Hyperion actually never had proper importance sampling for emissive meshes; we just relied on paths randomly finding their way to emissive meshes and only worried about importance sampling analytical area lights and distant infinite lights. For shots with the big lava monster Te-Ka [Bryant et al. 2017], however, most of the light in the frame came from emissive lava meshes, and most of what was being lit were complex, dense smoke volumes. Peter added a highly efficient system for importance sampling emissive meshes into the renderer, which made Te-Ka shots go from basically un-renderable to not a problem at all. David Adler also made some huge improvements to our denoiser’s ability to handle volumes to help with those shots.

Hyperion also saw a huge number of other improvements during Moana; Dan Teece and Matt Chiang made numerous improvements to the shading system, I reworked the ribbon curve intersection system to robustly handle Heihei’s and hawk-Maui’s feathers, Greg Nichols made our camera-adaptive tessellation more robust, and the team in general made many speed and memory optimizations. Throughout the whole production cycle, Hyperion partnered really closely with production to make Moana the most beautiful animated film we’ve ever made. This close partnership is what makes working at Disney Animation such an amazing, fun, and interesting experience.

The first section of the credits sequence in Moana showcases a number of the props that our artists made for the film. I highly recommend staying and staring at all of the eye candy; our look and modeling departments are filled with some of the most dedicated and talented folks I’ve ever met. The props in the credits have simply preposterous amounts of detail on them; every single prop has stuff like tiny little flyaway fibers or microscratches or imperfections or whatnot on them. In some of the international posters, one can see that all of the human characters are covered with fine peach fuzz (an important part of making their skin catch the sunlight correctly), which we rendered in every frame! Something that we’re really proud of is the fact that none of the credit props were specially modeled for the credits! Those are all the exact props we used in every frame that they show up in, which really is a testament to both how amazing our artists are and how much work we’ve put into every part of our technology. The vast majority of production for Moana happened in essentially the 9 months between Zootopia’s release in March and October; this timeline becomes even more astonishing given the sheer beauty and craftsmanship in Moana.

Below are a number of stills (in no particular order) from the movie, 100% rendered using Hyperion. These stills give just a hint at how beautiful this movie looks; definitely go see it on the biggest screen you can find!

Here is a credits frame with my name that Disney kindly provided! Most of the Hyperion team is grouped under the Rendering/Pipeline/Engineering Services (three separate teams under the same manager) category this time around, although a handful of Hyperion guys show up in an earlier part of the credits instead.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

Addendum 2018-08-18: A lot more detailed information about the photon mapping system, the level-set compositing system, and the halo light is now available as part of our recent TOG paper on Hyperion [Burley et al. 2018].

References

Marc Bryant, Ian Coony, and Jonathan Garcia. 2017. Moana: Foundation of a Lava Monster. In ACM SIGGRAPH 2017, Talks. 10:1-10:2.

Brent Burley, David Adler, Matt Jen-Yuan Chiang, Hank Driskill, Ralf Habel, Patrick Kelly, Peter Kutz, Yining Karl Li, and Daniel Teece. 2018. The Design and Evolution of Disney’s Hyperion Renderer. ACM Transactions on Graphics. 37, 3 (2018), 33:1-33:22.

Dong Joo Byun and Alexey Stomakhin. 2017. Moana: Crashing Waves. In ACM SIGGRAPH 2017, Talks. 41:1-41:2.

Ben Frost, Alexey Stomakhin, and Hiroaki Narita. 2017. Moana: Performing Water. In ACM SIGGRAPH 2017, Talks. 30:1-30:2.

Jonathan Garcia, Sara Drakeley, Sean Palmer, Erin Ramos, David Hutchins, Ralf Habel, and Alexey Stomakhin. 2016. Rigging the Oceans of Disney’s Moana. In ACM SIGGRAPH Asia 2016, Technical Briefs. 30:1-30:4.

Chenfanfu Jiang, Craig Schroeder, Andrew Selle, Joseph Teran, and Alexey Stomakhin. 2015. The Affine Particle-in-Cell Method. ACM Transactions on Graphics. 34, 4 (2015), 51:1-51:10.

Sean Palmer, Jonathan Garcia, Sara Drakeley, Patrick Kelly, and Ralf Habel. 2017. The Ocean and Water Pipeline of Disney’s Moana. In ACM SIGGRAPH 2017, Talks. 29:1-29:2.

Alexey Stomakhin and Andy Selle. 2017. Fluxed Animated Boundary Method. ACM Transactions on Graphics. 36, 4 (2017), 68:1-68:8.

Physically Based Rendering 3rd Edition

Today is the release date for the digital version of the new Physically Based Rendering 3rd Edition, by Matt Pharr, Wenzel Jakob, and Greg Humphreys. As anyone in the rendering world knows, Physically Based Rendering is THE reference book for the field; for novices, Physically Based Rendering is the best introduction one can get to the field, and for experts, Physically Based Rendering is an invaluable reference book to consult and check. I share a large office with three other engineers on the Hyperion team, and I think between the four of us, we actually have an average of more than one copy per person (of varying editions). I could not recommend this book enough. The third edition adds Wenzel Jakob as an author; Wenzel is the author of the research-oriented Mitsuba Renderer and is one of the most prominent new researchers in rendering in the past decade. There is a lot of great new light transport stuff in the third edition, which I’m guessing comes from Wenzel. Both Wenzel’s work and the previous editions of Physically Based Rendering were instrumental in influencing my path in rendering, so of course I’ve already had the third edition on pre-order since it was announced over a year ago.

Each edition of Physically Based Rendering is accompanied by a major release of the PBRT renderer, which implements the book. The PBRT renderer is a major research resource for the community, and basically everyone I know in the field has learned something or another from looking through and taking apart PBRT. As part of the drive towards PBRT-v3, Matt Pharr made a call for interesting scenes to provide as demo scenes with the PBRT-v3 release. I offered Matt the PBRT-v2 scene I made a while back, because how could that scene not be rendered in PBRT? I’m very excited that Matt accepted and included the scene as part of PBRT-v3’s example scenes! You can find the example scenes here on the PBRT website.

Converting the scene to PBRT’s format required a lot of manual work, since PBRT’s scene specification and shading system is very different from Takua’s. As a result, the image that PBRT renders out looks slightly different from Takua’s version, but that’s not a big deal. Here is the scene rendered using PBRT-v3:

Physically Based Rendering 2nd Edition, rendered using PBRT-v3.

…and for comparison, the same scene rendered using Takua:

Physically Based Rendering 2nd Edition, rendered using Takua Renderer a0.5.

Really, it’s just the lighting that is a bit different; the Takua version is slightly warmer and slightly underexposed in comparison.

At some point I should make an updated version of this scene using the third edition book. I’m hoping to be able to contribute more of my Takua test scenes to the community in PBRT-v3 format in the future; giving back to such a major influence on my own career is extremely important. As part of the process of porting the scene over to PBRT-v3, I also wrote a super-hacky render viewer for PBRT that shows the progress of the render as the renderer runs. Unfortunately, this viewer is mega-hacky, and I don’t have time at the moment to clean it up and release it. Hopefully at some point I’ll be able to; alternatively, anyone else who wants to take a look and give it a stab, feel free to contact me.


Addendum 2017-04-28: Matt was recently looking for some interesting water-sim scenes to demonstrate dielectrics and glass materials and refraction and whatnot. I contributed a few frames from my PIC/FLIP fluid simulator, Ariel. Most of the data from Ariel doesn’t exist in meshed format anymore; I still have all of the raw VDBs and stuff, but the meshes took up way more storage space than I could afford at the time. I can still regenerate all of the meshes though, and I also have a handful of frames in mesh form still from my attenuated transmission blog post. The frame from the first image in that post is now also included in the PBRT-v3 example scene suite! The PBRT version looks very different since it is intended to demonstrate and test something very different from what I was doing in that blog post, but it still looks great!

A frame from my Ariel fluid simulator, rendered using PBRT-v3.

Rendering Minecraft in Renderman/RIS

The vast majority of my computer graphics time is spent developing renderers (Disney’s Hyperion renderer as a professional, Takua Renderer as a hobbyist). However, I think having experience using renderers as an artist is an important part of knowing what to focus on as a renderer developer. I also think that knowing how a variety of different renderers work and how they are used is important; a lot of artists are used to using several different renderers, and each renderer has its own vocabulary and tried and true workflows and whatnot. Finally, there are a lot of really smart people working on all of the major production renderers out there, and seeing the cool things everyone is doing is fun and interesting! Because of all of these reasons, I like putting some time aside every once in a while to tinker with other renderers. I usually don’t write about my art projects that much anymore, but this project was particularly fun and produced some nice looking images, so I thought I’d write it up. As usual, before we dive into the post, here is the final image I made, rendered using Pixar’s Photorealistic Renderman 20 in RIS mode:

A Minecraft town from the pve.nerd.nu Minecraft server, rendered in Renderman 20/RIS.

About two years ago, Pixar’s Photorealistic Renderman got a new rendering mode called RIS. PRman was one of the first production renderers ever developed, and historically PRman has always been a REYES-style rasterization renderer. Over time though, PRman has gained a whole bunch of added-on features. At the time of Monsters University, PRman was actually a kind of hybrid rasterizer and raytracer; the rendering system on Monsters University used raytracing to build a multiresolution radiosity cache that was then used for calculating GI contributions in the shading part of REYES rasterization. That approach worked well and produced beautiful images, but it was also really complicated and had a number of drawbacks! RIS replaces all of that with a brand new, pure pathtracing system. In fact, while RIS is marketed as a new mode in PRman, RIS is actually a completely new renderer written almost entirely from scratch; it just happens to be able to read Renderman RIB files as input.

Recently, I wanted to try rendering a Minecraft world from a Minecraft server that I play on. There are a lot of great Minecraft rendering tools available these days (Chunky comes to mind), but I wanted much more production-like control over the look of the render, so I decided to do everything using a normal CG production workflow instead of a prebuilt dedicated Minecraft rendering tool. I thought that I would use the project as a chance to give RIS a spin. At Cornell’s Program of Computer Graphics, Pixar was kind enough to provide us with access to the Renderman 19 beta program, which included the first version of RIS. I tinkered with the PRman 19 beta quite a lot at Cornell, and being an early beta, RIS had some bugs and incomplete bits back then. Since then though, the Renderman team has followed up PRman 19 with versions 20 and 21, which introduced a number of new features and speed/stability improvements to RIS. Since joining the Hyperion team, I’ve had the chance to meet and talk to various (really smart!) folks on the Renderman team since they are a sister team to us, but I haven’t actually had time to try the new versions of RIS. This project was a fun way to try the newest version of RIS on my own!

The Minecraft data for this project comes from the Nerd.nu community Minecraft server, which is run by a collective of players for free. I’ve been playing on the Nerd.nu PvE (Player versus Environment) server for years and years now, and players have built a mind-boggling number of amazing detailed creations. Every couple of months, the server is reset with a fresh map; I wanted to render a town that fellow player Avi_Dangerstein and I built on the previous map revision. Fortunately, all previous Nerd.nu map revisions are available for download in the server archives (the specific map I used is labeled pve-rev17). Here is an overview of the map revision I wanted to pull data from:

Cartograph view of Revision 17 of the Nerd.nu PvE server, located at p.nerd.nu. Click through to go to the full, zoomable cartograph.

…and here is a zoomed in view of the part of the map that contains our town. The vast majority of the town was built by two players over the course of about 4 months. Our town is about 250 blocks long; the entire server map is a 6000 block by 6000 block square.

Zoomed cartograph view of our Minecraft town.

The first problem to tackle in this project was just getting Minecraft world data into a usable format. Pixar provides a free, non-commercial version of Renderman for Maya, and I’m very familiar with Maya, so my entire workflow for this project was based around good ol’ Maya. Maya doesn’t know how to read Minecraft data though… in fact, Minecraft’s chunked data format is a fascinating rabbit hole to read about in its own right. I briefly entertained the idea of writing my own Minecraft to Maya importer, but then I found a number of Minecraft to Obj exporters that other folks have already written. I first tried jmc2obj, but the section of the Minecraft world that I wanted to export was so large that jmc2obj kept running out of memory and crashing. Instead, I found that Eric Haines’s Mineways exporter was able to handle the data load well (incidentally, Eric Haines is also a Cornell Program of Computer Graphics alum; I inherited a pile of his ACM Transactions on Graphics hardcopies while at Cornell). The chunk of the world I wanted to export was pretty large; in the Mineways screenshot below, the area outlined in red is the part of the world that I wanted:

Section of the map for export is outlined in red.

The area outlined above is significantly larger than the area I wound up rendering… initially I was thinking of a very different camera angle from the ground with the mountains in the background before I picked an aerial view much later. The size of the exported obj mesh was about 1.5 GB. Mineways exports the world as a single mesh, optimized to remove all completely occluded internal faces (so the final mesh is hollow instead of containing useless faces for all of the internal blocks). Each visible block face is uv’d into a corresponding square on a single texture file. This approach produces an efficient mesh, but I realized early on that I would need water in a separate mesh containing completely enclosed volumes for each body of water (Mineways only provides geometry for the top surface of water). Glass had to be handled similarly; both water and glass need special handling for the same reasons that I mentioned immediately after the first diagram in my attenuated transmission blog post. Mineways allows for exporting different block types as separate meshes (but still with internal faces removed), so I simply deleted the water and glass meshes after exporting. Luckily, jmc2obj allows exporting individual block types as closed meshes, so I went back to jmc2obj for just the water and glass. Since just the water and glass is a much smaller data set than the whole world, jmc2obj was able to export without a problem. Since rendering refractive interfaces correctly requires expanding out the refractive mesh slightly at the interfaces, I wrote a custom program based on Takua Renderer’s obj mesh processing library to push out all of the vertices of the water and glass meshes slightly along the average of the face normals at each vertex.
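The push-out step itself is conceptually simple; here is a hedged sketch of the idea with simplified stand-in types (not the actual program, which runs on Takua’s obj mesh processing library):

```cpp
#include <cmath>
#include <cstddef>
#include <initializer_list>
#include <vector>

// Push every vertex of the water/glass mesh outward slightly along the average
// of the normals of the faces that share it, so that refractive interfaces
// slightly overlap neighboring geometry instead of leaving gaps.
struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };
struct Triangle { int v0, v1, v2; };

static Vec3 faceNormal(const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 e1{ b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 e2{ c.x - a.x, c.y - a.y, c.z - a.z };
    return { e1.y * e2.z - e1.z * e2.y,
             e1.z * e2.x - e1.x * e2.z,
             e1.x * e2.y - e1.y * e2.x };
}

void pushOutAlongAveragedNormals(std::vector<Vec3>& vertices,
                                 const std::vector<Triangle>& triangles,
                                 float pushDistance) {
    std::vector<Vec3> accumulated(vertices.size());
    for (const Triangle& t : triangles) {
        Vec3 n = faceNormal(vertices[t.v0], vertices[t.v1], vertices[t.v2]);
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len == 0.0f) continue;
        n.x /= len; n.y /= len; n.z /= len; // accumulate unit face normals
        for (int v : { t.v0, t.v1, t.v2 }) {
            accumulated[v].x += n.x; accumulated[v].y += n.y; accumulated[v].z += n.z;
        }
    }
    for (std::size_t i = 0; i < vertices.size(); ++i) {
        Vec3& n = accumulated[i];
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len == 0.0f) continue;
        // Move each vertex a small, fixed distance along its averaged normal.
        vertices[i].x += pushDistance * n.x / len;
        vertices[i].y += pushDistance * n.y / len;
        vertices[i].z += pushDistance * n.z / len;
    }
}
```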

Next up was shading everything in Maya. Renderman 20 ships with an implementation of Disney’s Principled Brdf, which I’ve gotten very familiar with using, so I went with Renderman’s PxrDisney Bxdf. The Disney Brdf allows for quickly creating very good looking materials using a fairly small parameter set. Overall I tried to stick close to the in-game aesthetic, which meant using all of the standard in-game textures instead of a custom resource pack, and I also wound up having to rein back a bit on making materials look super realistic. Everything basically has some varied roughness and specularity, and that’s pretty much it. I did add a subtle bump map to everything though; I made the bump map by simply making a black and white version of the default texture pack and messing with the brightness and contrast a bit. Below is a render of a test world created by Minecraft Forum user QMagnet specifically for testing resource packs. I lit the test scene using a single IBL (HDRI Sky 141 from the HDRI-Skies library). The test render below isn’t using the final specialized water and leaf shaders I created, which I’ll describe a bit further down, and there are also some resolution problems on the alpha masks for the leaf blocks, but overall this test render gives an idea of what my final materials look like:

Final materials on a resource pack test world.

One detail worth going into a bit more is the glowing blocks. The glowstone, lantern, and various torch blocks use a trick based on something that I have seen lighters use in production. The basic idea is to decouple the direct and indirect visibility for the light. I got this decoupling to work in RIS by making all of the glowing blocks into pairs of textured PxrMeshLights. Using PxrMeshLights is necessary in order to allow for efficient light sampling; however, the actual exposures the lights are at make the textures blow out in camera. In order to make the textures discernible in camera, a second PxrMeshLight is needed for each glowing object; one of the lights is at the correct exposure but is marked visible to only indirect rays and invisible to direct camera rays, and the other light is at a much lower exposure but is only visible to direct camera rays. This trick is a totally non-physical cheaty-hack, but it allows for a believable visual appearance if the exposures are chosen carefully.

In the final renders a few pictures down, I also used a more specialized shader for leaves and vines and tall grass and whatnot. The leaf block shader uses a PxrLMPlastic material instead of PxrDisney; this is because the leaf block shader has a slight amount of diffuse transmission (translucency) and also has more specialized diffuse/specular roughness maps.

For the water shader in the final render, I used a PxrLMGlass material with an IOR of 1.325, a slightly blue tinted refraction color, and a light blue absorption color. Using slightly different colors for the refraction and absorption colors allows for the water to transition to a slightly different hue at deeper depths than at the surface (as opposed to just a more saturated version of the same color). I also added a simple water surface displacement map to get the wavy surface effect. Combined with the refractive interface stuff mentioned before, the final water looks like this:

Water test render, using a PxrLMGlass material. Unfortunately, no true caustics here...

Note the total lack of real caustics in the water… I wound up just using the basic pathtracer built into RIS instead of Pixar’s VCM implementation. Pixar’s VCM implementation is one of the first commercial VCM implementations out there, but as of Renderman 20, it has no adaptivity in its light path distribution whatsoever. As a result, the Renderman 20 VCM integrator is not really suitable for use on huge scenes; most of the light paths end up getting wasted on areas of the scene that aren’t even close to being in-camera, which means that all of the efficiency in rendering caustics is lost. This problem is fundamental to lighttracing-based techniques (meaning that bidirectional techniques inherit the same problem), and solving it remains a relatively open problem (Takua has some basic photon targeting mechanisms for PPM/VCM that I’ll write about at some point). Apparently, this large-scene problem was a major challenge on Finding Dory and is one of the main reasons why Pixar didn’t use VCM heavily on Dory; Dory relied mostly on projected and pre-baked caustics.

I should also note that Renderman 21 does away with the PxrLM and PxrDisney materials entirely and instead introduces the shader set that Christophe Hery and Ryusuke Villemin wrote for Finding Dory. I haven’t tried the Renderman 21 shading system yet, but I would be very curious to compare against our Disney Brdf.

The final lighting setup I used was very simple. There are two main lights in the scene: an IBL dome light for sky illumination, and a 0.5 degree distant light as a sun stand-in. The IBL is another free sky from the HDRI-Skies library; this time, I used HDRI Sky 84. There is also a third spotlight used for getting long, dramatic shadows out of the fog, which I’ll talk about a bit later. Here is a lighting test with just the dome and distant lights on a grey clay version of the scene:

Grey clay render lit using the final distant and dome light setup.
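As a rough illustration of the rig (not the literal script I used), the two main lights could be set up with maya.cmds along these lines. I’m using the PxrDomeLight/PxrDistantLight node names from newer RenderMan releases here since Renderman 20’s light nodes go by different names, and the file path and exposure value are placeholders:

```python
# Illustrative only: the dome + distant light rig, using newer RenderMan node
# names (Renderman 20's equivalents are named differently). The HDR path and
# exposure value are placeholders.
import maya.cmds as cmds

# Sky illumination: an HDR environment mapped onto a dome light.
dome = cmds.shadingNode('PxrDomeLight', asLight=True, name='skyDome')
cmds.setAttr(dome + '.lightColorMap', '/path/to/hdri_sky_084.hdr', type='string')

# Sun stand-in: a distant light with a 0.5 degree angular extent for
# slightly soft sun shadows.
sun = cmds.shadingNode('PxrDistantLight', asLight=True, name='sunDistant')
cmds.setAttr(sun + '.angleExtent', 0.5)
cmds.setAttr(sun + '.exposure', 4.0)  # example value, tuned by eye
```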

For efficiency reasons, I broke out the fog into a separate pass entirely and added it back in comp afterwards. At the time that I did this project, Renderman 20’s volume system was still relatively new (Renderman 21 introduces a significantly overhauled, much faster volume system, but it wasn’t out yet at the time), and while perfectly capable, it wasn’t necessarily super fast. Iterating on the look of the fog separately from the main render was simply a more efficient workflow. Here is the raw render directly out of RIS:

Raw render of the main render pass, straight out of RIS.
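Conceptually, adding the fog pass back over this raw render is just a premultiplied “over” operation in comp. Here is a toy sketch of that math using OpenImageIO’s Python bindings and numpy; the filenames are placeholders, and this is not my actual comp setup:

```python
# Toy sketch: composite the fog pass over the raw beauty pass with a
# premultiplied "over". Filenames are placeholders.
import OpenImageIO as oiio
import numpy as np

def read_pixels(path):
    # Read an EXR into a (height, width, channels) float array.
    img = oiio.ImageInput.open(path)
    spec = img.spec()
    pixels = np.asarray(img.read_image()).reshape(spec.height, spec.width, spec.nchannels)
    img.close()
    return pixels

beauty = read_pixels('main_pass.exr')  # placeholder filename
fog = read_pixels('fog_pass.exr')      # placeholder filename

# Premultiplied over: result = fog + beauty * (1 - fog_alpha)
fog_alpha = fog[..., 3:4]
comp = fog[..., :3] + beauty[..., :3] * (1.0 - fog_alpha)
```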

For the fog, I initially wanted to do fully simulated fog in Houdini. I experimented with using a point SOP to control wind direction and drive a wind DOP so that fog would flow through the scene, but the sheer scale of the scene made this approach impractical on my home computers. Instead, I wound up just creating a static procedural volume noise field and dumping it out to VDB. I then brought the VDB back into Maya for RIS rendering. Initially I tried rendering the fog pass without the additional spotlight and got something that looked like this:

My initial attempt at the fog pass.
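To give a sense of what a static procedural noise field dumped to VDB looks like in code form, here is a simplified stand-in using pyopenvdb, numpy, and scipy. It approximates the idea rather than reproducing my actual Houdini setup; the resolution, blur amount, falloff curve, and voxel size are placeholder values, and the height falloff corresponds to the “thin out above the ground” adjustment I describe a bit further down:

```python
# Simplified stand-in for the fog volume: smoothed random noise, thinned out
# with height so the fog settles into the valleys, written out as a VDB.
# Resolution, blur amount, falloff, and voxel size are placeholder values.
import numpy as np
import pyopenvdb as vdb
from scipy.ndimage import gaussian_filter

res = (256, 64, 256)  # x, y (height), z
density = gaussian_filter(np.random.rand(*res).astype(np.float32), sigma=6.0)

# Fade the density off as height increases above the ground plane.
height = np.arange(res[1], dtype=np.float32) / res[1]
falloff = np.clip(1.0 - 2.0 * height, 0.0, 1.0)
density *= falloff[np.newaxis, :, np.newaxis]

grid = vdb.FloatGrid()
grid.copyFromArray(density)
grid.name = 'density'
grid.transform = vdb.createLinearTransform(voxelSize=2.0)
vdb.write('fog.vdb', grids=[grid])
```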

After getting this first fog attempt rendered, I did a first pass at a final comp and color grade. I wound up using a very different color grade on this earlier attempt, which is the version I shared in some places, such as the Nerd.nu subreddit and on Twitter:

First comp and grade attempt, using old version of fog.

This first attempt looked okay, but didn’t quite hit what I was going for. I wanted something with much more dramatic shadow beams, and I also felt that the fog didn’t really look settled into the terrain. Eventually I realized that I needed to make the fog sparser and that the fog should start thinning out after rising just a bit off of the ground. After adjusting the fog and adding in a spotlight with a slightly cooler temperature than the sun, I got the image below. I’m pretty happy with how the fog seems to settle into the river valley and pour out of the forested hill in the upper left of the image, even though none of it is actually simulated!

Final fog pass, with extra spotlight. Note how the fog seems to sit in the lower river valley and pour out of the forest.

Finally, I brought everything together in comp and added a color grading pass in Lightroom. The grade I went with is much, much more heavy-handed than what I usually like to use, but it felt appropriate for this image. I also added a slight amount of vignetting and grain to the final image. The final image is at the top of this post, but here it is again for convenience:

Final composite with fog, color grading, and vignetting/grain.

I learned a lot about using RIS from this project! By my estimation, RIS is orders of magnitude easier to use than old REYES Renderman; the overall experience was fairly similar to my previous experiences with Vray and Arnold. Compared with RIS, both Takua and Hyperion make some similar choices and some very different ones, but then again, every renderer shares large similarities with, and has large differences from, every other renderer out there. Rendering a Minecraft world was a lot of fun; I’m definitely looking forward to doing more Minecraft renders using this pipeline sometime in the future.

Also, here’s a shameless plug for the Nerd.nu Minecraft server that this data set is from. If you like playing Minecraft and are looking for a fast, free, friendly community to build with, you should definitely come check out the Nerd.nu PvE server, located at p.nerd.nu. The little town in this post is not even close to the most amazing thing that people have built on that server.

A final note on the (lack of) activity on my blog recently: we’ve been extremely busy at Walt Disney Animation Studios for the past year trying to release both Zootopia and Moana in the same year. Now that we’re closing in on the release of Moana, hopefully I’ll find time to post more. I have a lot of cool posts about Takua Renderer in various states of drafting; look for them soon!


Addendum 10/02/2016: After I published this post, Eric Haines wrote to me with a few typo corrections and, more importantly, to tell me about a way to get completely enclosed meshes from Mineways using the color schemes feature. Serves me right for not reading the documentation completely before starting! The color schemes feature allows assigning a color and alpha value to each block type; the key part of this feature for my use case is that Mineways will delete blocks with a zero alpha value when exporting. Setting all blocks except for water to have an alpha of zero allows for exporting water as a completely enclosed mesh; the same trick applies for glass or really any other block type.

One of the neat things about this feature is that the Mineways UI draws the map respecting assigned alpha values from the color scheme being used. As a result, setting everything except for water to have a zero alpha produces a cool view that shows only all of the water on the map:

Mineways map view showing only water blocks. This image shows the same exact area of the map as the other Mineways screenshot earlier in the post.

Going forward, I’ll definitely be adopting this technique to get water meshes instead of using jmc2obj. Being able to handle all of the mesh exporting work in a single program makes for a nicer, more streamlined pipeline. Of course both jmc2obj and Mineways are excellent pieces of software, but in my testing Mineways handles large map sections much better, and I also think that Mineways produces better water meshes compared to jmc2obj. As a result, my pipeline is now entirely centered around Mineways.

Zootopia

Walt Disney Animation Studios’ newest film, Zootopia, will be releasing in the United States three weeks from today. I’ve been working at Walt Disney Animation Studios on the core development team for Disney’s Hyperion Renderer since July of last year, and the release of Zootopia is really special for me; Zootopia is the first feature film I’ve worked on. My actual role on Zootopia was fairly limited; so far, I’ve been spending most of my time and effort on the version of Hyperion for our next film, Moana (coming out November of this year). On Zootopia I basically only did support and bugfixes for Zootopia’s version of Hyperion (and I actually don’t even have a credit in Zootopia, since I hadn’t been at the studio for very long when the credits were compiled). Nonetheless, I’m incredibly proud of all of the work and effort that has been put into Zootopia, and I consider myself very fortunate to have been able to play even a small role in making the film!

Zootopia is a striking film in every way. The story is fantastic and original and relevant, the characters are all incredibly appealing, the setting is fascinating and immensely clever, and the music is wonderful. However, on this blog, we are more interested in the technical side of things; luckily, the film is just as unbelievable in its technology. Quite simply, Zootopia is a breathtakingly beautiful film. In the same way that Big Hero 6 was several orders of magnitude more complex and technically advanced than Frozen in every way, Zootopia represents yet another enormous leap over Big Hero 6 (which can be hard to believe, considering how gorgeous Big Hero 6 is).

The technical advances made on Zootopia go far beyond what I can cover in detail here, since I don’t think I can describe them in a way that does them justice, but I think I can safely say that Zootopia is the most technically advanced animated film made to date. The fur and cloth (and cloth on top of fur!) systems on Zootopia are beyond anything I’ve ever seen, the sets and environments are simply ludicrous in both detail and scale, and of course the shading and lighting and rendering are jaw-dropping. In a lot of ways, many of the technical challenges that had to be solved on Zootopia can be summarized in a single word: complexity. Enormous care had to be put into creating believable fur and integrating different furry characters into different environments [Burkard et al. 2016], and the huge quantities of fur in the movie required developing new level-of-detail approaches [Palmer and Litaker 2016] to make the fur manageable on both the authoring and rendering sides. The sheer number of crowd characters in the film also required developing a new crowds workflow [El-Ali et al. 2016], again to make both authoring and rendering tractable, and the complex jungle environments seen throughout most of the film similarly required new approaches to procedural vegetation [Keim et al. 2016]. Complexity wasn’t just a problem at the large scale though; Zootopia is also incredibly rich in the smaller details. Zootopia was the first movie on which Disney Animation deployed a flesh simulation system [Milne et al. 2016], used to create convincing muscular movement under the skin and fur of the animal characters. Even individual effects such as scooping ice cream [Byun et al. 2016] sometimes required innovative new CG techniques. On the rendering side, the Hyperion team developed a brand new BSDF for shading hair and fur [Chiang et al. 2016], with a specific focus on balancing artistic controllability, physical plausibility, and render efficiency. Disney isn’t paying me to write this on my personal blog, and I don’t write any of this to make myself look grand either. I played only a small role, and really the amazing quality of the film is a testament to the capabilities of the hundreds of artists who actually made the final frames. I’m deeply humbled to see what amazing things great artists can do with the tools that my team makes.

Okay, enough rambling. Here are some stills from the film, 100% rendered with Hyperion, of course. Go see the film; these images only scratch the surface in conveying how gorgeous the film is.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

References

Nicholas Burkard, Hans Keim, Brian Leach, Sean Palmer, Ernest J. Petti, and Michelle Robinson. 2016. From Armadillo to Zebra: Creating the Diverse Characters and World of Zootopia. In ACM SIGGRAPH 2016 Production Sessions. 24:1-24:2.

Dong Joo Byun, James Mansfield, and Cesar Velazquez. 2016. Delicious Looking Ice Cream Effects with Non-Simulation Approaches. In ACM SIGGRAPH 2016 Talks. 25:1-25:2.

Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum. 35, 2 (2016), 275-283.

Moe El-Ali, Joyce Le Tong, Josh Richards, Tuan Nguyen, Alberto Luceño Ros, and Norman Moses Joseph. 2016. Zootopia Crowd Pipeline. In ACM SIGGRAPH 2016 Talks. 59:1-59:2.

Hans Keim, Maryann Simmons, Daniel Teece, and Jared Reisweber. 2016. Art-Directable Procedural Vegetation in Disney’s Zootopia. In ACM SIGGRAPH 2016 Talks. 18:1-18:2.

Andy Milne, Mark McLaughlin, Rasmus Tamstorf, Alexey Stomakhin, Nicholas Burkard, Mitch Counsell, Jesus Canal, David Komorowski, and Evan Goldberg. 2016. Flesh, Flab, and Fascia Simulation on Zootopia. In ACM SIGGRAPH 2016 Talks. 34:1-34:2.

Sean Palmer and Kendall Litaker. 2016. Artist Friendly Level-of-Detail in a Fur-Filled World. In ACM SIGGRAPH 2016 Talks. 32:1-32:2.