Subdivision Surfaces and Displacement Mapping

Two standard features that every modern production renderer supports are subdivision surfaces and some form of displacement mapping. As we’ll discuss a bit later in this post, these two features are usually very closely linked to each other in both usage and implementation. Subdivision and displacement are crucial tools for representing detail in computer graphics; from both a technical and authorship point of view, being able to represent more detail than is actually present in a mesh is advantageous. Applying detail at runtime allows for geometry to take up less disk space and memory than would be required if all detail was baked into the geometry, and artists often like the ability to separate broad features from high frequency detail.

I recently added support for subdivision surfaces and for both scalar and vector displacement to Takua; Figure 1 shows an ocean wave rendered using vector displacement in Takua. The ocean surface is entirely displaced from just a single plane!

Figure 1: An ocean surface modeled as a flat plane and rendered using vector displacement mapping.

Both subdivision and displacement originally came from the world of rasterization rendering, where on-the-fly geometry generation was historically both easier to implement and more practical to use. In rasterization, geometry is streamed through the renderer and drawn to screen, so each individual piece of geometry can be subdivided, tessellated, displaced, splatted to the framebuffer, and then discarded to free up memory. Old REYES Renderman was famously efficient at rendering subdivision surfaces and displaced surfaces for precisely this reason. However, in naive raytracing, rays can intersect geometry at any moment in any order. Subdividing and displacing geometry on the fly for each ray and then discarding the geometry is insanely expensive compared to processing geometry once across an entire framebuffer. The simplest solution to this problem is to just subdivide and displace everything up front and keep it all around in memory during raytracing. Historically though, just caching everything was never a practical solution, since computers simply didn’t have enough memory to keep that much data around. As a result, past research work put significant effort into more intelligent raytracing architectures that made on-the-fly subdivision/displacement affordable again; notable papers include Matt Pharr’s 1996 geometry caching for raytracing paper, Brian Smits et al.’s 2000 direct raytracing of displacement mapped triangles paper, Johannes Hanika et al.’s 2010 reordered raytracing paper, and Takahiro Harada’s 2015 article on GPU raytraced vector displacement in GPU Pro 6.

In the past five years or so though, the story on raytraced displacement has changed. We now have machines with gobs and gobs of memory (at a number of studios, renderfarm nodes with 256 GB of memory or more are not unusual anymore). As a result, raytraced renderers don’t need to be nearly as clever about managing displaced geometry; a combination of camera-adaptive tessellation and a simple geometry cache with a least-recently-used eviction strategy is often enough to make raytraced displacement practical. Heavy displacement is now common in the workflows for a number of production pathtracers, including Arnold, Renderman/RIS, Vray, Corona, Hyperion, Manuka, etc. With the above in mind, I tried to implement subdivision and displacement in Takua as simply as I possibly could.

Takua doesn’t have any concept of an eviction strategy for cached tessellated geometry; the hope is to just fit in memory and be as efficient as possible with what memory is available. Admittedly, since Takua is just my hobby renderer instead of a fully in-use production renderer, and I have personal machines with 48 GB of memory, I didn’t think particularly hard about cases where things don’t fit in memory. Instead of tessellating on-the-fly per ray or anything like that, I simply pre-subdivide and pre-displace everything upfront during the initial scene load. Meshes are loaded, subdivided, and displaced in parallel with each other. If Takua discovers that all of the subdivided and displaced geometry isn’t going to fit in the allocated memory budget, the renderer simply quits.

I should note that Takua’s scene format distinguishes between a mesh and a geom; a mesh is the raw vertex/face/primvar data that makes up a surface, while a geom is an object containing a reference to a mesh along with transformation matrices, shader bindings, and so on and so forth. This separation between the mesh data and the geometric object allows for some useful features in the subdivision/displacement system. Takua’s scene file format allows for binding subdivision and displacement modifiers either at the shader level or per geom. Bindings at the geom level override bindings at the shader level, which is useful for authoring, since a whole bunch of objects can share the same shader but still have individual specializations for different subdivision rates, displacement maps, and displacement settings. During scene loading, Takua analyzes which subdivisions/displacements are required for which meshes by which geoms, and then de-duplicates and aggregates any cases where different geoms want the same subdivision/displacement for the same mesh. This de-duplication even works for instances (I should write a separate post about Takua’s approach to instancing someday…).
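
Takua’s exact data structures aren’t shown here, but the bookkeeping for the de-duplication boils down to something like the following hypothetical sketch: key tessellation work on the combination of base mesh and subdivision/displacement settings, and hand identical requests the same result.

```cpp
#include <map>
#include <memory>
#include <string>
#include <tuple>

struct Mesh { /* raw vertex/face/primvar data */ };

// Everything that affects the tessellated result. Two geoms whose requests
// compare equal can share a single subdivided/displaced mesh.
struct TessellationRequest {
    std::string meshName;           // which base mesh to tessellate
    int subdivisionLevel = 0;       // number of Catmull-Clark iterations
    std::string displacementMap;    // empty string means no displacement
    float displacementScale = 1.0f;
    bool vectorDisplacement = false;

    bool operator<(const TessellationRequest& o) const {
        return std::tie(meshName, subdivisionLevel, displacementMap,
                        displacementScale, vectorDisplacement) <
               std::tie(o.meshName, o.subdivisionLevel, o.displacementMap,
                        o.displacementScale, o.vectorDisplacement);
    }
};

// Each unique request maps to one tessellated mesh; every geom (or instance)
// just holds a pointer into this table instead of its own copy.
std::map<TessellationRequest, std::shared_ptr<Mesh>> tessellationTable;

std::shared_ptr<Mesh> requestTessellation(const TessellationRequest& request) {
    auto found = tessellationTable.find(request);
    if (found != tessellationTable.end()) {
        return found->second;  // another geom already asked for this exact work
    }
    auto result = std::make_shared<Mesh>();  // subdivide and displace here
    tessellationTable[request] = result;
    return result;
}
```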

Once Takua has put together a list of all meshes that require subdivision, meshes are subdivided in parallel. For Catmull-Clark subdivision, I rely on OpenSubdiv for calculating subdivision stencil tables, evaluating the stencils, and final tessellation. As far as I can tell, stencil calculation in OpenSubdiv is single threaded, so it can get fairly slow on really heavy meshes. Stencil evaluation and final tessellation are super fast though, since OpenSubdiv provides a number of parallel evaluators that can run using a variety of backends, ranging from TBB on the CPU to CUDA or OpenGL compute shaders on the GPU. Takua currently relies on OpenSubdiv’s TBB evaluator. One really neat thing about the stencil implementation in OpenSubdiv is that the stencil calculation depends only on the topology of the mesh and not on individual primvars, so a single stencil calculation can be reused to interpolate many different primvars, such as positions, normals, uvs, and more. Currently Takua doesn’t support creases; I’m planning on adding crease support later.
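
To see why a single stencil calculation can be reused across primvars, it helps to remember that a stencil just expresses each refined vertex as a weighted sum of base mesh control points, so evaluation never cares what the underlying data is. The sketch below illustrates the idea; it is not OpenSubdiv’s actual API.

```cpp
#include <cstddef>
#include <vector>

// A stencil expresses one refined vertex as a weighted sum of control points.
struct Stencil {
    std::vector<int> indices;    // indices into the base mesh's primvar array
    std::vector<float> weights;  // one weight per index
};

// Evaluate a stencil table against any primvar with `stride` floats per
// element (3 for positions/normals, 2 for uvs, etc.). Because the stencils
// depend only on topology, the same table works for every primvar.
void evalStencils(const std::vector<Stencil>& stencils,
                  const std::vector<float>& srcPrimvar,
                  std::vector<float>& dstPrimvar, size_t stride) {
    dstPrimvar.assign(stencils.size() * stride, 0.0f);
    for (size_t i = 0; i < stencils.size(); ++i) {
        const Stencil& s = stencils[i];
        for (size_t j = 0; j < s.indices.size(); ++j) {
            for (size_t k = 0; k < stride; ++k) {
                dstPrimvar[i * stride + k] +=
                    s.weights[j] * srcPrimvar[s.indices[j] * stride + k];
            }
        }
    }
}
```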

No writing about subdivision surfaces is complete without a picture of a cube being subdivided into a sphere, so Figure 2 shows a render of a cube with subdivision levels 0, 1, 2, and 3, going from left to right. Each subdivided cube is rendered with a procedural wireframe texture that I implemented to help visualize what was going on with subdivision.

Figure 2: A cube with 0, 1, 2, and 3 subdivision levels, going from left to right.

Each subdivided mesh is placed into a new mesh; base meshes that require multiple subdivision levels for multiple different geoms get one new subdivided mesh per subdivision level. After all subdivided meshes are ready, Takua then runs displacement. Displacement is parallelized both across meshes and within each mesh. Takua supports both on-the-fly displacement and fully cached displacement, which can be specified per shader or per geom. If a mesh is marked for full caching, the mesh is fully displaced, stored as a separate mesh from the undisplaced subdivision mesh, and then a BVH is built for the displaced mesh. If a mesh is marked for on-the-fly displacement, the displacement system calculates each displaced face, calculates the bounds for that face, and then discards the face. The displaced bounds are then used to build a tight BVH for the displaced mesh without actually having to store the displaced mesh itself; instead, only a reference to the undisplaced subdivision mesh has to be kept around. When a ray traverses the BVH for an on-the-fly displacement mesh, each BVH leaf node specifies which triangles on the undisplaced mesh need to be displaced to produce final polys for intersection; the displaced polys are then intersected and discarded again. For the scenes in this post, on-the-fly displacement seems to be about twice as slow as fully cached displacement, which is to be expected, but if the same mesh is displaced multiple different ways, then there are correspondingly large memory savings. After all displacement has been calculated, Takua goes back, analyzes which base meshes and undisplaced subdivision meshes are no longer needed, and frees those meshes to reclaim memory.
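
The key detail that makes the on-the-fly mode work is that BVH construction only needs bounds, not the displaced triangles themselves. A rough sketch of the bounds pass, with hypothetical accessor callbacks standing in for Takua’s internals, might look like this:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

struct Bounds {
    Vec3 min{ 1e30f, 1e30f, 1e30f };
    Vec3 max{ -1e30f, -1e30f, -1e30f };
    void expand(const Vec3& p) {
        min.x = std::min(min.x, p.x); min.y = std::min(min.y, p.y); min.z = std::min(min.z, p.z);
        max.x = std::max(max.x, p.x); max.y = std::max(max.y, p.y); max.z = std::max(max.z, p.z);
    }
};

// getFaceVertices fetches the vertices of one undisplaced subdivision face;
// displaceVertex runs the displacement shader for one vertex of that face.
// Both are stand-ins for whatever the renderer actually provides.
std::vector<Bounds> computeDisplacedFaceBounds(
    int faceCount,
    const std::function<std::vector<Vec3>(int)>& getFaceVertices,
    const std::function<Vec3(const Vec3&, int, int)>& displaceVertex) {
    std::vector<Bounds> faceBounds(faceCount);
    for (int f = 0; f < faceCount; ++f) {
        std::vector<Vec3> face = getFaceVertices(f);
        for (int v = 0; v < (int)face.size(); ++v) {
            // Displace, record the bound, and immediately discard the result;
            // only the bounds survive to BVH construction.
            faceBounds[f].expand(displaceVertex(face[v], f, v));
        }
    }
    return faceBounds;
}
```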

I implemented support for both scalar displacement via regular grayscale texture maps, and vector displacement from OpenEXR textures. The ocean render from the start of this post uses vector displacement applied to a single plane. Figure 3 shows another angle of the same vector displaced ocean:

Figure 3: Another view of the vector displaced ocean surface from Figure 1. The ocean surface has a dielectric refractive material complete with colored attenuated transmission. A shallow depth of field is used to lend added realism.

For both ocean renders, the vector displacement OpenEXR texture is borrowed from Autodesk, who generously provide it as part of an article about vector displacement in Arnold. The renders are lit with a skydome using hdri-skies.com’s HDRI Sky 193 texture.

For both scalar and vector displacement, the displacement amount from the displacement texture can be controlled by a single scalar value. Vector displacement maps are assumed to be in a local tangent space; which axis is used as the basis of the tangent space can be specified per displacement map. Figure 4 shows three dirt shaderballs with varying displacement scaling values. The leftmost shaderball has a displacement scale of 0, which effectively disables displacement. The middle shaderball has a displacement scale of 0.5, which halves the native displacement values in the vector displacement map. The rightmost shaderball has a displacement scale of 1.0, which uses the native displacement values from the vector displacement map unmodified.

Figure 4: Dirt shaderballs with displacement scales of 0.0, 0.5, and 1.0, going from left to right.
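
As a concrete illustration of the scale parameter and the tangent-space convention, displacing a single vertex looks roughly like the sketch below; the frame construction and naming are simplified assumptions rather than Takua’s exact implementation.

```cpp
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
};

// Displace a vertex by a tangent-space vector displacement sample. The map
// stores a 3-component offset expressed in the local frame (tangent,
// bitangent, normal); `scale` is the single scalar control described above,
// so scale = 0 disables displacement and scale = 1 uses the native values.
Vec3 vectorDisplace(const Vec3& position, const Vec3& tangent,
                    const Vec3& bitangent, const Vec3& normal,
                    const Vec3& displacementSample, float scale) {
    Vec3 offset = tangent * displacementSample.x +
                  bitangent * displacementSample.y +
                  normal * displacementSample.z;
    return position + offset * scale;
}

// Scalar displacement is the special case where the map only stores a height
// and the offset is along the shading normal.
Vec3 scalarDisplace(const Vec3& position, const Vec3& normal,
                    float height, float scale) {
    return position + normal * (height * scale);
}
```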

Figure 5 shows a closeup of the rightmost dirt shaderball from Figure 4. The base mesh for the shaderball is relatively low resolution, but through subdivision and displacement, a huge amount of geometric detail can be added in-render. In this case, the shaderball is tessellated to a point where each individual micropolygon is at a subpixel size. The model for the shaderball is based on Bertrand Benoit’s shaderball. The displacement map and other textures for the dirt shaderball are from Quixel’s Megascans library.

Figure 5: Closeup of the dirt shaderball from Figure 4. In this render, the shaderball is tessellated and displaced to a subpixel resolution.

One major challenge with displacement mapping is cracking. Cracking occurs when adjacent polygons displace their shared vertices in different ways. This can happen when the normals across a surface aren’t continuous, or when there is a discontinuity either in how the displacement texture is mapped to the surface or in the displacement texture itself. I implemented an optional, somewhat brute-force solution to displacement cracking. If crack removal is enabled, Takua analyzes the mesh at displacement time and records how many different ways each vertex in the mesh has been displaced by different faces, along with which faces want to displace that vertex. After an initial displacement pass, the crack remover then goes back and, for every vertex that is displaced more than one way, averages all of the displacements into a single displacement and updates all faces that use that vertex to share the same averaged result. This approach requires a fair amount of bookkeeping and pre-analysis of the displaced mesh, but it seems to work well. Figure 6 is a render of two cubes with geometric normals assigned per face. The two cubes are displaced using the same checkerboard displacement pattern, but the cube on the left has crack removal disabled, while the cube on the right has crack removal enabled:

Figure 6: Displaced cubes with and without crack elimination.
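
A sketch of what the crack removal pass boils down to is shown below; the data layout is hypothetical, but the core of the approach is just accumulating every displaced position proposed for a shared vertex and writing back the average to every face that touches it.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 {
    float x = 0.0f, y = 0.0f, z = 0.0f;
};

// Faces index into a shared vertex array; displacedCorners holds the displaced
// position each face computed for each of its corners in the initial pass.
struct Face {
    std::vector<int> vertexIndices;
    std::vector<Vec3> displacedCorners;
};

// For every vertex that was displaced more than one way, average all of the
// proposed displaced positions and update every face corner referencing the
// vertex so that adjacent faces once again share identical positions.
void removeCracks(std::vector<Face>& faces, size_t vertexCount) {
    std::vector<Vec3> accum(vertexCount);
    std::vector<int> counts(vertexCount, 0);
    for (const Face& f : faces) {
        for (size_t c = 0; c < f.vertexIndices.size(); ++c) {
            int v = f.vertexIndices[c];
            accum[v].x += f.displacedCorners[c].x;
            accum[v].y += f.displacedCorners[c].y;
            accum[v].z += f.displacedCorners[c].z;
            counts[v]++;
        }
    }
    for (size_t v = 0; v < vertexCount; ++v) {
        if (counts[v] > 1) {
            accum[v].x /= counts[v];
            accum[v].y /= counts[v];
            accum[v].z /= counts[v];
        }
    }
    for (Face& f : faces) {
        for (size_t c = 0; c < f.vertexIndices.size(); ++c) {
            int v = f.vertexIndices[c];
            if (counts[v] > 1) {
                f.displacedCorners[c] = accum[v];
            }
        }
    }
}
```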

In most cases, the crack removal system seems to work pretty well. However, the system isn’t perfect; sometimes stretching artifacts can appear, especially on surfaces with a textured base color. This stretching happens because the crack removal system essentially stretches micropolygons to cover the crack. This texture stretching can be seen in some parts of the shaderballs in Figures 5, 7, and 8 in this post.

Takua automatically recalculates normals for subdivided/displaced polygons. By default, Takua simply uses the geometric normal as the shading normal for displaced polygons; however, an option exists to calculate smooth normals for the shading normals as well. I chose to use geometric normals as the default with the hope that for subpixel subdivision and displacement, a different shading normal wouldn’t be as necessary.
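
For reference, the standard construction for both kinds of normals, assuming triangulated output: the geometric normal comes from a cross product, and the optional smooth shading normals come from area-weighted averaging of unnormalized face normals at each vertex. This is a generic sketch rather than Takua’s exact code.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// Geometric normal of a displaced triangle; the unnormalized cross product's
// length is proportional to the triangle's area, which is what makes the
// smooth-normal accumulation below area weighted.
Vec3 geometricNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    return normalize(cross(sub(p1, p0), sub(p2, p0)));
}

// Optional smooth shading normals: accumulate unnormalized face normals onto
// each vertex and normalize at the end.
std::vector<Vec3> smoothNormals(const std::vector<Vec3>& positions,
                                const std::vector<int>& triIndices) {
    std::vector<Vec3> normals(positions.size(), Vec3{ 0.0f, 0.0f, 0.0f });
    for (size_t i = 0; i + 2 < triIndices.size(); i += 3) {
        int a = triIndices[i], b = triIndices[i + 1], c = triIndices[i + 2];
        Vec3 n = cross(sub(positions[b], positions[a]), sub(positions[c], positions[a]));
        for (int v : { a, b, c }) {
            normals[v].x += n.x; normals[v].y += n.y; normals[v].z += n.z;
        }
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```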

In the future, I may choose to implement my own subdivision library, and I should probably also put more thought into some kind of proper combined tessellation cache and eviction strategy for better memory efficiency. For now though, everything seems to work well and renders relatively efficiently; the non-ocean renders in this post all have sub-pixel subdivision with millions of polygons and each took several hours to render at 4K (3840x2160) resolution on a machine with dual Intel Xeon X5675 CPUs (12 cores total). The two ocean renders I let run overnight at 1080p resolution; they took longer to converge mostly due to the depth of field. All renders in this post were shaded using a new, vastly improved shading system that I’ll write about at a later point. Takua can now render a lot more complexity than before!

In closing, I rendered a few more shaderballs using various displacement maps from the Megascans library, seen in Figures 7 and 8.

Figure 7: A pebble sphere and a leafy sphere. Note the overhangs on the leafy sphere, which are only possible using vector displacement.

Figure 8: A compacted sand sphere and a stone sphere. Unfortunately, there is some noticeable texture stretching on the compacted sand sphere where crack removal occurred.

Moana

This year is the first year ever that Walt Disney Animation Studios is releasing two CG animated films. We released Zootopia back in March, and next week, we will be releasing our newest film, Moana. I’ve spent the bulk of the last year and a half working as part of Disney’s Hyperion Renderer team on a long list of improvements and new features for Moana. Moana is the first film I have an official credit on, and I couldn’t be more excited for the world to see what we have made!

We’re all incredibly proud of Moana; the story is fantastic, the characters are fresh and deep and incredibly appealing, and the music is an instant classic. Most important for a rendering guy though, I think Moana is flat out the best looking animated film anyone has ever made. Every single department on this film really outdid themselves. The technology that we had to develop for this film was staggering; we have a whole new distributed fluid simulation package for the endless oceans in the film, we added advanced new lighting capabilities to Hyperion that have never before been used to this extent in an animated film (to the best of my knowledge), and we made huge advances in our animation technology for characters such as Maui; the list goes on and on. Something like 85% of the shots in this movie have significant FX work in them, which is unheard of for animated features. We are going to have a lot to show at the next SIGGRAPH conference.

The first section of the credits sequence in Moana showcases a number of the props that our artists made for the film. I highly recommend staying and staring at all of the eye candy; our look and modeling departments are filled with some of the most dedicated and talented folks I’ve ever met. The props in the credits have simply preposterous amounts of detail on them; every single prop has stuff like tiny little flyaway fibers or microscratches or imperfections or whatnot on it. In some of the international posters, one can see that all of the human characters are covered with fine peach fuzz (an important part of making their skin catch the sunlight correctly), which we rendered in every frame! Something that we’re really proud of is the fact that none of the credit props were specially modeled for the credits! Those are all the exact props we used in every frame that they show up in, which really is a testament to both how amazing our artists are and how much work we’ve put into every part of our technology. The vast majority of production for Moana happened in essentially the nine months between Zootopia’s release in March and October; this timeline becomes even more astonishing given the sheer beauty and craftsmanship in Moana.

Below are a number of stills (in no particular order) from the movie, pulled from trailers and various Disney marketing materials. These stills give just a hint at how beautiful this movie looks; definitely go see it on the biggest screen you can find! (These stills aren’t of the highest quality; I’ll replace them with higher quality versions once the film is out on Bluray.)

Here is a credits frame with my name that Disney kindly provided! Most of the Hyperion team is grouped under the Rendering/Pipeline/Engineering Services (three separate teams under the same manager) category this time around, although a handful of Hyperion guys show up in an earlier part of the credits instead.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

Physically Based Rendering 3rd Edition

Today is the release date for the digital version of the new Physically Based Rendering 3rd Edition, by Matt Pharr, Wenzel Jakob, and Greg Humphreys. As anyone in the rendering world knows, Physically Based Rendering is THE reference book for the field; for novices, Physically Based Rendering is the best introduction one can get to the field, and for experts, Physically Based Rendering is an invaluable reference book to consult and check. I share a large office with three other engineers on the Hyperion team, and I think between the four of us, we actually have an average of more than one copy per person (of varying editions). I could not recommend this book enough. The third edition adds Wenzel Jakob as an author; Wenzel is the author of the research-oriented Mitsuba Renderer and is one of the most prominent new researchers in rendering in the past decade. There is a lot of great new light transport stuff in the third edition, which I’m guessing comes from Wenzel. Both Wenzel’s work and the previous editions of Physically Based Rendering were instrumental in influencing my path in rendering, so of course I’ve already had the third edition on pre-order since it was announced over a year ago.

Each edition of Physically Based Rendering is accompanied by a major release of the PBRT renderer, which implements the book. The PBRT renderer is a major research resource for the community, and basically everyone I know in the field has learned something or another from looking through and taking apart PBRT. As part of the drive towards PBRT-v3, Matt Pharr made a call for interesting scenes to provide as demo scenes with the PBRT-v3 release. I offered Matt the PBRT-v2 scene I made a while back, because how could that scene not be rendered in PBRT? I’m very excited that Matt accepted and included the scene as part of PBRT-v3’s example scenes! You can find the example scenes here on the PBRT website.

Converting the scene to PBRT’s format required a lot of manual work, since PBRT’s scene specification and shading system are very different from Takua’s. As a result, the image that PBRT renders out looks slightly different from Takua’s version, but that’s not a big deal. Here is the scene rendered using PBRT-v3:

Physically Based Rendering 2nd Edition, rendered using PBRT-v3.

…and for comparison, the same scene rendered using Takua:

Physically Based Rendering 2nd Edition, rendered using Takua Render a0.5.

Really, it’s just the lighting that is a bit different; the Takua version is slightly warmer and slightly underexposed in comparison.

At some point I should make an updated version of this scene using the third edition book. I’m hoping to be able to contribute more of my Takua test scenes to the community in PBRT-v3 format in the future; giving back to such a major influence on my own career is extremely important. As part of the process of porting the scene over to PBRT-v3, I also wrote a super-hacky render viewer for PBRT that shows the progress of the render as the renderer runs. Unfortunately, this viewer is mega-hacky, and I don’t have time at the moment to clean it up and release it. Hopefully at some point I’ll be able to; alternatively, anyone else who wants to take a look and give it a stab, feel free to contact me.


Addendum 04/28/2017: Matt was recently looking for some interesting water-sim scenes to demonstrate dielectrics and glass materials and refraction and whatnot. I contributed a few frames from my PIC/FLIP fluid simulator, Ariel. Most of the data from Ariel doesn’t exist in meshed format anymore; I still have all of the raw VDBs and stuff, but the meshes took up way more storage space than I could afford at the time. I can still regenerate all of the meshes though, and I also still have a handful of frames in mesh form from my attenuated transmission blog post. The frame from the first image in that post is now also included in the PBRT-v3 example scene suite! The PBRT version looks very different since it is intended to demonstrate and test something very different from what I was doing in that blog post, but it still looks great!

A frame from my Ariel fluid simulator, rendered using PBRT-v3.

Rendering Minecraft in Renderman/RIS

The vast majority of my computer graphics time is spent developing renderers (Disney’s Hyperion renderer as a professional, Takua Render as a hobbyist). However, I think having experience using renderers as an artist is an important part of knowing what to focus on as a renderer developer. I also think that knowing how a variety of different renderers work and how they are used is important; a lot of artists are used to using several different renderers, and each renderer has its own vocabulary and tried and true workflows and whatnot. Finally, there are a lot of really smart people working on all of the major production renderers out there, and seeing the cool things everyone is doing is fun and interesting! Because of all of these reasons, I like putting some time aside every once in a while to tinker with other renderers. I usually don’t write about my art projects that much anymore, but this project was particularly fun and produced some nice looking images, so I thought I’d write it up. As usual, before we dive into the post, here is the final image I made, rendered using Pixar’s Photorealistic Renderman 20 in RIS mode:

A Minecraft town from the pve.nerd.nu Minecraft server, rendered in Renderman 20/RIS.

About two years ago, Pixar’s Photorealistic Renderman got a new rendering mode called RIS. PRman was one of the first production renderers ever developed, and historically PRman has always been a REYES-style rasterization renderer. Over time though, PRman has gained a whole bunch of added-on features. At the time of Monsters University, PRman was actually a kind of hybrid rasterizer and raytracer; the rendering system on Monsters University used raytracing to build a multiresolution radiosity cache that was then used for calculating GI contributions in the shading part of REYES rasterization. That approach worked well and produced beautiful images, but it was also really complicated and had a number of drawbacks! RIS replaces all of that with a brand new, pure pathtracing system. In fact, while RIS is marketed as a new mode in PRman, RIS is actually a completely new renderer written almost completely from scratch; it just happens to be able to read Renderman RIB files as input.

Recently, I wanted to try rendering a Minecraft world from a Minecraft server that I play on. There are a lot of great Minecraft rendering tools available these days (Chunky comes to mind), but I wanted much more production-like control over the look of the render, so I decided to do everything using a normal CG production workflow instead of a prebuilt dedicated Minecraft rendering tool. I thought that I would use the project as a chance to give RIS a spin. At Cornell’s Program of Computer Graphics, Pixar was kind enough to provide us with access to the Renderman 19 beta program, which included the first version of RIS. I tinkered with the PRman 19 beta quite a lot at Cornell, and being an early beta, RIS had some bugs and incomplete bits back then. Since then though, the Renderman team has followed up PRman 19 with versions 20 and 21, which introduced a number of new features and speed/stability improvements to RIS. Since joining the Hyperion team, I’ve had the chance to meet and talk to various (really smart!) folks on the Renderman team since they are a sister team to us, but I haven’t actually had time to try the new versions of RIS. This project was a fun way to try the newest version of RIS on my own!

The Minecraft data for this project comes from the Nerd.nu community Minecraft server, which is run by a collective of players for free. I’ve been playing on the Nerd.nu PvE (Player versus Environment) server for years and years now, and players have built a mind-boggling number of amazing detailed creations. Every couple of months, the server is reset with a fresh map; I wanted to render a town that fellow player Avi_Dangerstein and I built on the previous map revision. Fortunately, all previous Nerd.nu map revisions are available for download in the server archives (the specific map I used is labeled pve-rev17). Here is an overview of the map revision I wanted to pull data from:

Cartograph view of Revision 17 of the Nerd.nu PvE server, located at p.nerd.nu. Click through to go to the full, zoomable cartograph.

…and here is a zoomed in view of the part of the map that contains our town. The vast majority of the town was built by two players over the course of about 4 months. Our town is about 250 blocks long; the entire server map is a 6000 block by 6000 block square.

Zoomed cartograph view of our Minecraft town.

The first problem to tackle in this project was just getting Minecraft world data into a usable format. Pixar provides a free, non-commercial version of Renderman for Maya, and I’m very familiar with Maya, so my entire workflow for this project was based around good ol’ Maya. Maya doesn’t know how to read Minecraft data though… in fact, Minecraft’s chunked data format is a fascinating rabbit hole to read about in its own right. I briefly entertained the idea of writing my own Minecraft to Maya importer, but then I found a number of Minecraft to Obj exporters that other folks have already written. I first tried jmc2obj, but the section of the Minecraft world that I wanted to export was so large that jmc2obj kept running out of memory and crashing. Instead, I found that Eric Haines’s Mineways exporter was able to handle the data load well (incidentally, Eric Haines is also a Cornell Program of Computer Graphics alum; I inherited a pile of his ACM Transactions on Graphics hardcopies while at Cornell). The chunk of the world I wanted to export was pretty large; in the Mineways screenshot below, the area outlined in red is the part of the world that I wanted:

Section of the map for export is outlined in red.

The area outlined above is significantly larger than the area I wound up rendering… initially I was thinking of a very different camera angle from the ground with the mountains in the background before I picked an aerial view much later. The size of the exported obj mesh was about 1.5 GB. Mineways exports the world as a single mesh, optimized to remove all completely occluded internal faces (so the final mesh is hollow instead of containing useless faces for all of the internal blocks). Each visible block face is uv’d into a corresponding square on a single texture file. This approach produces an efficient mesh, but I realized early on that I would need water in a separate mesh containing completely enclosed volumes for each body of water (Mineways only provides geometry for the top surface of water). Glass had to be handled similarly; both water and glass need special handling for the same reasons that I mentioned immediately after the first diagram in my attenuated transmission blog post. Mineways allows for exporting different block types as separate meshes (but still with internal faces removed), so I simply deleted the water and glass meshes after exporting. Luckily, jmc2obj allows exporting individual block types as closed meshes, so I went back to jmc2obj for just the water and glass. Since just the water and glass is a much smaller data set than the whole world, jmc2obj was able to export without a problem. Since rendering refractive interfaces correctly requires expanding out the refractive mesh slightly at the interfaces, I wrote a custom program based on Takua Render’s obj mesh processing library to push out all of the vertices of the water and glass meshes slightly along the average of the face normals at each vertex.
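
The push-out step at the end of that pipeline is straightforward; a minimal sketch, assuming triangulated obj input and a small user-chosen epsilon (the function name and layout here are illustrative, not the actual tool’s code):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Push every vertex outward slightly along the average of the normals of the
// faces that share it, so that the water/glass volumes overlap their
// neighboring surfaces and form proper refractive interfaces.
void pushOutAlongAveragedNormals(std::vector<Vec3>& positions,
                                 const std::vector<int>& triIndices,
                                 float epsilon) {
    std::vector<Vec3> accumulated(positions.size(), Vec3{ 0.0f, 0.0f, 0.0f });
    for (size_t i = 0; i + 2 < triIndices.size(); i += 3) {
        int a = triIndices[i], b = triIndices[i + 1], c = triIndices[i + 2];
        Vec3 n = cross(sub(positions[b], positions[a]), sub(positions[c], positions[a]));
        for (int v : { a, b, c }) {
            accumulated[v].x += n.x;
            accumulated[v].y += n.y;
            accumulated[v].z += n.z;
        }
    }
    for (size_t v = 0; v < positions.size(); ++v) {
        Vec3 n = accumulated[v];
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) {
            positions[v].x += epsilon * n.x / len;
            positions[v].y += epsilon * n.y / len;
            positions[v].z += epsilon * n.z / len;
        }
    }
}
```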

Next up was shading everything in Maya. Renderman 20 ships with an implementation of Disney’s Principled Brdf, which I’ve gotten very familiar with, so I went with Renderman’s PxrDisney Bxdf. The Disney Brdf allows for quickly creating very good looking materials using a fairly small parameter set. Overall I tried to stick close to the in-game aesthetic, which meant using all of the standard in-game textures instead of a custom resource pack, and I also wound up having to rein back a bit on making materials look super realistic. Everything basically has some varied roughness and specularity, and that’s pretty much it. I did add a subtle bump map to everything though; I made the bump map by simply making a black and white version of the default texture pack and messing with the brightness and contrast a bit. Below is a render of a test world created by Minecraft Forum user QMagnet specifically for testing resource packs. I lit the test scene using a single IBL (HDRI Sky 141 from the HDRI-Skies library). The test render below isn’t using the final specialized water and leaf shaders I created, which I’ll describe a bit further down, and there are also some resolution problems on the alpha masks for the leaf blocks, but overall this test render gives an idea of what my final materials look like:

Final materials on a resource pack test world.

One detail worth discussing a bit more is the glowing blocks. The glowstone, lantern, and various torch blocks use a trick based on something that I have seen lighters use in production. The basic idea is to decouple the direct and indirect visibility for the light. I got this decoupling to work in RIS by making all of the glowing blocks into pairs of textured PxrMeshLights. Using PxrMeshLights is necessary in order to allow for efficient light sampling; however, the actual exposures the lights are at make the textures blow out in camera. In order to make the textures discernible in camera, a second PxrMeshLight is needed for each glowing object; one of the lights is at the correct exposure but is marked visible only to indirect rays and invisible to direct camera rays, and the other light is at a much lower exposure but is visible only to direct camera rays. This trick is a totally non-physical cheaty-hack, but it allows for a believable visual appearance if the exposures are chosen carefully.

In the final renders a few pictures down, I also used a more specialized shader for leaves and vines and tall grass and whatnot. The leaf block shader uses a PxrLMPlastic material instead of PxrDisney; this is because the leaf block shader has a slight amount of diffuse transmission (translucency) and also has more specialized diffuse/specular roughness maps.

For the water shader in the final render, I used a PxrLMGlass material with an IOR of 1.325, a slightly blue tinted refraction color, and a light blue absorption color. Using slightly different colors for the refraction and absorption colors allows for the water to transition to a slightly different hue at deeper depths than at the surface (as opposed to just a more saturated version of the same color). I also added a simple water surface displacement map to get the wavy surface effect. Combined with the refractive interface stuff mentioned before, the final water looks like this:

Water test render, using a PxrLMGlass material. Unfortunately, no true caustics here...

Note the total lack of real caustics in the water… I wound up just using the basic pathtracer built into RIS instead of Pixar’s VCM implementation. Pixar’s VCM implementation is one of the first commercial VCM implementations out there, but as of Renderman 20, it has no adaptivity in its light path distribution whatsoever. As a result, the Renderman 20 VCM integrator is not really suitable for use on huge scenes; most of the light paths end up getting wasted on areas of the scene that aren’t even close to being in-camera, which means that all of the efficiency in rendering caustics is lost. This problem is fundamental to lighttracing-based techniques (meaning that bidirectional techniques inherit the same problem), and solving it remains a relatively open problem (Takua has some basic photon targeting mechanisms for PPM/VCM that I’ll write about at some point). Apparently, this large-scene problem was a major challenge on Finding Dory and is one of the main reasons why Pixar didn’t use VCM heavily on Dory; Dory relied mostly on projected and pre-baked caustics.

I should also note that Renderman 21 does away with the PxrLM and PxrDisney materials entirely and instead introduces the shader set that Christophe Hery and Ryusuke Villemin wrote for Finding Dory. I haven’t tried the Renderman 21 shading system yet, but I would be very curious to compare against our Disney Brdf.

The final lighting setup I used was very simple. There are two main lights in the scene: an IBL dome light for sky illumination, and a 0.5 degree distant light as a sun stand-in. The IBL is another free sky from the HDRI-Skies library; this time, I used HDRI Sky 84. There is also a third spotlight used for getting long, dramatic shadows out of the fog, which I’ll talk about a bit later. Here is a lighting test with just the dome and distant lights on a grey clay version of the scene:

Grey clay render lit using the final distant and dome light setup.

For efficiency reasons, I broke out the fog into a separate pass entirely and added it back in comp afterwards. At the time that I did this project, Renderman 20’s volume system was still relatively new (Renderman 21 introduces a significantly overhauled, much faster volume system, but Renderman 21 wasn’t out yet when I did this project), and while perfectly capable, wasn’t necessarily super fast. Iterating on the look of the fog separately from the main render was simply a more efficient workflow. Here is the raw render directly out of RIS:

Raw render of the main render pass, straight out of RIS.

For the fog, I initially wanted to do fully simulated fog in Houdini. I experimented with using a point SOP to control wind direction and to drive a wind DOP and have fog flow through the scene, but the sheer scale of the scene made this approach impractical on my home computers. Instead, I wound up just creating a static procedural volume noise field and dumping it out to VDB. I then brought the VDB back into Maya for RIS rendering. Initially I tried rendering the fog pass without the additional spotlight and got something that looked like this:

My initial attempt at the fog pass.
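
For anyone curious what a static procedural volume noise field dumped out to VDB looks like in practice, the sketch below writes a simple height-falloff-times-noise density grid using the OpenVDB library; the hash noise, grid dimensions, voxel size, and file name are all placeholders for the Houdini-generated noise field I actually used.

```cpp
#include <openvdb/openvdb.h>

// Cheap hash-based value noise stand-in for the procedural noise field I built
// in Houdini; any real noise (e.g. fractal simplex noise) would look better.
static float hashNoise(int x, int y, int z) {
    unsigned int h = 73856093u * (unsigned int)x ^
                     19349663u * (unsigned int)y ^
                     83492791u * (unsigned int)z;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffff) / 65535.0f;
}

int main() {
    openvdb::initialize();
    openvdb::FloatGrid::Ptr fog = openvdb::FloatGrid::create(/*background=*/0.0f);
    fog->setTransform(openvdb::math::Transform::createLinearTransform(/*voxelSize=*/0.5));
    fog->setGridClass(openvdb::GRID_FOG_VOLUME);
    fog->setName("density");

    openvdb::FloatGrid::Accessor accessor = fog->getAccessor();
    const int sizeX = 400, sizeY = 60, sizeZ = 400;
    for (int x = 0; x < sizeX; ++x) {
        for (int y = 0; y < sizeY; ++y) {
            for (int z = 0; z < sizeZ; ++z) {
                // Fog thins out with height and is broken up by noise.
                float heightFalloff = 1.0f - (float)y / (float)sizeY;
                float density = heightFalloff * hashNoise(x / 4, y / 4, z / 4);
                if (density > 0.01f) {
                    accessor.setValue(openvdb::Coord(x, y, z), density);
                }
            }
        }
    }

    openvdb::io::File file("fog.vdb");
    openvdb::GridPtrVec grids;
    grids.push_back(fog);
    file.write(grids);
    file.close();
    return 0;
}
```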

After getting this first fog attempt rendered, I did a first pass at a final comp and color grade. I wound up using a very different color grade on this earlier attempt. This earlier version is the version that I shared in some places, such as the Nerd.nu subreddit and on Twitter:

First comp and grade attempt, using old version of fog.

This first attempt looked okay, but didn’t quite hit what I was going for. I wanted something with much more dramatic shadow beams, and I also felt that the fog didn’t really look settled into the terrain. Eventually I realized that I needed to make the fog sparser and that the fog should start thinning out after rising just a bit off of the ground. After adjusting the fog and adding in a spotlight with a bit of a cooler temperature than the sun, I got the image below. I’m pretty happy with how the fog appears to settle in the river valley and pour out of the forested hill in the upper left of the image, even though none of the fog is actually simulated!

Final fog pass, with extra spotlight. Note how the fog seems to sit in the lower river valley and pour out of the forest.

Finally, I brought everything together in comp and added a color grading pass in Lightroom. The grade that I went with is much much more heavy-handed than what I usually like to use, but it felt appropriate for this image. I also added a slight amount of vignetting and grain in the final image. The final image is at the top of this post, but here it is again for convenience:

Final composite with fog, color grading, and vignetting/grain.

I learned a lot about using RIS from this project! By my estimation, RIS is orders of magnitude easier to use than old REYES Renderman; the overall experience was fairly similar to my previous experiences with Vray and Arnold. Both Takua and Hyperion make some similar choices and some very different choices in comparison, but then again, every renderer has large similarities and large differences from every other renderer out there. Rendering a Minecraft world was a lot of fun; I definitely am looking forward to doing more Minecraft renders using this pipeline again sometime in the future.

Also, here’s a shameless plug for the Nerd.nu Minecraft server that this data set is from. If you like playing Minecraft and are looking for a fast, free, friendly community to build with, you should definitely come check out the Nerd.nu PvE server, located at p.nerd.nu. The little town in this post is not even close to the most amazing thing that people have built on that server.

A final note on the (lack of) activity on my blog recently: we’ve been extremely busy at Walt Disney Animation Studios for the past year trying to release both Zootopia and Moana in the same year. Now that we’re closing in on the release of Moana, hopefully I’ll find time to post more. I have a lot of cool posts about Takua Render in various states of drafting; look for them soon!


Addendum 10/02/2016: After I published this post, Eric Haines wrote to me with a few typo corrections and, more importantly, to tell me about a way to get completely enclosed meshes from Mineways using the color schemes feature. Serves me right for not reading the documentation completely before starting! The color schemes feature allows assigning a color and alpha value to each block type; the key part of this feature for my use case is that Mineways will delete blocks with a zero alpha value when exporting. Setting all blocks except for water to have an alpha of zero allows for exporting the water as a completely enclosed mesh; the same trick applies for glass or really any other block type.

One of the neat things about this feature is that the Mineways UI draws the map respecting assigned alpha values from the color scheme being used. As a result, setting everything except for water to have a zero alpha produces a cool view that shows only all of the water on the map:

Mineways map view showing only water blocks. This image shows the same exact area of the map as the other Mineways screenshot earlier in the post.

Going forward, I’ll definitely be adopting this technique to get water meshes instead of using jmc2obj. Being able to handle all of the mesh exporting work in a single program makes for a nicer, more streamlined pipeline. Of course both jmc2obj and Mineways are excellent pieces of software, but in my testing Mineways handles large map sections much better, and I also think that Mineways produces better water meshes compared to jmc2obj. As a result, my pipeline is now entirely centered around Mineways.

Zootopia

Walt Disney Animation Studios’ newest film, Zootopia, will be releasing in the United States three weeks from today. I’ve been working at Walt Disney Animation Studios on the core development team for Disney’s Hyperion Renderer since July of last year, and the release of Zootopia is really special for me; Zootopia is the first feature film I’ve worked on. My actual role on Zootopia was fairly limited; so far, I’ve been spending most of my time and effort on the version of Hyperion for our next film, Moana (coming out November of this year). On Zootopia I basically only did support and bugfixes for Zootopia’s version of Hyperion (and I actually don’t even have a credit in Zootopia, since I hadn’t been at the studio for very long when the credits were compiled). Nonetheless, I’m incredibly proud of all of the work and effort that has been put into Zootopia, and I consider myself very fortunate to have been able to play even a small role in making the film!

Zootopia is a striking film in every way. The story is fantastic and original and relevant, the characters are all incredibly appealing, the setting is fascinating and immensely clever, the music is wonderful. However, on this blog, we are more interested in the technical side of things; luckily, the film is just as unbelievable in its technology. Quite simply, Zootopia is a breathtakingly beautiful film. In the same way that Big Hero 6 was several orders of magnitude more complex and technically advanced than Frozen in every way, Zootopia represents yet another enormous leap over Big Hero 6 (which can be hard to believe, considering how gorgeous Big Hero 6 is).

I can’t go far into detail here (I’m sure we’ll be presenting much more in-depth stuff at SIGGRAPH this year), but I think I can safely say that Zootopia is the most technically advanced animated film ever made to date. The fur and cloth (and cloth on top of fur!) systems on Zootopia are beyond anything I’ve ever seen, the sets and environments are simply ludicrous in both detail and scale, and of course the shading and lighting and rendering are jaw-dropping. Disney isn’t paying me to write this on my personal blog, and I don’t write any of this to make myself look grand either. I played only a small role, and really the amazing quality of the film is a testament to the capabilities of the hundreds of artists that actually made the final frames. I’m deeply humbled to see what amazing things great artists can do with the tools that my team makes.

Okay, enough rambling. Here are some stills from the film, 100% rendered with Hyperion, of course. Go see the film on March 4th; these images only scratch the surface in conveying how gorgeous the film is.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

Attenuated Transmission

Blue liquid in a glass box, with attenuated transmission. Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

A few months ago I added attenuation to Takua a0.5’s Fresnel refraction BSDF. Adding attenuation wound up being more complex than originally anticipated because handling attenuation through refractive/transmissive mediums requires volumetric information in addition to the simple surface differential geometry. In a previous post about my BSDF system, I mentioned that the BSDF system only considered surface differential geometry information; adding attenuation meant extending my BSDF system to also consider volume properties and track more information about previous ray hits.

First off, what is attenuation? Within the context of rendering and light transport, attenuation is when light is progressively absorbed within a medium, which results in a decrease in light intensity the further one travels into the medium away from a light source. One simple example is deep water: near the surface, most of the light that has entered the water remains unabsorbed, so the light intensity is fairly high and the water is fairly clear. Going deeper and deeper into the water though, more and more light is absorbed and the water becomes darker and darker. Clear objects gain color when light is attenuated at different rates for different wavelengths. Combined with scattering, attenuation is a major contributor to the look of transmissive/refractive materials in real life.

Attenuation is described using the Beer-Lambert Law. The part of the Beer-Lambert Law we are concerned with is the definition of transmittance:

\[ T = \frac{\Phi_{e}^{t}}{\Phi_{e}^{i}} = e^{-\tau}\]

The above equation states that the transmittance of a material is equal to the transmitted radiant flux over the received radiant flux, which in turn is equal to e raised to the power of the negative of the optical depth. If we assume uniform attenuation within a medium, the Beer-Lambert Law can be expressed in terms of an attenuation coefficient μ and the path length ℓ through the medium as:

\[ T = e^{-\mu\ell} \]

From these expressions, we can see that light is absorbed exponentially as distance into an absorbing medium increases. Returning to building a BSDF system, supporting attenuation therefore means having to know not just the current intersection point and differential geometry, but also the distance a ray has traveled since the previous intersection point. Also, if the medium’s attenuation rate is not constant, then an attenuating BSDF not only needs to know the distance since the previous intersection point, but also has to sample along the incoming ray at some stepping increment and calculate the attenuation at each step. In other words, supporting attenuation requires BSDFs to know the previous hit point in addition to the current one, and also requires BSDFs to be able to ray march from the previous hit point to the current one.
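
In code, the two cases described above reduce to something like the following sketch; the attenuation coefficient is per color channel so that different wavelengths can be absorbed at different rates, and the interface here is illustrative rather than Takua’s actual BSDF API.

```cpp
#include <cmath>
#include <functional>

struct Color {
    float r, g, b;
};

// Uniform medium: Beer-Lambert transmittance T = exp(-mu * distance), applied
// per channel so that, for example, absorbing red and green more strongly
// than blue yields blue-tinted water.
Color uniformTransmittance(const Color& attenuationCoefficient, float distance) {
    return { std::exp(-attenuationCoefficient.r * distance),
             std::exp(-attenuationCoefficient.g * distance),
             std::exp(-attenuationCoefficient.b * distance) };
}

// Non-uniform medium: ray march from the previous hit to the current hit,
// accumulating optical depth tau = sum(mu(t) * stepSize), then take exp(-tau).
// `attenuationAt` samples the medium (for example, a checkerboard volume) at
// a parametric distance t along the segment.
Color rayMarchedTransmittance(const std::function<Color(float)>& attenuationAt,
                              float segmentLength, float stepSize) {
    Color opticalDepth = { 0.0f, 0.0f, 0.0f };
    for (float t = 0.5f * stepSize; t < segmentLength; t += stepSize) {
        Color mu = attenuationAt(t);
        opticalDepth.r += mu.r * stepSize;
        opticalDepth.g += mu.g * stepSize;
        opticalDepth.b += mu.b * stepSize;
    }
    return { std::exp(-opticalDepth.r), std::exp(-opticalDepth.g),
             std::exp(-opticalDepth.b) };
}
```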

Adding previous hit information and ray march support to my BSDF system was a very straightforward task. I also added volumetric data support to Takua, allowing for the following attenuation test with a glass Stanford Dragon filled with a checkerboard red and blue medium. The renderer ray marches through the red and blue medium to calculate the total attenuation. Note how the thinner parts of the dragon allow more light through, resulting in a lighter appearance, while thicker parts of the dragon attenuate more light, resulting in a darker appearance. Also note the interesting red and blue caustics below the dragon:

Glass Stanford Dragon filled with a red and blue volumetric checkerboard attenuating medium. Rendered in Takua a0.5 using VCM.

Things got much more complicated once I added support for what I call “deep attenuation”: attenuation through multiple mediums embedded inside of each other. A simple example is an ice cube floating in a glass of liquid, which one might model in the following way:

Diagram of glass-fluid-ice interfaces. Arrows indicate normal directions.

There are two things in the above diagram that make deep attenuation difficult to implement. First, note that the ice cube is modeled without a corresponding void in the liquid; that is, a ray path that travels through the ice cube records a sequence of intersection events that goes something like “enter water, enter ice cube, exit ice cube, exit water”, as opposed to “enter water, exit water, enter ice cube, exit ice cube, enter water, exit water”. Second, note that the liquid boundary is actually slightly inside of the inner wall of the glass. Intuitively, this may seem like a mistake or an odd property, but this is actually the correct way to model a liquid-glass interface in computer graphics; see this article and this other article for details on why.

So why do these two cases complicate things? As a ray enters each new medium, we need to know what medium the ray is in so that we can execute the appropriate BSDF and get the correct attenuation for that medium. We can only evaluate the attenuation once the ray exits the medium, since attenuation is dependent on how far through the medium the ray traveled. The easy solution is to simply remember what the BSDF is when a ray enters a medium, and then use the remembered BSDF to evaluate attenuation upon the next intersection. For example, imagine the following sequence of intersections:

  1. Intersect glass upon entering glass.
  2. Intersect glass upon exiting glass.
  3. Intersect water upon entering water.
  4. Intersect water upon exiting water.

This sequence of intersections is easy to evaluate. The evaluation would go something like:

  1. Enter glass. Store glass BSDF.
  2. Exit glass. Evaluate attenuation from stored glass BSDF.
  3. Enter water. Store water BSDF.
  4. Exit water. Evaluate attenuation from stored water BSDF.

So far so good. However, remember that in the first case, sometimes we might not have a surface intersection to mark that we’ve exited one medium before entering another. The following scenario demonstrates how this first case results in missed attenuation evaluations:

  1. Intersect water upon entering water.
  2. Exit water, but no intersection!
  3. Intersect ice upon entering ice.
  4. Intersect ice upon exiting ice.
  5. Enter water again, but no intersection either!
  6. Intersect water upon exiting water.

The evaluation sequence ends up playing out okay:

  1. Enter water. Store water BSDF.
  2. Exit water, but no intersection. No BSDF evaluated.
  3. Enter ice. Intersection occurs, so evaluate attenuation from stored water BSDF. Store ice BSDF.
  4. Exit ice. Evaluate attenuation from stored ice BSDF.
  5. Enter water again, but no intersection, so no BSDF stored.
  6. Exit water…. but there is no previous BSDF stored! No attenuation is evaluated!

Alternatively, in step 6, instead of no previous BSDF, we might still have the ice BSDF stored and evaluate attenuation based on the ice. However, this result is still wrong, since we’re now using the ice BSDF for the water.

One simple solution to this problem is to keep a stack of previously seen BSDFs with each ray instead of just storing the previously seen BSDF. When the ray enters a medium through an intersection, we push a BSDF onto the stack. When the ray exits a medium through an intersection, we evaluate whatever BSDF is on the top of the stack and pop the stack. Keeping a stack works well for the previous example case:

  1. Enter water. Push water BSDF on stack.
  2. Exit water, but no intersection. No BSDF evaluated.
  3. Enter ice. Intersection occurs, so evaluate water BSDF from top of stack. Push ice BSDF on stack.
  4. Exit ice. Evaluate ice BSDF from top of stack. Pop ice BSDF off stack.
  5. Enter water again, but no intersection, so no BSDF stored.
  6. Exit water. Intersection occurs, so evaluate water BSDF from top of stack. Pop water BSDF off stack.

Excellent, we now have evaluated different medium attenuations in the correct order, haven’t missed any evaluations or used the wrong BSDF for a medium, and as we exit the water and ice our stack is now empty as it should be. The first case from above is now solved… what happens with the second case though? Imagine the following sequence of intersections where the liquid boundary is inside of the two glass boundaries:

  1. Intersect glass upon entering glass.
  2. Intersect water upon entering water.
  3. Intersect glass upon exiting glass.
  4. Intersect water upon exiting water.

The evaluation sequence using a stack is:

  1. Enter glass. Push glass BSDF on stack.
  2. Enter water. Evaluate glass attenuation from top of stack. Push water BSDF.
  3. Exit glass. Evaluate water attenuation from top of stack, pop water BSDF.
  4. Exit water. Evaluate glass attenuation from top of stack, pop glass BSDF.

The evaluation sequence is once again in the wrong order; we just used the glass attenuation when we were traveling through water at the end! Solving this second case requires a modification to our stack based scheme. Instead of popping the top of the stack every time we exit a medium, we should scan the stack from the top down and pop the first instance of a BSDF matching the BSDF of the surface we just exited through. This modified stack results in:

  1. Enter glass. Push glass BSDF on stack.
  2. Enter water. Evaluate glass attenuation from top of stack. Push water BSDF.
  3. Exit glass. Evaluate water attenuation from top of stack. Scan stack and find first glass BSDF matching the current surface’s glass BSDF and pop that BSDF.
  4. Exit water. Evaluate water attenuation from top of stack. Scan stack and pop first matching water BSDF.

At this point, I should mention that pushing/popping onto the stack should only occur when a ray travels through a surface. When the ray simply reflects off of a surface, an intersection has occurred and therefore attenuation from the top of the stack should still be evaluated, but the stack itself should not be modified. This way, we can support diffuse inter-reflections inside of an attenuating medium and get the correct diffuse inter-reflection with attenuation between diffuse bounces! Using this modified stack scheme for attenuation evaluation, we can now correctly handle all deep attenuation cases and embed as many attenuating mediums in each other as we could possibly want.

…or at least, I think so. I plan on running more tests before conclusively deciding this all works. So there may be a followup to this post later if I have more findings.
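
Putting the whole scheme together, the medium tracking might look something like the minimal sketch below (the names are illustrative): evaluate the top of the stack at every intersection while inside a medium, push on entry, never modify the stack on reflections, and on exit scan down from the top for the first entry matching the surface’s BSDF and remove it.

```cpp
#include <vector>

struct Bsdf;  // attenuating BSDF associated with a medium

struct MediumStack {
    std::vector<const Bsdf*> stack;

    // The medium the ray is currently traveling through; evaluate this BSDF's
    // attenuation over the segment ending at the current intersection.
    // Reflections evaluate this too, but never modify the stack.
    const Bsdf* current() const {
        return stack.empty() ? nullptr : stack.back();
    }

    // Ray transmitted into a medium through a surface: push that surface's BSDF.
    void enter(const Bsdf* bsdf) {
        stack.push_back(bsdf);
    }

    // Ray transmitted out through a surface: scan from the top of the stack and
    // remove the first entry matching that surface's BSDF. This handles the
    // overlapping glass/water interface case, where exits do not happen in
    // strict last-in-first-out order.
    void exit(const Bsdf* bsdf) {
        for (int i = (int)stack.size() - 1; i >= 0; --i) {
            if (stack[i] == bsdf) {
                stack.erase(stack.begin() + i);
                return;
            }
        }
        // No match: we exited a medium we never saw the entry for (for example,
        // the camera started inside it); nothing to pop.
    }
};
```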

A while back, I wrote a PIC/FLIP fluid simulator. However, at the time, Takua Render didn’t have attenuation support, so I wound up rendering my simulations with Vray. Now that Takua a0.5 has robust deep attenuation support, I went back and used some frames from my fluid simulator as tests. The image at the top of this post is a simulation frame from my fluid simulator, rendered entirely with Takua a0.5. The water is set to attenuate red and green light more than blue light, resulting in the blue appearance of the water. In addition, the glass has a slight amount of hazy green attenuation too, much like real aquarium glass. As a result, the glass looks greenish from the ends of each glass plate, but is clear when looking through each plate, again much like real glass. Here are two more renders:

Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.