Shadow Terminator in Takua

I recently implemented two techniques in Takua for solving the harsh shadow terminator problem; I implemented both the Disney Animation solution [Chiang et al. 2019] that we published at SIGGRAPH 2019, and the Sony Imageworks technique [Estevez et al. 2019] published in Ray Tracing Gems. We didn’t show too many comparisons between the two techniques (which I’ll refer to as the Chiang and Estevez approaches, respectively) in our SIGGRAPH 2019 presentation, and we didn’t show comparisons on any actual “real-world” scenes, so I thought I’d do a couple of my own renders using Takua as a bit of a mini-followup and share a handful of practical implementation tips. For a recap of the harsh shadow terminator problem, please see either the Estevez paper or the slides from the Chiang talk, which both do excellent jobs of describing the problem and why it happens in detail. Here’s a small scene that I made for this post, thrown together using some Evermotion assets that I had sitting around:

Figure 1: A simple bedroom scene, rendered in Takua Renderer. This image was rendered using the Chiang 2019 shadow terminator solution.

In this scene, all of the blankets and sheets and pillows on the bed use a fabric material that uses extremely high-frequency, high-resolution normal maps to achieve the fabric-y fiber-y look. Because of these high-frequency normal maps, the bedding is susceptible to the harsh shadow terminator problem. All of the bedding also has diffuse transmission and a very slight amount of high roughness specularity to emulate the look of a sheen lobe, making the material (and therefore this comparison) overall more interesting than just a single diffuse lobe.

Since the overall scene is pretty brightly lit and the bed is lit from all directions either by direct illumination from the window or bounce lighting from inside of the room, the shadow terminator problem is not as apparent in this scene; it’s still there, but it’s much more subtle than in the examples we showed in our talk. Below are some interactive comparisons between renders using Chiang 2019, Estevez 2019, and no shadow terminator fix; drag the slider left and right to compare:

Figure 2: The bedroom scene rendered in Takua Renderer using Chiang 2019 (left) and no harsh shadow terminator fix (right). For a full screen comparison, click here.

Figure 3: The bedroom scene rendered in Takua Renderer using Chiang 2019 (left) and Estevez 2019 (right). For a full screen comparison, click here.

Figure 4: The bedroom scene rendered in Takua Renderer using no normal mapping (left) and normal mapping with no harsh shadow terminator fix (right). For a full screen comparison, click here.

If you would like to compare the 4K renders directly, they are located here: Chiang 2019, Estevez 2019, No Fix, No Normal Mapping. As mentioned above, due to this scene being brightly lit, differences between the two techniques and not having any harsh shadow terminator fix at all will be a bit more subtle. However, differences are still visible, especially in brighter areas of the blanket and white pillows. Note that in this scenario, the difference between Chiang 2019 and Estevez 2019 is fairly small, while the difference between using either shadow terminator fix and not having a fix is more apparent. Also note how both Chiang 2019 and Estevez 2019 produce results that come pretty close to matching the reference image with no normal mapping; this is good, since we would expect fix techniques to match the reference image more closely than not having a fix!

If we remove the bedroom set and put the bed onto more of a studio lighting setup with two area lights and a seamless grey backdrop, we can start seeing more prominent differences between the two techniques and between either technique and no fix. Seeing how everything plays out in this type of lighting setup is useful, since this is the type of render that one often sees as part of a standard lookdev department’s workflow:

Figure 5: The bed in a studio lighting setup, rendered in Takua Renderer using Chiang 2019 (left) and no harsh shadow terminator fix (right). For a full screen comparison, click here.

Figure 6: The bed in a studio lighting setup, rendered in Takua Renderer using Chiang 2019 (left) and Estevez 2019 (right). For a full screen comparison, click here.

Figure 7: The bed in a studio lighting setup, rendered in Takua Renderer using no normal mapping (left) and normal mapping with no harsh shadow terminator fix (right). For a full screen comparison, click here.

If you would like to compare the 4K renders directly for the studio lighting setup, they are located here: Chiang 2019, Estevez 2019, No Fix, No Normal Mapping. In this setup, we can now see differences between the four images much more clearly. Compared to the no normal mapping reference, the render with no fix produces considerably more darkening on silhouettes, and the harsh sudden transition from bright to shadowed areas is much more apparent. In the render with no fix, the bedding suddenly looks a lot less soft and starts to look a little more like a hard solid surface instead of like fabric.

Chiang 2019 and Estevez 2019 both restore more of the soft fabric look by softening out the harsh shadow terminator areas, but the differences between Chiang 2019 and Estevez 2019 become more apparent and interesting in this setting. Chiang 2019 produces an overall softer look that has shadow terminators that more closely match the reference with no normal mapping, but Chiang 2019 produces a slightly darker look overall compared to Estevez 2019. Estevez 2019 doesn’t match the reference’s shadow terminators quite as closely as Chiang 2019, but manages to preserve more of the overall energy. In Figure 5 in the Chiang 2019 paper, we explain where this difference comes from: for small shading normal deviations, Estevez 2019 produces less shadowing than our method, whereas for larger shading normal deviations, Estevez 2019 produces more shadowing than our method. As a result, Estevez 2019 generally produces a higher contrast look compared to Chiang 2019.

All of these differences are more apparent in a close-up crop of the full 4K render. Here are comparisons of the same studio lighting setup from above, but cropped in; pay close attention to the area slightly right of center of the image, where the white blanket overhangs the edge of the bed:

Figure 8: Crop of the studio lighting setup render from earlier, using Chiang 2019 (left) and no harsh shadow terminator fix (right). For a larger comparison, click here.

Figure 9: Crop of the studio lighting setup render from earlier, using Chiang 2019 (left) and Estevez 2019 (right). For a larger comparison, click here.

Figure 10: Crop of the studio lighting setup render from earlier, using no normal mapping (left) and normal mapping with no harsh shadow terminator fix (right). For a larger comparison, click here.

Of course, the scenario that makes the harsh shadow terminator problem the most apparent is when there is a single strong light source and we are viewing the scene from an angle from which we can see areas where the light hits surfaces at a glancing angle. These types of lighting setups are often used for checking silhouettes and backlighting and whatnot in modeling and lookdev turntable renders. In the comparisons below, the differences are most noticeable in the folds and on the shadowed sides of all of the bedding:

Figure 11: The bed lit with a single very bright light, rendered in Takua Renderer using Chiang 2019 (left) and no harsh shadow terminator fix (right). For a full screen comparison, click here.

Figure 12: The bed lit with a single very bright light, rendered in Takua Renderer using Chiang 2019 (left) and Estevez 2019 (right). For a full screen comparison, click here.

Figure 13: The bed lit with a single very bright light, rendered in Takua Renderer using no normal mapping (left) and normal mapping with no harsh shadow terminator fix (right). For a full screen comparison, click here.

If you would like to compare the 4K renders directly for the single light source renders, they are located here: Chiang 2019, Estevez 2019, No Fix, No Normal Mapping. With a single light source, the differences between the four images are now very clear, since a single light setup produces strong contrast between the lit and shadowed parts of the image. The harsh shadow terminator problem is especially visible in the folds of the blanket, where we can see one side of the fold fully lit and one side of the fold in shadow (although because the bedding all has diffuse transmission, the harsh shadow terminator is still not as prevalent as it would be for a purely diffuse reflecting surface). Something else that is interesting is how the bedding with no shadow terminator fix overall appears slightly brighter than the bedding with no normal mapping; this is because the shading normals “bend” more light towards the light source. Chiang 2019 restores the overall brightness of the bedding back to something closer to the reference with no normal mapping but softens out more of the fine detail from the normal mapping, while Estevez 2019 preserves more of the fine details but has a brightness level closer to the render with no fix.

Just like in the studio lighting renders, differences become more apparent in close-up crops of the full 4K render. Here are some cropped in comparisons, this time centered more on the top of the bed than on the edge. In these crops, the glancing light angles make the shadow terminators more apparent in the folds of the blankets and such:

Figure 14: Crop of the single light render from earlier, using Chiang 2019 (left) and no harsh shadow terminator fix (right). For a larger comparison, click here.

Figure 15: Crop of the single light render from earlier, using Chiang 2019 (left) and Estevez 2019 (right). For a larger comparison, click here.

Figure 16: Crop of the single light render from earlier, using no normal mapping (left) and normal mapping with no harsh shadow terminator fix (right). For a larger comparison, click here.

In the end, I don’t think either approach is better than the other, and from a physical basis there really isn’t a “right” answer, since nothing about shading normals is physical to begin with; I think it comes down to personal preference and the requirements of the art direction on a given project. Our artists at Walt Disney Animation Studios generally prefer the look of Chiang 2019 because of the lighting setups they usually work with, but I know that other artists prefer the look of Estevez 2019 because they have different requirements to meet.

Fortunately, Chiang 2019 and Estevez 2019 are both really easy to implement! Both techniques can be implemented in a handful of lines of code, and are easy to apply to any modern physically based shading model. We didn’t actually include source code in our SIGGRAPH talk, mostly because we figured that translating the math from our short paper into code should be very straightforward and thus, including source code that is basically a direct transcription of the math into C++ would almost be insulting to the intelligence of the reader. However, since then, I’ve gotten a surprising number of emails asking for source code, so here’s the math and the corresponding C++ code from my implementation in Takua Renderer. Let G’ be the additional shadow terminator term that we will multiply the Bsdf result with:

\[ G = \min\bigg[1, \frac{\langle\omega_g,\omega_i\rangle}{\langle\omega_s,\omega_i\rangle\langle\omega_g,\omega_s\rangle}\bigg] \]
\[ G' = - G^3 + G^2 + G \]
// Note: outputDirection is expected to be the direction pointing toward the
// light; see the implementation notes below for how this interacts with
// bidirectional integrators.
float calculateChiang2019ShadowTerminatorTerm(const vec3& outputDirection,
                                              const vec3& shadingNormal,
                                              const vec3& geometricNormal) {
    float NDotL = max(0.0f, dot(shadingNormal, outputDirection));
    float NGeomDotL = max(0.0f, dot(geometricNormal, outputDirection));
    float NGeomDotN = max(0.0f, dot(geometricNormal, shadingNormal));
    if (NDotL == 0.0f || NGeomDotL == 0.0f || NGeomDotN == 0.0f) {
        return 0.0f;
    } else {
        float G = NGeomDotL / (NDotL * NGeomDotN);
        if (G <= 1.0f) {
            float smoothTerm = -(G * G * G) + (G * G) + G; // smoothTerm is G' in the math
            return smoothTerm;
        }
    }
    return 1.0f;
}

That’s all there is to it! Source code for Estevez 2019 is provided as part of the Ray Tracing Gems Github repository, but for the sake of completeness, my implementation, which is just the sample implementation streamlined into a single function, is included below:

float calculateEstevez2019ShadowTerminatorTerm(const vec3& outputDirection,
                                               const vec3& shadingNormal,
                                               const vec3& geometricNormal) {
    float cos_d = min(abs(dot(geometricNormal, shadingNormal)), 1.0f);
    float tan2_d = (1.0f - cos_d * cos_d) / (cos_d * cos_d);
    float alpha2 = clamp(0.125f * tan2_d, 0.0f, 1.0f);

    float cos_i = max(abs(dot(geometricNormal, outputDirection)), 1e-6f);
    float tan2_i = (1.0f - cos_i * cos_i) / (cos_i * cos_i);
    float spi_shadow_term = 2.0f / (1.0f + sqrt(1.0f + alpha2 * tan2_i));
    return spi_shadow_term;
}
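For reference, here is the math that the function above evaluates, transcribed back from the code (the notation here is mine, not the chapter’s): letting \(\theta_d\) be the angle between the geometric and shading normals, and \(\theta_i\) be the angle between the geometric normal and the direction toward the light,

\[ \alpha^2 = \min\bigg(1, \frac{\tan^2\theta_d}{8}\bigg) \]
\[ G_{SPI} = \frac{2}{1 + \sqrt{1 + \alpha^2\tan^2\theta_i}} \]

This has the same form as a Smith-style microfacet shadowing term, with the effective roughness derived from how far the shading normal deviates from the geometric normal.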

Finally, I have a handful of small implementation notes. First, to apply either Chiang 2019 or Estevez 2019 to your existing physically based shading model, just multiply the additional shadow terminator term with the contribution for each lobe that needs adjusting. Technically speaking G’ is an adjustment to the G shadowing term in a standard microfacet model, but multiplying there versus multiplying with the overall lobe contribution works out to be the same thing. If your Bsdf supports multiple shading normals for different specular lobes, you’ll need to calculate a separate shadow terminator term for each shading normal. Second, note that both Chiang 2019 and Estevez 2019 are described with respect to unidirectional path tracing from the camera. This frame of reference is very important; both techniques work specifically based on the outgoing direction being the direction towards a potential light source, meaning that these techniques actually aren’t reciprocal by default. The Estevez 2019 paper found that the shadow terminator term can be made reciprocal by just applying the term to both incoming and outgoing directions, but they also found that this adjustment can make edges too dark. Instead, in order to make both techniques compatible with bidirectional path tracing integrators, I add in a check for whether the incoming or outgoing direction is pointed at a light, and feed the appropriate direction into the shadow terminator function. Doing this check is enough to make my bidirectional renders match my unidirectional ones; intuitively this approach is similar to the check one has to carry out when applying adjoint Bsdf adjustments [Veach 1996] for shading normals and refraction.
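To make the notes above a bit more concrete, here is a minimal sketch of applying the terminator term to a single Lambertian lobe, including the direction check for bidirectional integrators. Note that `Vec3`, `chiangTerm`, `evalDiffuseLobe`, and `wiPointsTowardLight` are illustrative names for this sketch, not Takua’s actual API; the terminator function is just a restatement of the listing from earlier so that this sketch is self-contained:

```cpp
#include <algorithm>
#include <cmath>

// Minimal illustrative vector type for this sketch; not Takua's actual math library.
struct Vec3 {
    float x, y, z;
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The Chiang 2019 term from the listing above, restated here so that this
// sketch is self-contained. towardLight must point toward the light.
static float chiangTerm(const Vec3& towardLight, const Vec3& shadingNormal,
                        const Vec3& geometricNormal) {
    float NDotL = std::max(0.0f, dot(shadingNormal, towardLight));
    float NGeomDotL = std::max(0.0f, dot(geometricNormal, towardLight));
    float NGeomDotN = std::max(0.0f, dot(geometricNormal, shadingNormal));
    if (NDotL == 0.0f || NGeomDotL == 0.0f || NGeomDotN == 0.0f) {
        return 0.0f;
    }
    float G = NGeomDotL / (NDotL * NGeomDotN);
    return (G <= 1.0f) ? (-(G * G * G) + (G * G) + G) : 1.0f;
}

// Hypothetical evaluation of a single Lambertian lobe: the terminator term
// simply multiplies the lobe's contribution. The wiPointsTowardLight flag is
// the bidirectional check described above: whichever of wi/wo points toward
// the light is the direction that must be fed to the terminator term.
static Vec3 evalDiffuseLobe(const Vec3& albedo, const Vec3& wi, const Vec3& wo,
                            const Vec3& shadingNormal, const Vec3& geometricNormal,
                            bool wiPointsTowardLight) {
    const float invPi = 0.3183098861837907f;
    float cosTheta = std::max(0.0f, dot(shadingNormal, wi));
    const Vec3& towardLight = wiPointsTowardLight ? wi : wo;
    float terminator = chiangTerm(towardLight, shadingNormal, geometricNormal);
    float s = invPi * cosTheta * terminator;
    return Vec3{albedo.x * s, albedo.y * s, albedo.z * s};
}
```

For a multi-lobe Bsdf with per-lobe shading normals, the terminator term would be evaluated once per shading normal and multiplied into the corresponding lobe’s contribution.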

That’s pretty much it! If you want the details for how these two techniques are derived and why they work, I strongly encourage reading the Estevez 2019 chapter in Ray Tracing Gems and reading through both the short paper and the presentation slides / notes for the Chiang 2019 SIGGRAPH talk.

References

Matt Jen-Yuan Chiang, Yining Karl Li, and Brent Burley. 2019. Taming the Shadow Terminator. In ACM SIGGRAPH 2019 Talks. Article 71.

Alejandro Conty Estevez, Pascal Lecocq, and Clifford Stein. 2019. A Microfacet-Based Shadowing Function to Solve the Bump Terminator Problem. Ray Tracing Gems (2019), 149-158.

Eric Veach. 1996. Non-Symmetric Scattering in Light Transport Algorithms. In Proc. of Eurographics Workshop on Rendering (Rendering Techniques 1996). 82-91.

Errata

Thanks to Matt Pharr for noticing and pointing out a minor bug in the calculateChiang2019ShadowTerminatorTerm() implementation; the code has been updated with a fix.

RenderMan Art Challenge: Woodville

Introduction

Every once in a while, I make a point of spending some significant personal time working on a personal project that uses tools outside of the stuff I’m used to working on day-to-day (Disney’s Hyperion renderer professionally, Takua Renderer as a hobby). A few times each year, Pixar’s RenderMan group holds an art challenge contest where Pixar provides an un-shaded, un-UV’d base model and contestants are responsible for layout, texturing, shading, lighting, additional modeling of supporting elements and the surrounding environment, and producing a final image. I thought the most recent RenderMan art challenge, “Woodville”, would make a great excuse for playing with RenderMan 22 for Maya; here’s the final image I came up with:

Figure 1: My entry to Pixar's RenderMan Woodville Art Challenge, titled "Morning Retreat". Base treehouse model is from Pixar; all shading, lighting, additional modeling, and environments are mine. Concept by Vasylina Holod. Model by Alex Shilt © Disney / Pixar - RenderMan "Woodville" Art Challenge.

One big lesson I have learned since entering the rendering world is that there is no such thing as the absolute best overall renderer- there are only renderers that are the best suited for particular workflows, tasks, environments, people, etc. Every in-house renderer is the best renderer in the world for the particular studio that built that renderer, and every commercial renderer is the best renderer in the world for the set of artists that have chosen that renderer as their tool of choice. Another big lesson that I have learned is that even though the Hyperion team at Disney Animation has some of the best rendering engineers in the world, so do all of the other major rendering teams, both commercial and in-house. These lessons are humbling to learn, but also really cool and encouraging if you think about it- these lessons mean that for any given problem that arises in the rendering world, as an academic field and as an industry, we get multiple attempts to solve it from many really brilliant minds from a variety of backgrounds and a variety of different contexts and environments!

As a result, something I’ve come to strongly believe is that for rendering engineers, there is enormous value in learning to use outside renderers that are not the one we work on day-to-day ourselves. At any given moment, I try to have at least a working familiarity with the latest versions of Pixar’s RenderMan, Solid Angle (Autodesk)’s Arnold, and Chaos Group’s V-Ray and Corona renderers. All of these renderers are excellent, cutting edge tools, and when new artists join our studio, these are the most common commercial renderers that new artists tend to know how to use. Therefore, knowing how these four renderers work and what vocabulary is associated with them tends to be useful when teaching new artists how to use our in-house renderer, and for providing a common frame of reference when we discuss potential improvements and changes to our in-house renderer. All of the above is the mindset I went into this project with, so this post is meant to be something of a breakdown of what I did, along with some thoughts and observations made along the way. This was a really fun exercise, and I learned a lot!

Layout and Framing

For this art challenge, Pixar supplied a base model without any sort of texturing, shading, lighting, or anything else. The model is by Alex Shilt, based on a concept by Vasylina Holod. Here is a simple render showing what is provided out of the box:

Figure 2: Base model provided by Pixar, rendered against a white cyclorama background using a basic skydome.

I started with just scouting for some good camera angles. Since I really wanted to focus on high-detail shading for this project, I decided from close to the beginning to pick a close-up camera angle that would allow for showcasing shading detail, at the trade-off of not depicting the entire treehouse. A nice (lazy) bonus is that picking a close-up camera angle meant that I didn’t need to shade the entire treehouse; just the parts in-frame. Instead of scouting using just the GL viewport in Maya, I tried using RenderMan for Maya 22’s IPR mode, which replaces the Maya viewport with a live RenderMan render. This mode wound up being super useful for scouting; being able to interactively play with depth of field settings and see even basic skydome lighting helped a lot in getting a feel for each candidate camera angle. Here are a couple of different white clay test renders I did while trying to find a good camera position and framing:

Figure 3: Candidate camera angle with a close-up focus on the entire top of the treehouse.

Figure 4: Candidate camera angle with a close-up focus on a specific triangular A-frame treehouse cabin.

Figure 5: Candidate camera angle looking down from the top of the treehouse.

Figure 6: Candidate camera angle with a close-up focus on the lower set of treehouse cabins.

I wound up deciding to go with the camera angle and framing in Figure 6 for several reasons. First off, there were just a lot of bits that looked fun to shade, such as the round tower cabin on the left side of the treehouse. Second, I felt that this angle would allow me to limit how expansive of an environment I would need to build around the treehouse. I decided around this point to put the treehouse in a big mountainous mixed coniferous forest, with the reasoning being that tree trunks as large as the ones in the treehouse could only come from huge redwood trees, which only grow in mountainous coniferous forests. With this camera angle, I could make the background environment a single mountainside covered in trees and not have to build a wider vista.

UVs and Geometry

The next step that I took was to try to shade the main tree trunks, since the scale of the tree trunks worried me the most about the entire project. Before I could get to texturing and shading though, I first had to UV-map the tree trunks, and I quickly discovered that before I could even UV-map the tree trunks, I would have to retopologize the meshes themselves, since the tree trunk meshes came with some really messy topology that was basically un-UV-able. I retopologized the mesh in ZBrush and exported it at a lower resolution than the original mesh, and then brought it back into Maya, where I used a shrink-wrap deformer to conform the lower res retopologized mesh back onto the original mesh. The reasoning here was that a lower resolution mesh would be easier to UV unwrap and that displacement later would restore missing detail. Figure 7 shows the wireframe of the original mesh on the left, and the wireframe of my retopologized mesh on the right:

Figure 7: Original mesh wireframe on the left, my retopologized version on the right.

In previous projects, I’ve found a lot of success in using Wenzel Jakob’s Instant Meshes application to retopologize messy geometry, but this time around I used ZBrush’s ZRemesher tool since I wanted as perfect a quad grid as possible (at the expense of losing some mesh fidelity) to make UV unwrapping easier. I UV-unwrapped the remeshed tree trunks by hand; the general approach I took was to slice the tree trunks into a series of stacked cylinders and then unroll each cylinder into as rectangular of a UV shell as I could. For texturing, I started with some photographs of redwood bark I found online, turned them greyscale in Photoshop and adjusted levels and contrast to produce height maps, and then took the height maps and source photographs into Substance Designer, where I made the maps tile seamlessly and also generated normal maps. I then took the tileable textures into Substance Painter and painted the tree trunks using a combination of triplanar projections and manual painting. At this point, I had also blocked in a temporary forest in the background made from just instancing two or three tree models all over the place, which I found useful for being able to help get a sense of how the shading on the treehouse was working in context:

Figure 8: In-progress test render with shaded tree trunks and temporary background forest blocked in.

Next up, I worked on getting base shading done for the cabins and various bits and bobs on the treehouse. The general approach I took for the entire treehouse was to do base texturing and shading in Substance Painter, and then add wear and tear, aging, and moss in RenderMan through procedural PxrLayerSurface layers driven by a combination of procedural PxrRoundCube and PxrDirt nodes and hand-painted dirt and wear masks. First though, I had to UV-unwrap all of the cabins and stuff. I tried using Houdini’s Auto UV SOP that comes with Houdini’s Game Tools package… the result (for an example, see Figure 9) was really surprisingly good! In most cases I still had to do a lot of manual cleanup work, such as re-stitching some UV shells together and re-laying-out all of the shells, but the output from Houdini’s Auto UV SOP provided a solid starting point. For each cabin, I grouped surfaces that were going to have a similar material into a single UDIM tile, and sometimes I split similar materials across multiple UDIM tiles if I wanted more resolution. This entire process was… not really fun… it took a lot of time and was basically just busy-work. I vastly prefer being able to paint Ptex instead of having to UV-unwrap and lay out UDIM tiles, but since I was using Substance Painter, Ptex wasn’t an option on this project.

Figure 9: Example of one of the cabins run through Houdini's Auto UV SOP. The cabin is on the left; the output UVs are on the right.

Texturing in Substance Painter and Shading

In Substance Painter, the general workflow I used was to start with multiple triplanar projections of (heavily edited) Quixel Megascans surfaces masked and oriented to different sections of a surface, and then paint on top. Through this process, I was able to get bark to flow with the curves of each log and whatnot. Then, in RenderMan for Maya, I took all of the textures from Substance Painter and used them to drive the base layer of a PxrLayerSurface shader. All of the textures were painted to be basically greyscale or highly desaturated, and then in Maya I used PxrColorCorrect and PxrVary nodes to add in color. This way, I was able to iteratively play with and dial in colors in RenderMan’s IPR mode without having to roundtrip back to Substance Painter too much. Since the camera in my frame is relatively close to the treehouse, having lots of detail was really important. I put high-res displacement and normal maps on almost everything, which I found helpful for getting that extra detail in. I found that dicing to finer than one micropolygon per pixel was useful for getting extra detail in with displacement, at the cost of a bit more memory usage (which was perfectly tolerable in my case).

One of the unfortunate things about how I chose to UV-unwrap the tree trunks is that UV seams cut across parts of the tree trunks that are visible to the camera; as a result, if you zoom into the final 4K renders, you can see tiny line artifacts in the displacement where UV seams meet. These artifacts arise from displacement values not interpolating smoothly across UV seams when texture filtering is in play; this problem can sometimes be avoided by very carefully hiding UV seams, but sometimes there is no way to do so. The problem in my case is somewhat reduced by expanding displacement values beyond the boundaries of each UV shell in the displacement textures (most applications like Substance Painter can do this natively), but again, this doesn’t completely solve the problem, since expanding values beyond boundaries can only go so far until you run into another nearby UV shell and since texture filtering widths can be variable. This problem is one of the major reasons why we use Ptex so heavily at Disney Animation; Ptex’s robust cross-face filtering functionality sidesteps this problem entirely. I really wish Substance Painter could output Ptex!

For dialing in the colors of the base wood shaders, I created versions of the wood shader base color textures that looked like newer wood and older sun-bleached wood, and then I used a PxrBlend node in each wood shader to blend between the newer and older looking wood, along with procedural wear to make sure that the blend wasn’t totally uniform. Across all of the various wood shaders in the scene, I tied all of the blend values to a single PxrToFloat node, so that I could control how aged all wood across the entire scene looks with a single value. For adding moss to everything, I used a PxrRoundCube triplanar to set up a base mask for where moss should go. The triplanar mask was set up so that moss appears heavily on the underside of objects, less on the sides, and not at all on top. The reasoning for making moss appear on undersides is that in the type of conifer forest I set my scene in, moss tends to grow where moisture and shade are available, which tends to be on the underside of things. The moss itself was also driven by a triplanar projection and was combined into each wood shader as a layer in PxrLayerSurface. I also did some additional manual mask painting in Substance Painter to get moss into some more crevices and corners and stuff on all of the wooden sidings and the wooden doors and whatnot. Finally, the overall amount of moss across all of the cabins is modulated by another single PxrToFloat node, allowing me to control the overall amount of moss using another single value. Figure 10 shows how I could vary the age of the wood on the cabins, along with the amount of moss.

Figure 10: Example of age and moss controllability on one of the cabins. The top row shows, going from left to right, 0% aged, 50% aged, and 100% aged. The bottom row shows, going from left to right, 0% moss, 50% moss, and 100% moss. The final values used were close to 60% for both age and moss.

The spiral staircase initially made me really worried; I originally thought I was going to have to UV unwrap the whole thing, and stuff like the railings are really not easy to unwrap. But then, after a bit of thinking, I realized that the spiral staircase is likely a fire escape staircase, and so it could be wrought iron or something. Going with a wrought iron look allowed me to handle the staircase mostly procedurally, which saved a lot of time. Going along with the idea of the spiral staircase being a fire escape, I figured that the actual main way to access all of the different cabins in the treehouse must be through staircases internal to the tree trunks. This idea informed how I handled that long skinny window above the front door; I figured it must be a window into a stairwell. So, I put a simple box inside the tree behind that window, with a light at the top. That way, a hint of inner space would be visible through the window:

Figure 11: Simple box inside the tree behind the lower window, to give a hint of inner space.

In addition to shading everything, I also had to make some modifications to the provided treehouse geometry. I noticed that in the provided model, the satellite dish floats above its support pole without any actual connecting geometry, so I modeled a little connecting bit for the satellite dish. Also, I thought it would be fun to put some furniture in the round cabin, so I decided to make the walls into plate glass. Once I made the walls into plate glass, I realized that I needed to make a plausible interior for the round cabin. Since the only way into the round cabin must be through a staircase in the main tree trunk, I modeled a new door in the back of the round cabin. With everything shaded and the geometric modifications in place, here is how everything looked at this point:

Figure 12: In-progress test render with initial fully shaded treehouse, along with geometric modifications. Click for 4K version.

Set Dressing the Treehouse

The next major step was adding some story elements. I wanted the treehouse to feel lived in, like the treehouse is just somebody’s house (a very unusual house, but a house nonetheless). To help convey that feeling, my plan was to rely heavily on set dressing to hint at the people living here. So the goal was to add stuff like patio furniture, potted plants, laundry hanging on lines, furniture visible through windows, the various bits and bobs of life, etc.

I started by adding a nice armchair and a lamp to the round tower thing. Of course the chair is an Eames Lounge Chair, and to match, the lamp is a modern style tripod floor lamp type thing. I went with a chair and a lamp because I think that round tower would be a lovely place to sit and read and look out the window at the surrounding nature. I thought it would be kind of fun to make all of the furniture modern and stylish, but have all of the modern furniture be inside of a more whimsical exterior. Next, I extended the front porch part of the main cabin, so that I could have some room to place furniture and props and stuff. Of course any good front porch should have some nice patio furniture, so I added some chairs and a table. I also put in a hanging round swing chair type thing with a big poofy blue cushion; this entire area should be a fun place to sit around and talk in. Since the entire treehouse sits on the edge of a pond, I figured that maybe the people living here like to sit out on the front porch, relax, shoot the breeze, and fish from the pond. Since my scene is set in the morning, I figured maybe it’s late in the morning and they’ve set up some fishing lines to catch some fish for dinner later. To help sell the idea that it’s a lazy fishing morning, I added a fishing hat on one of the chairs and put a pitcher of iced tea and some glasses on the table. I also added a clothesline with some hanging drying laundry, along with a bunch of potted and hanging plants, just to add a bit more of that lived-in feel. For the plants and several of the furniture pieces that I knew I would want to tweak later, I built in controls to their shading graphs using PxrColorCorrect nodes to allow me to adjust hue and saturation later. Many of the furniture, plant, and prop models are highly modified, kitbashed, re-textured versions of assets from Evermotion and CGAxis, although some of them (notably the Eames Lounge Chair) are entirely my own.

Figure 13: In-progress test render closeup crop of the lower main cabin, with furniture and plants and props.

Figure 14: In-progress test render closeup crop of the glass round cabin and the upper smaller cabin, with furniture and plants and props.

Building the Background Forest

The last step before final lighting was to build a more proper background forest, as a replacement for the temporary forest I had used up until this point for blocking purposes. For this step, I relied heavily on Maya’s MASH toolset, which I found to provide a great combination of power and ease-of-use; for use cases involving tons of instanced geometry, I certainly found it much easier than Maya’s older Xgen toolset. MASH felt a lot more native to Maya, as opposed to Xgen, which requires a bunch of specific external file paths and file formats and whatnot. I started with just getting some kind of reasonable base texturing down onto the groundplane. In all of the in-progress renders up until this point, the ground plane was just white… you can actually tell if you look closely enough! I eventually got to a place I was happy with using a bunch of different PxrRoundCubes with various rotations, all blended on top of each other using various noise projections. I also threw in some rocks from Quixel Megascans, just to add a bit of variety. I then laid down some low-level ground vegetation, which was meant to peek through the larger trees in various areas. The base vegetation was made up of various ferns, shrubs, and small sapling-ish young conifers placed using Maya’s MASH Placer node:

Figure 15: In-progress test render of the forest floor and under-canopy vegetation.

In the old temporary background forest, the entire forest was made up of only three different types of trees, and it really showed; there was a distinct lack of color variation or tree diversity. So, for the new forest, I decided to use a lot more types of trees. Here is a rough lineup (not necessarily to scale with each other) of how all of the new tree species looked:

Figure 16: Test render of a lineup of the trees used in the final forest.

For the main forest, I hand-placed trees onto the mountain slope as instances. One cool thing I built into the forest was PxrColorCorrect nodes in all of the tree shading graphs, with all controls wired up to single master controls for hue/saturation/value so that I could shift the entire forest’s colors easily if necessary. This proved to be very useful for tuning the overall vegetation colors later while still maintaining a good amount of variation. I also intentionally left gaps in the forest around the rock formations to give some additional visual variety. Building up the entire under-layer of shrubs and saplings and stuff also paid off, since a lot of that stuff wound up peeking through various gaps between the larger trees:
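The master controls are just attributes wired into the PxrColorCorrect nodes, but the underlying idea is simple enough to sketch in a few lines of Python (a hypothetical illustration of the concept using the standard library’s colorsys module, not RenderMan’s actual node math):

```python
import colorsys

def master_color_correct(rgb, hue_shift=0.0, sat_scale=1.0, val_scale=1.0):
    """One master hue/saturation/value adjustment applied uniformly on top
    of each asset's existing shading, so per-asset variation is preserved
    while the whole forest shifts together."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + hue_shift) % 1.0           # hue wraps around the color wheel
    s = min(s * sat_scale, 1.0)         # clamp saturation to a valid range
    return colorsys.hsv_to_rgb(h, s, v * val_scale)

# Shifting every tree's base color with the same master settings keeps
# their relative variation intact while moving the forest as a whole.
forest = [(0.20, 0.45, 0.15), (0.25, 0.40, 0.10), (0.15, 0.50, 0.20)]
shifted = [master_color_correct(c, hue_shift=0.05, sat_scale=0.9) for c in forest]
```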

Figure 17: In-progress test render of the background forest.

The last step for the main forest was adding some mist and fog, which is common in Pacific Northwest type mountainous conifer forests in the morning. I didn’t have extensive experience working with volumes in RenderMan before this, so there was definitely something of a learning curve for me, but overall it wasn’t too hard to learn! I made the mist by just having a Maya Volume Noise node plug into the density field of a PxrVolume; this isn’t anything fancy, but it provided a great start for the mist/fog:

Figure 18: In-progress test render of the background forest with an initial version of mist and fog.

Lighting and Compositing

At this point, I think the entire image together was starting to look pretty good, although, without any final shot lighting, the overall vibe felt more like a spread out of an issue of National Geographic than a more cinematic still out of a film. Normally my instinct is to go with a more naturalistic look, but since part of the objective for this project was to learn to use RenderMan’s lighting toolset for more cinematic applications, I wanted to push the overall look of the image beyond this point:

Figure 19: In-progress test render with everything together, before final shot lighting.

From this point onwards, following a tutorial made by Jeremy Heintz, I broke out the volumetric mist/fog into a separate layer and render pass in Maya, which allowed for adjusting the mist/fog in comp without having to re-render the entire scene. This strategy proved to be immensely useful and a huge time saver in final lighting. Before starting final lighting, I made a handful of small tweaks, which included reworking the moss on the front cabin’s lower support frame to get rid of some visible repetition, tweaking and adding dirt on all of the windows, and dialing in saturation and hue on the clothesline and potted plants a bit more. I also changed the staircase to have aged wooden steps instead of all black cast iron, which helped blend the staircase into the overall image a bit more, and finally added some dead trees in the background forest. Finally, in a last-minute change, I wound up upgrading a lot of the moss on the main tree trunk and on select parts of the cabins to use instanced geometry instead of just being a shading effect. The geometric moss used atlases from Quixel Megascans, bunched into little moss patches, and then hand-scattered using the Maya MASH Placer tool. Upgrading to geometric moss overall provided only a subtle change to the overall image, but I think helped enormously in selling some of the realism and detail; I find it interesting how small visual details like this often can have an out-sized impact on selling an overall image.

For final lighting, I added an additional uniform atmospheric haze pass to help visually separate the main treehouse from the background forest a bit more. I also added a spotlight fog pass to provide some subtle godrays; the spotlight is a standard PxrRectLight oriented to match the angle of the sun. The PxrRectLight also has the cone modifier enabled to provide the spot effect, and has a PxrCookieLightFilter with a bit of a cucoloris pattern applied to provide the breakup effect that godrays shining through a forest canopy should have. To provide a stronger key light, I rotated the skydome until I found something I was happy with, and then I split out the sun from the skydome into separate passes. I split out the sun by painting the sun out of the skydome texture and then creating a PxrDistantLight with an exposure, color, and angle matched to what the sun had been in the skydome. Splitting out the sun then allowed me to increase the size of the sun (and decrease the exposure correspondingly to maintain the same overall brightness), which helped soften some otherwise pretty harsh sharp shadows. I also used a good number of PxrRodLightFilters to help take down highlights in some areas, lighten shadows in others, and provide overall light shaping to areas like the right hand side of the right tree trunk. I’ve conceptually known why artists like rods for some time now (especially since rods are a heavily used feature in Hyperion at my day job at Disney Animation), but I think this project helped me really understand at a more hands-on level why rods are so great for hitting specific art direction.
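The size-versus-exposure tradeoff when splitting out the sun follows directly from the fact that a distant light’s total contribution scales with its solid angle; here is a back-of-the-envelope sketch of the compensation (the 0.53° figure is the real sun’s approximate angular diameter, used purely for illustration):

```python
import math

def exposure_compensation(old_angle_deg, new_angle_deg):
    """Exposure delta (in stops) needed to keep a distant light's total
    energy constant when changing its angular diameter.

    For small angles, solid angle scales with the square of the angular
    diameter, so enlarging the sun means exposing it down by log2 of the
    area ratio."""
    area_ratio = (new_angle_deg / old_angle_deg) ** 2
    return -math.log2(area_ratio)

# Doubling the sun's angular diameter quadruples its solid angle, so the
# light should be exposed down by two stops to keep brightness matched.
print(exposure_compensation(0.53, 1.06))  # -> -2.0
```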

After much iteration, here is the final set of render passes I wound up with going into final compositing:

Figure 20: Final render, sun (key) pass. Click for 4K version.

Figure 21: Final render, sky (fill) pass. Click for 4K version.

Figure 22: Final render, practical lights pass. Click for 4K version.

Figure 23: Final render, mist/fog pass. Click for 4K version.

Figure 24: Final render, atmospheric pass. Click for 4K version.

Figure 25: Final render, spotlight pass. Click for 4K version.

In final compositing, since I had everything broken out into separate passes, I was able to quickly make a number of adjustments that otherwise would have been much slower to iterate on if I had done them in-render. I tinted the sun pass to be warmer (which is equivalent to changing the sun color in-render and re-rendering) and tweaked the exposures of the sun pass up and some of the volumetric passes down to balance out the overall image. I also applied a color tint to the mist/fog pass to be cooler, which would have been very slow to experiment with if I had changed the actual fog color in-render. I did all of the compositing in Photoshop, since I don’t have a Nuke license at home. Not having a node-based compositing workflow was annoying, so next time I’ll probably try to learn DaVinci Resolve Fusion (which I hear is pretty good).

For color grading, I mostly just fiddled around in Lightroom. I also added in a small amount of bloom by just duplicating the sun pass, clipping it to only really bright highlight values by adjusting levels in Photoshop, applying a Gaussian blur, exposing down, and adding back over the final comp. Finally, I adjusted the gamma by 0.8 and exposed up by half a stop to give some additional contrast and saturation, which helped everything pop a bit more and feel a bit more moody and warm. Figure 26 shows what all of the lighting, comp, and color grading looks like applied to a 50% grey clay shaded version of the scene, and if you don’t want to scroll all the way back to the top of this post to see the final image, I’ve included it again as Figure 27.

Figure 26: Final lighting, comp, and color grading applied to a 50% grey clay shaded version. Click for 4K version.

Figure 27: Final image. Click for 4K version.
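As a side note, the clip-blur-add bloom from the grading step is simple enough to sketch outside of Photoshop; here is a hypothetical single-scanline Python version of the same idea (the real thing operates on the full 2D image, of course):

```python
import math

def gaussian_kernel(radius, sigma):
    """Normalized 1D Gaussian weights over [-radius, radius]."""
    weights = [math.exp(-(i * i) / (2.0 * sigma * sigma))
               for i in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur_1d(values, kernel):
    """Convolve a scanline with a kernel, clamping at the edges."""
    radius = len(kernel) // 2
    out = []
    for i in range(len(values)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - radius, 0), len(values) - 1)
            acc += w * values[j]
        out.append(acc)
    return out

def add_bloom(scanline, threshold=1.0, strength=0.25, sigma=2.0):
    """Clip-blur-add bloom: keep only values above the threshold, Gaussian
    blur them, expose the result down, and add it back over the original."""
    highlights = [max(v - threshold, 0.0) for v in scanline]
    blurred = blur_1d(highlights, gaussian_kernel(int(3 * sigma), sigma))
    return [v + strength * b for v, b in zip(scanline, blurred)]
```

Pixels below the threshold are untouched, while hot highlights bleed a soft glow into their neighbors, which is exactly the effect the duplicated-and-blurred sun pass produces in comp.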

Conclusion

Overall, I had a lot of fun on this project, and I learned an enormous amount! This project was probably the most complex and difficult art project I’ve ever done. I think working on this project has shed a lot of light for me on why artists like certain workflows, which is an incredibly important set of insights for my day job as a rendering engineer. I won’t grumble as much about having to support rods in production rendering now!

Here is a neat progression video I put together from all of the test and in-progress renders that I saved throughout this entire project:

I owe several people an enormous debt of thanks on this project. My wife, Harmony Li, deserves all of my gratitude for her patience with me during this project, and also for being my art director and overall sanity checker. My coworker at Disney Animation, lighting supervisor Jennifer Yu, gave me a lot of valuable critiques, advice, and suggestions, and acted as my lighting director during the final lighting and compositing stage. Leif Pederson from Pixar’s RenderMan group provided a lot of useful tips and advice on the RenderMan contest forum as well.

Finally, my final image somehow managed to score an honorable mention in Pixar’s Art Challenge Final Results, which was a big, unexpected, pleasant surprise, especially given how amazing all of the other entries in the contest are! Since the main purpose of this project for me was as a learning exercise, doing well in the actual contest was a nice bonus, and kind of makes me think I’ll likely give the next RenderMan Art Challenge a shot too with a more serious focus on trying to put up a good showing. If you’d like to see more about my contest entry, check out the work-in-progress thread I kept up in Pixar’s Art Challenge forum; some of the text for this post was adapted from updates I made in my forum thread.

Frozen 2

Table of Contents

The 2019 film from Walt Disney Animation Studios is, of course, Frozen 2, which really does not need any additional introduction. Instead, here is a brief personal anecdote. I remember seeing the first Frozen in theaters the day it came out, and at some point halfway through the movie, it dawned on me that what was unfolding on the screen was really something special. By the end of the first Frozen, I was convinced that I had to somehow get myself a job at Disney Animation some day. Six years later, here we are, with Frozen 2’s release imminent, and here I am at Disney Animation. Frozen 2 is my fourth credit at Disney Animation, but somehow seeing my name in the credits at the wrap party for this film was even more surreal than seeing my name in the credits on my first film. Working with everyone on Frozen 2 was an enormous privilege and thrill; I’m incredibly proud of the work we have done on this film!

Under team lead Dan Teece’s leadership, for Frozen 2 we pushed Disney’s Hyperion Renderer the hardest and furthest yet to date, and I think the result really shows in the final film. Frozen 2 is stunningly beautiful to look at; seeing it for the first time in its completed form was a humbling experience, since there were many moments where I realized I honestly had no idea how our artists had managed to push the renderer as far as they did. During the production of Frozen 2, we also welcomed three superstar rendering engineers to the rendering team: Mark Lee, Joe Schutte, and Wei-Feng Wayne Huang; their contributions to our team and to Frozen 2 simply cannot be overstated!

On Frozen 2, I got to play a part on several fun and interesting initiatives! Hyperion’s modern volume rendering system saw a number of major improvements and advancements for Frozen 2, mostly centered around rendering optically thin volumes. Hyperion’s modern volume rendering system is based on null-collision tracking theory [Kutz et al. 2017], which is exceptionally well suited for dense volumes dominated by high-order scattering (such as clouds and snow). However, as anyone with experience developing a volume rendering system knows, optically thin volumes (such as mist and fog) are a major weak point for null-collision techniques. Wayne was responsible for a number of major advancements that allowed us to efficiently render mist and fog on Frozen 2 using the modern volume rendering system, and Wayne was kind enough to allow me to play something of an advisory / consulting role on that project. Also, Frozen 2 is the first feature film on which we’ve deployed Hyperion’s path guiding implementation into production; this project was the result of some very tight collaboration between Disney Animation and Disney Research Studios. Last summer, I worked with Peter Kutz, our summer intern Laura Lediaev, and with Thomas Müller from ETH Zürich / Disney Research Studios to prototype an implementation of Practical Path Guiding [Müller et al. 2017] in Hyperion. Joe Schutte then took on the massive task (as one of his first tasks on the team, no less!) of turning the prototype into a production-quality feature, and Joe worked with Thomas to develop a number of improvements to the original paper [Müller 2019]. Finally, I worked on some lighting / shading improvements for Frozen 2, which included developing a new spot light implementation for theatrical lighting, and, with Matt Chiang and Brent Burley, a solution to the long-standing normal / bump mapped shadow terminator problem [Chiang et al. 2019]. We also benefited from more improvements in our denoising tech [Dahlberg et al. 2019], which arose as a joint effort between our own David Adler, ILM, Pixar, and the Disney Research Studios rendering team.
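For readers unfamiliar with null-collision volume rendering: at its core is a free-flight sampler that marches through the volume using a constant majorant and probabilistically classifies tentative collisions as real or null. Here is a toy sketch of the basic delta (Woodcock) tracking loop; this is a textbook illustration, not Hyperion’s implementation, and the Kutz et al. 2017 paper builds far more sophisticated estimators on top of this idea:

```python
import math
import random

def delta_tracking_distance(sigma_t, sigma_maj, t_max, rng=random):
    """Sample a free-flight distance through a heterogeneous medium using
    delta (Woodcock) tracking, the basic null-collision free-flight sampler.

    sigma_t:   callable t -> extinction coefficient at distance t
    sigma_maj: majorant, an upper bound on sigma_t along the ray
    Returns the sampled collision distance, or None if the ray escapes past
    t_max without a real collision."""
    t = 0.0
    while True:
        # Advance by an exponentially distributed step under the majorant.
        t -= math.log(1.0 - rng.random()) / sigma_maj
        if t >= t_max:
            return None  # no real collision; the ray leaves the medium
        # Accept as a real collision with probability sigma_t / sigma_maj;
        # otherwise it was a null collision, so keep marching.
        if rng.random() * sigma_maj < sigma_t(t):
            return t
```

In an optically thin medium, sigma_t is tiny, so nearly every tentative collision is rejected as null and nearly every ray escapes without a real event, which gives some intuition for why thin mist and fog are a challenging regime for estimators built on this loop.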

I think Frozen projects provide an interesting window into how far rendering has progressed at Disney Animation over the past six years. We’ve basically had some Frozen project going on every few years, and each Frozen project upon completion has represented the most cutting edge rendering capabilities we’ve had at the time. The original Frozen in 2013 was the studio’s last project rendered using RenderMan, and also the studio’s last project to not use path tracing. Frozen Fever in 2015, by contrast, was one of the first projects (alongside Big Hero 6) to use Hyperion and full path traced global illumination. The jump in visual quality between Frozen and Frozen Fever was enormous, especially considering that they were released only a year and a half apart. Olaf’s Frozen Adventure, which I’ve written about before, served as the testbed for a number of enormous changes and advancements that were made to Hyperion in preparation for Ralph Breaks the Internet. Frozen 2 represents the full extent of what Hyperion can do today, now that Hyperion is a production-hardened, mature renderer backed by a team that is now very experienced. The original Frozen looked decent when it first came out, but since it was the last non-path-traced film we made, it looked dated visually just a few years later. Comparing the original Frozen with Frozen 2 is like night and day; I’m very confident that Frozen 2 will still look visually stunning and hold up well long into the future. A great example is in all of the clothing in Frozen 2; when watching the film, take a close look at all of the embroidery on all of the garments. In the original Frozen, a lot of the embroidery work is displacement mapped or even just normal mapped, but in Frozen 2, all of the embroidery is painstakingly constructed from actual geometric curves [Liu et al. 2020], and as a result every bit of embroidery is rendered in incredible detail!

One particular thing in Frozen 2 that makes me especially happy is how all of the water looks in the film, and especially how the water looks in the dark seas sequence. On Moana, we really struggled with getting whitewater and foam to look appropriately bright and white. Since that bright white effect comes from high-order scattering in volumes and at the time we were still using our old volume rendering system that couldn’t handle high-order scattering well, the artists on Moana wound up having to rely on a lot of ingenious trickery to get whitewater and foam to look just okay. I think Moana is a staggeringly beautiful film, but if you know where to look, you may be able to tell that the foam looks just a tad bit off. On Frozen 2, however, we were able to do high-order scattering, and as a result, all of the whitewater and foam in the dark seas sequence looks just absolutely amazing. No spoilers, but all I’ll say is that there’s another part in the movie that isn’t in any trailer where my jaw was just on the floor in terms of water rendering; you’ll know it when you see it. A similar effect has been done before in a previous CG Disney Animation movie, but the effect in Frozen 2 is on a far grander, far more impressive, far more amazing scale [Tollec et al. 2020].

In addition to the rendering tech advancements we made on Frozen 2, there are a bunch of other cool technical initiatives that I’d recommend reading about! Each of our films has its own distinct world and look, and the style requirements on Frozen 2 often required really cool close collaborations between the lighting and look departments and the rendering team; the “Show Yourself” sequence near the end of the film was a great example of the amazing work these collaborations can produce [Sathe et al. 2020]. Frozen 2 had a lot of characters that were actually complex effects, such as the Wind Spirit [Black et al. 2020] and the Nokk water horse [Hutchins et al. 2020]; these characters required tight collaborations between a whole swath of departments ranging from animation to simulation to look to effects to lighting. Even the forest setting of the film required new tech advancements; we’ve made plenty of forests before, but integrating huge-scale effects into the forest resulted in some cool new workflows and techniques [Joseph et al. 2020].

To give a sense of just how gorgeous Frozen 2 looks, below are some stills from the movie pulled from Blu-ray, in no particular order, 100% rendered using Hyperion. If you love seeing cutting edge rendering in action, I strongly encourage going to see Frozen 2 on the biggest screen you can find! The film has wonderful songs, a fantastic story, and developed, complex, funny characters, and of course there is not a single frame in the movie that isn’t stunningly beautiful.

Here is the part of the credits with Disney Animation’s rendering team, kindly provided by Disney! I always encourage sitting through the credits for movies, since everyone in the credits put so much hard work and passion into what you see onscreen, but I especially recommend it for Frozen 2 since there’s also a great post-credits scene.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

References

Cameron Black, Trent Correy, and Benjamin Fiske. 2020. Frozen 2: Creating the Wind Spirit. In ACM SIGGRAPH 2020 Talks. Article 22.

Matt Jen-Yuan Chiang, Yining Karl Li, and Brent Burley. 2019. Taming the Shadow Terminator. In ACM SIGGRAPH 2019 Talks. Article 71.

Henrik Dahlberg, David Adler, and Jeremy Newlin. 2019. Machine-Learning Denoising in Feature Film Production. In ACM SIGGRAPH 2019 Talks. Article 21.

David Hutchins, Cameron Black, Marc Bryant, Richard Lehmann, and Svetla Radivoeva. 2020. “Frozen 2”: Creating the Water Horse. In ACM SIGGRAPH 2020 Talks. Article 23.

Norman Moses Joseph, Vijoy Gaddipati, Benjamin Fiske, Marie Tollec, and Tad Miller. 2020. Frozen 2: Effects Vegetation Pipeline. In ACM SIGGRAPH 2020 Talks. Article 7.

Peter Kutz, Ralf Habel, Yining Karl Li, and Jan Novák. 2017. Spectral and Decomposition Tracking for Rendering Heterogeneous Volumes. ACM Transactions on Graphics (Proc. of SIGGRAPH) 36, 4 (Aug. 2017), Article 111.

Ying Liu, Jared Wright, and Alexander Alvarado. 2020. Making Beautiful Embroidery for “Frozen 2”. In ACM SIGGRAPH 2020 Talks. Article 73.

Thomas Müller. 2019. Practical Path Guiding in Production. In ACM SIGGRAPH 2019 Course Notes: Path Guiding in Production. 37-50.

Thomas Müller, Markus Gross, and Jan Novák. 2017. Practical Path Guiding for Efficient Light-Transport Simulation. Computer Graphics Forum (Proc. of Eurographics Symposium on Rendering) 36, 4 (Jun. 2017), 91-100.

Amol Sathe, Lance Summers, Matt Jen-Yuan Chiang, and James Newland. 2020. The Look and Lighting of “Show Yourself” in “Frozen 2”. In ACM SIGGRAPH 2020 Talks. Article 71.

Marie Tollec, Sean Jenkins, Lance Summers, and Charles Cunningham-Scott. 2020. Deconstructing Destruction: Making and Breaking of “Frozen 2”’s Dam. In ACM SIGGRAPH 2020 Talks. Article 24.

SIGGRAPH 2019 Talk: Taming the Shadow Terminator

This year at SIGGRAPH 2019, Matt Jen-Yuan Chiang, Brent Burley, and I had a talk that presents a technique for smoothing out the harsh shadow terminator problem that often arises when high-frequency bump or normal mapping is used in ray tracing. We developed this technique as part of general development on Disney’s Hyperion Renderer for the production of Frozen 2. This work is mostly Matt’s; Matt was very kind in allowing me to help out and play a small role on this project.

This work is contemporaneous with the recent work on the same shadow terminator problem that was carried out by Estevez et al. from Sony Pictures Imageworks and published in Ray Tracing Gems. We actually found out about the Estevez et al. technique at almost exactly the same time that we submitted our SIGGRAPH talk, which proved to be very fortunate, since after our talk was accepted, we were then able to update our short paper with additional comparisons between Estevez et al.’s technique and ours. I think this is a great example of how having multiple rendering teams in the field tackling similar problems and sharing results provides a huge benefit to the field as a whole: we now have two different, really good solutions to what used to be a big shading problem!
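For reference, the Estevez et al. approach reinterprets the divergence between the shading and geometric normals as a microfacet roughness and derives a Smith-style shadowing term from it; here is a rough Python sketch reconstructed from my memory of the Ray Tracing Gems chapter (consult the chapter itself for the exact constants and derivation):

```python
import math

def estevez_terminator_shadowing(cos_d, cos_i):
    """Sketch of the Estevez et al. 2019 bump terminator shadowing term.

    cos_d: dot(geometric normal, shading normal)
    cos_i: dot(geometric normal, light direction)
    Returns a shadowing factor in (0, 1] to multiply into the BSDF."""
    cos_d = min(abs(cos_d), 1.0)
    tan2_d = (1.0 - cos_d * cos_d) / max(cos_d * cos_d, 1e-12)
    # Map the normal divergence to a squared GGX-like roughness; the 0.125
    # scale is my recollection of the chapter's empirical constant.
    alpha2 = min(max(0.125 * tan2_d, 0.0), 1.0)
    # Smith-style shadowing of the light direction under that roughness.
    cos_i = max(abs(cos_i), 1e-6)
    tan2_i = (1.0 - cos_i * cos_i) / (cos_i * cos_i)
    return 2.0 / (1.0 + math.sqrt(1.0 + alpha2 * tan2_i))
```

When the shading and geometric normals agree, the term evaluates to exactly 1 (no extra shadowing), and it smoothly darkens toward grazing light directions as the normals diverge.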

A higher-res version of Figure 1 from the paper: (left) <a href="https://blog.yiningkarlli.com/content/images/2019/Aug/header_shadingnormals.png">shading normals</a> exhibiting the harsh shadow terminator problem, (center) <a href="https://blog.yiningkarlli.com/content/images/2019/Aug/header_chiang.png">our technique</a>, and (right) <a href="https://blog.yiningkarlli.com/content/images/2019/Aug/header_estevez.png">Estevez et al.'s technique</a>.

Here is the paper abstract:

A longstanding problem with the use of shading normals is the discontinuity introduced into the cosine falloff where part of the hemisphere around the shading normal falls below the geometric surface. Our solution is to add a geometrically derived shadowing function that adds minimal additional shadowing while falling smoothly to zero at the terminator. Our shadowing function is simple, robust, efficient and production proven.

The paper and related materials can be found at:

Matt Chiang presented the paper at SIGGRAPH 2019 in Los Angeles as part of the “Lucy in the Sky with Diamonds - Processing Visuals” Talks session. A pdf version of the presentation slides, along with presenter notes, is available on my project page for the paper. I’d also recommend getting the author’s version of the short paper instead of the official version, since the author’s version includes some typo fixes made after the official version was published.

Work on this project started early in the production of Frozen 2, when our look artists started to develop the shading of the dresses and costumes in Frozen 2. Because intricate woven fabrics and patterns are an important part of the Scandinavian culture that Frozen 2 is inspired by, the shading in Frozen 2 pushed high-resolution, high-frequency displacement and normal mapping further than we ever had before with Hyperion in order to make convincing looking textiles. Because of how high-frequency the normal mapping was pushed, the bump/normal mapped shadow terminator problem became worse and worse and proved to be a major pain point for our look and lighting artists. In the past, our look and lighting artists have worked around shadow terminator issues using a combination of techniques, such as falling back to full displacement, or using larger area lights to try to soften the shadow terminator. However, these techniques can be problematic when they are in conflict with art direction, and force artists to think about an additional technical dimension when they otherwise would rather be focused on the artistry.

Our search for a solution began with Peter Kutz looking at “Microfacet-based Normal Mapping for Robust Monte Carlo Path Tracing” by Schüssler et al., which focused on addressing energy loss when rendering shading normals. The Schüssler et al. 2017 technique solved the energy loss problem by constructing a microfacet surface comprised of two facets per shading point, instead of the usual one. The secondary facet is used to account for things like inter-reflections between the primary and secondary facets. However, the Schüssler et al. 2017 technique wound up not solving the shadow terminator problems we were facing; using their shadowing function produced a look that was too flat.

Matt Chiang then realized that the secondary microfacet approach could be used to solve the shadow terminator problem using a different secondary microfacet configuration; instead of using a vertical second facet as in Schüssler, Matt made the secondary facet perpendicular to the shading normal. By making the secondary facet perpendicular, as a light source slowly moves towards the grazing angle relative to the microfacet surface, peak brightness is maintained when the light is parallel to the shading normal, while additional shadowing is introduced beyond the parallel angle. This solution worked extremely well, and is the technique presented in our talk / short paper.

The final piece of the puzzle was addressing a visual discontinuity produced by Matt’s technique when the light direction reaches and moves beyond the shading normal. Instead of falling smoothly to zero, the shape of the shadow terminator undergoes a hard shift from a cosine fall-off formed by the dot product of the shading normal and light direction to a linear fall-off. Matt and I played with a number of different interpolation schemes to smooth out this transition, and eventually settled on a custom smooth-step function. During this process, I made the observation that whatever blending function we used needed to introduce C1 continuity in order to remove the visual discontinuity. This observation led Brent Burley to realize that instead of a complex custom smooth-step function, a simple Hermite interpolation would be enough; this Hermite interpolation is the one presented in the talk / short paper.
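Putting the pieces from the last few paragraphs together, the overall shape of the technique looks roughly like the following sketch (written from memory, with the Hermite softening expressed as the unique cubic satisfying h(0)=0, h'(0)=1, h(1)=1, h'(1)=0; see the short paper for the exact published formulation):

```python
def terminator_shadowing(ns_dot_i, ng_dot_i, ng_dot_ns):
    """Sketch of the shadow terminator shadowing function: a geometrically
    derived shadowing term from the secondary facet perpendicular to the
    shading normal, softened by a Hermite cubic for C1 continuity.

    ns_dot_i:  dot(shading normal, light direction)
    ng_dot_i:  dot(geometric normal, light direction)
    ng_dot_ns: dot(geometric normal, shading normal)
    Returns a shadowing factor in [0, 1] to multiply into the BSDF."""
    if ns_dot_i <= 0.0 or ng_dot_i <= 0.0 or ng_dot_ns <= 0.0:
        return 0.0
    # Geometrically derived shadowing; clamps to 1 (no extra shadowing)
    # when the light is well above the shading normal.
    g = min(1.0, ng_dot_i / (ns_dot_i * ng_dot_ns))
    # Hermite softening h(g) = -g^3 + g^2 + g, which satisfies h(0) = 0,
    # h'(0) = 1, h(1) = 1, h'(1) = 0, removing the derivative
    # discontinuity where the extra shadowing turns off.
    return g * (g * (1.0 - g) + 1.0)
```

Note how the function adds no shadowing at all (returns exactly 1) when the light is parallel to or above the shading normal, and falls smoothly to zero at the terminator, matching the behavior described in the abstract.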

For a much more in-depth view at all of the above, complete with diagrams and figures and examples, I highly recommend looking at Matt’s presentation slides and presenter notes.

Here is a test render of the Iduna character’s costume from Frozen 2, from before we had this technique implemented in Hyperion. The harsh shadow terminator produces an illusion that makes her arms and torso look boxier than the actual underlying geometry is:

Iduna's costume without our shadow terminator technique. Note how boxy the arms and torso look.

…and here is the same test render, but now with our soft shadow terminator fix implemented and enabled. Note how her arms and torso now look properly rounded, instead of boxy!

Iduna's costume with our shadow terminator technique. The arms and torso look correctly rounded now.

This technique is now enabled by default across the board in Hyperion, and any article of clothing or costume you see in Frozen 2 is using this technique. So, through this project, we got to play a small role in making Elsa, Anna, Kristoff, and everyone else look like themselves!