Physically Based Rendering 3rd Edition

Today is the release date for the digital version of the new Physically Based Rendering 3rd Edition, by Matt Pharr, Wenzel Jakob, and Greg Humphreys. As anyone in the rendering world knows, Physically Based Rendering is THE reference book for the field; for novices, it is the best introduction one can get, and for experts, it is an invaluable reference to consult and check. I share a large office with three other engineers on the Hyperion team, and I think between the four of us, we actually have an average of more than one copy per person (of varying editions). I cannot recommend this book enough. The third edition adds Wenzel Jakob as an author; Wenzel is the author of the research-oriented Mitsuba Renderer and is one of the most prominent new researchers in rendering of the past decade. There is a lot of great new light transport stuff in the third edition, which I’m guessing comes from Wenzel. Both Wenzel’s work and the previous editions of Physically Based Rendering were instrumental in influencing my path in rendering, so of course I’ve had the third edition on pre-order since it was announced over a year ago.

Each edition of Physically Based Rendering is accompanied by a major release of the PBRT renderer, which implements the book. The PBRT renderer is a major research resource for the community, and basically everyone I know in the field has learned something or another from looking through and taking apart PBRT. As part of the drive towards PBRT-v3, Matt Pharr made a call for interesting scenes to provide as demo scenes with the PBRT-v3 release. I offered Matt the PBRT-v2 scene I made a while back, because how could that scene not be rendered in PBRT? I’m very excited that Matt accepted and included the scene as part of PBRT-v3’s example scenes! You can find the example scenes here on the PBRT website.

Converting the scene to PBRT’s format required a lot of manual work, since PBRT’s scene specification and shading system is very different from Takua’s. As a result, the image that PBRT renders out looks slightly different from Takua’s version, but that’s not a big deal. Here is the scene rendered using PBRT-v3:

Physically Based Rendering 2nd Edition, rendered using PBRT-v3.

…and for comparison, the same scene rendered using Takua:

Physically Based Rendering 2nd Edition, rendered using Takua Renderer a0.5.

Really, it’s just the lighting that is a bit different; the Takua version is slightly warmer and slightly underexposed in comparison.

At some point I should make an updated version of this scene using the third edition book. I’m hoping to be able to contribute more of my Takua test scenes to the community in PBRT-v3 format in the future; giving back to such a major influence on my own career is extremely important to me. As part of the process of porting the scene over to PBRT-v3, I also wrote a super-hacky render viewer for PBRT that shows the progress of the render as the renderer runs. Unfortunately, the viewer is currently far too hacky to release, and I don’t have time at the moment to clean it up. Hopefully at some point I’ll be able to; alternatively, if anyone else wants to take a look and give it a stab, feel free to contact me.


Addendum 04/28/2017: Matt was recently looking for some interesting water-sim scenes to demonstrate dielectrics and glass materials and refraction and whatnot. I contributed a few frames from my PIC/FLIP fluid simulator, Ariel. Most of the data from Ariel doesn’t exist in meshed format anymore; I still have all of the raw VDBs and stuff, but the meshes took up way more storage space than I could afford at the time. I can still regenerate all of the meshes though, and I also still have a handful of frames in mesh form from my attenuated transmission blog post. The frame from the first image in that post is now also included in the PBRT-v3 example scene suite! The PBRT version looks quite different since it is intended to demonstrate and test something very different from what I was doing in that blog post, but it still looks great!

A frame from my Ariel fluid simulator, rendered using PBRT-v3.

Rendering Minecraft in Renderman/RIS

The vast majority of my computer graphics time is spent developing renderers (Disney’s Hyperion renderer as a professional, Takua Renderer as a hobbyist). However, I think having experience using renderers as an artist is an important part of knowing what to focus on as a renderer developer. I also think that knowing how a variety of different renderers work and how they are used is important; a lot of artists are used to using several different renderers, and each renderer has its own vocabulary and tried and true workflows and whatnot. Finally, there are a lot of really smart people working on all of the major production renderers out there, and seeing the cool things everyone is doing is fun and interesting! For all of these reasons, I like putting some time aside every once in a while to tinker with other renderers. I usually don’t write about my art projects that much anymore, but this project was particularly fun and produced some nice looking images, so I thought I’d write it up. As usual, before we dive into the post, here is the final image I made, rendered using Pixar’s Photorealistic Renderman 20 in RIS mode:

A Minecraft town from the pve.nerd.nu Minecraft server, rendered in Renderman 20/RIS.

About two years ago, Pixar’s Photorealistic Renderman got a new rendering mode called RIS. PRman was one of the first production renderers ever developed, and historically PRman has always been a REYES-style rasterization renderer. Over time though, PRman has gained a whole bunch of added-on features. At the time of Monsters University, PRman was actually a kind of hybrid rasterizer and raytracer; the rendering system on Monsters University used raytracing to build a multiresolution radiosity cache that was then used for calculating GI contributions in the shading part of REYES rasterization. That approach worked well and produced beautiful images, but it was also really complicated and had a number of drawbacks! RIS replaces all of that with a brand new, pure pathtracing system. In fact, while RIS is marketed as a new mode in PRman, RIS is actually a new renderer written almost entirely from scratch; it just happens to be able to read Renderman RIB files as input.

Recently, I wanted to try rendering a Minecraft world from a Minecraft server that I play on. There are a lot of great Minecraft rendering tools available these days (Chunky comes to mind), but I wanted much more production-like control over the look of the render, so I decided to do everything using a normal CG production workflow instead of a prebuilt dedicated Minecraft rendering tool. I thought that I would use the project as a chance to give RIS a spin. At Cornell’s Program of Computer Graphics, Pixar was kind enough to provide us with access to the Renderman 19 beta program, which included the first version of RIS. I tinkered with the PRman 19 beta quite a lot at Cornell, and being an early beta, RIS had some bugs and incomplete bits back then. Since then though, the Renderman team has followed up PRman 19 with versions 20 and 21, which introduced a number of new features and speed/stability improvements to RIS. Since joining the Hyperion team, I’ve had the chance to meet and talk to various (really smart!) folks on the Renderman team since they are a sister team to us, but I haven’t actually had time to try the new versions of RIS. This project was a fun way to try the newest version of RIS on my own!

The Minecraft data for this project comes from the Nerd.nu community Minecraft server, which is run by a collective of players for free. I’ve been playing on the Nerd.nu PvE (Player versus Environment) server for years and years now, and players have built a mind-boggling number of amazing, detailed creations. Every couple of months, the server is reset with a fresh map; I wanted to render a town that fellow player Avi_Dangerstein and I built on the previous map revision. Fortunately, all previous Nerd.nu map revisions are available for download in the server archives (the specific map I used is labeled pve-rev17). Here is an overview of the map revision I wanted to pull data from:

Cartograph view of Revision 17 of the Nerd.nu PvE server, located at p.nerd.nu. Click through to go to the full, zoomable cartograph.

…and here is a zoomed in view of the part of the map that contains our town. The vast majority of the town was built by two players over the course of about 4 months. Our town is about 250 blocks long; the entire server map is a 6000 block by 6000 block square.

Zoomed cartograph view of our Minecraft town.

The first problem to tackle in this project was just getting Minecraft world data into a usable format. Pixar provides a free, non-commercial version of Renderman for Maya, and I’m very familiar with Maya, so my entire workflow for this project was based around good ol’ Maya. Maya doesn’t know how to read Minecraft data though… in fact, Minecraft’s chunked data format is a fascinating rabbit hole to read about in its own right. I briefly entertained the idea of writing my own Minecraft to Maya importer, but then I found a number of Minecraft to Obj exporters that other folks have already written. I first tried jmc2obj, but the section of the Minecraft world that I wanted to export was so large that jmc2obj kept running out of memory and crashing. Instead, I found that Eric Haines’s Mineways exporter was able to handle the data load well (incidentally, Eric Haines is also a Cornell Program of Computer Graphics alum; I inherited a pile of his ACM Transactions on Graphics hardcopies while at Cornell). The chunk of the world I wanted to export was pretty large; in the Mineways screenshot below, the area outlined in red is the part of the world that I wanted:

Section of the map for export is outlined in red.

The area outlined above is significantly larger than the area I wound up rendering… initially I was thinking of a very different camera angle from the ground with the mountains in the background before I picked an aerial view much later. The size of the exported obj mesh was about 1.5 GB. Mineways exports the world as a single mesh, optimized to remove all completely occluded internal faces (so the final mesh is hollow instead of containing useless faces for all of the internal blocks). Each visible block face is uv’d into a corresponding square on a single texture file. This approach produces an efficient mesh, but I realized early on that I would need water in a separate mesh containing completely enclosed volumes for each body of water (Mineways only provides geometry for the top surface of water). Glass had to be handled similarly; both water and glass need special handling for the same reasons that I mentioned immediately after the first diagram in my attenuated transmission blog post. Mineways allows for exporting different block types as separate meshes (but still with internal faces removed), so I simply deleted the water and glass meshes after exporting. Luckily, jmc2obj allows exporting individual block types as closed meshes, so I went back to jmc2obj for just the water and glass. Since just the water and glass is a much smaller data set than the whole world, jmc2obj was able to export without a problem. Since rendering refractive interfaces correctly requires expanding out the refractive mesh slightly at the interfaces, I wrote a custom program based on Takua Renderer’s obj mesh processing library to push out all of the vertices of the water and glass meshes slightly along the average of the face normals at each vertex.
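For anyone curious what that push-out step looks like, here is a minimal sketch of the idea (not the actual Takua-based tool, which runs on Takua’s own mesh library): accumulate each face’s normal onto its vertices, normalize the result per vertex, and then nudge every vertex outward along that averaged normal by a small offset. The TriMesh struct and the offset value are placeholders.

```cpp
#include <cmath>
#include <vector>

struct Vec3 {
    float x = 0.0f, y = 0.0f, z = 0.0f;
};

static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// Placeholder indexed triangle mesh: every 3 consecutive indices form one triangle.
struct TriMesh {
    std::vector<Vec3> vertices;
    std::vector<int> indices;
};

// Push each vertex slightly outward along the average of its adjacent face normals,
// so that a water/glass mesh slightly overlaps its container and forms a proper
// refractive interface.
void pushOutAlongAveragedNormals(TriMesh& mesh, float offset) {
    std::vector<Vec3> accumulated(mesh.vertices.size());
    for (size_t i = 0; i + 2 < mesh.indices.size(); i += 3) {
        const int i0 = mesh.indices[i];
        const int i1 = mesh.indices[i + 1];
        const int i2 = mesh.indices[i + 2];
        const Vec3 n = normalize(cross(sub(mesh.vertices[i1], mesh.vertices[i0]),
                                       sub(mesh.vertices[i2], mesh.vertices[i0])));
        accumulated[i0] = add(accumulated[i0], n);
        accumulated[i1] = add(accumulated[i1], n);
        accumulated[i2] = add(accumulated[i2], n);
    }
    for (size_t v = 0; v < mesh.vertices.size(); ++v) {
        const Vec3 n = normalize(accumulated[v]);
        mesh.vertices[v] = add(mesh.vertices[v], Vec3{n.x * offset, n.y * offset, n.z * offset});
    }
}
```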

Next up was shading everything in Maya. Renderman 20 ships with an implementation of Disney’s Principled Brdf, which I’ve gotten very familiar with using, so I went with Renderman’s PxrDisney Bxdf. The Disney Brdf allows for quickly creating very good looking materials using a fairly small parameter set. Overall I tried to stick close to the in-game aesthetic, which meant using all of the standard in-game textures instead of a custom resource pack, and I also wound up having to rein back a bit on making materials look super realistic. Everything basically has some varied roughness and specularity, and that’s pretty much it. I did add a subtle bump map to everything though; I made the bump map by simply making a black and white version of the default texture pack and messing with the brightness and contrast a bit. Below is a render of a test world created by Minecraft Forum user QMagnet specifically for testing resource packs. I lit the test scene using a single IBL (HDRI Sky 141 from the HDRI-Skies library). The test render below isn’t using the final specialized water and leaf shaders I created, which I’ll describe a bit further down, and there are also some resolution problems on the alpha masks for the leaf blocks, but overall this test render gives an idea of what my final materials look like:

Final materials on a resource pack test world.

One detail worth describing in a bit more depth is the glowing blocks. The glowstone, lantern, and various torch blocks use a trick based on something that I have seen lighters use in production. The basic idea is to decouple the direct and indirect visibility for the light. I got this decoupling to work in RIS by making all of the glowing blocks into pairs of textured PxrMeshLights. Using PxrMeshLights is necessary in order to allow for efficient light sampling; however, the actual exposures the lights are at make the textures blow out in camera. In order to make the textures discernible in camera, a second PxrMeshLight is needed for each glowing object; one of the lights is at the correct exposure but is marked visible only to indirect rays and invisible to direct camera rays, and the other light is at a much lower exposure but is only visible to direct camera rays. This trick is a totally non-physical cheaty-hack, but it allows for a believable visual appearance if the exposures are chosen carefully.

In the final renders a few pictures down, I also used a more specialized shader for leaves and vines and tall grass and whatnot. The leaf block shader uses a PxrLMPlastic material instead of PxrDisney; this is because the leaf block shader has a slight amount of diffuse transmission (translucency) and also has more specialized diffuse/specular roughness maps.

For the water shader in the final render, I used a PxrLMGlass material with an IOR of 1.325, a slightly blue tinted refraction color, and a light blue absorption color. Using slightly different colors for the refraction and absorption colors allows for the water to transition to a slightly different hue at deeper depths than at the surface (as opposed to just a more saturated version of the same color). I also added a simple water surface displacement map to get the wavy surface effect. Combined with the refractive interface stuff mentioned before, the final water looks like this:

Water test render, using a PxrLMGlass material. Unfortunately, no true caustics here...

Note the total lack of real caustics in the water… I wound up just using the basic pathtracer built into RIS instead of Pixar’s VCM implementation. Pixar’s VCM implementation is one of the first commercial ones out there, but as of Renderman 20, it has no adaptivity in its light path distribution whatsoever. As a result, the Renderman 20 VCM integrator is not really suitable for use on huge scenes; most of the light paths end up getting wasted on areas of the scene that aren’t even close to being in-camera, which means that all of the efficiency in rendering caustics is lost. This problem is fundamental to light-tracing-based techniques (meaning that bidirectional techniques inherit the same problem), and solving it remains a relatively open problem (Takua has some basic photon targeting mechanisms for PPM/VCM that I’ll write about at some point). Apparently, this large-scene problem was a major challenge on Finding Dory and is one of the main reasons why Pixar didn’t use VCM heavily on Dory; Dory relied mostly on projected and pre-baked caustics.

I should also note that Renderman 21 does away with the PxrLM and PxrDisney materials entirely and instead introduces the shader set that Christophe Hery and Ryusuke Villemin wrote for Finding Dory. I haven’t tried the Renderman 21 shading system yet, but I would be very curious to compare against our Disney Brdf.

The final lighting setup I used was very simple. There are two main lights in the scene: an IBL dome light for sky illumination, and a 0.5 degree distant light as a sun stand-in. The IBL is another free sky from the HDRI-Skies library; this time, I used HDRI Sky 84. There is also a third spotlight used for getting long, dramatic shadows out of the fog, which I’ll talk about a bit later. Here is a lighting test with just the dome and distant lights on a grey clay version of the scene:

Grey clay render lit using the final distant and dome light setup.

For efficiency reasons, I broke out the fog into a separate pass entirely and added it back in comp afterwards. At the time I did this project, Renderman 20’s volume system was still relatively new and, while perfectly capable, wasn’t necessarily super fast (Renderman 21 introduces a significantly overhauled, much faster volume system, but it wasn’t out yet). Iterating on the look of the fog separately from the main render was simply a more efficient workflow. Here is the raw render directly out of RIS:

Raw render of the main render pass, straight out of RIS.

For the fog, I initially wanted to do fully simulated fog in Houdini. I experimented with using a point SOP to control wind direction and to drive a wind DOP and have fog flow through the scene, but the sheer scale of the scene made this approach impracticable on my home computers. Instead, I wound up just creating a static procedural volume noise field and dumping it out to VDB. I then brought the VDB back into Maya for RIS rendering. Initially I tried rendering the fog pass without the additional spotlight and got something that looked like this:

My initial attempt at the fog pass.
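As an aside on the procedural volume noise field: for anyone who wants to try something similar, generating a noise-based fog density grid and writing it out to VDB can be done directly with the OpenVDB C++ API. The sketch below shows one way to do it; the hash-based noise, grid dimensions, density values, and the fog.vdb output path are all just placeholders rather than my actual setup.

```cpp
#include <openvdb/openvdb.h>
#include <cmath>
#include <cstdint>

// Cheap hash-based value noise stand-in; any smooth 3D noise works here.
static float hashNoise(int x, int y, int z) {
    uint32_t h = uint32_t(x) * 73856093u ^ uint32_t(y) * 19349663u ^ uint32_t(z) * 83492791u;
    h = (h ^ (h >> 13u)) * 1274126177u;
    return float(h & 0xFFFFFFu) / float(0xFFFFFF);  // [0, 1]
}

int main() {
    openvdb::initialize();

    // Fog-volume style float grid holding density values.
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(0.0f);
    grid->setName("density");
    grid->setGridClass(openvdb::GRID_FOG_VOLUME);
    grid->setTransform(openvdb::math::Transform::createLinearTransform(0.5));  // voxel size

    openvdb::FloatGrid::Accessor acc = grid->getAccessor();
    const int sizeX = 256, sizeY = 64, sizeZ = 256;  // placeholder dimensions
    for (int z = 0; z < sizeZ; ++z) {
        for (int y = 0; y < sizeY; ++y) {
            for (int x = 0; x < sizeX; ++x) {
                // Blend two scales of noise and thin the fog out with height so
                // that it settles into low-lying areas.
                float n = 0.6f * hashNoise(x / 8, y / 8, z / 8) +
                          0.4f * hashNoise(x / 2, y / 2, z / 2);
                float heightFalloff = std::exp(-3.0f * float(y) / float(sizeY));
                float density = n * heightFalloff;
                if (density > 0.05f) {
                    acc.setValue(openvdb::Coord(x, y, z), density);
                }
            }
        }
    }

    // Write the grid out to a .vdb file that Maya/RIS can pick up.
    openvdb::io::File file("fog.vdb");
    openvdb::GridPtrVec grids;
    grids.push_back(grid);
    file.write(grids);
    file.close();
    return 0;
}
```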

After getting this first fog attempt rendered, I did a first pass at a final comp and color grade. I wound up using a very different color grade on this earlier attempt. This earlier version is the version that I shared in some places, such as the Nerd.nu subreddit and on Twitter:

First comp and grade attempt, using old version of fog.

This first attempt looked okay, but didn’t quite hit what I was going for. I wanted something with much more dramatic shadow beams, and I also felt that the fog didn’t really look settled into the terrain. Eventually I realized that I needed to make the fog sparser and that the fog should start thinning out after rising just a bit off of the ground. After adjusting the fog and adding in a spotlight with a bit of a cooler temperature than the sun, I got the image below. I’m pretty happy with how the fog seems to settle into the river valley and pour out of the forested hill in the upper left of the image, even though none of the fog is actually simulated!

Final fog pass, with extra spotlight. Note how the fog seems to sit in the lower river valley and pour out of the forest.

Finally, I brought everything together in comp and added a color grading pass in Lightroom. The grade that I went with is much much more heavy-handed than what I usually like to use, but it felt appropriate for this image. I also added a slight amount of vignetting and grain in the final image. The final image is at the top of this post, but here it is again for convenience:

Final composite with fog, color grading, and vignetting/grain.

I learned a lot about using RIS from this project! By my estimation, RIS is orders of magnitude easier to use than old REYES Renderman; the overall experience was fairly similar to my previous experiences with Vray and Arnold. Compared with RIS, both Takua and Hyperion make some similar choices and some very different ones, but then again, every renderer has large similarities and large differences from every other renderer out there. Rendering a Minecraft world was a lot of fun; I’m definitely looking forward to doing more Minecraft renders using this pipeline again sometime in the future.

Also, here’s a shameless plug for the Nerd.nu Minecraft server that this data set is from. If you like playing Minecraft and are looking for a fast, free, friendly community to build with, you should definitely come check out the Nerd.nu PvE server, located at p.nerd.nu. The little town in this post is not even close to the most amazing thing that people have built on that server.

A final note on the (lack of) activity on my blog recently: we’ve been extremely busy at Walt Disney Animation Studios for the past year trying to release both Zootopia and Moana in the same year. Now that we’re closing in on the release of Moana, hopefully I’ll find time to post more. I have a lot of cool posts about Takua Renderer in various states of drafting; look for them soon!


Addendum 10/02/2016: After I published this post, Eric Haines wrote to me with a few typo corrections and, more importantly, to tell me about a way to get completely enclosed meshes from Mineways using the color schemes feature. Serves me right for not reading the documentation completely before starting! The color schemes feature allows assigning a color and alpha value to each block type; the key part of this feature for my use case is that Mineways will delete blocks with a zero alpha value when exporting. Setting all blocks except for water to have an alpha of zero allows for exporting water as a complete enclosed mesh; the same trick applies for glass or really any other block type.

One of the neat things about this feature is that the Mineways UI draws the map respecting assigned alpha values from the color scheme being used. As a result, setting everything except for water to have a zero alpha produces a cool view that shows only all of the water on the map:

Mineways map view showing only water blocks. This image shows the same exact area of the map as the other Mineways screenshot earlier in the post.

Going forward, I’ll definitely be adopting this technique to get water meshes instead of using jmc2obj. Being able to handle all of the mesh exporting work in a single program makes for a nicer, more streamlined pipeline. Of course both jmc2obj and Mineways are excellent pieces of software, but in my testing Mineways handles large map sections much better, and I also think that Mineways produces better water meshes compared to jmc2obj. As a result, my pipeline is now entirely centered around Mineways.

Zootopia

Walt Disney Animation Studios’ newest film, Zootopia, will be releasing in the United States three weeks from today. I’ve been working at Walt Disney Animation Studios on the core development team for Disney’s Hyperion Renderer since July of last year, and the release of Zootopia is really special for me; Zootopia is the first feature film I’ve worked on. My actual role on Zootopia was fairly limited; so far, I’ve been spending most of my time and effort on the version of Hyperion for our next film, Moana (coming out November of this year). On Zootopia I basically only did support and bugfixes for Zootopia’s version of Hyperion (and I actually don’t even have a credit in Zootopia, since I hadn’t been at the studio for very long when the credits were compiled). Nonetheless, I’m incredibly proud of all of the work and effort that has been put into Zootopia, and I consider myself very fortunate to have been able to play even a small role in making the film!

Zootopia is a striking film in every way. The story is fantastic and original and relevant, the characters are all incredibly appealing, the setting is fascinating and immensely clever, and the music is wonderful. However, on this blog, we are more interested in the technical side of things; luckily, the film is just as unbelievable in its technology. Quite simply, Zootopia is a breathtakingly beautiful film. In the same way that Big Hero 6 was several orders of magnitude more complex and technically advanced than Frozen in every way, Zootopia represents yet another enormous leap over Big Hero 6 (which can be hard to believe, considering how gorgeous Big Hero 6 is).

The technical advances made on Zootopia go far beyond what I can cover here, and I don’t think I could describe them in a way that does them justice, but I think I can safely say that Zootopia is the most technically advanced animated film ever made to date. The fur and cloth (and cloth on top of fur!) systems on Zootopia are beyond anything I’ve ever seen, the sets and environments are simply ludicrous in both detail and scale, and of course the shading and lighting and rendering are jaw-dropping. In a lot of ways, many of the technical challenges that had to be solved on Zootopia can be summarized in a single word: complexity. Enormous care had to be put into creating believable fur and integrating different furry characters into different environments [Burkhard et al. 2016], and the huge quantities of fur in the movie required developing new level-of-detail approaches [Palmer and Litaker 2016] to make the fur manageable on both the authoring and rendering sides. The sheer number of crowd characters in the film also required developing a new crowds workflow [El-Ali et al. 2016], again to make both authoring and rendering tractable, and the complex jungle environments seen throughout most of the film similarly required new approaches to procedural vegetation [Keim et al. 2016]. Complexity wasn’t just a problem on a large scale though; Zootopia is also incredibly rich in the smaller details. Zootopia was the first movie on which Disney Animation deployed a flesh simulation system [Milne et al. 2016] in order to create convincing muscular movement under the skin and fur of the animal characters. Even individual effects such as scooping ice cream [Byun et al. 2016] sometimes required innovative new CG techniques. On the rendering side, the Hyperion team developed a brand new BSDF for shading hair and fur [Chiang et al. 2016], with a specific focus on balancing artistic controllability, physical plausibility, and render efficiency. Disney isn’t paying me to write this on my personal blog, and I don’t write any of this to make myself look grand either. I played only a small role, and really the amazing quality of the film is a testament to the capabilities of the hundreds of artists that actually made the final frames. I’m deeply humbled to see what amazing things great artists can do with the tools that my team makes.

Okay, enough rambling. Here are some stills from the film, 100% rendered with Hyperion, of course. Go see the film; these images only scratch the surface in conveying how gorgeous the film is.

All images in this post are courtesy of and the property of Walt Disney Animation Studios.

References

Nicholas Burkard, Hans Keim, Brian Leach, Sean Palmer, Ernest J. Petti, and Michelle Robinson. 2016. From Armadillo to Zebra: Creating the Diverse Characters and World of Zootopia. In ACM SIGGRAPH 2016 Production Sessions. 24:1-24:2.

Dong Joo Byun, James Mansfield, and Cesar Velazquez. 2016. Delicious Looking Ice Cream Effects with Non-Simulation Approaches. In ACM SIGGRAPH 2016 Talks. 25:1-25:2.

Matt Jen-Yuan Chiang, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. A Practical and Controllable Hair and Fur Model for Production Path Tracing. Computer Graphics Forum. 35, 2 (2016), 275-283.

Moe El-Ali, Joyce Le Tong, Josh Richards, Tuan Nguyen, Alberto Luceño Ros, and Norman Moses Joseph. 2016. Zootopia Crowd Pipeline. In ACM SIGGRAPH 2016 Talks. 59:1-59:2.

Hans Keim, Maryann Simmons, Daniel Teece, and Jared Reisweber. 2016. Art-Directable Procedural Vegetation in Disney’s Zootopia. In ACM SIGGRAPH 2016 Talks. 18:1-18:2.

Andy Milne, Mark McLaughlin, Rasmus Tamstorf, Alexey Stomakhin, Nicholas Burkard, Mitch Counsell, Jesus Canal, David Komorowski, and Evan Goldberg. 2016. Flesh, Flab, and Fascia Simulation on Zootopia. In ACM SIGGRAPH 2016 Talks. 34:1-34:2.

Sean Palmer and Kendall Litaker. 2016. Artist Friendly Level-of-Detail in a Fur-Filled World. In ACM SIGGRAPH 2016 Talks. 32:1-32:2.

Attenuated Transmission

Blue liquid in a glass box, with attenuated transmission. Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

A few months ago I added attenuation to Takua a0.5’s Fresnel refraction BSDF. Adding attenuation wound up being more complex than originally anticipated because handling attenuation through refractive/transmissive mediums requires volumetric information in addition to the simple surface differential geometry. In a previous post about my BSDF system, I mentioned that the BSDF system only considered surface differential geometry information; adding attenuation meant extending my BSDF system to also consider volume properties and track more information about previous ray hits.

First off, what is attenuation? Within the context of rendering and light transport, attenuation is when light is progressively absorbed within a medium, which results in a decrease in light intensity as one goes further and further into the medium, away from the light source. One simple example is deep water: near the surface, most of the light that has entered the water remains unabsorbed, so the light intensity is fairly high and the water is fairly clear. Going deeper and deeper into the water, though, more and more light is absorbed and the water becomes darker and darker. Clear objects gain color when light is attenuated at different rates for different wavelengths. Combined with scattering, attenuation is a major contributor to the look of transmissive/refractive materials in real life.

Attenuation is described using the Beer-Lambert Law. The part of the Beer-Lambert Law we are concerned with is the definition of transmittance:

\[ T = \frac{\Phi_{e}^{t}}{\Phi_{e}^{i}} = e^{-\tau}\]

The above equation states that the transmittance of a material is equal to the transmitted radiant flux over the received radiant flux, which in turn is equal to e raised to the power of the negative of the optical depth. If we assume uniform attenuation within a medium, the Beer-Lambert law can be expressed in terms of an attenuation coefficient μ as:

\[ T = e^{-\mu\ell} \]
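When the attenuation coefficient is not constant within the medium, the optical depth generalizes to an integral of the coefficient along the ray; in practice, this integral gets estimated by ray marching with some step size:

\[ T = e^{-\int_{0}^{\ell}\mu(s)\, ds} \approx e^{-\sum_{i}\mu(s_{i})\Delta s} \]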

From these expressions, we can see that light is absorbed exponentially as distance into an absorbing medium increases. Returning to building a BSDF system, supporting attenuation therefore means having to know not just the current intersection point and differential geometry, but also the distance a ray has traveled since the previous intersection point. Also, if the medium’s attenuation rate is not constant, then an attenuating BSDF not only needs to know the distance since the previous intersection point, but also has to sample along the incoming ray at some stepping increment and calculate the attenuation at each step. In other words, supporting attenuation requires BSDFs to know the previous hit point in addition to the current one, and also requires BSDFs to be able to ray march from the previous hit point to the current one.
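As a concrete illustration (a sketch, not Takua’s actual BSDF code), here is roughly what that ray-marched transmittance evaluation can look like; mediumMu stands in for whatever volumetric lookup the attenuating BSDF has access to, and for a uniform medium it would just return a constant:

```cpp
#include <cmath>
#include <functional>

struct Vec3f { float x, y, z; };

// Beer-Lambert transmittance between the previous hit point and the current
// hit point. mediumMu returns the attenuation coefficient at a world-space
// position.
float evaluateTransmittance(const Vec3f& previousHit, const Vec3f& currentHit,
                            const std::function<float(const Vec3f&)>& mediumMu,
                            float stepSize) {
    Vec3f d = {currentHit.x - previousHit.x,
               currentHit.y - previousHit.y,
               currentHit.z - previousHit.z};
    float distance = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    if (distance <= 0.0f) {
        return 1.0f;
    }
    Vec3f dir = {d.x / distance, d.y / distance, d.z / distance};

    // March from the previous hit to the current hit, accumulating optical depth
    // by sampling the medium at the midpoint of each step.
    float opticalDepth = 0.0f;
    for (float t = 0.0f; t < distance; t += stepSize) {
        float dt = std::fmin(stepSize, distance - t);
        Vec3f p = {previousHit.x + dir.x * (t + 0.5f * dt),
                   previousHit.y + dir.y * (t + 0.5f * dt),
                   previousHit.z + dir.z * (t + 0.5f * dt)};
        opticalDepth += mediumMu(p) * dt;
    }
    return std::exp(-opticalDepth);
}
```

In practice, the attenuation coefficient is an RGB (or spectral) quantity rather than a single float, which is how different wavelengths end up attenuating at different rates and producing color.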

Adding previous hit information and ray march support to my BSDF system was a very straightforward task. I also added volumetric data support to Takua, allowing for the following attenuation test with a glass Stanford Dragon filled with a checkerboard red and blue medium. The red and blue medium is ray marched through to calculate the total attenuation. Note how the thinner parts of the dragon allow more light through resulting in a lighter appearance, while thicker parts of the dragon attenuate more light resulting in a darker appearance. Also note the interesting red and blue caustics below the dragon:

Glass Stanford Dragon filled with a red and blue volumetric checkerboard attenuating medium. Rendered in Takua a0.5 using VCM.

Things got much more complicated once I added support for what I call “deep attenuation”: that is, attenuation through multiple mediums embedded inside of each other. A simple example is an ice cube floating in a glass of liquid, which one might model in the following way:

Diagram of glass-fluid-ice interfaces. Arrows indicate normal directions.

There are two things in the above diagram that make deep attenuation difficult to implement. First, note that the ice cube is modeled without a corresponding void in the liquid; that is, a ray path that travels through the ice cube records a sequence of intersection events that goes something like “enter water, enter ice cube, exit ice cube, exit water”, as opposed to “enter water, exit water, enter ice cube, exit ice cube, enter water, exit water”. Second, note that the liquid boundary is actually slightly inside of the inner wall of the glass. Intuitively, this may seem like a mistake or an odd property, but this is actually the correct way to model a liquid-glass interface in computer graphics; see this article and this other article for details on why.

So why do these two cases complicate things? As a ray enters each new medium, we need to know which medium the ray is in so that we can evaluate the appropriate BSDF and get the correct attenuation for that medium. We can only evaluate the attenuation once the ray exits the medium, since attenuation is dependent on how far through the medium the ray traveled. The easy solution is to simply remember what the BSDF is when a ray enters a medium, and then use the remembered BSDF to evaluate attenuation upon the next intersection. For example, imagine the following sequence of intersections:

  1. Intersect glass upon entering glass.
  2. Intersect glass upon exiting glass.
  3. Intersect water upon entering water.
  4. Intersect water upon exiting water.

This sequence of intersections is easy to evaluate. The evaluation would go something like:

  1. Enter glass. Store glass BSDF.
  2. Exit glass. Evaluate attenuation from stored glass BSDF.
  3. Enter water. Store water BSDF.
  4. Exit water. Evaluate attenuation from stored water BSDF.

So far so good. However, remember that in the first case, sometimes we might not have a surface intersection to mark that we’ve exited one medium before entering another. The following scenario demonstrates how this first case results in missed attenuation evaluations:

  1. Intersect water upon entering water.
  2. Exit water, but no intersection!
  3. Intersect ice upon entering ice.
  4. Intersect ice upon exiting ice.
  5. Enter water again, but no intersection either!
  6. Intersect water upon exiting water.

The evaluation sequence ends up playing out like this:

  1. Enter water. Store water BSDF.
  2. Exit water, but no intersection. No BSDF evaluated.
  3. Enter ice. Intersection occurs, so evaluate attenuation from stored water BSDF. Store ice BSDF.
  4. Exit ice. Evaluate attenuation from stored ice BSDF.
  5. Enter water again, but no intersection, so no BSDF stored.
  6. Exit water…. but there is no previous BSDF stored! No attenuation is evaluated!

Alternatively, in step 6, instead of no previous BSDF, we might still have the ice BSDF stored and evaluate attenuation based on the ice. However, this result is still wrong, since we’re now using the ice BSDF for the water.

One simple solution to this problem is to keep a stack of previously seen BSDFs with each ray instead of just storing the previously seen BSDF. When the ray enters a medium through an intersection, we push a BSDF onto the stack. When the ray exits a medium through an intersection, we evaluate whatever BSDF is on the top of the stack and pop the stack. Keeping a stack works well for the previous example case:

  1. Enter water. Push water BSDF on stack.
  2. Exit water, but no intersection. No BSDF evaluated.
  3. Enter ice. Intersection occurs, so evaluate water BSDF from top of stack. Push ice BSDF on stack.
  4. Exit ice. Evaluate ice BSDF from top of stack. Pop ice BSDF off stack.
  5. Enter water again, but no intersection, so no BSDF stored.
  6. Exit water. Intersection occurs, so evaluate water BSDF from top of stack. Pop water BSDF off stack.

Excellent, we now have evaluated different medium attenuations in the correct order, haven’t missed any evaluations or used the wrong BSDF for a medium, and as we exit the water and ice our stack is now empty as it should be. The first case from above is now solved… what happens with the second case though? Imagine the following sequence of intersections where the liquid boundary is inside of the two glass boundaries:

  1. Intersect glass upon entering glass.
  2. Intersect water upon entering water.
  3. Intersect glass upon exiting glass.
  4. Intersect water upon exiting water.

The evaluation sequence using a stack is:

  1. Enter glass. Push glass BSDF on stack.
  2. Enter water. Evaluate glass attenuation from top of stack. Push water BSDF.
  3. Exit glass. Evaluate water attenuation from top of stack, pop water BSDF.
  4. Exit water. Evaluate glass attenuation from top of stack, pop glass BSDF.

The evaluation sequence is once again in the wrong order; we just used the glass attenuation when we were traveling through water at the end! Solving this second case requires a modification to our stack-based scheme. Instead of popping the top of the stack every time we exit a medium, we should scan the stack from the top down and pop the first instance of a BSDF matching the BSDF of the surface we just exited through. This modified stack results in:

  1. Enter glass. Push glass BSDF on stack.
  2. Enter water. Evaluate glass attenuation from top of stack. Push water BSDF.
  3. Exit glass. Evaluate water attenuation from top of stack. Scan stack and find first glass BSDF matching the current surface’s glass BSDF and pop that BSDF.
  4. Exit water. Evaluate water attenuation from top of stack. Scan stack and pop first matching water BSDF.

At this point, I should mention that pushing/popping onto the stack should only occur when a ray travels through a surface. When the ray simply reflects off of a surface, an intersection has occurred and therefore attenuation from the top of the stack should still be evaluated, but the stack itself should not be modified. This way, we can support diffuse inter-reflections inside of an attenuating medium and get the correct diffuse inter-reflection with attenuation between diffuse bounces! Using this modified stack scheme for attenuation evaluation, we can now correctly handle all deep attenuation cases and embed as many attenuating mediums in each other as we could possibly want.

…or at least, I think so. I plan on running more tests before conclusively deciding this all works. So there may be a followup to this post later if I have more findings.
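To make the bookkeeping concrete, here is a minimal sketch of the modified stack scheme described above. It isn’t Takua’s actual implementation; the Bsdf type, the evaluateAttenuation() placeholder, and the entering/transmits flags (which a real renderer would derive from the surface normal and the sampled BSDF lobe) are all stand-ins.

```cpp
#include <algorithm>
#include <cmath>
#include <iterator>
#include <vector>

struct Bsdf;  // stand-in for an attenuating medium's BSDF

// Assumed helper: returns Beer-Lambert transmittance through one medium; a real
// implementation would pull the coefficient from the BSDF (or ray march), as above.
inline float evaluateAttenuation(const Bsdf* /*medium*/, float distance) {
    return std::exp(-0.5f * distance);  // placeholder: uniform medium with mu = 0.5
}

struct MediumStack {
    std::vector<const Bsdf*> stack;

    // Called at every intersection along a ray path. 'transmits' is false for pure
    // reflection events; 'entering' is determined by the caller from the surface
    // normal versus the ray direction.
    float atIntersection(const Bsdf* surfaceBsdf, float distanceSincePreviousHit,
                         bool transmits, bool entering) {
        // Whatever medium is currently on top of the stack is the medium the ray
        // just traveled through, so evaluate its attenuation first.
        float attenuation = stack.empty()
            ? 1.0f
            : evaluateAttenuation(stack.back(), distanceSincePreviousHit);

        // Reflection events leave the stack untouched; this is what lets diffuse
        // inter-reflections inside an attenuating medium still pick up attenuation.
        if (!transmits) {
            return attenuation;
        }
        if (entering) {
            stack.push_back(surfaceBsdf);
        } else {
            // Exiting: scan from the top down and pop the first BSDF matching the
            // surface we just exited through (handles the glass/water overlap case).
            auto it = std::find(stack.rbegin(), stack.rend(), surfaceBsdf);
            if (it != stack.rend()) {
                stack.erase(std::next(it).base());
            }
        }
        return attenuation;
    }
};
```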

A while back, I wrote a PIC/FLIP fluid simulator. However, at the time, Takua Renderer didn’t have attenuation support, so I wound up rendering my simulations with Vray. Now that Takua a0.5 has robust deep attenuation support, I went back and used some frames from my fluid simulator as tests. The image at the top of this post is a simulation frame from my fluid simulator, rendered entirely with Takua a0.5. The water is set to attenuate red and green light more than blue light, resulting in the blue appearance of the water. In addition, the glass has a slight amount of hazy green attenuation too, much like real aquarium glass. As a result, the glass looks greenish from the ends of each glass plate, but is clear when looking through each plate, again much like real glass. Here are two more renders:

Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

Complex Room Renders

Rendered in Takua a0.5 using VCM. Model credits in the post below.

I realize I have not posted in some weeks now, which means I still haven’t gotten around to writing up Takua a0.5’s architecture and VCM integrator. I’m hoping to get to that once I’m finished with my thesis work. In the meantime, here are some more pretty pictures rendered using Takua a0.5.

A few months back, I made a high-complexity scene designed to test Takua a0.5’s capability for handling “real-world” workloads. The scene was also designed to have an extremely difficult illumination setup. The scene is an indoor room that is lit primarily from outside through glass windows. Yes, the windows are actually modeled as geometry with a glass BSDF! This means everything seen in these renders is being lit primarily through caustics! Of course, no real production scene would be set up in this manner, but I chose this difficult setup specifically to test the VCM integrator. There is a secondary source of light from a metal cylindrical lamp, but this light source is also difficult since the actual light is emitted from a sphere light inside of a reflective metal cylinder that blocks primary visibility from most angles.

The flowers and glass vase are the same ones from my earlier Flower Vase Renders post. The original flowers and vase are by Andrei Mikhalenko, with custom textures of my own. The amazing, colorful Takua poster on the back wall is by my good friend Alice Yang. The two main furniture pieces are by ODESD2, and the Braun SK4 record player model is by one of my favorite archviz artists, Bertrand Benoit. The teapot is, of course, the famous Utah teapot. All textures, shading, and other models are my own.

As usual, all depth of field is completely in-camera and in-renderer. Also, all BSDFs in this scene are fairly complex; there is not a single simple diffuse surface anywhere in the scene! Instancing is used very heavily; the wicker baskets, notebooks, textbooks, chess pieces, teacups, and tea dishes are all instanced from single pieces of geometry. The floorboards are individually modeled but not instanced, since they all vary in length and slightly in width.

A few more pretty renders, all rendered in Takua a0.5 using VCM:

Closeup of Braun SK4 record player with DOF. Rendered using VCM.

Flower vase and tea set. Rendered using VCM.

Floorboards, textbooks, and rough metal bin with DOF. The book covers are entirely made up. Rendered using VCM.

Note On Images

Just a quick note on images on this blog. So far, I’ve generally been embedding full resolution, losslessly compressed PNG format images in the blog. I prefer having the full resolution, lossless images available on the blog since they are the exact output from my renderer. However, full resolution lossless PNGs can get fairly large (several MB for a single 1920x1080 frame), which is dragging down the load times for the blog.

Going forward, I’ll be embedding lossy compressed JPG images in blog posts, but the JPGs will link through to the full resolution, lossless PNG originals. Fortunately, high quality JPG compression is quite good these days at fitting an image with nearly imperceptible compression differences into a much smaller footprint. I’ll also be going back and applying this scheme to old posts too at some point.


Addendum 04/08/2016: Now that I am doing some renders in 4K resolution (3840x2160), it’s time for an addendum to this policy. I won’t be uploading full resolution lossless PNGs for 4K images, due to the overwhelming file size (>30MB for a single image, which means a post with just a handful of 4K images can easily add up to hundreds of MB). Instead, for 4K renders, I will embed a downsampled 1080P JPG image in the post, and link through to a 4K JPG compressed to balance image quality and file size.