Attenuated Transmission

Blue liquid in a glass box, with attenuated transmission. Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

A few months ago I added attenuation to Takua a0.5’s Fresnel refraction BSDF. Adding attenuation wound up being more complex than originally anticipated because handling attenuation through refractive/transmissive mediums requires volumetric information in addition to the simple surface differential geometry. In a previous post about my BSDF system, I mentioned that the BSDF system only considered surface differential geometry information; adding attenuation meant extending my BSDF system to also consider volume properties and track more information about previous ray hits.

First off, what is attenuation? Within the context of rendering and light transport, attenuation is when light is progressively absorbed within a medium, which results in a decrease in light intensity as one goes further and further into a medium away from a light source. One simple example is in deep water- near the surface, most of the light that has entered the water remains unabsorbed, and so the light intensity is fairly high and the water is fairly clear. Going deeper and deeper into the water though, more and more light is absorbed and the water becomes darker and darker. Clear objects gain color when light is attenuated at different rates according to different wavelengths. Combined with scattering, attenuation is a major contributing property to the look of transmissive/refractive materials in real life.

Attenuation is described using the Beer-Lambert Law. The part of the Beer-Lambert Law we are concerned with is the definition of transmittance:

\[ T = \frac{\Phi_{e}^{t}}{\Phi_{e}^{i}} = e^{-\tau}\]

The above equation states that the transmittance of a material is equal to the transmitted radiant flux over the received radiant flux, which in turn is equal to e raised to the power of the negative of the optical depth. If we assume uniform attenuation within a medium, the Beer-Lambert law can be expressed in terms of an attenuation coefficient μ and the distance ℓ traveled through the medium as:

\[ T = e^{-\mu\ell} \]

From these expressions, we can see that light is absorbed exponentially as distance into an absorbing medium increases. Returning to building a BSDF system, supporting attenuation therefore means having to know not just the current intersection point and differential geometry, but also the distance a ray has traveled since the previous intersection point. Also, if the medium’s attenuation rate is not constant, then an attenuating BSDF not only needs to know the distance since the previous intersection point, but also has to sample along the incoming ray at some stepping increment and calculate the attenuation at each step. In other words, supporting attenuation requires BSDFs to know the previous hit point in addition to the current one, and also requires BSDFs to be able to ray march from the previous hit point to the current one.
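As a quick illustration, here is a small sketch of how transmittance might be evaluated over a single ray segment, both for a uniform attenuation coefficient and by ray marching through a varying medium. This is just a sketch; the function names and the attenuation lookup callback are illustrative, not Takua’s actual code:

```cpp
#include <cmath>
#include <functional>

// Sketch (not Takua's actual code) of evaluating Beer-Lambert transmittance
// over a ray segment between the previous hit point and the current one.

// Uniform medium: closed-form transmittance T = e^(-mu * l).
float transmittanceUniform(float mu, float segmentLength) {
    return std::exp(-mu * segmentLength);
}

// Non-uniform medium: accumulate optical depth by ray marching along the
// segment. `attenuationAt` is a hypothetical lookup of the attenuation
// coefficient at a parametric distance along the segment (e.g. sampled from
// volumetric data).
float transmittanceRayMarched(const std::function<float(float)>& attenuationAt,
                              float segmentLength, float stepSize) {
    float opticalDepth = 0.0f;
    for (float t = 0.0f; t < segmentLength; t += stepSize) {
        float step = std::fmin(stepSize, segmentLength - t);
        opticalDepth += attenuationAt(t + 0.5f * step) * step;
    }
    return std::exp(-opticalDepth);
}
```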

Adding previous hit information and ray march support to my BSDF system was a very straightforward task. I also added volumetric data support to Takua, allowing for the following attenuation test with a glass Stanford Dragon filled with a checkerboard red and blue medium. The red and blue medium is ray marched through to calculate the total attenuation. Note how the thinner parts of the dragon allow more light through resulting in a lighter appearance, while thicker parts of the dragon attenuate more light resulting in a darker appearance. Also note the interesting red and blue caustics below the dragon:

Glass Stanford Dragon filled with a red and blue volumetric checkerboard attenuating medium. Rendered in Takua a0.5 using VCM.

Things got much more complicated once I added support for what I call “deep attenuation”- that is, attenuation through multiple mediums embedded inside of each other. A simple example is an ice cube floating in a glass of liquid, which one might model in the following way:

Diagram of glass-fluid-ice interfaces. Arrows indicate normal directions.

There are two things in the above diagram that make deep attenuation difficult to implement. First, note that the ice cube is modeled without a corresponding void in the liquid- that is, a ray path that travels through the ice cube records a sequence of intersection events that goes something like “enter water, enter ice cube, exit ice cube, exit water”, as opposed to “enter water, exit water, enter ice cube, exit ice cube, enter water, exit water”. Second, note that the liquid boundary is actually slightly inside of the inner wall of the glass. Intuitively, this may seem like a mistake or an odd property, but this is actually the correct way to model a liquid-glass interface in computer graphics- see this article and this other article for details on why.

So why do these two cases complicate things? As a ray enters each new medium, we need to know what medium the ray is in so that we can execute the appropriate BSDF and get the correct attenuation for that medium. We can only evaluate the attenuation once the ray exits the medium, since attenuation is dependent on how far through the medium the ray traveled. The easy solution is to simply remember what the BSDF is when a ray enters a medium, and then use the remembered BSDF to evaluate attenuation upon the next intersection. For example, imagine the following sequence of intersections:

  1. Intersect glass upon entering glass.
  2. Intersect glass upon exiting glass.
  3. Intersect water upon entering water.
  4. Intersect water upon exiting water.

This sequence of intersections is easy to evaluate. The evaluation would go something like:

  1. Enter glass. Store glass BSDF.
  2. Exit glass. Evaluate attenuation from stored glass BSDF.
  3. Enter water. Store water BSDF.
  4. Exit water. Evaluate attenuation from stored water BSDF.

So far so good. However, remember that in the first case, sometimes we might not have a surface intersection to mark that we’ve exited one medium before entering another. The following scenario demonstrates how this first case results in missed attenuation evaluations:

  1. Intersect water upon entering water.
  2. Exit water, but no intersection!
  3. Intersect ice upon entering ice.
  4. Intersect ice upon exiting ice.
  5. Enter water again, but no intersection either!
  6. Intersect water upon exiting water.

The evaluation sequence starts out okay, but breaks down at the end:

  1. Enter water. Store water BSDF.
  2. Exit water, but no intersection. No BSDF evaluated.
  3. Enter ice. Intersection occurs, so evaluate attenuation from stored water BSDF. Store ice BSDF.
  4. Exit ice. Evaluate attenuation from stored ice BSDF.
  5. Enter water again, but no intersection, so no BSDF stored.
  6. Exit water…. but there is no previous BSDF stored! No attenuation is evaluated!

Alternatively, in step 6, instead of no previous BSDF, we might still have the ice BSDF stored and evaluate attenuation based on the ice. However, this result is still wrong, since we’re now using the ice BSDF for the water.

One simple solution to this problem is to keep a stack of previously seen BSDFs with each ray instead of just storing the previously seen BSDF. When the ray enters a medium through an intersection, we push a BSDF onto the stack. When the ray exits a medium through an intersection, we evaluate whatever BSDF is on the top of the stack and pop the stack. Keeping a stack works well for the previous example case:

  1. Enter water. Push water BSDF on stack.
  2. Exit water, but no intersection. No BSDF evaluated.
  3. Enter ice. Intersection occurs, so evaluate water BSDF from top of stack. Push ice BSDF on stack.
  4. Exit ice. Evaluate ice BSDF from top of stack. Pop ice BSDF off stack.
  5. Enter water again, but no intersection, so nothing is pushed onto the stack.
  6. Exit water. Intersection occurs, so evaluate water BSDF from top of stack. Pop water BSDF off stack.

Excellent, we now have evaluated different medium attenuations in the correct order, haven’t missed any evaluations or used the wrong BSDF for a medium, and as we exit the water and ice our stack is now empty as it should be. The first case from above is now solved… what happens with the second case though? Imagine the following sequence of intersections where the liquid boundary is inside of the two glass boundaries:

  1. Intersect glass upon entering glass.
  2. Intersect water upon entering water.
  3. Intersect glass upon exiting glass.
  4. Intersect water upon exiting water.

The evaluation sequence using a stack is:

  1. Enter glass. Push glass BSDF on stack.
  2. Enter water. Evaluate glass attenuation from top of stack. Push water BSDF.
  3. Exit glass. Evaluate water attenuation from top of stack, pop water BSDF.
  4. Exit water. Evaluate glass attenuation from top of stack, pop glass BSDF.

The evaluation sequence is once again in the wrong order- we just used the glass attenuation when we were traveling through water at the end! Solving this second case requires a modification to our stack based scheme. Instead of popping the top of the stack every time we exit a medium, we should scan the stack from the top down and pop the first instance of a BSDF matching the BSDF of the surface we just exited through. This modified stack results in:

  1. Enter glass. Push glass BSDF on stack.
  2. Enter water. Evaluate glass attenuation from top of stack. Push water BSDF.
  3. Exit glass. Evaluate water attenuation from top of stack. Scan stack and find first glass BSDF matching the current surface’s glass BSDF and pop that BSDF.
  4. Exit water. Evaluate water attenuation from top of stack. Scan stack and pop first matching water BSDF.

At this point, I should mention that pushing/popping onto the stack should only occur when a ray travels through a surface. When the ray simply reflects off of a surface, an intersection has occurred and therefore attenuation from the top of the stack should still be evaluated, but the stack itself should not be modified. This way, we can support diffuse inter-reflections inside of an attenuating medium and get the correct diffuse inter-reflection with attenuation between diffuse bounces! Using this modified stack scheme for attenuation evaluation, we can now correctly handle all deep attenuation cases and embed as many attenuating mediums in each other as we could possibly want.

…or at least, I think so. I plan on running more tests before conclusively deciding this all works. So there may be a followup to this post later if I have more findings.
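For reference, here is a rough sketch of the stack bookkeeping described above. The types and names are illustrative stand-ins rather than Takua’s actual code:

```cpp
#include <iterator>
#include <vector>

// Rough sketch of the medium/BSDF stack described above; illustrative only.
struct Bsdf;  // attenuating BSDF associated with a medium

struct MediumStack {
    std::vector<const Bsdf*> stack;

    // At every intersection, the attenuation for the segment just traveled is
    // evaluated from whatever BSDF is currently on top of the stack (if any).
    const Bsdf* currentMedium() const {
        return stack.empty() ? nullptr : stack.back();
    }

    // Called only when a ray transmits through a surface into a medium; pure
    // reflections leave the stack untouched.
    void enterMedium(const Bsdf* bsdf) { stack.push_back(bsdf); }

    // Called only when a ray transmits through a surface out of a medium: scan
    // from the top down and pop the first entry matching the surface's BSDF, so
    // overlapping boundaries (e.g. water extending into glass) pop correctly.
    void exitMedium(const Bsdf* bsdf) {
        for (auto it = stack.rbegin(); it != stack.rend(); ++it) {
            if (*it == bsdf) {
                stack.erase(std::next(it).base());
                return;
            }
        }
    }
};
```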

A while back, I wrote a PIC/FLIP fluid simulator. However, at the time, Takua Renderer didn’t have attenuation support, so I wound up rendering my simulations with Vray. Now that Takua a0.5 has robust deep attenuation support, I went back and used some frames from my fluid simulator as tests. The image at the top of this post is a simulation frame from my fluid simulator, rendered entirely with Takua a0.5. The water is set to attenuate red and green light more than blue light, resulting in the blue appearance of the water. In addition, the glass has a slight amount of hazy green attenuation too, much like real aquarium glass. As a result, the glass looks greenish from the ends of each glass plate, but is clear when looking through each plate, again much like real glass. Here are two more renders:

Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

Simulated using PIC/FLIP in Ariel, rendered in Takua a0.5 using VCM.

Complex Room Renders

Rendered in Takua a0.5 using VCM. Model credits in the post below.

I realize I have not posted in some weeks now, which means I still haven’t gotten around to writing up Takua a0.5’s architecture and VCM integrator. I’m hoping to get to that once I’m finished with my thesis work. In the meantime, here are some more pretty pictures rendered using Takua a0.5.

A few months back, I made a high-complexity scene designed to test Takua a0.5’s capability for handling “real-world” workloads. The scene was also designed to have an extremely difficult illumination setup. The scene is an indoor room that is lit primarily from outside through glass windows. Yes, the windows are actually modeled as geometry with a glass BSDF! This means everything seen in these renders is being lit primarily through caustics! Of course, no real production scene would be set up in this manner, but I chose this difficult setup specifically to test the VCM integrator. There is a secondary source of light from a metal cylindrical lamp, but this light source is also difficult since the actual light is emitted from a sphere light inside of a reflective metal cylinder that blocks primary visibility from most angles.

The flowers and glass vase are the same ones from my earlier Flower Vase Renders post. The original flowers and vase are by Andrei Mikhalenko, with custom textures of my own. The amazing, colorful Takua poster on the back wall is by my good friend Alice Yang. The two main furniture pieces are by ODESD2, and the Braun SK4 record player model is by one of my favorite archviz artists, Bertrand Benoit. The teapot is, of course, the famous Utah teapot. All textures, shading, and other models are my own.

As usual, all depth of field is completely in-camera and in-renderer. Also, all BSDFs in this scene are fairly complex; there is not a single simple diffuse surface anywhere in the scene! Instancing is used very heavily; the wicker baskets, notebooks, textbooks, chess pieces, teacups, and tea dishes are all instanced from single pieces of geometry. The floorboards are individually modeled but not instanced, since they all vary in length and slightly in width.

A few more pretty renders, all rendered in Takua a0.5 using VCM:

Closeup of Braun SK4 record player with DOF. Rendered using VCM.

Flower vase and tea set. Rendered using VCM

Floorboards, textbooks, and rough metal bin with DOF. The book covers are entirely made up. Rendered using VCM.

Note On Images

Just a quick note on images on this blog. So far, I’ve generally been embedding full resolution, losslessly compressed PNG format images in the blog. I prefer having the full resolution, lossless images available on the blog since they are the exact output from my renderer. However, full resolution lossless PNGs can get fairly large (several MB for a single 1920x1080 frame), which is dragging down the load times for the blog.

Going forward, I’ll be embedding lossy compressed JPG images in blog posts, but the JPGs will link through to the full resolution, lossless PNG originals. Fortunately, high quality JPG compression is quite good these days at fitting an image with nearly imperceptible compression differences into a much smaller footprint. I’ll also be going back and applying this scheme to old posts too at some point.


Addendum 04/08/2016: Now that I am doing some renders in 4K resolution (3840x2160), it’s time for an addendum to this policy. I won’t be uploading full resolution lossless PNGs for 4K images, due to the overwhelming file size (>30MB for a single image, which means a post with just a handful of 4K images can easily add up to hundreds of MB). Instead, for 4K renders, I will embed a downsampled 1080P JPG image in the post, and link through to a 4K JPG compressed to balance image quality and file size.

Hyperion

Just a quick update on future plans. Starting in July, I’m going to be working full time for Walt Disney Animation Studios as a software engineer on their custom, in-house Hyperion Renderer. I couldn’t be more excited about working with everyone on the Hyperion team; ever since the Sorted Deferred Shading paper was published two years ago, I’ve thought that the Hyperion team is doing some of the most interesting work there is in the rendering field right now.

I owe an enormous thanks to everyone that’s advised and supported and encouraged me to continue exploring the rendering and graphics world. Thanks, Joe, Don, Peter, Tony, Mark, Christophe, Amy, Fran, Gabriel, Harmony, and everyone else!

Normally as a rule I only post images to this blog that I made or have a contribution to, but this time I’ll make an exception. Here’s one of my favorite stills from Big Hero 6, rendered entirely using Hyperion and lit by Angela McBride, a friend from PUPs 2011! Images like this one are an enormous source of inspiration to me, so I absolutely can’t wait to get started at Disney and help generate more gorgeous imagery like this!

A still from Big Hero 6, rendered entirely using Hyperion. Property of Walt Disney Animation Studios.

BSDF System

Takua a0.5’s BSDF system was particularly interesting to build, especially because in previous versions of Takua Renderer, I never really had a good BSDF system. Previously, my BSDFs were written in a pretty ad-hoc way and were somewhat hardcoded into the pathtracing integrator, which made BSDF extensibility very difficult and multi-integrator support nearly impossible without significant duplication of BSDF code. In Takua a0.5, I’ve written a new, extensible, modularized BSDF system that is inspired by Mitsuba and Renderman 19/RIS. In this post, I’ll write about how Takua a0.5’s BSDF system works and show some pretty test images generated during development with some interesting models and props.

First, here’s a still-life sort of render showcasing a number of models with a number of interesting materials, all using Takua a0.5’s BSDF system and rendered using my VCM integrator. All of the renders in this post are rendered either using my BDPT integrator or my VCM integrator.

Still-life scene with a number of interesting, complex materials created using Takua a0.5's BSDF system. The chess pieces and notebooks make use of instancing. Rendered in Takua a0.5 using VCM.

BSDFs in Takua a0.5 are designed to support bidirectional evaluation and importance sampling natively. Basically, this means that all BSDFs need to implement five basic functions. These five basic functions are:

  • Evaluate, which takes input and output directions of light and a normal, and returns the BSDF weight, cosine of the angle of the input direction, and color absorption of the scattering event. Evaluate can also optionally return the probability of the output direction given the input direction, with respect to solid angle.
  • CalculatePDFW, which takes the input and output directions of light and a normal, and returns the forward probability of the output direction given the input direction. In order to make the BSDF operate bidirectionally, this function also needs to be able to return the backwards probability if the input and output are reversed.
  • Sample, which takes in an input direction, a normal, and a random number generator and returns an output direction, the BSDF weight, the forward probability of the output direction, and the cosine of the input angle.
  • IsDelta, which returns true if the BSDF’s probability distribution function is a Dirac delta function and false otherwise. This attribute is important for allowing BDPT and VCM to handle perfectly specular BSDFs correctly, since perfectly specular BSDFs are something of a special case.
  • GetContinuationProbability, which takes in an input direction and normal and returns the probability of ending a ray path at this BSDF. This function is used for Russian Roulette early path termination.
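As a rough illustration, a minimal interface with these five functions might look something like the following sketch; the actual types and signatures in Takua a0.5 differ, so treat everything here as illustrative:

```cpp
#include <random>

// Minimal sketch of a BSDF interface with the five functions described above;
// not Takua's actual API.
struct Vec3 { float x, y, z; };
struct Rgb  { float r, g, b; };

struct BsdfEval {
    Rgb weight;          // BSDF weight for this pair of directions
    Rgb absorption;      // color absorption of the scattering event
    float cosThetaIn;    // cosine of the angle of the input direction
    float forwardPdfW;   // optional: pdf of the output direction w.r.t. solid angle
};

struct BsdfSample {
    Vec3 outputDirection;
    Rgb weight;
    float forwardPdfW;
    float cosThetaIn;
};

class Bsdf {
public:
    virtual ~Bsdf() = default;

    // Evaluate the BSDF for a given input/output direction pair and normal.
    virtual BsdfEval evaluate(const Vec3& input, const Vec3& output,
                              const Vec3& normal) const = 0;

    // Forward pdf of the output direction given the input; with reversed = true,
    // return the backwards pdf needed by bidirectional integrators.
    virtual float calculatePdfW(const Vec3& input, const Vec3& output,
                                const Vec3& normal, bool reversed) const = 0;

    // Draw an output direction given an input direction, a normal, and an RNG.
    virtual BsdfSample sample(const Vec3& input, const Vec3& normal,
                              std::mt19937& rng) const = 0;

    // True if the pdf is a Dirac delta function (perfectly specular).
    virtual bool isDelta() const = 0;

    // Russian roulette continuation probability for paths hitting this BSDF.
    virtual float getContinuationProbability(const Vec3& input,
                                             const Vec3& normal) const = 0;
};
```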

In order to be correct and bidirectional, each of these functions should return results that agree with the other functions. For example, taking the output direction generated by Sample and calling Evaluate with that output direction should produce the same color absorption and forward probability and other attributes as Sample did. Sample, Evaluate, and CalculatePDFW are all very similar functions and often can share a large amount of common code, but each one is tailored to a slightly different purpose. For example, Sample is useful for figuring out a new random ray direction along a ray path, while Evaluate is used for calculating BSDF weights while importance sampling light sources.

Small note: I wrote that these five functions all take in a normal, which is technically all they need in terms of differential geometry. However, in practice, passing in a surface point and UV and other differential geometry information is very useful since that allows for various properties to be driven by 2D and 3D textures. In Takua a0.5, I pass in a normal, surface point, UV coordinate, and a geom and primitive ID for future PTEX support, and allow every BSDF attribute to be driven by a texture.

One of the test props I made is the PBRT book, since I thought rendering the Physically Based Rendering book with a physically based renderer and physically based shading would be amusing. The base diffuse color is driven by a texture map, and the interesting rippling and variation in the glossiness of the book cover come from driving additional gloss and specular properties with more texture maps.

Physically Based Rendering book, rendered with my physically based renderer. Note the texture-driven gloss and specular properties. Rendered using BDPT.

In order to be physically correct, BSDFs should also fulfill the following three properties:

  • Positivity, meaning that the return value of the BSDF should always be positive or equal to 0.
  • Helmholtz Reciprocity, which means the return value of the BSDF should not be changed by switching the input and output directions (although switching the input and output CAN change how things are calculated internally, such as in perfectly specular refractive materials).
  • Energy Conservation, meaning the surface cannot reflect more light than arrives.
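Written out formally, with the BSDF taking an incoming and an outgoing direction and the integral taken over the hemisphere of outgoing directions, these three properties are:

\[ f_{r}(\omega_{i}, \omega_{o}) \geq 0 \]

\[ f_{r}(\omega_{i}, \omega_{o}) = f_{r}(\omega_{o}, \omega_{i}) \]

\[ \int_{\Omega} f_{r}(\omega_{i}, \omega_{o}) \cos\theta_{o} \, d\omega_{o} \leq 1 \quad \text{for all } \omega_{i} \]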

At the moment, my base BSDFs are not actually the best physically based BSDFs in the world… I just have Lambertian diffuse, normalized Blinn-Phong, and Fresnel-based perfectly specular reflection/refraction. At a later point I’m planning on adding Beckmann and Disney’s Principled BSDF, and possibly others such as GGX and Ward. However, for the time being, I can still create highly complex and interesting materials because of the modular nature of Takua a0.5’s BSDF system; one of the most powerful uses of this modular system is combining base BSDFs into more complex BSDFs. For example, I have another BSDF called FresnelPhong, which internally calls the normalized Blinn-Phong BSDF but also calls the Fresnel code from my Fresnel specular BSDF, so that the choice of output direction on glossy surfaces accounts for the Fresnel effect. Since the Fresnel specular BSDF handles refractive materials, FresnelPhong allows for creating glossy transmissive surfaces such as frosted glass (albeit not as accurate to reality as one would get with Beckmann or GGX).
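I won’t go into the exact mechanics of FresnelPhong here, but as a rough sketch of the general idea, a Fresnel term can be used to decide how much energy goes to a glossy reflection lobe versus a transmission lobe. The Schlick approximation and the lobe-picking scheme below are assumptions made for the sake of illustration, not necessarily how FresnelPhong actually works:

```cpp
#include <random>

// Illustrative only: a Fresnel term (Schlick's approximation here) used to
// decide how much energy goes to a glossy reflection lobe versus a
// transmission lobe. Takua's actual FresnelPhong may work differently.
float schlickFresnel(float cosThetaI, float iorOutside, float iorInside) {
    float r0 = (iorOutside - iorInside) / (iorOutside + iorInside);
    r0 *= r0;
    float x = 1.0f - cosThetaI;
    return r0 + (1.0f - r0) * x * x * x * x * x;
}

enum class Lobe { GlossyReflection, Transmission };

// Pick a lobe with probability proportional to the Fresnel reflectance; the
// chosen lobe would then be sampled by its component BSDF (e.g. normalized
// Blinn-Phong for reflection, specular refraction for transmission).
Lobe pickLobe(float cosThetaI, float ior, std::mt19937& rng) {
    float reflectance = schlickFresnel(cosThetaI, 1.0f, ior);
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    return (uniform(rng) < reflectance) ? Lobe::GlossyReflection
                                        : Lobe::Transmission;
}
```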

Another one of my test props is a glass chessboard, where half of the pieces and board squares are using frosted glass. Needless to say, this scene is very difficult to render using unidirectional pathtracing. I only have one model of each chess piece type, and all of the pieces on the board are instances with varying materials per instance.

Chessboard with ground glass squares and clear glass squares. Rendered using BDPT.

Chessboard with ground glass and clear glass pieces. Rendered using BDPT.

Another interesting use of modular BSDFs and embedding BSDFs inside of other BSDFs is in implementing bump mapping. Takua a0.5 implements bump mapping as a simple BSDF wrapper that calculates the bump mapped normal and passes that normal into whatever the underlying BSDF is. This approach allows for any BSDF to have a bump map, and even allows for applying multiple bump maps to the same piece of geometry. In addition to specifying bump maps as wrapper BSDFs, Takua a0.5 also allows attaching bump maps to individual geometry so that the same BSDF can be reused with a number of different bump maps attached to a number of different geometries, but under the hood this system works exactly the same as the BSDF wrapper bump map.
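A minimal sketch of the wrapper idea looks something like this; the types and names below are illustrative stand-ins, and the actual wrapper in Takua forwards all five BSDF functions rather than just evaluation:

```cpp
#include <functional>

// Minimal sketch of bump mapping as a BSDF wrapper; illustrative only.
struct Normal3 { float x, y, z; };
struct SurfacePoint {
    Normal3 normal;
    float u, v;  // UV coordinate used to look up the bump texture
};

// The wrapped BSDF's evaluation, reduced to a callable for this sketch.
using InnerBsdfEvaluate = std::function<float(const SurfacePoint&)>;
// A bump map is just a function that produces a perturbed shading normal.
using BumpLookup = std::function<Normal3(const SurfacePoint&)>;

// The wrapper swaps in the bump-mapped normal and defers to the underlying
// BSDF; Sample, CalculatePDFW, and the rest forward the same way, which is why
// any BSDF (including another wrapper) can sit underneath.
float evaluateWithBumpMap(const SurfacePoint& surface, const BumpLookup& bump,
                          const InnerBsdfEvaluate& innerEvaluate) {
    SurfacePoint perturbed = surface;
    perturbed.normal = bump(surface);
    return innerEvaluate(perturbed);
}
```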

This notebook prop’s leathery surface detail comes entirely from a BSDF wrapper bump map:

Notebook with a leathery surface. All surface detail comes from bump mapping. Rendered using BDPT.

Finally, one of the most useful and interesting features of Takua a0.5’s BSDF system is the layered BSDF. The layered BSDF is a special BSDF that allows arbitrary combining, layering, and mixing between different BSDFs, much like Vray’s BlendMtl or Renderman 19/RIS’s LM shader system. Any BSDF can be used as a layer in a layered BSDF, including entire other layered BSDF networks. The Takua layered BSDF consists of a base substrate BSDF, and an arbitrary number of coat layers on top of the substrate. Each coat is given a texture-driven weight which determines how much of the final output BSDF comes from the current coat layer versus from all of the layers and substrate below the current coat layer. Since the weight for each coat layer must be between 0 and 1, the resulting layered BSDF maintains physical correctness as long as all of the component BSDFs are also physically correct. Practically, the layered BSDF is implemented so that with each iteration, only one of the component BSDFs is evaluated and sampled, with the particular component BSDF per iteration chosen randomly based on each component BSDF’s weighting.
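Sketched out, the per-iteration component selection might look something like the following; the top-down walk over the coat weights is my shorthand for the weighting scheme described above, so treat the details as illustrative:

```cpp
#include <random>
#include <vector>

// Illustrative sketch of per-iteration component selection in a layered BSDF.
struct Layer {
    int bsdfIndex;  // which component BSDF this layer uses
    float weight;   // texture-driven weight in [0, 1], evaluated at the hit point
};

// Walk the coats from the top down: with probability `weight`, this coat is the
// one evaluated/sampled this iteration; otherwise fall through toward the
// substrate. Exactly one component BSDF is chosen per iteration.
int pickLayer(const std::vector<Layer>& coatsTopDown, int substrateBsdfIndex,
              std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    for (const Layer& coat : coatsTopDown) {
        if (uniform(rng) < coat.weight) {
            return coat.bsdfIndex;
        }
    }
    return substrateBsdfIndex;
}
```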

The layered BSDF system is what allows the creation of truly interesting and complex materials, since objects in reality often have complex materials consisting of a number of different scattering event types. For example, a real object may have a diffuse base with a glossy clear coat, but there may also be dust and fingerprints on top of the clear coat contributing to the final appearance. The globe model seen in my adaptive sampling post uses a complex layered BSDF; the base BSDF is ground glass, with the continents layered on top as a perfectly specular mirror BSDF, and then an additional dirt and fingerprints layer on top made up of diffuse and varying glossy BSDFs:

Glass globe using Takua's layered BSDF system. The globe has a base ground glass layer, a mirror layer for continents, and a dirt/fingerprints layer for additional detail. Rendered using VCM.

Here’s an additional close-up render of the globe that better shows off some of the complex surface detail:

Close-up of the globe. Rendered using VCM.

Going forward, I’m planning on adding a number of better BSDFs to Takua a0.5 (as mentioned before). Since the BSDF system is so modular and extensible, adding new BSDFs should be relatively simple and should require little to no additional work to integrate into the renderer. Because of how I designed BSDF wrappers, any new BSDF I add will automatically work with the bump map BSDF wrapper and the layered BSDF system. I’m also planning on adding interesting effects to the refractive/transmission BSDF, such as absorption based on Beer’s law and spectral diffraction.

After I finish work on my thesis, I also intend on adding more complex materials for subsurface scattering and volume rendering. These additions will be much more involved than just adding GGX or Beckmann, but I have a rough roadmap for how to proceed and I’ve already built a lot of supporting infrastructure into Takua a0.5. The plan for now is to implement a unified SSS/volume system based on the Unified Points, Beams, and Paths (UPBP) technique presented at SIGGRAPH 2014. UPBP can be thought of as extending VCM to combine a number of different volumetric rendering techniques. I can’t wait to get started on that over the summer!

Adaptive Sampling

Adaptive sampling is a relatively small and simple but very powerful feature, so I thought I’d write briefly about how adaptive sampling works in Takua a0.5. Before diving into the details though, I’ll start with a picture. The scene I’ll be using for comparisons in this post is a globe of the Earth, made of a polished ground glass with reflective metal insets for the landmasses and with a rough scratched metal stand. The globe is on a white backdrop and is lit by two off-camera area lights. The following render is the fully converged reference baseline for everything else in the post, rendered using VCM:

Fully converged reference baseline. Rendered in Takua a0.5 using VCM.

As mentioned before, in pathtracing based renderers, we solve the path integral through Monte Carlo sampling, which gives us an estimate of the total integral per sample thrown. As we throw more and more samples at the scene, we get a better and better estimate of the total integral, which explains why pathtracing based integrators start out producing a noisy image but eventually converge to a nice, smooth image if enough rays are traced per pixel.

In a naive renderer, the number of samples traced per pixel is usually just a fixed number, equal for all pixels. However, not all parts of the image are necessarily equally difficult to sample; for example, in the globe scene, the backdrop should require fewer samples than the ground glass globe to converge, and the ground glass globe in turn should require fewer samples than the two caustics on the ground. This observation means that a fixed sampling strategy can potentially be quite wasteful. Instead, computation can be used much more efficiently if the sampling strategy can adapt and drive more samples towards pixels that require more work to converge, while driving fewer samples towards pixels that have already converged mid-render. Such a sampler can also be used to automatically stop the renderer once it has detected that the entire render has converged, without needing user guesswork for how many samples to use.

The following image is the same globe scene as above, but limited to 5120 samples per pixel using bidirectional pathtracing and a fixed sampler. Note that most of the image is reasonably converged, but there is still noise visible in the caustics:

Fixed sampling, 5120 samples per pixel, BDPT.

Since it may be difficult to see the difference between this image and the baseline image on smaller screens, here is a close-up crop of the same caustic area between the two images:

500% crop. Left: converged baseline render. Right: fixed sampling, 5120 samples per pixel, BDPT.

The difficult part of implementing an adaptive sampler is, of course, figuring out a metric for convergence. The PBRT book presents a very simple adaptive sampling strategy on page 388 of the 2nd edition: for each pixel, generate some minimum number of initial samples and record the radiances returned by each initial sample. Then, take the average of the luminances of the returned radiances, and compute the contrast between each initial sample’s luminance and the average luminance. If any initial sample has a contrast from the average luminance above some threshold (say, 0.5), generate more samples for the pixel up until some maximum number of samples per pixel is reached. If all of the initial samples have contrasts below the threshold, then the sampler can mark the pixel as finished and move on to the next pixel. The idea behind this strategy is to try to eliminate fireflies, since fireflies result from statistically improbable samples that are significantly above the true value of the pixel.
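A minimal sketch of that per-pixel check might look something like this (the exact contrast formula and constants here are simplifications, not PBRT’s exact code):

```cpp
#include <cmath>
#include <vector>

// Sketch of a per-pixel contrast test in the spirit of PBRT 2nd edition;
// the contrast formula and constants are illustrative simplifications.
bool needsMoreSamples(const std::vector<float>& sampleLuminances,
                      float contrastThreshold = 0.5f) {
    if (sampleLuminances.empty()) {
        return true;
    }
    float average = 0.0f;
    for (float lum : sampleLuminances) {
        average += lum;
    }
    average /= static_cast<float>(sampleLuminances.size());
    if (average <= 0.0f) {
        return false;  // nothing but black samples; treat the pixel as converged
    }
    // If any single sample deviates from the average luminance by more than the
    // contrast threshold, keep sampling this pixel (up to some maximum count).
    for (float lum : sampleLuminances) {
        if (std::abs(lum - average) / average > contrastThreshold) {
            return true;
        }
    }
    return false;
}
```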

The PBRT adaptive sampler works decently, but has a number of shortcomings. First, the need to draw a large number of samples per pixel simultaneously makes this approach less than ideal for progressive rendering; while well suited to a bucketed renderer, a progressive renderer prefers to draw a small number of samples per pixel per iteration, and return to each pixel to draw more samples in subsequent iterations. In theory, the PBRT adaptive sampler could be made to work with a progressive renderer if sample information was stored from each iteration until enough samples were accumulated to run an adaptive sampling check, but this approach would require storing a lot of extra information. Second, while the PBRT approach can guarantee some degree of per-pixel variance minimization, each pixel isn’t actually aware of what its neighbours look like, meaning that there still can be visual noise across the image. A better, global approach would have to take into account neighbouring pixel radiance values as a second check for whether or not a pixel is sufficiently sampled.

My first attempt at a global approach (the test scene in this post is a globe, but that pun was not intended) was to simply have the adaptive sampler check the contrast of each pixel with its immediate neighbours. Every N samples, the adaptive sampler would pull the accumulated radiances buffer and flag each pixel as unconverged if the pixel has a contrast greater than some threshold from at least one of its neighbours. Pixels marked unconverged are sampled for N more iterations, while pixels marked as converged are skipped for the next N iterations. After another N iterations, the adaptive sampler would go back and reflag every pixel, meaning that a pixel previously marked as converged could be reflagged as unconverged if its neighbours changed enormously. Generally N should be a rather large number (say, 128 samples per pixel), since doing convergence checks is meaningless if the image is too noisy at the time of the check.
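A rough sketch of that per-pixel flagging pass is below; the buffer layout and the particular contrast formula are illustrative assumptions:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sketch of the per-pixel neighbour contrast flagging described above.
std::vector<bool> flagUnconverged(const std::vector<float>& luminance,
                                  int width, int height, float threshold) {
    std::vector<bool> unconverged(luminance.size(), false);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float center = luminance[y * width + x];
            // Compare against the 8 immediate neighbours; one large contrast is
            // enough to mark the pixel for more sampling over the next N iterations.
            for (int dy = -1; dy <= 1 && !unconverged[y * width + x]; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx;
                    int ny = y + dy;
                    if ((dx == 0 && dy == 0) || nx < 0 || ny < 0 ||
                        nx >= width || ny >= height) {
                        continue;
                    }
                    float neighbour = luminance[ny * width + nx];
                    float contrast = std::abs(center - neighbour) /
                                     std::max(1e-6f, 0.5f * (center + neighbour));
                    if (contrast > threshold) {
                        unconverged[y * width + x] = true;
                        break;
                    }
                }
            }
        }
    }
    return unconverged;
}
```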

Using this strategy, I got the following image, which was set to run for a maximum of 5120 samples per pixel but wound up averaging 4500 samples per pixel, or about a 12.1% reduction in samples needed:

Adaptive sampling per pixel, average 4500 samples per pixel, BDPT.

At an initial glance, this looks pretty good! However, as soon as I examined where the actual samples went, I realized that this strategy doesn’t work. The following image is a heatmap showing where samples were driven, with brighter areas indicating more samples per pixel:

Sampling heatmap for adaptive sampling per pixel. Brighter areas indicate more samples.

Generally, my per-pixel adaptive sampler did correctly identify the caustic areas as needing more samples, but a problem becomes apparent in the backdrop areas: the per-pixel adaptive sampler drove samples evenly within clustered “chunks”, but not evenly across different clusters. This behavior happens because while the per-pixel sampler is now taking into account variance across neighbours, it still doesn’t have any sort of global sense across the entire image! Instead, the sampler is finding localized pockets where variance seems even across pixels, but those pockets can be quite disconnected from further out areas. While the resultant render looks okay at a glance, the clustered variance patterns become apparent if the image contrast is increased:

Adaptive sampling per pixel, with enhanced contrast. Note the local clustering artifacts.

Interestingly, these artifacts are reminiscent of the artifacts that show up in not-fully-converged Metropolis Light Transport renders. This similarity makes sense, since in both cases they arise from uneven localized convergence.

The next approach that I tried is a more global approach adapted from Dammertz et al.’s paper, “A Hierarchical Automatic Stopping Condition for Monte Carlo Global Illumination”. For the sake of simplicity, I’ll refer to the approach in this paper as Dammertz for the rest of this post. Dammertz works by considering the variance across an entire block of pixels at once and flagging the entire block as converged or unconverged, allowing for much more global analysis. At the first variance check, the only block considered is the entire image as one enormous block; if the total variance eb in the entire block is below a termination threshold et, the block is flagged as converged and no longer needs to be sampled further. If eb is greater than et but still less than a splitting threshold es, then the block will be split into two non-overlapping child blocks for the next round of variance checking after N iterations have passed. At each variance check, this process is repeated for each block, meaning the image eventually becomes split into an ocean of smaller blocks. Blocks are kept inside of a simple unsorted list, require no relational information to each other, and are removed from the list once marked as converged, making the memory requirements very simple. Blocks are split along their major axis, with the exact split point chosen to keep error as equal as possible across the two sides of the split.

The actual variance metric used is also very straightforward; instead of trying to calculate an estimate of variance based on neighbouring pixels, Dammertz stores two framebuffers: one buffer I for all accumulated radiances so far, and a second buffer A for accumulated radiances from every other iteration. As the image approaches full convergence, the differences between I and A should shrink, so an estimation of variance can be found simply by comparing radiance values between I and A. The specific details and formulations can be found in section 2.1 of the paper.

I made a single modification to the paper’s algorithm: I added a lower bound to the block size. Instead of allowing blocks to split all the way down to a single pixel, I stop splitting once a block reaches 64 pixels in an 8x8 square. I found that splitting down to single pixels could sometimes cause false positives in convergence flagging, leading to missed pixels similar to in the PBRT approach. Forcing blocks to stop splitting at 64 pixels means there is a chance of false negatives for convergence, but a small amount of unnecessary oversampling is preferable to undersampling.
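As a rough sketch, the per-block decision might look something like the following; the error metric here is a simplified stand-in for the actual formulation in section 2.1 of the paper, and the block list management and axis splitting are omitted for brevity:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Simplified sketch of the per-block convergence decision; the error estimate
// below is a stand-in for the formulation in Dammertz et al., section 2.1.
struct Block {
    int x0, y0, x1, y1;  // pixel bounds, inclusive-exclusive
};

enum class BlockState { Converged, KeepSampling, Split };

BlockState classifyBlock(const Block& b,
                         const std::vector<float>& bufferI,  // all samples so far
                         const std::vector<float>& bufferA,  // every other iteration
                         int imageWidth,
                         float terminateThreshold, float splitThreshold) {
    // Estimate block error from the difference between the two accumulation
    // buffers; as the block converges, I and A approach each other.
    float error = 0.0f;
    int count = 0;
    for (int y = b.y0; y < b.y1; ++y) {
        for (int x = b.x0; x < b.x1; ++x) {
            float i = bufferI[y * imageWidth + x];
            float a = bufferA[y * imageWidth + x];
            error += std::abs(i - a) / std::max(1e-6f, std::sqrt(i));
            ++count;
        }
    }
    error /= static_cast<float>(std::max(1, count));

    if (error < terminateThreshold) {
        return BlockState::Converged;  // remove block from the active list
    }
    // Keep blocks at or below 8x8 pixels intact to avoid the false positives
    // that splitting all the way down to single pixels can cause.
    bool canSplit = (b.x1 - b.x0) * (b.y1 - b.y0) > 64;
    if (error < splitThreshold && canSplit) {
        return BlockState::Split;      // split along the block's major axis
    }
    return BlockState::KeepSampling;
}
```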

Using this per-block adaptive sampler, I got the following image, which again looks nearly identical to the fixed sampler result at first glance. This render was also set to run for a maximum of 5120 samples per pixel, but wound up averaging just 2920 samples per pixel, or about a 42.9% reduction in samples needed:

Adaptive sampling per block, average 2920 samples per pixel, BDPT.

The sample heatmap looks good too! The heatmap shows that the sampler correctly identified the caustic and highlight areas as needing more samples, and doesn’t have clustering issues in areas that needed fewer samples:

Sampling heatmap for adaptive sampling per block. Brighter areas indicate more samples.

Boosting the image contrast shows that the image is free of local clustering artifacts and noise is even across the entire image, which is what we would expect:

Adaptive sampling per block, with enhanced contrast. Note the even noise spread and lack of local clustering artifacts.

Looking at the same 500% crop area as earlier, the adaptive per-block and fixed sampling renders are indistinguishable:

500% crop. Left: fixed sampling, 5120 samples per pixel, BDPT. Right: adaptive per-block sampling, average 2920 samples per pixel, BDPT.

So with that, I think Dammertz works pretty well! Also, the computational and memory overhead required for the Dammertz approach is basically negligible relative to the actual rendering process. This approach is the one that is currently in Takua a0.5.

I actually have an additional adaptive sampling trick designed specifically for targeting fireflies. This additional trick works in conjunction with the Dammertz approach. However, this post is already much longer than I originally planned, so I’ll save that discussion for a later post. I’ll also be getting back to the PPM/VCM posts in my series of integrator posts shortly; I have not had much time to write on my blog since the vast majority of my time is currently focused on my thesis, but I’ll try to get something posted soon!