Flower Vase Renders

Rendered in Takua a0.5 using BDPT. Nearly a quarter of a billion triangles.

In order to test Takua a0.5, I’ve been using my renderer on some quick little “pretty picture” projects. I recently ran across a fantastic flower vase model by artist Andrei Mikhalenko and used Andrei’s model as the basis for a shading exercise. The above and following images are rendered entirely in Takua a0.5 using bidirectional pathtracing. I textured and shaded everything using Takua a0.5’s layered material system, and also made some small modifications to the model (moved some flowers around, extended the stems to the bottom of the vase, and thickened the bottom of the vase). Additionally, I further subdivided the flower petals to gain additional detail and smoothness, meaning the final rendered model weighs in at nearly a quarter of a billion triangles. Obviously using such heavy models is not practical for a single prop in real world production, but I wanted to push the amount of geometry my renderer can handle. Overall, total memory usage for each of these renders hovered around 10.5 GB. All images were rendered at 1920x1080 resolution; click on each image to see the full resolution renders.

I split the flowers into five randomly distributed groups and assigned each group a different flower material. Each material is a two-sided material with a different BSDF assigned to each side, with the side determined by the surface normal direction. For each flower, the outer BSDF has a slightly darker reflectance than the inner BSDF, which efficiently approximates the subsurface scattering effect real flowers have without actually having to use subsurface scattering. In this case, using a two-sided material to fake the effect of subsurface scattering is desirable since the model is so complex and heavy. Also, the stems and branches are all bump mapped.
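
As a rough illustration of the two-sided idea (with hypothetical types standing in for Takua’s actual material system, not real Takua code), the side selection might look something like this:

```cpp
// Minimal sketch of a two-sided material that picks a BSDF based on which side
// of the surface a ray arrives from. Vec3 and BSDF are illustrative stand-ins.
struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct BSDF; // opaque here; only pointers are needed for this sketch

struct TwoSidedMaterial {
    const BSDF* outsideBSDF; // slightly darker reflectance, per the flower setup above
    const BSDF* insideBSDF;

    // A ray traveling against the shading normal is hitting the outside of the
    // petal; a ray traveling along the normal is hitting the inside.
    const BSDF* select(const Vec3& shadingNormal, const Vec3& rayDirection) const {
        return dot(shadingNormal, rayDirection) < 0.0f ? outsideBSDF : insideBSDF;
    }
};
```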

Rendered in Takua a0.5 using BDPT. Note the complex caustics from the vase and water.

This set of renders was a good test for bidirectional pathtracing because of the complex nature of the caustics in the vase and water; note that the branches inside of the vase and water cannot be efficiently rendered by unidirectional pathtracing, since they are inside glass and therefore cannot directly sample the light sources. The scene is lit by a pair of rectlights, one warmer and one cooler in temperature. This lighting setup, combined with the thick glass and water volume at the bottom of the vase, produces some interesting caustics on the ground beneath the vase.

The combination of the complex caustics and the complex geometry in the bouquet itself meant that a fairly deep maximum ray path length was required (16 bounces). Using BDPT helped immensely with resolving the complex bounce lighting inside of the bouquet, but the caustics proved to be difficult for BDPT; in all of these renders, everything except for the caustics converged within about 30 minutes on a quad-core Intel Core i7 machine, but the caustics took a few hours to converge in the top image, and a day to converge for the second image. I’ll discuss caustic performance in BDPT compared to PPM and VCM in some upcoming posts.

Rendered in Takua a0.5 using BDPT. Depth of field and circular bokeh entirely in-camera.

All depth of field is completely in-camera and in-renderer as well. No post processed depth of field whatsoever! For the time being, Takua a0.5 only supports circular apertures and therefore only circular bokeh, but I plan on adding custom aperture shapes after I finish my thesis work. In general, I think that testing my own renderer with plausibly real-world production quality scenes is very important. After all, having just a toy renderer with pictures of spheres is not very fun… the whole point of a renderer is to generate some really pretty pictures! For my next couple of posts, I’m planning on showing some more complex material/scene tests, and then moving onto discussing the PPM and VCM integrators in Takua.

Addendum: I should comment on the memory usage a bit more, since some folks have expressed interest in what I’m doing there. By default, the geometry actually weighs in closer to 30 GB in memory usage, so I had to implement some hackery to get this scene to fit in memory on a 16 GB machine. The hack is really simple: I added an optional half-float mode for geometry storage. In practice, using half-floats for geometry is usually not advisable due to precision loss, but in this particular scene, that precision loss becomes more acceptable due to a combination of depth of field hiding most alignment issues closer to camera, and sheer visual complexity making other alignment issues hard to spot without looking too closely. Additionally, for the flowers I also threw away all of the normals and instead recompute them on the fly at render time. Recomputing normals on the fly results in a small performance hit, but is vastly preferable to going out of core.
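
For the curious, a minimal sketch of the kind of half-float conversion involved looks roughly like the following. This is simplified and hypothetical rather than Takua’s actual implementation: it truncates rather than rounds, and flushes tiny values to zero instead of producing half denormals.

```cpp
#include <cstdint>
#include <cstring>

// Convert a 32-bit float to a 16-bit half (truncating the mantissa).
static uint16_t floatToHalf(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    uint32_t sign = (bits >> 16) & 0x8000u;
    int32_t exponent = static_cast<int32_t>((bits >> 23) & 0xFFu) - 127 + 15;
    uint32_t mantissa = (bits >> 13) & 0x3FFu;
    if (exponent <= 0) return static_cast<uint16_t>(sign);            // flush tiny values to zero
    if (exponent >= 31) return static_cast<uint16_t>(sign | 0x7C00u); // overflow to infinity
    return static_cast<uint16_t>(sign | (exponent << 10) | mantissa);
}

// Convert a 16-bit half back to a 32-bit float.
static float halfToFloat(uint16_t h) {
    uint32_t sign = static_cast<uint32_t>(h & 0x8000u) << 16;
    uint32_t exponent = (h >> 10) & 0x1Fu;
    uint32_t mantissa = h & 0x3FFu;
    uint32_t bits;
    if (exponent == 0) {
        bits = sign; // zero (denormals flushed)
    } else if (exponent == 31) {
        bits = sign | 0x7F800000u | (mantissa << 13); // infinity / NaN
    } else {
        bits = sign | ((exponent - 15 + 127) << 23) | (mantissa << 13);
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

// Vertex positions stored as three halves instead of three floats: 6 bytes vs. 12.
struct HalfVertex {
    uint16_t x, y, z;
};
```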

Multiple Importance Sampling

A key tool introduced by Veach as part of his bidirectional pathtracing formulation is multiple importance sampling (MIS). As discussed in my previous post, the entire purpose of rendering from a mathematical perspective is to solve the light transport equation, which in the case of all pathtracing type renderers means solving the path integral formulation of light transport. Since the path integral does not have a closed form solution in all but the simplest of scenes, we have to estimate the full integral using various sampling techniques in path space, hence unidirectional pathtracing, bidirectional pathtracing, Metropolis-based techniques, and so on. As we saw with the light source in glass case and with SDS paths, often a single path sampling technique is not sufficient for capturing a good estimate of the path integral. Instead, a good estimate often requires a combination of a number of different path sampling techniques; MIS is a critical mechanism for combining multiple sampling techniques in a manner that reduces total variance. Without MIS, directly combining sampling techniques through averaging can often have the opposite effect and increase total variance.

The following image is a recreation of the test scene used in the Veach thesis to demonstrate MIS. The scene consists of four glossy bars going from less glossy at the top to more glossy at the bottom, and four sphere lights of increasing size. The smallest sphere light has the highest emission intensity, and the largest sphere light has the lowest emission. I modified the scene to add in a large rectangular area light off camera on each side of the scene, and I added an additional bar to the bottom of the scene with gloss driven by a texture map:

Figure 1: Recreation of the Veach MIS test scene. Rendered in Takua.

The above scene is difficult to render using any single path sampling technique because of the various combinations of surface glossiness and emitter size/intensity. For large emitter/high gloss combinations, importance sampling by the BSDF tends to result in lower variance. In this case, the reason is that a random direction drawn from the narrow BSDF lobe is likely to hit the large light, whereas a random point sampled on the large light is unlikely to fall within the narrow BSDF lobe, so matching the sample distribution to the BSDF lobe is more efficient. However, for small emitter/low gloss combinations, the reverse is true: directions sampled from the wide BSDF lobe rarely hit the small light, so sampling the light directly is more efficient. If we take the standard Veach scene and sample only by BSDF and then only by light source, we can see how each strategy fails in different cases. Both of these renders would eventually converge if left to render for long enough, but the rate of convergence in difficult areas would be extremely slow:

Figure 2: BSDF sampling only, 64 iterations.

Figure 3: Light sampling only, 64 iterations.

MIS allows us to combine m different sampling strategies into a single unbiased estimator by weighting the samples drawn from each strategy according to the strategies' probability density functions (pdfs). Mathematically, this is expressed as:

\[ \langle I_{j} \rangle_{MIS} = \sum_{i=1}^{m} \frac{1}{n_{i}} \sum_{j=1}^{n_{i}} w_{i}(X_{i,j}) \frac{f(X_{i,j})}{p_{i}(X_{i,j})} \]

where Xi,j are independent random variables drawn from some distribution pi and wi(Xi,j) is some heuristic for weighting samples from each sampling technique with respect to the pdfs of all of the techniques being combined. The reason MIS is able to significantly lower variance is that a good MIS weighting function dampens contributions with low pdfs. The Veach thesis presents two good weighting heuristics, the power heuristic and the balance heuristic. The power heuristic is defined as:

\[ w_{i}(x) = \frac{[n_{i}p_{i}(x)]^{\beta}}{\sum_{k=1}^{m}[n_{k}p_{k}(x)]^{\beta}}\]

The power heuristic states that the weight for a given sampling technique should be that technique's pdf (scaled by its sample count) raised to a power β, divided by the sum of the same quantity over all considered sampling techniques. For the power heuristic, β is typically set to 2. The balance heuristic is simply the power heuristic for β=1. In the vast majority of cases, the balance heuristic is a near-optimal weighting heuristic (and the power heuristic can cover most remaining edge cases), assuming that the base sampling strategies are decent to begin with.
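
In code form, for the common two-strategy case, the two heuristics reduce to a couple of one-liners. This is just a minimal sketch with illustrative function names:

```cpp
// Power heuristic with beta = 2 for two strategies: nf/pf are the sample count
// and pdf of the strategy that generated the sample, ng/pg are the sample
// count and pdf of the other strategy.
float powerHeuristic(int nf, float pf, int ng, float pg) {
    float f = nf * pf;
    float g = ng * pg;
    return (f * f) / (f * f + g * g);
}

// Balance heuristic: the power heuristic with beta = 1.
float balanceHeuristic(int nf, float pf, int ng, float pg) {
    float f = nf * pf;
    float g = ng * pg;
    return f / (f + g);
}
```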

For the standard Veach MIS demo scene, the best result is obtained by using MIS to combine BSDF and light sampling. The following image is the Veach scene again, this time rendered using MIS with 64 iterations. Note that all highlights are now roughly equally converged and the entire image matches the reference render above, apart from noise:

Figure 4: Light and BSDF sampling combined using MIS, 64 iterations.
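
For a concrete picture of how the two strategies get combined, here is a rough sketch of a direct-lighting estimator that draws one light sample and one BSDF sample and weights each with the power heuristic shown earlier. All of the types and helper functions here (Spectrum, ShadingPoint, sampleLight, sampleBSDF, lightPdf, bsdfPdf, traceVisibility, traceToLight) are hypothetical placeholders, not Takua’s actual API:

```cpp
// One-light, one-sample-per-strategy MIS direct lighting sketch.
Spectrum estimateDirect(const ShadingPoint& sp, const Light& light, Sampler& rng) {
    Spectrum result(0.0f);

    // Strategy 1: sample a point on the light, weight by the power heuristic.
    LightSample ls = sampleLight(light, sp, rng);
    if (ls.pdf > 0.0f && traceVisibility(sp, ls)) {
        float bsdfP = bsdfPdf(sp, ls.direction);
        float w = powerHeuristic(1, ls.pdf, 1, bsdfP);
        result += evalBSDF(sp, ls.direction) * ls.radiance * (w / ls.pdf);
    }

    // Strategy 2: sample a direction from the BSDF, weight by the power heuristic.
    BSDFSample bs = sampleBSDF(sp, rng);
    if (bs.pdf > 0.0f) {
        float lightP = lightPdf(light, sp, bs.direction);
        if (lightP > 0.0f) {
            Spectrum Le = traceToLight(sp, bs.direction, light); // zero if missed or occluded
            float w = powerHeuristic(1, bs.pdf, 1, lightP);
            result += bs.value * Le * (w / bs.pdf);
        }
    }
    return result;
}
```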

On its own, BDPT does not necessarily have an improved convergence rate over vanilla unidirectional pathtracing; BDPT gains its significant edge in convergence rate only once MIS is applied, since BDPT's efficiency comes from being able to extract a large number of path sampling techniques out of a single bidirectional path. To demonstrate the impact of MIS on BDPT, I rendered the following images using BDPT with and without MIS. The scene is a standard Cornell Box, but I replaced the back wall with a more complex scratched, glossy surface. The first image is the fully converged ground truth render, followed by renders with and without MIS:

Figure 5: Cornell Box with scratched glossy back wall. Rendered using BDPT with MIS.

Figure 6: BDPT with MIS, 16 iterations.

Figure 7: BDPT without MIS, 16 iterations.

As seen above, the version of BDPT without MIS is significantly less converged. BDPT without MIS will still converge to the correct solution, but in practice it can often be only as good as, or even worse than, unidirectional pathtracing.

Later on, we’ll discuss MIS beyond bidirectional pathtracing. In fact, MIS is the critical component to making VCM possible!


Addendum 01/12/2018: A reader noticed some brightness inconsistencies in the original versions of Figures 2 and 3, which came from bugs in Takua’s light sampling code without MIS at the time. I have replaced the original versions of Figures 1, 2, 3, and 4 with new, correct versions rendered using the current version of Takua as of the time of writing for this addendum.

Because of how much noise there is in Figures 2 and 3, it might be slightly harder to see that they converge to the reference image. To make the convergence clearer, I rendered out each sampling strategy using 1024 samples per pixel instead of just 64:

Figure 8: BSDF sampling only, 1024 iterations.

Figure 9: Light sampling only, 1024 iterations.

Note how Figures 8 and 9 match Figure 1 exactly, aside from noise. In Figure 9, the reflection of the right-most sphere light on the top-most bar is still extremely noisy because of the difficulty of finding a random light sample that happens to produce a valid BSDF response for the near-perfect specular lobe.

One last minor note: I’m leaving the main text of this post unchanged, but the updated renders use Takua’s modern shading system instead of the old one from 2015; in the new shading system, the metal bars use roughness instead of gloss, and use GGX instead of a normalized Phong variant.

Bidirectional Pathtracing Integrator

As part of Takua a0.5’s complete rewrite, I implemented the vertex connection and merging (VCM) light transport algorithm. Implementing VCM was one of the largest efforts of this rewrite, and is probably the single feature that I am most proud of. Since VCM subsumes bidirectional pathtracing and progressive photon mapping, I also implemented Veach-style bidirectional pathtracing (BDPT) with multiple importance sampling (MIS) and Toshiya Hachisuka’s stochastic progressive photon mapping (SPPM) algorithm. Since each one of these integrators is fairly complex and interesting by themselves, I’ll be writing a series of posts on my BDPT and SPPM implementations before writing about my full VCM implementation. My plan is for each integrator to start with a longer post discussing the algorithm itself and show some test images demonstrating interesting properties of the algorithm, and then follow up with some shorter posts detailing specific tricky or interesting pieces and also show some pretty real-world production-plausible examples of when each algorithm is particularly useful.

As usual, we’ll start off with an image. Of course, all images in this post are rendered entirely using Takua a0.5. The following image is a Cornell Box lit by a textured sphere light completely encased in a glass sphere, rendered using my bidirectional pathtracer integrator. For reasons I’ll discuss a bit later in this post, this scene belongs to a whole class of scenes at which unidirectional pathtracing is absolutely abysmal; these scenes require a bidirectional integrator to converge in any reasonable amount of time:

Room lit with a textured sphere light enclosed in a glass sphere, converged result rendered using bidirectional pathtracing.

To understand why BDPT is a more robust integrator than unidirectional pathtracing, we need to start by examining the light transport equation and its path integral formulation. The light transport equation was introduced by Kajiya and is typically presented using the formulation from Eric Veach’s thesis:

\[ L_{\text{o}}(\mathbf{x},\, \omega_{\text{o}}) \,=\, L_e(\mathbf{x},\, \omega_{\text{o}}) \ +\, \int_{\mathcal{S}^2} L_{\text{o}}(\mathbf{x}_\mathcal{M}(\mathbf{x},\, \omega_{i}),\, -\omega_{i}) \, f_s(\mathbf{x},\, \omega_{i} \rightarrow \omega_{\text{o}}) \, d \sigma_{\mathbf{x}}^{\perp} (\omega_{i}) \]

Put into words instead of math, the light transport equation simply states that the amount of light leaving any point is equal to the amount of light emitted at that point plus the total amount of light arriving at that point from all directions, weighted by the surface reflectance and absorption at that point. Combined with later extensions to account for effects such as volume scattering and subsurface scattering and diffraction, the light transport equation serves as the basis for all of modern physically based rendering. In order to solve the light transport equation in a practical manner, Veach presents the path integral formulation of light transport:

\[ I_{j} = \int_{\Omega} L_{e}(\mathbf{x}_{0})G(\mathbf{x}_{0}\leftrightarrow \mathbf{x}_{1})\left[\prod_{i=1}^{k-1}\rho(\mathbf{x}_{i})G(\mathbf{x}_{i}\leftrightarrow \mathbf{x}_{i+1})\right]W_{e}(\mathbf{x}_{k}) \, d\mu(\bar{\mathbf{x}}) \]

The path integral states that for a given pixel on an image, the amount of radiance arriving at that pixel is the integral of all radiance coming in through all paths in path space, where a path is the route taken by an individual photon from the light source through the scene to the camera/eye/sensor, and path space simply encompasses all possible paths. Since there is no closed form solution to the path integral, the goal of modern physically based ray-tracing renderers is to sample a representative subset of path space in order to produce a reasonably accurate estimate of the path integral per pixel; progressive renderers estimate the path integral piece by piece, producing a better and better estimate of the full integral with each new iteration.
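
The progressive accumulation itself is simple; per pixel it amounts to nothing more than a running average of independent per-iteration estimates. A trivial sketch, with purely illustrative names:

```cpp
// Each iteration contributes one independent estimate of the pixel's value;
// the running mean converges toward the true path integral as iterations grow.
struct ProgressivePixel {
    double accumulated = 0.0; // sum of all per-iteration estimates so far
    int iterations = 0;

    void addIterationEstimate(double estimate) {
        accumulated += estimate;
        iterations += 1;
    }

    // Current best estimate of the path integral for this pixel.
    double currentEstimate() const {
        return iterations > 0 ? accumulated / iterations : 0.0;
    }
};
```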

At this point, we should take a brief detour to discuss the terms “unbiased” versus “biased” rendering. Within the graphics world, there’s a lot of confusion and preconception about what each of these terms means. In actuality, an unbiased rendering algorithm is simply one where each iteration produces a result that is correct on average (in expectation) for the particular piece of path space being sampled, while a biased rendering algorithm is one where each iteration produces a systematically approximate result for the piece of path space being sampled. However, bias is not necessarily a bad thing; a biased algorithm can still be consistent, that is, it can converge in the limit to the same result as an unbiased algorithm. In practice, we should care less about whether or not an algorithm is biased or unbiased so long as it is consistent. BDPT is an unbiased, consistent integrator, whereas SPPM is a biased but still consistent integrator.

Going back to the path integral, we can quickly see where unidirectional pathtracing comes from once we view light transport through the path integral. The most obvious way to evaluate the path integral is to do exactly as the path integral says: trace a path starting from a light source, through the scene, and if the path eventually hits the camera, accumulate the radiance along the path. This approach is one form of unidirectional pathtracing that is typically referred to as light tracing (LT). However, since the camera is a fairly small target for paths to hit, unidirectional pathtracing is typically implemented going in reverse: start each path at the camera, and trace through the scene until each path hits a light source or goes off into empty space and is lost. This approach is called backwards pathtracing and is what people usually are referring to when they use the term pathtracing (PT).

As I discussed a few years back in a previous post, pathtracing with direct light importance sampling is pretty efficient at a wide variety of scene types. However, pathtracing with direct light importance sampling will fail for any type of path where the light source cannot be directly sampled; we can easily construct a number of plausible, common setups where this situation occurs. For example, imagine a case where a light source is completely enclosed within a glass container, such as a glowing filament within a glass light bulb. In this case, for any pair consisting of a point in space and a point on the light source, the direction needed to reach the light point from the surface point through the glass is not simply the normalized difference between the two points; instead, the ray has to leave the surface at exactly the right angle so that it arrives at the light point after refracting through the glass. Without knowing the exact angle required to make this connection beforehand, the probability of a random direct light sample direction arriving at the glass interface at the correct angle is extremely small; this problem is compounded if the light source itself is very small to begin with.

Taking the sphere light in a glass sphere scene from earlier, we can compare the result of pathtracing without glass around the light versus with glass around the light. The following comparison shows 16 iterations each, and we can see that the version with glass around the light is significantly less converged:

Pathtracing, 16 iterations, with glass sphere.

Pathtracing, 16 iterations, without glass sphere.

Generally, pathtracing is terrible at resolving caustics, and the glass-in-light scenario is one where all illumination within the scene is through caustics. Conversely, light tracing is quite good at handling caustics and can be combined with direct sensor importance sampling (same idea as direct light importance sampling, just targeting the camera/eye/sensor instead of a light source). However, light tracing in turn is bad at handling certain scenarios that pathtracing can handle well, such as small distant spherical lights.

The following image again shows the sphere light in a glass sphere scene, but is now rendered for 16 iterations using light tracing. Note how the render is significantly more converged, for approximately the same computational cost. The glass sphere and sphere light render as black since in light tracing, the camera cannot be directly sampled from a specular surface.

Light tracing, 16 iterations, with glass sphere.

Since bidirectional pathtracing subsumes both pathtracing and light tracing, I implemented pathtracing and light tracing simultaneously and used each integrator as a check on the other, since correct integrators should converge to the same result. Implementing light tracing requires BSDFs and emitters to be a bit more robust than in vanilla pathtracing; emitters have to support both emission and illumination, and BSDFs have to support bidirectional evaluation. Light tracing also requires the ability to directly sample the camera and intersect the camera’s image plane to figure out what pixel to contribute a path to; as such, I implemented a rasterize function for my thin-lens and fisheye camera models. My thin-lens camera’s rasterization function supports the same depth of field and bokeh shape capabilities that the thin-lens camera’s raycast function supports.
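
As a rough sketch of what such a rasterize query involves, simplified down to a pinhole-style projection with illustrative names (the actual thin-lens version would also sample a lens position and refocus through the focal plane, and the fisheye camera uses a different projection entirely):

```cpp
#include <cmath>

// Hypothetical camera description; the camera sits at the origin looking down
// -z in its own space, and points are assumed to already be in that frame.
struct SimpleCamera {
    float verticalFovRadians;
    float aspectRatio;
    int imageWidth;
    int imageHeight;
};

// Returns true and writes the pixel coordinates if the camera-space point
// (cx, cy, cz) projects onto the image plane; returns false otherwise.
bool rasterizeCameraSpacePoint(const SimpleCamera& cam, float cx, float cy, float cz,
                               float* pixelX, float* pixelY) {
    if (cz >= 0.0f) return false; // behind the camera

    // Perspective divide onto the image plane.
    float x = -cx / cz;
    float y = -cy / cz;

    // Map from the field of view into normalized [0,1) screen space.
    float tanHalfFov = std::tan(0.5f * cam.verticalFovRadians);
    float sx = 0.5f + 0.5f * x / (tanHalfFov * cam.aspectRatio);
    float sy = 0.5f - 0.5f * y / tanHalfFov;
    if (sx < 0.0f || sx >= 1.0f || sy < 0.0f || sy >= 1.0f) return false;

    *pixelX = sx * cam.imageWidth;
    *pixelY = sy * cam.imageHeight;
    return true;
}
```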

The key insight behind bidirectional pathtracing is that since light tracing and vanilla pathtracing each have certain strengths and weaknesses, combining the two sampling techniques should result in a more robust path sampling technique. In BDPT, for each pixel per iteration, a path is traced starting from the camera and a second path is traced starting from a point on a light source. The two paths are then joined into a single path, conditional on an unoccluded line of sight from the end vertices of the two paths to each other. A BDPT path of length k with k+1 vertices can then be used to generate up to k+2 path sampling techniques by connecting each vertex on each subpath to every other vertex on the other subpath. While BDPT per iteration is much more expensive than unidirectional pathtracing, the much larger number of sampling techniques leads to a significantly higher convergence rate that typically outweighs the higher computational cost.

Below is the same scene as above rendered with 16 iterations of BDPT, and also rendered with the same amount of computation time as 16 iterations of pathtracing (which works out to about 5 iterations of BDPT). Note how with just 5 iterations, the BDPT result with the glass sphere has about the same level of noise as the pathtraced result for 16 iterations without the glass sphere. At 16 iterations, the BDPT result with the glass sphere is noticeably more converged than the pathtraced result for 16 iterations without the glass sphere.

BDPT, 16 iterations, with glass sphere.

BDPT, 5 iterations (same compute time as 16 iterations pathtracing), with glass sphere.

A naive implementation of BDPT would be for each pixel per iteration, trace a full light subpath, store the result, trace a full camera subpath, store the result, and then perform the connection operations between each vertex pair. However, since this approach requires storing the entirety of both subpaths for the entire iteration, there is room for some improvement. For Takua a0.5, my implementation stores only the full light subpath. At each bounce of the camera subpath, my implementation connects the current vertex to each vertex of the stored light subpath, weights and accumulates the result, and then moves onto the next bounce without having to store previous path vertices.
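
Structurally, that strategy looks roughly like the following sketch. Everything here (PathVertex, Scene, Film, traceLightSubpath, startCameraPath, extendCameraPath, visible, connect, misWeight) is a hypothetical placeholder meant only to show the shape of the loop, not Takua's actual code:

```cpp
#include <vector>

// Store only the light subpath; connect each camera vertex to every stored
// light vertex as the camera subpath is traced, then discard the camera vertex.
void bdptSample(Scene& scene, Sampler& rng, Film& film,
                int pixelX, int pixelY, int maxBounces) {
    // 1. Trace and store the full light subpath up front.
    std::vector<PathVertex> lightPath = traceLightSubpath(scene, rng);

    // 2. Walk the camera subpath one bounce at a time.
    PathVertex cameraVertex = startCameraPath(scene, rng, pixelX, pixelY);
    for (int bounce = 0; bounce < maxBounces && cameraVertex.valid; ++bounce) {
        // Connect the current camera vertex to every light vertex, MIS-weight
        // the contribution, and accumulate it immediately.
        for (const PathVertex& lightVertex : lightPath) {
            if (visible(scene, cameraVertex, lightVertex)) {
                Spectrum contribution = connect(cameraVertex, lightVertex);
                float weight = misWeight(cameraVertex, lightVertex, lightPath);
                film.add(pixelX, pixelY, weight * contribution);
            }
        }
        // Advance the camera subpath; previous camera vertices are never stored.
        cameraVertex = extendCameraPath(scene, cameraVertex, rng);
    }
}
```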

The following image is another example of a scene that BDPT is significantly better at sampling than any unidirectional pathtracing technique. The scene consists of a number of diffuse spheres and spherical lights inside of a glass bunny. In this scene, everything outside of the bunny is being lit using only caustics, while diffuse surfaces inside of the bunny are being lit using a combination of direct lighting, indirect diffuse bounces, and caustics from outside of the bunny reflecting/refracting back into the bunny. This last type of lighting belongs to a category of paths known as specular-diffuse-specular (SDS) paths that are especially difficult to sample unidirectionally.

Various diffuse spheres and sphere lights inside of a glass bunny, rendered using BDPT.

Here is the same scene as above, but with the glass bunny removed just to make it a bit easier to see what is going on with the spheres:

Same spheres as above, sans bunny. Rendered using BDPT.

Comparing pathtracing versus BDPT performance for 16 iterations, BDPT’s vastly better performance on this scene becomes obvious:

16 iterations, rendered using pathtracing.

16 iterations, rendered using BDPT.

In the next post, I’ll write about multiple importance sampling (MIS), how it impacts BDPT, and my MIS implementation in Takua a0.5.

Consistent Normal Interpolation

I recently ran into a problem with interpolated normals. Instead of supporting sphere primitives directly, Takua Rev 5 generates polygon mesh spheres and handles them the same way as any other polygon mesh is handled. However, when I ran a test using a glass sphere, a lot of fireflies appeared:

Polygon mesh sphere with heavy firefly artifacts.

The fireflies are an artifact arising from how normal interpolation interacts with specular materials. Since the sphere is a polygonal mesh, normal interpolation is required to give the sphere a smooth appearance instead of a faceted one. The interpolation scheme I was using was vanilla Phong normal interpolation: store a smoothed normal at each vertex, and then calculate the smooth shading normal at each point as the barycentric-coordinate-weighted sum of the smooth normals at each vertex of the current triangle. This works well for most cases, but a problem arises at grazing angles: since the smooth shading normal corresponds not to the actual geometry but to a “virtual” smoothed version of the geometry, sometimes outgoing specular rays will end up going below the tangent plane of the current hit point. Because of this, rays hitting a glass sphere with Phong normal interpolation at a grazing angle can sometimes go the wrong way, hence the artifacts in the above image.
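
To make the failure mode concrete, here is a small sketch of the interpolation and the check that reveals the problem, using a minimal illustrative Vec3 rather than Takua's actual math library:

```cpp
struct Vec3 { float x, y, z; };

inline Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }
inline Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Vanilla Phong normal interpolation: barycentric-coordinate-weighted sum of
// the smoothed per-vertex normals (normalize the result before use).
Vec3 phongInterpolatedNormal(const Vec3& n0, const Vec3& n1, const Vec3& n2,
                             float u, float v) {
    float w = 1.0f - u - v;
    return w * n0 + u * n1 + v * n2;
}

// The failure case: a specular direction generated using the smoothed shading
// normal can still point below the true geometric tangent plane, which is what
// produces the fireflies.
bool goesBelowTangentPlane(const Vec3& outgoingDir, const Vec3& geometricNormal) {
    return dot(outgoingDir, geometricNormal) < 0.0f;
}
```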

Of course, the closer the actual geometry lines up to the virtual smoothed geometry, the less this grazing angle problem occurs. However, in order to completely eliminate artifacting, the polygon geometry needs to approach the limit of the virtual smoothed geometry. In this render, I regenerated the sphere with two more levels of subdivision. As a result, there are fewer fireflies, but fireflies are still present:

More heavily subdivided polygon mesh sphere. Fewer but still present firefly artifacts.

Initially I thought about just getting rid of the fireflies by checking pixel intensities and clamping intensities that were significantly brighter than their immediate neighbors, which is a fairly basic/standard firefly reduction strategy. However, since in this case the fireflies occur mostly at grazing angles and therefore on silhouettes, intensity clamping can lead to some unpleasant aliasing on silhouettes.
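
For reference, the kind of neighbor-relative clamp I was considering (and decided against) amounts to something like this sketch, with purely illustrative names:

```cpp
#include <vector>

// Clamp any pixel that is more than `threshold` times brighter than the average
// of its 8 neighbors. Simple, but it also softens legitimate bright silhouette pixels.
void clampFireflies(std::vector<float>& luminance, int width, int height, float threshold) {
    std::vector<float> result = luminance;
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            float neighborSum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx != 0 || dy != 0) {
                        neighborSum += luminance[(y + dy) * width + (x + dx)];
                    }
                }
            }
            float neighborAvg = neighborSum / 8.0f;
            float value = luminance[y * width + x];
            if (value > threshold * neighborAvg) {
                value = threshold * neighborAvg;
            }
            result[y * width + x] = value;
        }
    }
    luminance = result;
}
```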

Fortunately, there was a paper by Alexander Reshetov, Alexei Soupikov, and William R. Mark at SIGGRAPH Asia 2010 about dealing with this exact problem. The paper, “Consistent Normal Interpolation”, presents a simple method for tweaking Phong normal interpolation to guarantee that reflected rays never go below the tangent plane. The method is based on incoming ray direction and the angle between the smooth interpolated normal and true face normal. The actual method presented in the paper is very straightforward to implement, but the derivation of the algorithm is fairly interesting and involves solving a nontrivial optimization problem to find a scaling term.

I implemented a slightly modified version of the algorithm presented on page 5 of the paper. The modification I made is simply to account for rays hitting polygons from below the tangent plane, as in the case of internal refraction. Now interpolated normals at grazing angles no longer produce firefly artifacts:

Polygon sphere with consistent normal interpolation. Note the lack of firefly artifacts.

I’m working on writing up a lot of stuff, so more soon! Stay tuned!

Takua Render Revision 5

Rough blue metallic XYZRGB Dragon model in a Cornell Box, rendered entirely with Takua Render a0.5

I haven’t posted much at all this past year, but I’ve been working on some stuff that I’m really excited about! For the past year and a half, I’ve been building a new, much more advanced version of Takua Render completely from scratch. In this post, I’ll give a brief introduction and runthrough of the new version of Takua, which I’ve numbered as Revision 5 or a0.5. Since I first started exploring the world of renderer construction a few years back, I’ve learned an immense amount about every part of building a renderer, ranging from low-level architecture all the way up to light transport and surface algorithms. I’ve also been fortunate enough to meet and talk to a lot of people working on professional, industry-quality renderers and people from some of the best rendering research groups in the world, and so this new version of my own renderer is an attempt at applying everything I’ve learned and building a base for further future improvement and research projects.

Very broadly, the two things I’m most proud of with Takua a0.5 are the internal renderer architecture and a lot of work on integrators and light transport. Takua a0.5’s internal architecture is heavily influenced by Disney’s Sorted Deferred Shading paper, the internal architecture of NVIDIA’s OptiX engine, and the modular architecture of Mitsuba Render. In the light transport area, Takua a0.5 implements not just unidirectional pathtracing with direct light importance sampling (PT), but also multiple importance sampled bidirectional pathtracing (BDPT), progressive photon mapping (PPM), and the relatively new vertex connection and merging (VCM) algorithm. I’m planning on writing a series of posts in the next few weeks/months that will dive in depth into Takua a0.5’s various features.

Takua a0.5 has also marked a pretty large shift in strategy in terms of targeted hardware. In previous versions of Takua, I did a lot of exploration with getting the entire renderer to run on CUDA-enabled GPUs. In the interest of increased architectural flexibility, Takua a0.5 does not have a 100% GPU mode anymore. Instead, Takua a0.5 is structured in such a way that certain individual modules can be accelerated by running on the GPU, but overall much of the core of the renderer is designed to make efficient use of the CPU to achieve high performance while bypassing a lot of the complexity of building a pure GPU renderer. Again, I’ll have a detailed post on this decision later down the line.

Here is a list of some of the major new things in Takua a0.5:

  • Completely modular plugin system
    • Programmable ray/shader queue/dispatch system
    • Natively bidirectional BSDF system
    • Multiple geometry backends optimized for different hardware
    • Plugin systems for cameras, lights, acceleration structures, geometry, viewers, materials, surface patterns, BSDFs, etc.
  • Task-based concurrency and parallelism via Intel’s TBB library
  • Mitsuba/PBRT/Renderman 19 RIS style integrator system
    • Unidirectional pathtracing with direct light importance sampling
    • Lighttracing with camera importance sampling
    • Bidirectional pathtracing with multiple importance sampling
    • Progressive photon mapping
    • Vertex connection and merging
    • All integrators designed to be re-entrant and capable of deferred operations
  • Native animation support
    • Renderer-wide keyframing/animation support
    • Transformational AND deformational motion blur
    • Motion blur support for all camera, material, surface pattern, light, etc. attributes
    • Animation/keyframe sequences can be instanced in addition to geometry instancing

The blue metallic XYZRGB dragon image is a render that was produced using only Takua a0.5. Since I now have access to the original physical Cornell Box model, I thought it would be fun to use a 100% measurement-accurate model of the Cornell Box as a test scene while working on Takua a0.5. All of these renders have no post-processing whatsoever. Here are some other renders made as tests during development:

Vanilla Cornell Box with measurements taken directly off of the original physical Cornell Box model.

Glass Stanford Dragon producing some interesting caustics on the floor.

Floating glass ball as another caustics test.

Mirror cube.

Deformational motion blur test using a glass rectangular prism with the top half twisting over time.

A really ugly texture test that for some reason I kind of like.

More interesting non-Cornell Box renders coming in later posts!

Edit: Since making this post, I found a weighting bug that was causing a lot of energy to be lost in indirect diffuse bounces. I’ve since fixed the bug and updated this post with re-rendered versions of all of the images.

SIGGRAPH Asia 2014 Paper: A Framework for the Experimental Comparison of Solar and Skydome Illumination

One of the projects I worked on in my first year as part of Cornell University’s Program of Computer Graphics has been published in the November 2014 issue of ACM Transactions on Graphics and is being presented at SIGGRAPH Asia 2014! The paper is “A Framework for the Experimental Comparison of Solar and Skydome Illumination”, and the team on the project was my junior advisor Joseph T. Kider Jr., my lab-mates Dan Knowlton and Jeremy Newlin, myself, and my main advisor Donald P. Greenberg.

The bulk of my work on this project was in implementing and testing sky models inside of Mitsuba and developing the paper’s sample-driven model. Interestingly, I also did a lot of climbing onto the roof of Cornell’s Rhodes Hall building for this paper; Cornell’s facilities department was kind enough to give us access to the roof of Rhodes Hall to set up our capture equipment on. This usually involved Joe, Dan, and myself hauling multiple tripods and backpacks of gear up onto the roof in the morning, and then taking it all back down in the evening. Sunny clear skies can be a rare sight in Ithaca, so getting good captures took an awful lot of attempts!

Here is the paper abstract:

The illumination and appearance of the solar/skydome is critical for many applications in computer graphics, computer vision, and daylighting studies. Unfortunately, physically accurate measurements of this rapidly changing illumination source are difficult to achieve, but necessary for the development of accurate physically-based sky illumination models and comparison studies of existing simulation models.

To obtain baseline data of this time-dependent anisotropic light source, we design a novel acquisition setup to simultaneously measure the comprehensive illumination properties. Our hardware design simultaneously acquires its spectral, spatial, and temporal information of the skydome. To achieve this goal, we use a custom built spectral radiance measurement scanner to measure the directional spectral radiance, a pyranometer to measure the irradiance of the entire hemisphere, and a camera to capture high-dynamic range imagery of the sky. The combination of these computer-controlled measurement devices provides a fast way to acquire accurate physical measurements of the solar/skydome. We use the results of our measurements to evaluate many of the strengths and weaknesses of several sun-sky simulation models. We also provide a measurement dataset of sky illumination data for various clear sky conditions and an interactive visualization tool for model comparison analysis available at http://www.graphics.cornell.edu/resources/clearsky/.

The paper and related materials can be found at:

Joe Kider will be presenting the paper at SIGGRAPH Asia 2014 in Shenzhen as part of the Light In, Light Out Technical Papers session. Hopefully our data will prove useful to future research!


Addendum 04/26/2017: I added a personal project page for this paper to my website, located here. My personal page mirrors the same content found on the main site, including an author’s version of the paper, supplemental materials, and more.