Code and Visuals Version 4.0

I'd like to introduce the newest version of my computer graphics blog, Code and Visuals! On the surface, everything has been redesigned with a new layer of polish: the site is now simpler and cleaner, and the layout is fully responsive. Under the hood, I've moved from Blogger to Jekyll, hosted on Github Pages.

As part of the move to Jekyll, I've opted to clean up a lot of old posts as well. This blog started as some combination of a devblog, doodleblog, and photoblog, but quickly evolved into a pure computer graphics blog. In the interest of keeping historical context intact, I've ported over most of my older non-computer graphics posts, with minor edits and touchups here and there. A handful of posts that I didn't really like have been left behind, but they can still be found on the old Blogger-based version of this blog.

The Atom feed URL for Code and Visuals is still the same as before, so that should transition over smoothly.

Why the move from Blogger to Jekyll/Github Pages? Here are the main reasons:

  • Markdown/Github support. Blogger's posting interface is all kinds of terrible. With Jekyll/Github Pages, writing a new post is super nice: simply write the post in a Markdown file, push to Github, and done. I love Markdown and I love Github, so it's a great combo for me.
  • Significantly faster site. Previous versions of this blog have always been a bit pokey speed-wise, since they relied on dynamic page generators (originally my hand-rolled PHP/MySQL CMS, then Wordpress, and then Blogger). However, Jekyll is a static page generator; the site is converted from Markdown and template code into static HTML/CSS once at generation time, and then simply served as pure HTML/CSS.
  • Easier templating system. Jekyll's templating system is built on Liquid, which made building this new theme really fast and easy.
  • Transparency. This entire blog’s source is now available on Github, and the theme is separately available here.

I've been looking to replace Blogger for some time now. Before trying out Jekyll, I was tinkering with Ghost, and even fully built out a working version of Code and Visuals on a self-hosted Ghost instance. In fact, this current theme was originally built for Ghost and then ported to Jekyll after I decided to use Jekyll (both the Ghost and Jekyll versions of this theme are in the Github repo). However, Ghost as a platform is still extremely new and isn't quite ready for primetime yet; while Ghost's Markdown support and Node.js underpinnings are nice, Ghost is still missing crucial features like the ability to have an archive page. Plus, at the end of the day, Jekyll is just plain simpler; Ghost is still a CMS, while Jekyll is just a collection of text files.

I intend to stay on a Jekyll/Github Pages based solution for a long time; I am very very happy with this system. Over time, I'll be moving my other non-computer graphics blogs over to Jekyll as well. I'm still not sure if my main website needs to move to Jekyll though, since it is already coded up as a series of static pages and requires a slightly more complex layout on certain pages.

Over the past few months I haven't posted much, since over the summer almost all of my Pixar-related work was under heavy NDA (and still is and will be for the foreseeable future, with the exception of our SIGGRAPH demo), and a good deal of my work at Cornell's Program for Computer Graphics is under wraps as well while we work towards paper submissions. However, I have some new personal projects I'll write up soon, in addition to some older projects that I never posted about.

With that, welcome to Code and Visuals Version 4.0!

Pixar OptiX Lighting Preview Demo

For the past two months or so, I've been working at Pixar Animation Studios as a summer intern with Pixar's Research Group. The project I'm on for the summer is a realtime, GPU based lighting preview tool implemented on top of NVIDIA's OptiX framework, entirely inside of The Foundry's Katana. I'm incredibly pleased to be able to say that our project was demoed at SIGGRAPH 2013 at the NVIDIA booth, and that NVIDIA has a recording of the entire demo online!

The demo was done by our project’s lead, Danny Nahmias, and got an overwhelmingly positive reception. Check out the recording here:

FXGuide also did a podcast about our demo! Check it out here.

I’m just an intern, and the vast majority of the cool work being done on this project is from Danny Nahmias, Phillip Rideout, Mark Meyer, and others, but I’m very very proud, and consider myself extraordinarily lucky, to be part of this team!

Edit: I’ve replaced the original Ustream embed with a Vimeo mirror since the Ustream embed was crashing Chrome for some people. The original Ustream link is here.

Giant Mesh Test

My friend/schoolmate Zia Zhu is an amazing modeler, and recently she was kind enough to lend me a ZBrush sculpt she did for use as a high-poly test model for Takua Render. The model is a sculpture of Venus, and is made up of slightly over a million quads, or about two million triangles once triangulated inside of Takua Render.

Here are some nice, pretty test renders I did. As usual, everything was rendered with Takua Render, and there has been absolutely zero post-processing:

Each one of these renders was lit using a single, large area light (with importance sampled direct lighting, of course). The material on the model is just standard lambert diffuse white; I'll do another set of test renders once I've finished rewriting my subsurface scatter system. Each render was set to 2800 samples per pixel and took about 20 minutes to render on a single GTX480. In other words, not spectacular, but not bad either.

The key takeaway from this series of tests was that Takua's performance still suffers significantly when datasets become extremely large; while the render took about 20 minutes, setup time (including memory transfer, etc.) took nearly 5 minutes, which I'm not happy about. I'll be taking some time to rework Takua's memory manager.

On a happier note, KD-tree construction performed well! The KD-tree for the Venus sculpt was built out to a depth of 30 and took less than a second to build.

Here’s a bonus image of what the sculpt looks like in the GL preview mode:

Again, all credit for the actual model goes to the incredibly talented Zia Zhu!

Importance Sampled Direct Lighting

Takua Render now has correct, fully working importance sampled direct lighting, supported for any type of light geometry! More importantly, the importance sampled direct lighting system is now fully integrated with the overall GI pathtracing integrator.

A naive, standard pathtracing implementation shoots out rays and accumulates colors until a light source is reached, upon which the total accumulated color is multiplied by the emittance of the light source and added to the framebuffer. As a result, even the simplest pathtracing integrator does account for both the indirect and direct illumination within a scene, but since sampling light sources is entirely dependent on the BRDF at each point, correctly sampling the direct illumination component in the scene is extremely inefficient. The canonical example of this inefficiency is a scene with a single very small, very intense, very far away light source. Since the probability of hitting such a small light source is so small, convergence is extremely slow.
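To make this concrete, here's a minimal sketch of what such a naive integrator loop looks like; the types and methods used here (Scene, intersect, sampleBrdf, and so on) are hypothetical placeholders, not Takua Render's actual API:

```cpp
// Minimal naive pathtracer sketch; all types here are hypothetical placeholders.
// Note that a path only contributes color when a bounce happens to hit an emitter.
Color tracePathNaive(Ray ray, const Scene& scene, Rng& rng, int maxDepth) {
    Color throughput(1.0f);  // running product of BRDF * cosine / pdf terms
    for (int depth = 0; depth < maxDepth; ++depth) {
        Intersection hit;
        if (!scene.intersect(ray, &hit)) {
            return Color(0.0f);  // escaped the scene without finding a light
        }
        if (hit.material.isEmissive()) {
            // Multiply the accumulated color by the light's emittance.
            return throughput * hit.material.emittance();
        }
        // Pick the next direction purely from the BRDF
        // (cosine-weighted hemisphere sampling in the lambert diffuse case).
        BrdfSample s = hit.material.sampleBrdf(hit, rng);
        throughput = throughput * s.value * absDot(s.direction, hit.normal) / s.pdf;
        ray = Ray(hit.point, s.direction);
    }
    return Color(0.0f);  // path terminated without ever reaching a light
}
```

Nothing in this loop ever aims at a light on purpose, which is exactly the problem.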

To demonstrate/test this property, I made a simple test scene with an extremely bright sun-like object illuminating the scene from a huge distance away:

Using naive pathtracing without importance sampled direct lighting produces an image like this after 16 samples per pixel:

Mathematically, the image is correct, but is effectively useless since so few contributing ray paths have actually been found. Even after 5120 samples, the image is still pretty useless:

Instead, a much better approach is to accumulate colors just like before, but without waiting until a light source is hit by the ray path through pure BRDF sampling before multiplying in emittance. At each ray bounce, a new indirect ray is generated via the BRDF like before, AND a new direct ray is generated towards a randomly chosen light source via multiple importance sampling, with the accumulated color multiplied by the resultant emittance. Multiple importance sampled direct lighting works by balancing two different sampling strategies: sampling by light source and sampling by BRDF, and then weighting the two results with some sort of heuristic (such as the power heuristic described in Eric Veach's thesis).
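For reference, the power heuristic (with the exponent set to 2) is tiny to implement; here's a self-contained sketch:

```cpp
// Power heuristic with beta = 2, as described in Veach's thesis.
// nf and fPdf are the sample count and PDF of the strategy that generated the
// sample; ng and gPdf belong to the other strategy, evaluated for the same
// direction.
float powerHeuristic(int nf, float fPdf, int ng, float gPdf) {
    float f = nf * fPdf;
    float g = ng * gPdf;
    return (f * f) / (f * f + g * g);
}
```

A direct light sample's contribution then gets weighted by powerHeuristic(1, lightPdf, 1, brdfPdf), while a BRDF sample that happens to hit the same light gets the symmetric weight, so that between the two strategies no contribution is counted twice.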

Sampling by light source is the trickier part of this technique. The idea is to generate a ray that we know will hit a light source, and then weight the contribution from that ray by the probability of generating that ray, in order to remove the bias introduced by artificially choosing a ray direction. There are a few good ways to do this: one way is to generate an evenly distributed random point on a light source as the target for the direct lighting ray, and then weight the result using the probability distribution function with respect to surface area, transformed into a PDF with respect to solid angle.
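That transformation is just a change of measure: multiply the area PDF by the squared distance to the sampled point and divide by the cosine of the angle between the light's surface normal and the direction back toward the point being lit. A minimal sketch:

```cpp
// Convert a PDF defined over the light's surface area into a PDF defined over
// solid angle at the shading point.
//   distance:   from the shading point to the sampled point on the light
//   cosAtLight: absolute cosine between the light's surface normal at the
//               sampled point and the direction back toward the shading point
float areaPdfToSolidAnglePdf(float areaPdf, float distance, float cosAtLight) {
    if (cosAtLight <= 0.0f) {
        return 0.0f;  // the sampled point faces away from the shading point
    }
    return areaPdf * distance * distance / cosAtLight;
}
```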

Takua Render at the moment uses a slightly different approach, for the sake of simplicity. The approach I’m using is similar to the one described in my earlier post on the topic, but with a disk instead of a sphere. The approach works like this:

  1. Figure out a bounding sphere for the light source.
  2. Construct a ray from the point to be lit to the center of the bounding sphere. Let’s call the direction of this ray D.
  3. Find a great circle on the bounding sphere with a normal N, such that N is lined up exactly with D.
  4. Move the great circle along its normal towards the point to be lit by a distance of exactly the radius of the bounding sphere.
  5. Treat the great circle as a disk and generate uniformly distributed random points on the disk to shoot rays towards.
  6. Weight light samples by the projected solid angle of the disk on the point being lit.

Alternatively, the weighting can simply be based on the normal solid angle instead of the projected solid angle, since the random points are chosen with a cosine weighted distribution.
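Here's a rough sketch of the geometric construction in steps 1 through 5; the tiny vector helpers below are stand-ins for Takua's actual math library, and the weighting from step 6 (or the cosine weighted alternative) is left to the caller:

```cpp
#include <cmath>

// Minimal stand-in vector type; Takua's real math library would be used instead.
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 a) { return a * (1.0f / std::sqrt(dot(a, a))); }

// Steps 1-5: aim a shadow ray at a uniformly chosen point on a disk placed at
// the near side of the light's bounding sphere, facing the point being lit.
// u1 and u2 are uniform random numbers in [0, 1).
Vec3 sampleTowardBoundingDisk(Vec3 shadingPoint, Vec3 sphereCenter,
                              float sphereRadius, float u1, float u2) {
    const float kTwoPi = 6.28318530718f;
    Vec3 D = normalize(sphereCenter - shadingPoint);    // step 2
    Vec3 diskCenter = sphereCenter - D * sphereRadius;  // steps 3 and 4
    // Build an orthonormal basis (u, v) perpendicular to the disk normal D.
    Vec3 up = std::fabs(D.x) > 0.9f ? Vec3{0.0f, 1.0f, 0.0f} : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 u = normalize(cross(up, D));
    Vec3 v = cross(D, u);
    // Uniformly distributed point on the disk (step 5).
    float r = sphereRadius * std::sqrt(u1);
    float phi = kTwoPi * u2;
    Vec3 target = diskCenter + u * (r * std::cos(phi)) + v * (r * std::sin(phi));
    return normalize(target - shadingPoint);
}
```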

The nice thing about this approach is that it allows for importance sampled direct lighting even for shapes that are difficult to sample random points on; effectively, the problem of sampling light sources is abstracted away, at the cost of a slight loss in efficiency since some percentage of rays fired at the disk have to miss the light in order for the weighting to remain unbiased.

I also started work on the surface area PDF to solid angle PDF method, so I might post about that later too. But for now, everything works! With importance sampled direct lighting, the scene from above is actually renderable in a reasonable amount of time. With just 16 samples per pixel, Takua Render can now generate this image:

…and after 5120 samples per pixel, a perfectly clean render:

The other cool thing about this scene is that most of the scene is actually being lit through pure indirect illumination. With only direct illumination and no GI, the render looks like this:

Quick Update on Future Plans

Just a super quick update on my future plans:

Next year, starting in September, I’ll be joining Dr. Don Greenberg and Dr. Joseph T. Kider and others at Cornell’s Program for Computer Graphics. I’ll be pursuing a Master of Science in Computer Graphics there, and will most likely be working on something involving rendering (which I suppose is not surprising).

Between the end of school and September, I’ll be spending the summer at Pixar Animation Studios once again, this time as part of Pixar’s Research Group.

Obviously I’m quite excited by all of this!

Now, back to working on my renderer.

Working Towards Importance Sampled Direct Lighting

I haven't made a post in a few weeks now since I've been working on a number of different things, all of which aren't quite done yet. Since it's been a few weeks, here's a writeup of one of the things I'm working on and where I am with it.

One of the major features I’ve been working towards for the past few weeks is full multiple importance sampling, which will serve a couple of purposes. First, importance sampling the direct lighting contribution in the image should allow for significantly higher convergence rates for the same amount of compute, allowing for much smoother renders for the same render time. Second, MIS will serve as groundwork for future bidirectional integration schemes, such as Metropolis transport and photon mapping. I’ve been working with my friend Xing Du on understanding the math behind MIS and figuring out how exactly the math should translate into implementation.

So first off, some ground truth tests. All ground truth tests are rendered using brute force pathtracing with hundreds of thousands of iterations per pixel. Here is the test scene I’ve been using lately, with all surfaces reduced to lambert diffuse for the sake of simplification:

Ground truth global illumination render, representing 512000 samples per pixel. All lights sampled by BRDF only.

Ground truth for direct lighting contribution only, with all lights sampled by BRDF only.

Ground truth for indirect lighting contribution only.

The motivation behind importance sampling lights by directly sampling objects with emissive materials comes from the difficulty of finding useful samples from the BRDF only; for example, in the lambert diffuse case, since sampling from only the BRDF produces outgoing rays in totally random (or, slightly better, cosine weighted random) directions, the probability of any ray coming from a diffuse surface actually hitting a light is relatively low, meaning that the contribution of each sample is likely to be low as well. As a result, finding the direct lighting contribution via just BRDF sampling converges very slowly.

For example, here’s the direct lighting contribution only, after 64 samples per pixel with only BRDF sampling:

Direct lighting contribution only, all lights sampled by BRDF only, 64 samples per pixel.

Instead of sampling direct lighting contribution by shooting a ray off in a random direction and hoping that maybe it will hit a light, a much better strategy would be to… shoot the ray towards the light source. This way, the contribution from the sample is guaranteed to be useful. There's one hitch though: the weighting for a sample chosen using the BRDF is relatively simple to determine. For example, in the lambert diffuse case, since the probability of any particular random sample within a hemisphere is the same as any other sample, the weighting per sample is even with all other samples. Once we selectively choose the ray direction specifically towards the light though, the weighting per sample is no longer even. Instead, we must weight each sample by the probability of a ray going in that particular direction towards the light, which we can calculate as the solid angle subtended by the light source divided by the total solid angle of the hemisphere.

So, a trivial example case would be if a point was being lit by a large area light subtending exactly half of the hemisphere visible from the point. In this case, the area light subtends Pi steradians, making its total weight Pi/(2*Pi), or one half.

The tricky part of calculating the solid angle weighting is in calculating the fractional unit-spherical surface area projection for non-uniform light sources. In other words, figuring out what solid angle a sphere subtends is easy, but figuring out what solid angle a Stanford Bunny subtends is… less easy.

The initial approach that Xing and I arrived at was to break complex meshes down into triangles and treat each triangle as a separate light, since calculating the solid angle subtended by a triangle is once again easy. However, treating a mesh as a cloud of triangle area lights is potentially very expensive; evaluating the direct lighting contribution from every light in the scene at each iteration quickly becomes untenable, meaning that each iteration of the render will have to randomly select a small number of lights to directly sample.
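For reference, the single-triangle case really is straightforward; here's a self-contained sketch using the Van Oosterom-Strackee formula, with minimal Vec3 helpers standing in for a real math library:

```cpp
#include <cmath>

// Minimal stand-in vector helpers; a real math library would be used instead.
struct Vec3 { float x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
float length(Vec3 a) { return std::sqrt(dot(a, a)); }

// Solid angle subtended by triangle (v0, v1, v2) as seen from point p,
// via the Van Oosterom-Strackee formula.
float triangleSolidAngle(Vec3 p, Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 a = sub(v0, p), b = sub(v1, p), c = sub(v2, p);
    float la = length(a), lb = length(b), lc = length(c);
    float numer = dot(a, cross(b, c));
    float denom = la * lb * lc + dot(a, b) * lc + dot(b, c) * la + dot(c, a) * lb;
    return std::fabs(2.0f * std::atan2(numer, denom));
}
```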

As a result, we brainstormed some ideas for potential shortcuts. One shortcut idea we came up with was that instead of choosing an evenly distributed point on the surface of the light to serve as the target for our ray, we could instead shoot a ray at the bounding sphere for the target light and weight the resulting sample by the solid angle subtended not by the light itself, but by the bounding sphere. Our thinking was that this approach would dramatically simplify the work of calculating the solid angle weighting, while still maintaining mathematical correctness and unbiasedness since the number of rays fired at the bounding sphere that will miss the light should exactly offset the overweighting produced by using the bounding sphere’s subtended solid angle.

I went ahead and tried out this idea, and produced the following image:

Direct lighting contribution only, all lights sampled by direct sampling weighted by subtended solid angle, 64 samples per pixel.

First off, for the most part, it works! The resulting direct illumination matches the ground truth and the BRDF-sampling render, but is significantly more converged than the BRDF-sampling render for the same number of samples. BUT, there is a critical flaw: note the black circle around the light source on the ceiling. That black circle happens to fall exactly within the bounding sphere for the light source, and results from a very simple mathematical fact: calculating the solid angle subtended by the bounding sphere for a point INSIDE of the bounding sphere is undefined. In other words, this shortcut approach will fail for any points that are too close to a light source.
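The problem is visible directly in the math: the solid angle subtended by a sphere of radius r whose center sits at distance d from the shading point is 2*Pi*(1 - sqrt(d*d - r*r)/d), which only makes sense for d > r. A quick sketch:

```cpp
#include <cmath>

// Solid angle subtended by a sphere of radius r whose center is a distance d
// away from the shading point. Only valid for points outside the sphere;
// for d <= r the square root goes imaginary and the quantity is undefined,
// which is exactly the black-circle artifact described above.
float boundingSphereSolidAngle(float d, float r) {
    if (d <= r) {
        return -1.0f;  // undefined; the caller must handle this case some other way
    }
    return 2.0f * 3.14159265358979f * (1.0f - std::sqrt(d * d - r * r) / d);
}
```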

One possible workaround I tried was to have any points inside of a light's bounding sphere fall back to pure BRDF sampling, but this approach is also undesirable, as a highly visible discontinuity develops between the differently sampled areas due to vastly different convergence rates.

So, while the overall solid angle weighting approach checks out, our shortcut does not. I’m now working on implementing the first approach described above, which should produce a correct result, and will post in the next few days.