Pathtracer with KD-Tree

I have finished my KD-Tree rewrite! My new KD-Tree implements the Surface-Area Heuristic for finding optimal splitting planes, and stops splitting once a node either falls below a surface-area threshold or contains fewer than a set number of elements. Basically, very standard KD-Tree stuff, but this time, properly implemented. As a result, I can now render meshes much more quickly than before.
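
For anyone curious, the heart of the SAH is just a cost estimate: a split is good if it puts lots of primitives into children with small surface area relative to the parent, since the surface-area ratio approximates the probability that a random ray through the parent also hits a child. Here is a minimal C++ sketch of that cost function; the types and constants are illustrative, not lifted from my actual implementation:

```cpp
struct Vec3 { float x, y, z; };
struct Bounds { Vec3 min, max; };

// Surface area of an axis-aligned bounding box.
float surfaceArea(const Bounds& b) {
    float dx = b.max.x - b.min.x;
    float dy = b.max.y - b.min.y;
    float dz = b.max.z - b.min.z;
    return 2.0f * (dx * dy + dy * dz + dz * dx);
}

// Estimated cost of a candidate split: one traversal step, plus the cost of
// intersecting each child's primitives weighted by the probability (surface
// area ratio) that a ray through the parent also passes through that child.
float sahCost(const Bounds& parent, const Bounds& left, int numLeft,
              const Bounds& right, int numRight,
              float traversalCost = 1.0f, float intersectCost = 1.5f) {
    float invArea = 1.0f / surfaceArea(parent);
    return traversalCost +
           intersectCost * (numLeft * surfaceArea(left) * invArea +
                            numRight * surfaceArea(right) * invArea);
}
```

A node then becomes a leaf whenever the best candidate split costs no less than just intersecting everything in the node, which is what the termination criteria above approximate.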

Here’s a cow in a Cornell Box. Each iteration of the cow render took about 3 minutes, which is a huge speedup over my old raytracer but still leaves a lot of room for improvement:

…and of course, the obligatory Stanford Dragon test. Each iteration took about 4 minutes for both of these images (I let the second one converge a bit longer than the first), and I made these renders a bit larger than the cow one:

So! Of course the KD-Tree could still use even more work, but for now it works well enough that I think I’m going to start focusing on other things, such as more interesting BSDFs and other performance enhancements.

First Pathtraced Image!

Behold, the very first image produced using my pathtracer!

Granted, the actual image is not terribly interesting: just a cube inside of a standard Cornell-box-type setup, but it was rendered entirely using my own pathtracer! Aside from being converted from a BMP file to a PNG, this render has not been modified in any way whatsoever outside of my renderer (which I have yet to name). This render is the result of a thousand iterations. Here are some comparisons of the variance in the render at various iteration counts (click through to the full-size versions to get an actual sense of the variance levels):

Upper Left: 1 iteration. Upper Right: 5 iterations. Lower Left: 10 iterations. Lower Right: 15 iterations.

Upper Left: 1 iteration. Upper Right: 250 iterations. Lower Left: 500 iterations. Lower Right: 750 iterations.

Each iteration took about 15 seconds to finish.

Unfortunately, I have not been able to move as quickly with this project as I would like, due to other schoolwork and TAing for CIS277. Nonetheless, here’s where I am right now:

Currently the renderer is in a very basic, primitive state. Instead of extending my raytracer, I’ve opted to start completely from scratch. The only piece of code brought over from the raytracer was the OBJ mesh system I wrote, since that was written to be fairly modular anyway. Right now my pathtracer works entirely through indirect lighting and only supports diffuse surfaces… like I said, very basic! Adding direct lighting should speed up render convergence, especially for scenes with small light sources. Also, right now the pathtracer only traces paths in a single direction, from the camera into the scene… adding bidirectional pathtracing should lead to another performance boost.
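
For the diffuse-only case, each bounce just needs a new direction drawn with cosine-weighted hemisphere sampling: with that pdf, the cosine term in the rendering equation cancels, so each bounce simply multiplies the path throughput by the surface albedo. A self-contained sketch of that sampling routine (the Vec3 type and constants here are illustrative, not my renderer’s actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Draw a direction on the hemisphere around normal n (assumed normalized),
// with probability density proportional to cos(theta).
Vec3 cosineSampleHemisphere(const Vec3& n, std::mt19937& rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    float u1 = uniform(rng), u2 = uniform(rng);
    float r = std::sqrt(u1);                        // radius on the unit disk
    float phi = 6.2831853f * u2;                    // angle on the disk (2*pi)
    // Build an orthonormal basis (t, b, n) around the normal.
    Vec3 t = std::fabs(n.x) > 0.1f ? Vec3{n.y, -n.x, 0.0f} : Vec3{0.0f, -n.z, n.y};
    float invLen = 1.0f / std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = t * invLen;
    Vec3 b = {n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x};
    float z = std::sqrt(std::max(0.0f, 1.0f - u1)); // lift the disk sample up
    return t * (r * std::cos(phi)) + b * (r * std::sin(phi)) + n * z;
}
```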

I’m still working on rewriting my KD-tree system; that should be finished within the next few days.

Something that is fairly high on my list of things to do right now is redesigning the architecture of my renderer… right now, for each iteration, the renderer traces a path through a pixel all the way to its recursion depth before moving on to the next pixel. As soon as possible I want to move the renderer to an iterative (as opposed to recursive) accumulated approach for each iteration (slightly confusing terminology; here I mean iteration as in each render pass), which, oddly enough, is something that my old raytracer already does. I’ve already started moving toward the accumulated approach; right now, I store the first set of raycasts from the camera and reuse those rays in each iteration.
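
The accumulated approach itself is simple to sketch: each render pass produces one noisy estimate per pixel, and the displayed image is the running mean over all passes so far. A hypothetical sketch (the names are mine, not the renderer’s):

```cpp
#include <vector>

// Accumulates per-pixel estimates across render passes (one channel shown).
struct Accumulator {
    std::vector<float> sum;   // accumulated radiance per pixel
    int passes = 0;

    explicit Accumulator(int numPixels) : sum(numPixels, 0.0f) {}

    // Add the result of one complete pass over the image.
    void addPass(const std::vector<float>& passResult) {
        for (size_t i = 0; i < sum.size(); ++i) sum[i] += passResult[i];
        ++passes;
    }

    // Current converged estimate for a pixel: the mean over all passes.
    float pixel(size_t i) const { return sum[i] / passes; }
};
```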

One cool thing that storing the initial raycasts allows me to do is generate a z-depth version of the render for “free”:
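
Mapping those cached hit distances to grayscale is about all it takes; a hypothetical sketch, with near/far clipping values chosen per scene:

```cpp
// Remap a primary ray's hit distance to a grayscale z-depth value.
// near and far are hypothetical per-scene clipping distances.
float depthToGray(float hitDistance, float near, float far) {
    if (hitDistance <= 0.0f) return 0.0f;           // ray missed: black
    float t = (hitDistance - near) / (far - near);  // normalize to [0, 1]
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);    // clamp
    return 1.0f - t;                                // closer = brighter
}
```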

Okay, hopefully by my next post I’ll have the KD-tree rewrite done!

Smoke Sim: Preconditioning and Huge Grids

I have added preconditioning to my smoke simulator! For the preconditioner, I am using Incomplete Cholesky, which is the preconditioner recommended in chapter 4 of the Bridson Fluid Course Notes. I’ve also troubleshot my vorticity confinement implementation, so the simulation should produce more interesting/stable vortices now.

The key reason for implementing the preconditioner is simple: speed. Faster convergence also brings an added bonus: larger grids become feasible, since each solve takes less time. Because of that speed increase, I can now run my simulations on 3D grids.
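
For context, the preconditioner plugs into the pressure solve’s conjugate gradient loop: instead of iterating on the raw residual, each iteration first applies an approximate inverse of the A-matrix (with Incomplete Cholesky, a pair of triangular solves), which drastically cuts the iteration count. Here is a structural sketch of the standard PCG loop, not my actual solver code; applyA and applyPrecond stand in for the sparse matrix multiply and the preconditioner solve:

```cpp
#include <cmath>
#include <functional>
#include <vector>

using Vector = std::vector<double>;

double dot(const Vector& a, const Vector& b) {
    double s = 0.0;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Preconditioned conjugate gradient for A x = b, with A symmetric positive
// definite. applyPrecond approximately solves M z = r (for Incomplete
// Cholesky, two triangular solves against the incomplete factor).
Vector pcgSolve(const std::function<Vector(const Vector&)>& applyA,
                const std::function<Vector(const Vector&)>& applyPrecond,
                const Vector& b, int maxIters, double tol) {
    Vector x(b.size(), 0.0);
    Vector r = b;                  // residual; x starts at zero so r = b
    Vector z = applyPrecond(r);
    Vector p = z;                  // initial search direction
    double rz = dot(r, z);
    for (int it = 0; it < maxIters; ++it) {
        Vector Ap = applyA(p);
        double alpha = rz / dot(p, Ap);
        for (size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        if (std::sqrt(dot(r, r)) < tol) break;   // converged
        z = applyPrecond(r);
        double rzNew = dot(r, z);
        double beta = rzNew / rz;
        for (size_t i = 0; i < p.size(); ++i) p[i] = z[i] + beta * p[i];
        rz = rzNew;
    }
    return x;
}
```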

In previous years, the CIS563 smoke simulator framework usually hit a performance cliff at grids beyond around 50x50x50, but last year Peter Kutz managed to push his smoke simulator to 90x90x36 by implementing a sparse A-matrix structure instead of storing every single data point for the grid, including empty ones. This year’s smoke simulation framework was updated to include some of Peter’s improvements, and so Joe reckons that we should be able to push our smoke simulation grids pretty far. I’ve been scaling up starting from 10x10x10, and now I’m at 100x100x50:

This simulation took about 24 hours to run on a 2008 MacBook Pro with a 2.8 GHz Core 2 Duo, but that is actually pretty good for fluid simulation! According to my rather unscientific estimates, the simulation would take about 4 or 5 days without the preconditioner, and even longer without the sparse A-matrix. I bet I can still push this further, and I’m starting to think about multithreading the simulation with OpenMP to get even more performance and even larger grids. We shall see.
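
The OpenMP part should be fairly low-hanging fruit, since steps like advection are embarrassingly parallel: each cell’s new value depends only on the old grid, so a single pragma over the outer loop splits the work across cores. A hypothetical sketch of the pattern (this flat-array Grid is mine, not the framework’s actual structure):

```cpp
#include <vector>

// Hypothetical flat 3D scalar grid.
struct Grid {
    int nx, ny, nz;
    std::vector<float> data;
    Grid(int x, int y, int z) : nx(x), ny(y), nz(z), data((size_t)x * y * z, 0.0f) {}
    float& at(int i, int j, int k) { return data[((size_t)k * ny + j) * nx + i]; }
};

// Any step where each cell's new value is independent of the others
// (advection reads, buoyancy forces, etc.) parallelizes with one pragma.
void decayDensity(Grid& g, float factor) {
    #pragma omp parallel for
    for (int k = 0; k < g.nz; ++k)      // slices are split across threads
        for (int j = 0; j < g.ny; ++j)
            for (int i = 0; i < g.nx; ++i)
                g.at(i, j, k) *= factor;
}
```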

One more thing: rendering this thing. So far I have not been doing any fancy rendering, just using the default OpenGL render that our framework came with. However, I want to get this into my volumetric renderer at some point and maybe even try out the pseudo-black body stuff with it. Eventually I want to try rendering this out with my pathtracer too!

Smoke Simulation Basics!

For CIS563 (Physically Based Animation), our current assignment is to write a fluid simulator capable of simulating smoke inside of a box. For this assignment, we’re using a semi-Lagrangian approach based on Robert Bridson’s 2007 SIGGRAPH Course Notes on Fluid Simulation.

I won’t go into the nitty-gritty details of the math behind the simulation (for that, consult the Bridson notes), but I’ll give a quick summary. We start with a specialized grid structure called the MAC (marker-and-cell) grid, where each grid cell stores information relevant to the point in space the cell represents, such as density, velocity, and temperature. To advect values across the grid, we pretend each cell’s values are carried by a particle: we trace backward in time along the velocity field to find where such a particle would have come from, and interpolate the values at that previous position. We then use that advected state in the projection step, which enforces incompressibility by solving a pressure system with a preconditioned conjugate gradient solver.
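
To make the backward-trace idea concrete, here is a minimal 2D sketch of semi-Lagrangian advection with bilinear interpolation. It glosses over the MAC staggering (real MAC grids store velocity components on cell faces, not centers), ignores boundary conditions, and assumes unit grid spacing:

```cpp
#include <algorithm>
#include <vector>

// A simple 2D scalar field with values stored at cell centers.
struct Field2D {
    int nx, ny;
    std::vector<float> v;
    Field2D(int w, int h) : nx(w), ny(h), v((size_t)w * h, 0.0f) {}
    float& at(int i, int j) { return v[(size_t)j * nx + i]; }

    // Bilinearly interpolate the field at a real-valued position (x, y).
    float sample(float x, float y) const {
        x = std::min(std::max(x, 0.0f), (float)(nx - 1));
        y = std::min(std::max(y, 0.0f), (float)(ny - 1));
        int i = std::min((int)x, nx - 2);
        int j = std::min((int)y, ny - 2);
        float fx = x - i, fy = y - j;
        auto get = [&](int a, int b) { return v[(size_t)b * nx + a]; };
        return (1 - fx) * ((1 - fy) * get(i, j)     + fy * get(i, j + 1)) +
               fx       * ((1 - fy) * get(i + 1, j) + fy * get(i + 1, j + 1));
    }
};

// Semi-Lagrangian advection of quantity q through velocity field (u, w):
// for every cell, trace backward along the velocity to find where the value
// "came from," then interpolate the old field there.
void advect(const Field2D& q, const Field2D& u, const Field2D& w,
            Field2D& qNew, float dt) {
    for (int j = 0; j < q.ny; ++j)
        for (int i = 0; i < q.nx; ++i) {
            float xPrev = i - dt * u.sample((float)i, (float)j);
            float yPrev = j - dt * w.sample((float)i, (float)j);
            qNew.at(i, j) = q.sample(xPrev, yPrev);
        }
}
```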

So far I have implemented density advection, projection, buoyancy (via temperature advection), and vorticity confinement. For the integration scheme I’m just using basic explicit Euler, which was the default for the framework we started with. Euler seems stable enough for the smoke sim, but I might try to go ahead and implement RK4 later anyway, since I suspect RK4 won’t smooth out details as much as basic Euler.
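
For reference, the difference between the two schemes shows up in the backward particle trace: explicit Euler assumes the velocity is constant over the whole step, while RK4 re-samples it at intermediate points, which keeps the trace on track through curved, vortex-heavy flow. A sketch, reusing the hypothetical Field2D from the advection sketch above:

```cpp
struct Vec2 { float x, y; };

// Look up the (cell-centered, simplified) velocity at a point.
Vec2 velocityAt(const Field2D& u, const Field2D& w, Vec2 p) {
    return { u.sample(p.x, p.y), w.sample(p.x, p.y) };
}

// Explicit Euler backtrace: one velocity sample, assumed constant over dt.
Vec2 traceBackEuler(const Field2D& u, const Field2D& w, Vec2 p, float dt) {
    Vec2 vel = velocityAt(u, w, p);
    return { p.x - dt * vel.x, p.y - dt * vel.y };
}

// RK4 backtrace: four velocity samples blended with the classic weights.
Vec2 traceBackRK4(const Field2D& u, const Field2D& w, Vec2 p, float dt) {
    Vec2 k1 = velocityAt(u, w, p);
    Vec2 k2 = velocityAt(u, w, {p.x - 0.5f * dt * k1.x, p.y - 0.5f * dt * k1.y});
    Vec2 k3 = velocityAt(u, w, {p.x - 0.5f * dt * k2.x, p.y - 0.5f * dt * k2.y});
    Vec2 k4 = velocityAt(u, w, {p.x - dt * k3.x, p.y - dt * k3.y});
    return { p.x - dt * (k1.x + 2 * k2.x + 2 * k3.x + k4.x) / 6.0f,
             p.y - dt * (k1.y + 2 * k2.y + 2 * k3.y + k4.y) / 6.0f };
}
```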

I’m still missing the actual preconditioner, so for now I’m only testing the simulation on a 2D grid, since otherwise the simulation times will be really really long.

Here is a test on a 100x100 2D grid!

Jello Sim Maya Integration

I ported my jello simulation to Maya!

Well, sort of.

Instead of building a full Maya plugin like my good friend Dan Knowlton did, I opted for a simpler approach: I write out the vertex positions for each jello cube at each time step to a giant text file, and then use a custom Python script in Maya to read the vertex positions from the text file and animate a cube inside of Maya. It is a bit hacky and not nearly as elegant as the full-Maya-plugin approach, but it works in a pinch.
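
The export side is about as simple as it sounds; a hypothetical sketch of the C++ half, writing one line per frame with all vertex positions flattened so the Maya-side Python script only has to split on whitespace:

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Write one frame as a single line: the frame number, then every vertex
// position flattened into "x y z" triples.
void writeFrame(std::FILE* out, int frame, const std::vector<Vec3>& verts) {
    std::fprintf(out, "%d", frame);
    for (const Vec3& v : verts)
        std::fprintf(out, " %f %f %f", v.x, v.y, v.z);
    std::fprintf(out, "\n");
}
```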

I think being able to integrate my coding projects into artistic projects is very important, since at the end of the day, the main point of computer graphics is to produce a good-looking image. As such, I thought putting some jello into my kitchen scene would be fun, so here is the result, rendered out with Vray (some day I want to replace Vray with my own renderer though!):

The rendering process I’m using isn’t perfect yet… the fact that the jello cubes are being simulated with relatively few vertices is extremely apparent in the above video, as can be seen in how angular the edges of the jello become when it wiggles. At the moment, I can think of two possible fixes: one, simply run the simulation with a higher vertex count, or two, render the jello as a subdivision surface with creased edges. Since the second option should in theory allow for better-looking renders without impacting simulation time, I think I will try the subdivision method first.

But for now, here are some pretty still frames:

Multijello Simulation

The first assignment of the semester for CIS563 is to write a jello simulator using a particle-mass-spring system. The basic jello system involves building a particle grid where all of the particles are connected by a variety of springs, such as bend and shear springs, and then applying forces across the spring grid. In order to step the entire simulation forward in time, we also have to implement a stable integration scheme, such as RK4. For each step forward in time, we have to run intersection tests for each particle against solid objects in the simulation, such as the ground plane, boxes, or spheres.

The particle-mass-spring system we used is based directly on the Baraff/Witkin 2001 SIGGRAPH Physically Based Modeling course notes.
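
As a sketch of what those notes describe, the force a spring exerts on one of its endpoints combines a Hooke term proportional to stretch with a damping term along the spring direction; the stiffness and damping constants differ per spring type (structural, shear, bend). Illustrative code, not my simulator’s actual implementation:

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

// Force on particle a from a damped spring connecting a and b: a Hooke term
// proportional to the stretch past rest length, plus a damping term
// proportional to the relative velocity along the spring direction.
// Assumes the two particles are not coincident (len > 0).
Vec3 springForce(const Vec3& posA, const Vec3& posB,
                 const Vec3& velA, const Vec3& velB,
                 float ks, float kd, float restLength) {
    Vec3 d = posB - posA;
    float len = length(d);
    Vec3 dir = d * (1.0f / len);                // unit vector from a toward b
    float stretch = len - restLength;           // positive when stretched
    float damping = dot(velB - velA, dir);      // closing/opening speed
    return dir * (ks * stretch + kd * damping); // force on a; negate for b
}
```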

For the actual assignment, we were only required to support a single jello interacting against boxes, spheres, cylinders, and the ground. However, I think basic primitives are a tad boring… so I went ahead and integrated mesh collisions as well. The mesh collision stuff is actually using the same OBJ mesh system and KD-Tree system that I am using for my pathtracer! I am planning on cleaning up my OBJ/KD-Tree system and releasing it on Github or something soon, as I think I will still find even more uses for it in graphics projects.

Of course, a natural extension of mesh support is jello-on-jello interaction, which is why I call my simulator “multijello” instead of just singular jello. For jello-on-jello, my approach is to update one jello at a time, and for each jello, treat all other jellos in the simulation as just more OBJ meshes. This solution yields pretty good results, although some interpenetration happens if the time step is too large or if jello meshes are too sparse.

Here’s a video showcasing some things my jello simulator can do: