Smoke Simulation Basics!

For CIS563 (Physically Based Animation), our current assignment is to write a fluid simulator capable of simulating smoke inside of a box. For this assignment, we’re using a semi-Lagrangian approach based on Robert Bridson’s 2007 SIGGRAPH Course Notes on Fluid Simulation.

I won’t go into the nitty-gritty details of the math behind the simulation (for that, consult the Bridson notes), but I’ll give a quick summary. Basically, we start with a specialized grid structure called the MAC (marker and cell) grid, where each grid cell stores information relevant to the point in space the cell represents, such as density, velocity, temperature, etc. To update a value, we pretend a particle carried that value into the cell: we trace backward in time along the velocity field to find where that particle came from, and look up (interpolate) the value stored at that earlier position in the grid. That backtrace-and-lookup handles advection; a separate projection step then keeps the velocity field divergence-free by solving a pressure system with a preconditioned conjugate gradient solver.
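To make that a bit more concrete, here’s a minimal sketch of what semi-Lagrangian advection looks like for a single cell-centered quantity on a 2D grid. This is a simplification of the real thing (a proper MAC grid stores velocity components on cell faces, not centers), and the names are mine, not my framework’s:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal cell-centered 2D grid. A real MAC grid stores the u velocity on
// vertical faces and v on horizontal faces, but the backtrace idea is the same.
struct Grid2D {
    int nx, ny;
    float dx;
    std::vector<float> data;
    Grid2D(int nx_, int ny_, float dx_) : nx(nx_), ny(ny_), dx(dx_), data(nx_ * ny_, 0.0f) {}
    float at(int i, int j) const {
        i = std::clamp(i, 0, nx - 1);
        j = std::clamp(j, 0, ny - 1);
        return data[j * nx + i];
    }
    // Bilinear interpolation at a world-space position (x, y).
    float sample(float x, float y) const {
        float gx = x / dx - 0.5f, gy = y / dx - 0.5f;
        int i = (int)std::floor(gx), j = (int)std::floor(gy);
        float fx = gx - i, fy = gy - j;
        float bottom = at(i, j) * (1 - fx) + at(i + 1, j) * fx;
        float top    = at(i, j + 1) * (1 - fx) + at(i + 1, j + 1) * fx;
        return bottom * (1 - fy) + top * fy;
    }
};

// Semi-Lagrangian advection: for each cell, pretend a particle just arrived at
// the cell center, trace it backward through the velocity field, and take the
// value the old grid holds at that earlier position.
void advect(const Grid2D& u, const Grid2D& v, const Grid2D& src, Grid2D& dst, float dt) {
    for (int j = 0; j < dst.ny; ++j) {
        for (int i = 0; i < dst.nx; ++i) {
            float x = (i + 0.5f) * dst.dx;
            float y = (j + 0.5f) * dst.dx;
            // Backtrace with a single explicit Euler step.
            float xPrev = x - dt * u.sample(x, y);
            float yPrev = y - dt * v.sample(x, y);
            dst.data[j * dst.nx + i] = src.sample(xPrev, yPrev);
        }
    }
}
```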

So far I have implemented density advection, projection, buoyancy (via temperature advection), and vorticity. For the integration scheme I’m just using basic forward Euler, which was the default for the framework we started with. Euler seems stable enough for the smoke sim, but I might go ahead and implement RK4 later anyway, since I suspect RK4 won’t smooth out details as much as basic Euler does.
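If I do end up swapping in RK4, the only thing that changes is the backtrace step: instead of one Euler step, the previous position gets estimated from four velocity samples. A rough sketch of that swap, assuming some velocity-sampling callback:

```cpp
#include <functional>

struct Vec2 { float x, y; };

// Velocity lookup at an arbitrary position (e.g. a bilinear sample of the grid).
using VelocityField = std::function<Vec2(float, float)>;

// Single forward Euler backtrace step (what I'm using now).
Vec2 backtraceEuler(float x, float y, float dt, const VelocityField& vel) {
    Vec2 v = vel(x, y);
    return { x - dt * v.x, y - dt * v.y };
}

// Classical RK4 backtrace: combine four velocity samples into a higher-order
// estimate of where the particle came from.
Vec2 backtraceRK4(float x, float y, float dt, const VelocityField& vel) {
    Vec2 k1 = vel(x, y);
    Vec2 k2 = vel(x - 0.5f * dt * k1.x, y - 0.5f * dt * k1.y);
    Vec2 k3 = vel(x - 0.5f * dt * k2.x, y - 0.5f * dt * k2.y);
    Vec2 k4 = vel(x - dt * k3.x, y - dt * k3.y);
    return { x - dt / 6.0f * (k1.x + 2.0f * k2.x + 2.0f * k3.x + k4.x),
             y - dt / 6.0f * (k1.y + 2.0f * k2.y + 2.0f * k3.y + k4.y) };
}
```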

I’m still missing the actual preconditioner, so for now I’m only testing the simulation on a 2D grid, since otherwise the simulation times will be really really long.

Here is a test on a 100x100 2D grid!

Jello Sim Maya Integration

I ported my jello simulation to Maya!

Well, sort of.

Instead of building a full Maya plugin like my good friend Dan Knowlton did, I opted for a simpler approach: I write out the vertex positions for each jello cube for each time step to a giant text file, and then use a custom Python script in Maya to read the vertex positions from the text file and animate a cube inside of Maya. It is a bit hacky and not nearly as elegant as the full-Maya-plugin approach, but it works in a pinch.
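The exporter side is about as simple as it sounds. The file layout below is just an illustration (my actual format differs in the details), but the idea is to dump the vertex positions for every frame into one big text file that the Maya-side Python script can walk through:

```cpp
#include <fstream>
#include <vector>

struct Vec3 { float x, y, z; };

// Append one frame's worth of jello vertex positions to the export file.
// The layout here (a frame header line, then one "x y z" line per vertex)
// is only an illustration; whatever format is used, the Maya-side Python
// script just has to read the same layout back and keyframe the mesh.
void writeFrame(std::ofstream& out, int frame, const std::vector<Vec3>& verts) {
    out << "frame " << frame << " " << verts.size() << "\n";
    for (const Vec3& v : verts) {
        out << v.x << " " << v.y << " " << v.z << "\n";
    }
}
```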

I think being able to integrate my coding projects into artistic projects is very important, since at the end of the day, the main point of computer graphics is to produce a good-looking image. As such, I thought putting some jello into my kitchen scene would be fun, so here is the result, rendered out with Vray (some day I want to replace Vray with my own renderer though!):

The rendering process I’m using isn’t perfect yet… the fact that the jello cubes are being simulated with relatively few vertices is extremely apparent in the above video, as can be seen in how angular the edges of the jello become when it wiggles. At the moment, I can think of two possible fixes: one, simply run the simulation with a higher vertex count, or two, render the jello as a subdivision surface with creased edges. Since the second option should in theory allow for better-looking renders without impacting simulation time, I think I will try the subdivision method first.

But for now, here are some pretty still frames:

Multijello Simulation

The first assignment of the semester for CIS563 is to write a jello simulator using a particle-mass-spring system. The basic jello system involves building a particle grid where all of the particles are connected using a variety of springs, such as bend and shear springs, and then applying forces across the spring grid. In order to step the entire simulation forward in time, we also have to implement a stable integration scheme, such as RK4. For each step forward in time, we have to do intersection tests for each particle against solid objects in the simulation, such as the ground plane or boxes or spheres.
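For reference, the heart of a mass-spring system like this is just Hooke’s law plus a damping term along the spring direction. Here’s a minimal sketch (my own simplified names, not the framework’s):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    float length() const { return std::sqrt(dot(*this)); }
};

// Force on particle A from a spring connecting A and B. ks is the stiffness,
// kd the damping coefficient, restLength the spring's natural length.
// Particle B receives the negation of this force.
Vec3 springForce(const Vec3& posA, const Vec3& posB,
                 const Vec3& velA, const Vec3& velB,
                 float ks, float kd, float restLength) {
    Vec3 delta = posA - posB;
    float len = delta.length();
    if (len < 1e-8f) return { 0.0f, 0.0f, 0.0f };  // coincident particles; no direction
    Vec3 dir = delta * (1.0f / len);               // unit vector from B toward A
    float stretch = len - restLength;              // positive when stretched
    float dampVel = (velA - velB).dot(dir);        // relative speed along the spring
    return dir * (-ks * stretch - kd * dampVel);   // Hooke term plus damping term
}
```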

The particle-mass-spring system we used is based directly on the Baraff/Witkin 2001 SIGGRAPH Physically Based Animation Course Notes.

For the actual assignment, we were only required to support a single jello interacting against boxes, spheres, cylinders, and the ground. However, I think basic primitives are a tad boring… so I went ahead and integrated mesh collisions as well. The mesh collision stuff is actually using the same OBJ mesh system and KD-Tree system that I am using for my pathtracer! I am planning on cleaning up my OBJ/KD-Tree system and releasing it on Github or something soon, as I think I will still find even more uses for it in graphics projects.

Of course, a natural extension of mesh support is jello-on-jello interaction, which is why I call my simulator “multijello” instead of just singular jello. For jello-on-jello, my approach is to update one jello at a time, and for each jello, treat all other jellos in the simulation as just more OBJ meshes. This solution yields pretty good results, although some interpenetration happens if the time step is too large or if jello meshes are too sparse.
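In rough, placeholder-typed C++, the multijello update order looks something like this:

```cpp
#include <cstddef>
#include <vector>

// Placeholder types standing in for my actual jello and mesh classes.
struct Mesh {};
struct Jello {
    Mesh surface;
    const Mesh& surfaceMesh() const { return surface; }
    void step(float dt, const std::vector<const Mesh*>& obstacles) {
        // integrate the springs, then run collision tests against each obstacle
        (void)dt; (void)obstacles;
    }
};

// Update one jello at a time; while stepping jello i, every other jello's
// current surface is handed to the collision system as if it were just
// another OBJ-style mesh.
void stepAllJellos(std::vector<Jello>& jellos, float dt) {
    for (std::size_t i = 0; i < jellos.size(); ++i) {
        std::vector<const Mesh*> obstacles;
        for (std::size_t j = 0; j < jellos.size(); ++j) {
            if (j != i) obstacles.push_back(&jellos[j].surfaceMesh());
        }
        jellos[i].step(dt, obstacles);
    }
}
```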

Here’s a video showcasing some things my jello simulator can do:

Pathtracer Time

This semester I am setting out on an independent study under the direction of Joe Kider to build a pathtracer (obviously inspired by my friend and fellow DMD student Peter Kutz). Global illumination rendering techniques are becoming more and more relevant in industry today, as hardware performance in the past few years has reached a point where GI in commercial productions is no longer prohibitively expensive. Some houses like Sony Imageworks have already moved to full GI renderers like Arnold, while other studios like Pixar are in the process of adopting GI-based renderers or extending their existing renderers to support GI lighting. This industry move, coupled with the fact that GI quite simply produces very pretty results, sparked my initial interest in GI techniques like pathtracing. Having built a basic raytracer last semester, I decided in typical over-confident style: “how hard could it be?”

Here’s my project abstract:

Both path tracing and bidirectional scatter distribution functions (BSDFs) are ideas that have existed within the field of computer graphics for many years and have seen numerous implementations in a variety of rendering packages. Similarly, creating images of convincing plant life is a technical challenge for which a host of solutions now exist. However, achieving dynamic plant effects such as the change of a plant’s coloring during the transition from summer to fall is a task that to date has mostly been accomplished using procedural techniques and various compositing tricks.

The goal of this project is to build a path tracing based renderer that is designed specifically with the intent to facilitate achieving dynamic plant effects with a more physically based approach by introducing a new time component to the existing bidirectional scatter distribution model. By allowing BSDFs to vary over not only space but also over time, plant effects such as leaf decay could be achieved through shaders with appearances that are driven through physically based mathematical models instead of procedural techniques. In other words, this project has two main prongs: develop a robust path tracer with at least basic functionality, and then develop and implement a time-dependent BSDF model within the path tracer.

…and here’s some background that I wrote up for my proposal…

1. INTRODUCTION

Efficiently rendering convincing images with direct and indirect lighting has been a major problem in the field of computer graphics since the field’s very inception, as convincingly realistic graphics in games and movies depend upon lighting that can accurately mimic that of reality. Known generally as global illumination, the indirect lighting problem has in the past decade seen a number of solutions such as path tracing and photon mapping that can generate convincingly realistic images with reasonable computational resource consumption and efficiency. One of the key discoveries that enabled the development of modern global illumination techniques is the concept of Bidirectional Scattering Distribution Functions, or BSDFs. Developed as a superset and generalization of two other concepts known as bidirectional reflectance distribution functions (BRDFs) and bidirectional transmittance distribution functions (BTDFs), a BSDF is a general mathematical function that describes how light is scattered by a certain surface, given the material properties of the surface. BSDFs are useful today for representing the material properties of an object at a single point in time; however, in reality material properties can change and morph over time, as exemplified by the natural phenomenon of leaf color changes from summer to fall. This project will attempt to build a prototype of a path tracing renderer with a BSDF model modified to include an additional time component to allow for material properties to change over time in a way representative of how material properties change over time in reality. The hope is that such a renderer will prove to be useful in future attempts to recreate natural phenomena using physically based models, such as leaf decay.

…and the actual goal of the project…

1.1 Design Goals

The project’s goal is to develop a reasonably robust and efficient path tracing renderer with a BSDF model modified to include an additional time component. In order to prove the feasibility of such a modified BSDF model, the end goal is to be able to use the renderer to produce images of plant life with changing surface material properties, in addition to standard test images such as Cornell Box tests that validate the functionality of the underlying basic path tracer.

…and finally, what I’m hoping I’ll actually be able to produce at the end of this independent study:

1.2 Project’s Proposed Features and Functionality

The proposed renderer should allow a user to load a scene with an arbitrary number of lights, materials, and objects and render out a realistic, global illumination based render. The renderer should be able to render implicitly defined objects such as spheres and cubes in addition to meshes defined in the .obj format. The renderer should also allow users to specify changes in object/light/camera transformations over time in addition to changes in materials and BSDFs over time and render out a series of frames showing the scene at various points in time. A graphical interface would be a nice additional feature, but is not a priority of this project.
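To make the time-component idea a bit more concrete: at the interface level, the change I have in mind is small; BSDF evaluation just gains a time argument. Here’s a very rough, hypothetical sketch of what that might look like (none of this is settled code, and the fading-leaf example is purely illustrative):

```cpp
struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// A standard BSDF answers: given an incoming and an outgoing direction at a
// surface point, how much light gets scattered?
struct BSDF {
    virtual ~BSDF() = default;
    virtual Color evaluate(const Vec3& wi, const Vec3& wo, const Vec3& n) const = 0;
};

// The time-dependent variant simply adds a time argument, so material
// parameters (say, a leaf's pigment mix) can be functions of t.
struct TimeVaryingBSDF {
    virtual ~TimeVaryingBSDF() = default;
    virtual Color evaluate(const Vec3& wi, const Vec3& wo, const Vec3& n, float t) const = 0;
};

// Toy example: a Lambertian surface whose albedo blends from a "summer" green
// to a "fall" orange as t goes from 0 to 1.
struct FadingLambertian : TimeVaryingBSDF {
    Color summer{0.10f, 0.50f, 0.10f};
    Color fall{0.60f, 0.30f, 0.05f};
    Color evaluate(const Vec3&, const Vec3&, const Vec3&, float t) const override {
        const float invPi = 0.3183099f;                      // Lambertian BRDF is albedo / pi
        float s = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);   // clamp time to [0, 1]
        return { (summer.r * (1 - s) + fall.r * s) * invPi,
                 (summer.g * (1 - s) + fall.g * s) * invPi,
                 (summer.b * (1 - s) + fall.b * s) * invPi };
    }
};
```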

I’ll be posting at least weekly updates to this blog showing my progress. In my next post, I’ll go over some of the papers and sources Joe gave me to look over and explain some of the basic mechanics of how a pathtracer works. Apologies to the casual reader for this particular post being extremely text heavy; I shall have images to show soon!

Basic Raytracer and Fun with KD-Trees

The last assignment of the year for CIS460/560 (I’m still not sure what I’m supposed to call that class) is the dreaded RAYTRACER ASSIGNMENT.

The assignment is actually pretty straightforward: implement a recursive, direct lighting only raytracer with support for Blinn-Phong shading and support for basic primitive shapes (spheres, boxes, and polygon extrusions). In other words, pretty much a barebones implementation of the original Turner Whitted raytracing paper.
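As a quick refresher on what the shading side of that entails, Blinn-Phong boils down to a diffuse term from N·L and a specular term from the half vector raised to a shininess exponent. A minimal sketch:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    Vec3 normalized() const {
        float len = std::sqrt(dot(*this));
        return { x / len, y / len, z / len };
    }
};

// Blinn-Phong: diffuse weight from N.L, specular weight from (N.H)^shininess,
// where H is the half vector between the light direction and the view direction.
// The weights get multiplied by the material/light colors elsewhere.
void blinnPhong(const Vec3& normal, const Vec3& toLight, const Vec3& toViewer,
                float shininess, float& diffuse, float& specular) {
    Vec3 n = normal.normalized();
    Vec3 l = toLight.normalized();
    Vec3 v = toViewer.normalized();
    Vec3 h = (l + v).normalized();
    diffuse  = std::max(0.0f, n.dot(l));
    specular = std::pow(std::max(0.0f, n.dot(h)), shininess);
}
```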

I’ve been planning on writing a global-illumination renderer (perhaps based on pathtracing or photon mapping?) for a while now, so my own personal goal with the raytracer project was to use it as a testbed for some things that I know I will need for my GI renderer project. With that in mind, I decided from the start that my raytracer should support rendering OBJ meshes and include some sort of acceleration system for OBJ meshes.

The idea behind the acceleration system goes like this: in the raytracer, one obviously needs to cast rays into the scene and track how they bounce around to get a final image. That means every ray needs to be intersection tested against the objects in the scene in order to determine what ray is hitting what object. Intersection testing against mathematically defined primitives is simple, but OBJ meshes present more of a problem; since an OBJ mesh is composed of a bunch of triangles or polygons, the naive way to intersection test against an OBJ mesh is to check for ray intersections with every single polygon inside of the mesh. This naive approach can get extremely expensive extremely quickly, so a better approach is to use some sort of spatial data structure to quickly figure out which polygons are within the vicinity of the ray and therefore need intersection testing.

After talking with Joe and trawling around on Wikipedia for a while, I picked a KD-Tree as my spatial data structure for accelerated mesh intersection testing. I won’t go into the details of how KD-Trees work, as the Wikipedia article does a better job of it than I ever could. I will note, however, that the main resources I ended up pulling information from while looking up KD-Tree stuff are Wikipedia, Jon McCaffrey’s old CIS565 slides on spatial data structures, and the fantastic PBRT book that Joe pointed me towards.

Implementing the KD-Tree for the first time took me the better part of two weeks, mainly because I was misunderstanding how the surface area heuristic for choosing split planes works. Unfortunately, I probably can’t post actual code for my raytracer, since this is a class assignment that will be repeated in future incarnations of the class. However, I can show images!
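I can’t post my raytracer’s code, but the surface area heuristic itself is standard textbook material (PBRT covers it in depth), so here’s a generic sketch of the split cost function with made-up cost constants. The idea is that the chance of a ray hitting a child node is roughly proportional to the child’s surface area relative to its parent’s:

```cpp
// Axis-aligned bounding box with a surface area helper.
struct AABB {
    float minX, minY, minZ, maxX, maxY, maxZ;
    float surfaceArea() const {
        float dx = maxX - minX, dy = maxY - minY, dz = maxZ - minZ;
        return 2.0f * (dx * dy + dy * dz + dz * dx);
    }
};

// Estimated cost of splitting a node into two children: a fixed traversal
// cost plus each child's intersection work weighted by the probability of a
// ray hitting that child (approximated by the surface area ratio).
// Compare against sahLeafCost to decide whether to split at all.
float sahSplitCost(const AABB& parent, const AABB& left, const AABB& right,
                   int numLeft, int numRight) {
    const float traversalCost = 1.0f;  // placeholder constants, not tuned values
    const float intersectCost = 2.0f;
    float invParentArea = 1.0f / parent.surfaceArea();
    float pLeft  = left.surfaceArea()  * invParentArea;
    float pRight = right.surfaceArea() * invParentArea;
    return traversalCost + intersectCost * (pLeft * numLeft + pRight * numRight);
}

float sahLeafCost(int numPrims) {
    const float intersectCost = 2.0f;
    return intersectCost * numPrims;
}
```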

The KD-Tree meant I could render meshes in a reasonable amount of time, so I rendered an airplane:

The airplane took about a minute or so to render, which got me wondering how well my raytracer would work if I threw the full 500000+ poly Stanford Dragon at it. This render took about five or six minutes to finish (without the KD-Tree in place, this same image takes about 30 minutes to render):

Of course, the natural place to go after one dragon is three dragons. Three dragons took about 15 minutes to render, which is pretty much exactly a three-fold increase over one dragon. That means my renderer’s performance scales more or less linearly, which is good.

For fun, and because I like space shuttles, here is a space shuttle. Because the space shuttle has a really low poly count, this image took under a minute to render:

For reflections, I took a slightly different approach from the typical recursive method. The normal recursive approach to a raytracer is to begin with one ray and trace that ray completely through recursion to its recursion depth limit before moving on to the next pixel and ray. However, such an approach might not actually be ideal in a GI renderer. For example, from what I understand, in pathtracing a better approach is to trace everything iteratively; that is, trace the first bounce for all rays and store where the rays are, then trace the second bounce for all rays, then the third, and so on and so forth. Basically, such an approach allows one to set an unlimited trace depth and just let the renderer trace and trace and trace until one stops the renderer, but the corresponding cost of such a system is slightly higher memory usage, since ray positions need to be stored from the previous iteration.
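Structurally, the iterative version just swaps recursion for a loop over bounces with a list of live rays. A rough sketch of the shape of it (the names are hypothetical and the intersection/shading calls are stubbed out):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Per-ray state that has to survive between bounces.
struct PathState {
    Vec3 origin{}, direction{};
    Vec3 throughput{1.0f, 1.0f, 1.0f};  // how much light this path can still carry
    int pixelIndex = 0;
    bool alive = true;
};

// Stubs standing in for the real intersection and shading code.
bool intersectScene(PathState& ray) { (void)ray; return false; }
void shadeAndSpawnNextRay(PathState& ray) { ray.alive = false; }

// Instead of recursing per ray, process every live ray one bounce at a time.
// The trade-off is memory: the PathState above persists across iterations.
void traceIteratively(std::vector<PathState>& rays, int maxBounces) {
    for (int bounce = 0; bounce < maxBounces; ++bounce) {
        bool anyAlive = false;
        for (PathState& ray : rays) {
            if (!ray.alive) continue;
            if (!intersectScene(ray)) { ray.alive = false; continue; }
            shadeAndSpawnNextRay(ray);      // accumulate color, set up the next bounce
            anyAlive = anyAlive || ray.alive;
        }
        if (!anyAlive) break;               // every path terminated early
    }
}
```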

Adding reflections did impact my render times pretty dramatically. I have a suspicion that both my intersection code and my KD-Tree are actually far from ideal, but I’ll have to look at that later. Here’s a test with reflections with the airplane:

…and here is a test with three reflective dragons. This image took foooorrreeevvveeeerrrr to render…. I actually do not know how long, as I let it run overnight:

I also added support for multiple lights with varying color support:

Here are some more images rendered with my raytracer:

In conclusion, the raytracer was a fun final project. I don’t think my raytracer is even remotely suitable for actual production use, and I don’t plan on using it for any future projects (unlike my volumetric renderer, which I think I will definitely be using in the future). However, I will definitely be using stuff I learned from the raytracer in my future GI renderer project, such as the KD-tree stuff and the iterative raytracing method. I will probably have to give my KD-tree a total rewrite, since it is really really far from optimal here, so that is something I’ll be starting over winter break! Next stop, GI renderer, CIS563, and CIS565!

As an amusing parting note, here is the first proper image I ever got out of my raytracer. Awww yeeeaaahhhhhh:

A Volumetric Renderer for Rendering Volumes

The first assignment of the semester for CIS460 was to write, from scratch in C++, a volumetric renderer. Quite simply, a volumetric renderer is a program that can create a 2D image from a 3D discretized data set, more often referred to as a voxel grid. In other words, a volumetric renderer makes pictures from voxels. Such renderers are useful in visualizing medical imaging data and some forms of 3D scans and blah blah blah…

…or you can make pretty clouds.

One of the first things I ever tried to make when I first was introduced to Maya was a cloud. I quickly learned that there simply is no way to get a nice fluffy cloud using polygonal modeling techniques. Ever since then I’ve kept the idea of making clouds parked in the back of my head, so when we were assigned the task of writing a volumetric renderer that could produce clouds, obviously I was pretty excited.

The coolest part of studying computer graphics from the computer science side of things has got to be the whole idea of “well, I want to make X, but I can’t seem to find any tool that can do X, so I guess…. I’LL JUST WRITE MY OWN PROGRAM TO MAKE X.”

I won’t go into detailed specifics about implementing the volumetric renderer, as that is a topic well covered by many papers written by authors much smarter than me. Also, future CIS460 students may stumble across this blog, and half the fun of the assignment is figuring out the detailed implementation for oneself. I don’t want to ruin that for them ;) Instead, I’ll give a general run-through of how this works.

The way the volumetric renderer works is pretty simple. You start with a big ol’ grid of voxels, called… the voxel grid or voxel buffer. From the camera, you shoot an imaginary ray through each pixel of what will be the final picture and trace that ray to see if it enters the voxel buffer. If the ray does indeed hit the voxel buffer, then you slowly sample along the ray a teeny step at a time and accumulate the color of the pixel based on the densities of the voxels traveled through. Lighting information is easy too: for each voxel reached, figure out how much stuff there is between that voxel and any light sources, and use a fancy equation to weight the amount of shadow a voxel receives. “But where does that voxel grid come from?”, you may wonder. In the case of my renderer, the voxel grid can either be loaded in from text files containing voxel data in a custom format, or the grid can be generated by sampling a Perlin noise function for each voxel in the grid.
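Since the general idea above is all standard volume rendering material anyway, here’s a deliberately generic, simplified sketch of that loop: grayscale only, a single light, and a simple exponential extinction term. None of the assignment-specific details are in here, and this is not my actual code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Density lookup at a world-space position. Stubbed out here; in the real
// renderer this samples the voxel buffer (or evaluates Perlin noise directly).
float densityAt(const Vec3& p) { (void)p; return 0.0f; }

// Transmittance from a point toward a light: march toward the light,
// accumulate density along the way, and convert it to an exponential falloff.
float lightTransmittance(Vec3 p, const Vec3& toLight, float stepSize, int maxSteps, float absorption) {
    float accum = 0.0f;
    for (int i = 0; i < maxSteps; ++i) {
        p = { p.x + toLight.x * stepSize, p.y + toLight.y * stepSize, p.z + toLight.z * stepSize };
        accum += densityAt(p) * stepSize;
    }
    return std::exp(-absorption * accum);
}

// March along the camera ray, accumulating brightness weighted by the local
// density, the light transmittance at that point, and how much transmittance
// the camera ray itself has left. Stops early once the ray is nearly opaque.
float raymarch(Vec3 p, const Vec3& dir, const Vec3& toLight,
               float stepSize, int maxSteps, float absorption) {
    float transmittance = 1.0f;
    float brightness = 0.0f;                                 // grayscale for brevity
    for (int i = 0; i < maxSteps && transmittance > 0.01f; ++i) {
        float d = densityAt(p);
        if (d > 0.0f) {
            float stepT = std::exp(-absorption * d * stepSize);
            float lightT = lightTransmittance(p, toLight, stepSize, maxSteps, absorption);
            brightness += transmittance * (1.0f - stepT) * lightT;
            transmittance *= stepT;
        }
        p = { p.x + dir.x * stepSize, p.y + dir.y * stepSize, p.z + dir.z * stepSize };
    }
    return brightness;
}
```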

So obviously volumetric renderers are pretty good for rendering clouds, as one can simply represent a cloud as a bunch of discrete points where each point has some density value. However, discretizing the world has a distinct disadvantage: artifacting. In the above render, some pixel-y artifacting is visible because the voxel grid I used wasn’t high enough resolution to make each individual voxel indistinguishable. The problem is even more obvious in this render, where I stuck the camera right up into a cloud:

(sidenote for those reading out of interest in CIS460: I implemented multiple arbitrary light sources in my renderer, which is where those colors are coming from)

There are four ways to deal with the artifacting issue. The first is to simply move the camera further away. Once the camera is sufficiently far away, even a relatively low resolution grid will look pretty smooth:

A second way is to simply dramatically increase the resolution of the voxel grid. This technique can be very very very memory expensive though. Imagine a 100x100x100 voxel grid where each voxel requires 4 bytes of memory… the total memory required is about 3.8 MB, which isn’t bad at all. But let’s say we want a grid 5 times higher in resolution… a 500^3 grid needs 476 MB! Furthermore, a 1000x1000x1000 grid requires 3.72 GB! Of course, we could try to save memory by only storing non-empty voxels through the use of a hashmap or something, but that is more computationally expensive and gives no benefit in the worst-case scenario of every voxel having some density.

A third alternative is to use trilinear interpolation or some other interpolation scheme to smooth out the voxel grid as it’s being sampled. This technique can lead to some fairly nice results:
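Trilinear interpolation itself is just three rounds of linear blending, one per axis. A minimal sketch, written against a hypothetical voxel accessor:

```cpp
#include <cmath>

// Hypothetical voxel accessor: returns the stored density at integer grid
// coordinates. Stubbed here; the real version reads the voxel buffer and
// clamps indices at the grid boundary.
float voxel(int i, int j, int k) { (void)i; (void)j; (void)k; return 0.0f; }

// Trilinearly interpolated density at a continuous grid-space position:
// blend the eight surrounding voxels along x, then y, then z.
float sampleTrilinear(float x, float y, float z) {
    int i = (int)std::floor(x), j = (int)std::floor(y), k = (int)std::floor(z);
    float fx = x - i, fy = y - j, fz = z - k;
    float c00 = voxel(i, j, k)         * (1 - fx) + voxel(i + 1, j, k)         * fx;
    float c10 = voxel(i, j + 1, k)     * (1 - fx) + voxel(i + 1, j + 1, k)     * fx;
    float c01 = voxel(i, j, k + 1)     * (1 - fx) + voxel(i + 1, j, k + 1)     * fx;
    float c11 = voxel(i, j + 1, k + 1) * (1 - fx) + voxel(i + 1, j + 1, k + 1) * fx;
    float c0 = c00 * (1 - fy) + c10 * fy;
    float c1 = c01 * (1 - fy) + c11 * fy;
    return c0 * (1 - fz) + c1 * fz;
}
```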

At least in the case of my renderer, there is a fourth way to deal with the artifacting: instead of preloading the voxel buffer with values from Perlin noise, why not just get rid of the notion of a discretized voxel buffer altogether and directly sample the Perlin noise function when raymarching? The result would indeed be a perfectly smooth, artifact free render, but the computational cost is extraordinarily high compared to using a voxel buffer.

Of course, one could just box blur the render afterwards as well. But doing so is sort of cheating.

I also played with trying to get my clouds to self-illuminate, with the hope of possibly eventually making explosion-type things. Ideally I would have done this by properly implementing a physically accurate black body system, but I did not have much time before the assignment was due to implement such a system. So instead, my friend Stewart Hills and I came up with a fake black body system where the emittance of each voxel was simply determined by how far the voxel is from the outside of the cloud. For each voxel, simply raycast in several random directions until each raymarch hits zero density, pick the shortest distance, and plug that distance into some exponential falloff curve to get the voxel’s emittance.
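A rough sketch of that hack (the direction sampling and the falloff constant here are stand-ins, not our exact values):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

// Density lookup, same role as in the main raymarcher (stubbed here).
float densityAt(const Vec3& p) { (void)p; return 0.0f; }

// Roughly uniform random direction on the unit sphere.
Vec3 randomDirection() {
    float z   = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;
    float phi = 6.2831853f * (std::rand() / (float)RAND_MAX);
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
    return { r * std::cos(phi), r * std::sin(phi), z };
}

// Fake blackbody hack: march outward in several random directions until the
// density drops to zero, keep the shortest escape distance, and plug that
// distance into an exponential falloff to get the voxel's emittance.
float fakeEmittance(const Vec3& voxelCenter, int numDirections,
                    float stepSize, int maxSteps, float falloff) {
    float shortest = stepSize * maxSteps;
    for (int d = 0; d < numDirections; ++d) {
        Vec3 dir = randomDirection();
        Vec3 p = voxelCenter;
        float dist = 0.0f;
        for (int s = 0; s < maxSteps; ++s) {
            p = { p.x + dir.x * stepSize, p.y + dir.y * stepSize, p.z + dir.z * stepSize };
            dist += stepSize;
            if (densityAt(p) <= 0.0f) break;   // escaped the cloud along this direction
        }
        shortest = std::min(shortest, dist);
    }
    return std::exp(-falloff * shortest);
}
```

Anyway, here’s a self-glowing cloud: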

…not even close to physically accurate, but pretty good looking for a hack that was cooked up in a few hours! A closeup shot:

So! The volumetric renderer was definitely a fun assignment, and now I’ve got a cool way to make clouds! Hopefully I’ll be able to integrate this renderer into some future projects!