Smoke Sim: Preconditioning and Huge Grids

I have added preconditioning to my smoke simulator! For the preconditioner, I am using Incomplete Cholesky, which is the preconditioner recommended in chapter 4 of the Bridson Fluid Course Notes. I’ve also troubleshot my vorticity implementation, so the simulation should produce more interesting and stable vortices now.

The key reason for implementing the preconditioner is simple: speed. Faster convergence brings an added bonus: larger grids become practical, since each pressure solve takes less time. Because of that speed increase, I can now run my simulations on 3D grids.
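For the curious, the solver itself is just a textbook preconditioned conjugate gradient loop; the preconditioner only changes what gets applied to the residual each iteration. Here’s a minimal, generic sketch (not my simulator’s exact code) that assumes the caller supplies the sparse A-matrix multiply and the Incomplete Cholesky back-substitution as callables:

```cpp
// A minimal, generic sketch of the PCG loop (not my simulator's exact code).
// The sparse A-matrix multiply and the Incomplete Cholesky application are
// assumed to be supplied by the caller as callables.
#include <cmath>
#include <numeric>
#include <vector>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
    return std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
}

template <typename ApplyA, typename ApplyPrecond>
Vec pcgSolve(const Vec& b, ApplyA applyA, ApplyPrecond applyPrecond,
             int maxIter = 200, double tol = 1e-6) {
    Vec p(b.size(), 0.0);        // pressure, starting from a zero guess
    Vec r = b;                   // residual: b - A*p, and p is zero
    Vec z = applyPrecond(r);     // preconditioned residual
    Vec s = z;                   // search direction
    double sigma = dot(z, r);
    for (int iter = 0; iter < maxIter && std::sqrt(dot(r, r)) > tol; ++iter) {
        Vec As = applyA(s);
        double alpha = sigma / dot(s, As);
        for (size_t i = 0; i < p.size(); ++i) {
            p[i] += alpha * s[i];
            r[i] -= alpha * As[i];
        }
        z = applyPrecond(r);
        double sigmaNew = dot(z, r);
        double beta = sigmaNew / sigma;
        for (size_t i = 0; i < s.size(); ++i)
            s[i] = z[i] + beta * s[i];
        sigma = sigmaNew;
    }
    return p;
}
```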

In previous years, the CIS563 smoke simulator framework usually hit a performance cliff at grids beyond around 50x50x50, but last year Peter Kutz managed to push his smoke simulator to 90x90x36 by implementing a sparse A-matrix structure instead of storing every single data point for the grid, including the empty ones. This year’s smoke simulation framework was updated to include some of Peter’s improvements, so Joe reckons we should be able to push our smoke simulation grids pretty far. I’ve been scaling up starting from 10x10x10, and now I’m at 100x100x50:

This simulation took about 24 hours to run on a 2008 MacBook Pro with a 2.8 GHz Core 2 Duo, but that is actually pretty good for fluid simulation! According to my rather unscientific estimates, the simulation would take about 4 or 5 days without the preconditioner, and even longer without the sparse A-matrix. I bet I can still push this further, and I’m starting to think about multithreading the simulation with OpenMP to get even more performance and even larger grids. We shall see.
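If I do try OpenMP, the advection step is the obvious first target, since every cell’s update is independent of every other cell’s. Here’s a rough sketch of the kind of change I have in mind; the flat grid layout and the placeholder cell update are illustrative stand-ins for the framework’s actual MAC grid code:

```cpp
// Illustrative sketch only: one pragma splits the outer grid loops across
// threads, because each cell's semi-Lagrangian update reads the old grid and
// writes only its own cell in the new grid.
#include <vector>

void updateAllCells(std::vector<float>& next, const std::vector<float>& density,
                    int nx, int ny, int nz) {
    #pragma omp parallel for collapse(2)
    for (int k = 0; k < nz; ++k)
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                // The real code would do the backtrace and interpolation here;
                // this placeholder just copies the old value through.
                next[(k * ny + j) * nx + i] = density[(k * ny + j) * nx + i];
            }
}
```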

One more thing: rendering this thing. So far I have not been doing any fancy rendering, just using the default OpenGL renderer that our framework came with. However, I want to get this into my volumetric renderer at some point and maybe even try out the pseudo-black body stuff with it. Eventually I want to try rendering this out with my pathtracer too!

Smoke Simulation Basics!

For CIS563 (Physically Based Animation), our current assignment is to write a fluid simulator capable of simulating smoke inside of a box. For this assignment, we’re using a semi-Lagrangian approach based on Robert Bridson’s 2007 SIGGRAPH Course Notes on Fluid Simulation.

I won’t go into the nitty-gritty details of the math behind the simulation (for that, consult the Bridson notes), but I’ll give a quick summary. We start with a specialized grid structure called the MAC (marker and cell) grid, where each grid cell stores information relevant to the point in space the cell represents, such as density, velocity, and temperature. To update a cell, we pretend an imaginary particle carried the cell’s new values into it: we use the velocity field to trace that particle backward in time to where it came from, and look up (interpolate) the values from the grid at that previous position. With advection handled that way, we then perform the pressure projection step, which means solving a large linear system with a preconditioned conjugate gradient solver.
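To make that backtrace a bit more concrete, here’s a tiny sketch of semi-Lagrangian advection for one scalar value at one cell; the velocity and quantity samplers are stand-in callables for the framework’s MAC-grid interpolation, so treat this as an illustration rather than my actual code:

```cpp
// Sketch of one semi-Lagrangian advection step for a single cell.
// sampleVelocity(pos) returns the interpolated velocity at a world position;
// sampleQuantity(pos) returns the interpolated scalar (density, temperature,
// etc.). Both are stand-ins for the framework's real grid interpolation.
#include <functional>

struct Vec2 { float x, y; };

float advectScalar(Vec2 cellCenter, float dt,
                   const std::function<Vec2(Vec2)>& sampleVelocity,
                   const std::function<float(Vec2)>& sampleQuantity) {
    Vec2 vel = sampleVelocity(cellCenter);
    // Trace backward in time to find where the imaginary particle came from.
    Vec2 prev = { cellCenter.x - dt * vel.x, cellCenter.y - dt * vel.y };
    // The value it was carrying becomes this cell's new value.
    return sampleQuantity(prev);
}
```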

So far I have implemented density advection, projection, buoyancy (via temperature advection), and vorticity. For the integration scheme I’m just using basic Euler, which was the default for the framework we started with. Euler seems stable enough for the smoke sim, but I might go ahead and implement RK4 later anyway, since I suspect RK4 won’t smooth out details as much as basic Euler does.
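If I do swap in RK4, the only thing that changes is the backtrace itself: instead of assuming the velocity is constant over the whole step, RK4 samples it at intermediate points along the way. A hedged sketch, reusing the same Vec2/velocity-sampler convention as above (again, not code from my simulator):

```cpp
// Standard RK4 backtrace for finding the particle's previous position.
#include <functional>

struct Vec2 { float x, y; };
using VelField = std::function<Vec2(Vec2)>;

Vec2 traceBackRK4(Vec2 p, float dt, const VelField& vel) {
    Vec2 k1 = vel(p);
    Vec2 k2 = vel({ p.x - 0.5f * dt * k1.x, p.y - 0.5f * dt * k1.y });
    Vec2 k3 = vel({ p.x - 0.5f * dt * k2.x, p.y - 0.5f * dt * k2.y });
    Vec2 k4 = vel({ p.x - dt * k3.x, p.y - dt * k3.y });
    return { p.x - dt * (k1.x + 2.0f * k2.x + 2.0f * k3.x + k4.x) / 6.0f,
             p.y - dt * (k1.y + 2.0f * k2.y + 2.0f * k3.y + k4.y) / 6.0f };
}
```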

I’m still missing the actual preconditioner, so for now I’m only testing the simulation on a 2D grid, since otherwise the simulation times will be really really long.

Here is a test on a 100x100 2D grid!

Jello Sim Maya Integration

I ported my jello simulation to Maya!

Well, sort of.

Instead of building a full Maya plugin like my good friend Dan Knowlton did, I opted for a simpler approach: I write out the vertex positions for each jello cube for each time step to a giant text file, and then use a custom Python script in Maya to read the vertex positions from the text file and animate a cube inside of Maya. It is a bit hacky and not nearly as elegant as the full-Maya-plugin approach, but it works in a pinch.
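For what it’s worth, the export side of this workflow is about as simple as it sounds; something along these lines, where the frame-header format and the Particle struct are illustrative stand-ins rather than my exact file layout:

```cpp
// Illustrative sketch: append one block of vertex positions per time step.
#include <fstream>
#include <vector>

struct Particle { float x, y, z; };

void writeFrame(std::ofstream& out, int frame, const std::vector<Particle>& verts) {
    out << "frame " << frame << "\n";
    for (const Particle& p : verts)
        out << p.x << " " << p.y << " " << p.z << "\n";
}
```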

I think being able to integrate my coding projects into artistic projects is very important, since at the end of the day, the main point of computer graphics is to produce a good-looking image. As such, I thought putting some jello into my kitchen scene would be fun, so here is the result, rendered out with Vray (some day I want to replace Vray with my own renderer though!):

The rendering process I’m using isn’t perfect yet… the fact that the jello cubes are simulated with relatively few vertices is extremely apparent in the above video, as can be seen in how angular the edges of the jello become when it wiggles. At the moment, I can think of two possible fixes: one, simply run the simulation with a higher vertex count, or two, render the jello as a subdivision surface with creased edges. Since the second option should in theory allow for better-looking renders without impacting simulation time, I think I will try the subdivision method first.

But for now, here are some pretty still frames:

Multijello Simulation

The first assignment of the semester for CIS563 is to write a jello simulator using a particle-mass-spring system. The basic jello system involves building a particle grid where all of the particles are connected using a variety of springs, such as bend and shear springs, and then applying forces across the spring grid. In order to step the entire simulation forward in time, we also have to implement a stable integration scheme, such as RK4. For each step forward in time, we have to do intersection tests for each particle against solid objects in the simulation, such as the ground plane or boxes or spheres.

The particle-mass-spring system we used is based directly on the Baraff/Witkin 2001 SIGGRAPH Physically Based Animation Course Notes.
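At the heart of the whole system is the spring force itself: Hooke’s law plus a damping term along the spring axis, applied equally and oppositely to the two connected particles. Here’s a minimal sketch of that force computation; the types and constants are illustrative, not the assignment framework’s exact code:

```cpp
// Force exerted on particle a by the spring connecting a and b:
// a Hooke term proportional to the stretch, plus a damping term proportional
// to the relative velocity along the spring direction.
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return { x - o.x, y - o.y, z - o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
    float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    float length() const { return std::sqrt(dot(*this)); }
};

struct Particle { Vec3 pos, vel; };

Vec3 springForce(const Particle& a, const Particle& b,
                 float restLength, float ks, float kd) {
    Vec3 d = a.pos - b.pos;
    float len = d.length();
    if (len < 1e-6f) return { 0.0f, 0.0f, 0.0f };
    Vec3 dir = d * (1.0f / len);
    float stretch = len - restLength;            // how far past rest length
    float relVel = (a.vel - b.vel).dot(dir);     // separation speed along dir
    return dir * (-(ks * stretch) - kd * relVel);
}
```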

For the actual assignment, we were only required to support a single jello interacting against boxes, spheres, cylinders, and the ground. However, I think basic primitives are a tad boring… so I went ahead and integrated mesh collisions as well. The mesh collision stuff is actually using the same OBJ mesh system and KD-Tree system that I am using for my pathtracer! I am planning on cleaning up my OBJ/KD-Tree system and releasing it on Github or something soon, as I think I will still find even more uses for it in graphics projects.

Of course, a natural extension of mesh support is jello-on-jello interaction, which is why I call my simulator “multijello” instead of just singular jello. For jello-on-jello, my approach is to update one jello at a time, and for each jello, treat all other jellos in the simulation as just more OBJ meshes. This solution yields pretty good results, although some interpenetration happens if the time step is too large or if jello meshes are too sparse.
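Structurally (and this is just the shape of the loop with stand-in types, not my actual classes), the multi-jello update looks something like this:

```cpp
// Step each jello one at a time; while stepping jello i, hand it the other
// jellos' surface meshes as if they were ordinary OBJ collision objects.
#include <vector>

struct Mesh {};                                         // stand-in for my OBJ mesh class
struct Jello {
    Mesh currentSurfaceMesh() const { return {}; }      // stand-in
    void step(float dt, const std::vector<Mesh>& obstacles) {}  // stand-in
};

void stepAllJellos(std::vector<Jello>& jellos, float dt) {
    for (size_t i = 0; i < jellos.size(); ++i) {
        std::vector<Mesh> obstacles;
        for (size_t j = 0; j < jellos.size(); ++j)
            if (j != i) obstacles.push_back(jellos[j].currentSurfaceMesh());
        jellos[i].step(dt, obstacles);  // collide against everything else
    }
}
```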

Here’s a video showcasing some things my jello simulator can do:

Pathtracer Time

This semester I am setting out on an independent study under the direction of Joe Kider to build a pathtracer (obviously inspired by my friend and fellow DMD student Peter Kutz). Global illumination rendering techniques are becoming more and more relevant in industry today, as hardware performance over the past few years has reached a point where GI in commercial productions is no longer prohibitively expensive. Some houses like Sony Imageworks have already moved to full GI renderers like Arnold, while other studios like Pixar are in the process of adopting GI-based renderers or extending their existing renderers to support GI lighting. This industry move, coupled with the fact that GI quite simply produces very pretty results, sparked my initial interest in GI techniques like pathtracing. Having built a basic raytracer last semester, I decided in typical over-confident style: “how hard could it be?”

Here’s my project abstract:

Both path tracing and bidirectional scatter distribution functions (BSDFs) are ideas that have existed within the field of computer graphics for many years and have seen numerous implementations in a variety of rendering packages. Similarly, creating images of convincing plant life is a technical challenge for which a host of solutions now exist. However, achieving dynamic plant effects, such as the change of a plant’s coloring during the transition from summer to fall, is a task that to date has mostly been accomplished using procedural techniques and various compositing tricks.

The goal of this project is to build a path tracing based renderer designed specifically to facilitate dynamic plant effects with a more physically based approach, by introducing a new time component into the existing bidirectional scatter distribution model. By allowing BSDFs to vary not only over space but also over time, plant effects such as leaf decay could be achieved through shaders whose appearances are driven by physically based mathematical models instead of procedural techniques. In other words, this project has two main prongs: develop a robust path tracer with at least basic functionality, and then develop and implement a time-dependent BSDF model within the path tracer.

…and here’s some background that I wrote up for my proposal…

1. INTRODUCTION

Efficiently rendering convincing images with direct and indirect lighting has been a major problem in the field of computer graphics since the field’s very inception, as convincingly realistic graphics in games and movies depend upon lighting that can accurately mimic that of reality. Known generally as global illumination, the indirect lighting problem has in the past decade seen a number of solutions, such as path tracing and photon mapping, that can generate convincingly realistic images with reasonable computational resource consumption and efficiency. One of the key discoveries that enabled the development of modern global illumination techniques is the concept of bidirectional scattering distribution functions, or BSDFs. Developed as a superset and generalization of two other concepts, bidirectional reflectance distribution functions (BRDFs) and bidirectional transmittance distribution functions (BTDFs), a BSDF is a general mathematical function that describes how light is scattered by a surface, given the material properties of that surface. BSDFs are useful today for representing the material properties of an object at a single point in time; however, in reality material properties can change and morph over time, as exemplified by the natural phenomenon of leaf color changes from summer to fall. This project will attempt to build a prototype of a path tracing renderer with a BSDF model modified to include an additional time component, allowing material properties to change over time in a way representative of how material properties change over time in reality. The hope is that such a renderer will prove useful in future attempts to recreate natural phenomena, such as leaf decay, using physically based models.

…and the actual goal of the project…

1.1 Design Goals

The project’s goal is to develop a reasonably robust and efficient path tracing renderer with a BSDF model modified to include an additional time component. In order to prove the feasibility of such a modified BSDF model, the end goal is to be able to use the renderer to produce images of plant life with changing surface material properties, in addition to standard test images, such as Cornell box tests, that validate the functionality of the underlying basic path tracer.

…and finally, what I’m hoping I’ll actually be able to produce at the end of this independent study:

1.2 Project’s Proposed Features and Functionality

The proposed renderer should allow a user to load a scene with an arbitrary number of lights, materials, and objects, and render out a realistic, global illumination based image. The renderer should be able to render implicitly defined objects such as spheres and cubes, in addition to meshes defined in the .obj format. The renderer should also allow users to specify changes in object/light/camera transformations over time, in addition to changes in materials and BSDFs over time, and render out a series of frames showing the scene at various points in time. A graphical interface would be a nice additional feature, but is not a priority of this project.
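To make the time-dependent BSDF idea a bit more concrete, here is a very early, purely speculative sketch of the shape such an interface might take; none of this is an existing API or a finished design, just the extra time parameter I’m proposing:

```cpp
// Speculative sketch: a BSDF evaluation that takes a time parameter, so the
// material response can vary over the course of a sequence (e.g. t could
// parameterize a leaf's transition from summer to fall).
struct Vec3 { float x, y, z; };
struct Color { float r, g, b; };

class TimeVaryingBSDF {
public:
    virtual ~TimeVaryingBSDF() = default;
    // Ratio of light scattered from incoming direction wi to outgoing
    // direction wo, evaluated at surface time t.
    virtual Color evaluate(const Vec3& wi, const Vec3& wo, float t) const = 0;
};
```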

I’ll be posting at least weekly updates to this blog showing my progress. In my next post, I’ll go over some of the papers and sources Joe gave me to look over and explain some of the basic mechanics of how a pathtracer works. Apologies to the casual reader for this particular post being extremely text-heavy; I shall have images to show soon!

Basic Raytracer and Fun with KD-Trees

The last assignment of the year for CIS460/560 (I’m still not sure what I’m supposed to call that class) is the dreaded RAYTRACER ASSIGNMENT.

The assignment is actually pretty straightforward: implement a recursive, direct lighting only raytracer with support for Blinn-Phong shading and support for basic primitive shapes (spheres, boxes, and polygon extrusions). In other words, pretty much a barebones implementation of the original Turner Whitted raytracing paper.

I’ve been planning on writing a global-illumination renderer (perhaps based on pathtracing or photon mapping?) for a while now, so my own personal goal with the raytracer project was to use it as a testbed for some things that I know I will need for my GI renderer project. With that in mind, I decided from the start that my raytracer should support rendering OBJ meshes and include some sort of acceleration system for OBJ meshes.

The idea behind the acceleration system goes like this: in a raytracer, one obviously needs to cast rays into the scene and track how they bounce around to get a final image. That means every ray needs to be intersection-tested against the objects in the scene in order to determine which object it hits. Intersection testing against mathematically defined primitives is simple, but OBJ meshes present more of a problem; since an OBJ mesh is composed of a bunch of triangles or polygons, the naive way to intersection test against an OBJ mesh is to check for ray intersections with every single polygon inside the mesh. This naive approach gets extremely expensive extremely quickly, so a better approach is to use some sort of spatial data structure to quickly figure out which polygons are within the vicinity of the ray and therefore actually need intersection testing.

After talking with Joe and trawling around on Wikipedia for a while, I picked a KD-Tree as my spatial data structure for accelerated mesh intersection testing. I won’t go into the details of how KD-Trees work, as the Wikipedia article does a better job of it than I ever could. I will note, however, that the main resources I ended up pulling information from while looking up KD-Tree stuff were Wikipedia, Jon McCaffrey’s old CIS565 slides on spatial data structures, and the fantastic PBRT book that Joe pointed me towards.

Implementing the KD-Tree for the first time took me the better part of two weeks, mainly because I was misunderstanding how the surface area splitting heuristic works. Unfortunately, I probably can’t post actual code for my raytracer, since this is a class assignment that will be repeated in future incarnations of the class. However, I can show images!
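The surface area heuristic itself is standard textbook material, though, so here’s the basic cost estimate in generic form (with illustrative constants, nothing from my actual assignment code): the estimated cost of a split is one traversal step plus the cost of intersecting each child’s polygons, weighted by the probability that a ray passing through the parent also passes through that child, which is just the ratio of their surface areas.

```cpp
// Generic surface-area-heuristic cost for a candidate split.
float sahCost(float parentArea, float leftArea, float rightArea,
              int leftCount, int rightCount,
              float traversalCost = 1.0f, float intersectCost = 2.0f) {
    float pLeft = leftArea / parentArea;    // probability a ray visits the left child
    float pRight = rightArea / parentArea;  // probability a ray visits the right child
    return traversalCost
         + pLeft * leftCount * intersectCost
         + pRight * rightCount * intersectCost;
}
```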

The KD-Tree meant I could render meshes in a reasonable amount of time, so I rendered an airplane:

The airplane took about a minute or so to render, which got me wondering how well my raytracer would work if I threw the full 500000+ poly Stanford Dragon at it. This render took about five or six minutes to finish (without the KD-Tree in place, this same image takes about 30 minutes to render):

Of course, the natural place to go after one dragon is three dragons. Three dragons took about 15 minutes to render, which is pretty much exactly a three-fold increase over one dragon. That means my renderer’s performance scales more or less linearly, which is good.

For fun, and because I like space shuttles, here is a space shuttle. Because the space shuttle has a really low poly count, this image took under a minute to render:

For reflections, I took a slightly different approach from the typical recursive method. The normal recursive approach to a raytracer is to begin with one ray and trace that ray completely through recursion to its recursion depth limit before moving onto the next pixel and ray. However, such an approach might not actually be ideal in a GI renderer. For example, from what I understand, in pathtracing a better approach is to trace everything iteratively; that is, trace the first bounce for all rays and store where the rays are, then trace the second bounce for all rays, then the third, and so on and so forth. Basically, such an approach allows one to set an unlimited trace depth and just let the renderer trace and trace and trace until one stops the renderer, but the corresponding cost of such a system is slightly higher memory usage, since ray positions need to be stored from the previous iteration.
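In generic form (with stand-in types and a stubbed-out trace step, not my raytracer’s actual interfaces), the iterative structure looks something like this:

```cpp
// Advance every active ray by one bounce per pass; rays that terminate simply
// drop out of the active list, and the loop can run for as many bounces as
// you like.
#include <vector>

struct Ray { /* origin, direction, accumulated color, etc. */ };

// Stand-in for the real trace step: intersect the scene, shade, and (maybe)
// produce a bounced ray. Returns false when the ray terminates.
bool traceOneBounce(const Ray& in, Ray& out) { (void)in; (void)out; return false; }

void traceIteratively(std::vector<Ray> active, int maxBounces) {
    for (int bounce = 0; bounce < maxBounces && !active.empty(); ++bounce) {
        std::vector<Ray> next;
        next.reserve(active.size());
        for (const Ray& r : active) {
            Ray bounced;
            if (traceOneBounce(r, bounced))
                next.push_back(bounced);  // keep the bounced ray for the next pass
        }
        active.swap(next);  // only the previous pass's rays need to be stored
    }
}
```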

Adding reflections did impact my render times pretty dramatically. I have a suspicion that both my intersection code and my KD-Tree are actually far from ideal, but I’ll have to look at that later. Here’s a test with reflections with the airplane:

…and here is a test with three reflective dragons. This image took foooorrreeevvveeeerrrr to render…. I actually do not know how long, as I let it run overnight:

I also added support for multiple lights with varying color support:

Here are some more images rendered with my raytracer:

In conclusion, the raytracer was a fun final project. I don’t think my raytracer is even remotely suitable for actual production use, and I don’t plan on using it for any future projects (unlike my volumetric renderer, which I think I will definitely be using in the future). However, I will definitely be using stuff I learned from the raytracer in my future GI renderer project, such as the KD-tree stuff and the iterative raytracing method. I will probably have to give my KD-tree a total rewrite, since it is really really far from optimal here, so that is something I’ll be starting over winter break! Next stop, GI renderer, CIS563, and CIS565!

As an amusing parting note, here is the first proper image I ever got out of my raytracer. Awww yeeeaaahhhhhh: