First Pathtraced Image!

Behold, the very first image produced using my pathtracer!

Granted, the actual image is not terribly interesting, just a cube inside a standard Cornell box setup, but it was rendered entirely using my own pathtracer! Aside from being converted from a BMP file to a PNG, this render has not been modified in any way whatsoever outside of my renderer (which I have yet to name). This render is the result of a thousand iterations. Here are some comparisons of the variance in the render at various iteration levels (click through to the full size versions to get an actual sense of the variance levels):

Upper Left: 1 iteration. Upper Right: 5 iterations. Lower Left: 10 iterations. Lower Right: 15 iterations.

Upper Left: 1 iteration. Upper Right: 250 iterations. Lower Left: 500 iterations. Lower Right: 750 iterations.

Each iteration took about 15 seconds to finish.

Unfortunately, I have not been able to move as quickly with this project as I would like, due to other schoolwork and TAing for CIS277. Nonetheless, here’s where I am right now:

Currently the renderer is in a very basic, primitive state. Instead of extending my raytracer, I’ve opted to start completely from scratch. The only piece of code brought over from the raytracer was the OBJ mesh system I wrote, since that was written to be fairly modular anyway. Right now my pathtracer works entirely through indirect lighting and only supports diffuse surfaces… like I said, very basic! Adding direct lighting should speed up render convergence, especially for scenes with small light sources. Also, right now the pathtracer only traces paths in a single direction, from the camera into the scene… adding bidirectional pathtracing should lead to another performance boost.
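To give a sense of what that means in practice, here’s a rough sketch of the kind of diffuse-only, indirect-lighting-only loop I’m describing. The types and helpers (Vec3, Ray, Hit, Scene, cosineSampleHemisphere) are illustrative placeholders, not my renderer’s actual API:

```cpp
// Sketch of a diffuse-only, indirect-lighting-only path trace for one camera ray.
// Vec3, Ray, Hit, Scene, and cosineSampleHemisphere are placeholder names for
// illustration, not the renderer's actual classes.
Vec3 tracePath(const Scene& scene, Ray ray, int maxDepth) {
    Vec3 radiance(0.0f, 0.0f, 0.0f);
    Vec3 throughput(1.0f, 1.0f, 1.0f);
    for (int depth = 0; depth < maxDepth; ++depth) {
        Hit hit;
        if (!scene.intersect(ray, hit)) break;              // ray escaped the scene
        radiance += throughput * hit.material.emission;     // light is only picked up by hitting it
        // Diffuse-only: bounce in a cosine-weighted direction around the surface normal.
        Vec3 newDir = cosineSampleHemisphere(hit.normal);
        // With cosine-weighted sampling, the cosine term and the pdf cancel, leaving the albedo.
        throughput = throughput * hit.material.albedo;
        ray = Ray(hit.position + hit.normal * 1e-4f, newDir);
    }
    return radiance;
}
```

Since there is no direct light sampling, a path only picks up energy when it happens to hit the light, which is exactly why small light sources converge so slowly with this approach.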

I’m still working on rewriting my KD-tree system; that should be finished within the next few days.

Something that is fairly high on my list of things to do right now is redesigning the architecture for my renderer… right now, for each iteration, the renderer traces a path through a pixel all the way to its recursion depth before moving on to the next pixel. As soon as possible I want to move the renderer to an iterative (as opposed to recursive) accumulated approach for each iteration (slightly confusing terminology; here I mean iteration as in each render pass), which, oddly enough, is something that my old raytracer already does. I’ve already started moving towards the accumulated approach; right now, I store the first set of raycasts from the camera and reuse those rays in each iteration.
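The accumulation part is conceptually simple: each pass traces one new sample per pixel using the stored camera rays and folds it into a running average. A minimal sketch, reusing the hypothetical tracePath from above:

```cpp
#include <vector>

// Sketch of the accumulated-iteration idea: each pass reuses the stored camera rays
// and folds one new sample per pixel into a running average. Names are illustrative.
void renderPass(const Scene& scene, const std::vector<Ray>& primaryRays,
                std::vector<Vec3>& accum, int passNumber, int maxDepth) {
    for (size_t i = 0; i < primaryRays.size(); ++i) {
        Vec3 sample = tracePath(scene, primaryRays[i], maxDepth);
        // accum[i] already holds the mean of (passNumber - 1) samples, so nudge it
        // toward the new sample by 1/passNumber to keep a running average.
        accum[i] += (sample - accum[i]) / float(passNumber);
    }
}
```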

One cool thing that storing the initial raycast allows me to do is generate a z-depth version of the render for “free”:
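The depth pass is essentially just a remap of the stored first-hit distances into a displayable range; something roughly like this (illustrative names, and assuming rays that miss everything have already been clamped to the far distance):

```cpp
#include <algorithm>
#include <vector>

// Remap stored first-hit distances into [0, 1] for a z-depth image.
// Assumes rays that missed everything were clamped to the far distance beforehand.
void writeZDepth(const std::vector<float>& hitDistances, std::vector<float>& zDepth) {
    float nearest  = *std::min_element(hitDistances.begin(), hitDistances.end());
    float farthest = *std::max_element(hitDistances.begin(), hitDistances.end());
    for (size_t i = 0; i < hitDistances.size(); ++i) {
        zDepth[i] = (hitDistances[i] - nearest) / (farthest - nearest + 1e-6f);
    }
}
```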

Okay, hopefully by my next post I’ll have the KD-tree rewrite done!

Smoke Sim- Preconditioning and Huge Grids

I have added preconditioning to my smoke simulator! For the preconditioner, I am using Incomplete Cholesky, which is the preconditioner recommended in chapter 4 of the Bridson Fluid Course Notes. I’ve also debugged my vorticity implementation, so the simulation should produce more interesting/stable vortices now.

The key reason for implementing the preconditioner is simple: speed. Faster convergence means less time per pressure solve, which in turn makes larger grids practical. Because of that speed increase, I can now run my simulations on 3D grids.
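For context, the preconditioner slots into the conjugate gradient solve for pressure. Here’s a rough skeleton of the preconditioned conjugate gradient loop described in the Bridson notes; applyA, applyPreconditioner, dot, axpy, and maxAbs are stand-ins for my actual sparse-matrix and vector routines:

```cpp
#include <vector>

// Skeleton of a preconditioned conjugate gradient solve for the pressure system A p = b.
// applyA multiplies by the sparse pressure matrix; applyPreconditioner approximately
// solves M z = r using the Incomplete Cholesky factor. All helpers are stand-ins.
void pcgSolve(std::vector<double>& p, const std::vector<double>& b,
              int maxIterations, double tolerance) {
    std::vector<double> r = b;                 // residual, since p starts at zero
    std::vector<double> z(r.size());
    applyPreconditioner(r, z);                 // z = M^-1 r
    std::vector<double> s = z;                 // search direction
    double sigma = dot(z, r);
    for (int iter = 0; iter < maxIterations; ++iter) {
        std::vector<double> q(s.size());
        applyA(s, q);                          // q = A s
        double alpha = sigma / dot(s, q);
        axpy(p,  alpha, s);                    // p += alpha * s
        axpy(r, -alpha, q);                    // r -= alpha * q
        if (maxAbs(r) < tolerance) return;     // converged
        applyPreconditioner(r, z);
        double sigmaNew = dot(z, r);
        double beta = sigmaNew / sigma;
        for (size_t i = 0; i < s.size(); ++i) s[i] = z[i] + beta * s[i];
        sigma = sigmaNew;
    }
}
```

The better the preconditioner approximates the A-matrix, the fewer of those iterations each solve needs, which is where the speedup comes from.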

In previous years, the CIS563 smoke simulator framework usually hit a performance cliff at grids beyond around 50x50x50, but last year Peter Kutz managed to push his smoke simulator to 90x90x36 by implementing a sparse A-Matrix structure, as opposed to storing every single data point, including empty ones, for the grid. This year’s smoke simulation framework was updated to include some of Peter’s improvements, and so Joe reckons that we should be able to push our smoke simulation grids pretty far. I’ve been scaling up starting from 10x10x10, and now I’m at 100x100x50:

This simulation took about 24 hours to run on a 2008 MacBook Pro with a 2.8 GHz Core 2 Duo, but that is actually pretty good for fluid simulation! According to my rather unscientific estimates, the simulation would take about 4 or 5 days without the preconditioner, and even longer without the sparse A-Matrix. I bet I can still push this further, and I’m starting to think about multithreading the simulation with OpenMP to get even more performance and even larger grids. We shall see.
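The OpenMP idea would probably start with something as simple as parallelizing the big per-cell loops (advection, buoyancy, and so on) across threads. A minimal sketch, with advectCell standing in for whatever the per-cell work ends up being:

```cpp
#include <omp.h>

// Possible first step for multithreading: split a per-cell loop across threads.
// advectCell is a stand-in for the per-cell work; each cell must only read old
// grid values and write its own new value for this to be safe.
void advectAll(int numCells) {
    #pragma omp parallel for
    for (int i = 0; i < numCells; ++i) {
        advectCell(i);
    }
}
```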

One more thing: rendering this thing. So far I have not been doing any fancy rendering, just using the default OpenGL render that our framework came with. However, I want to get this into my volumetric renderer at some point and maybe even try out the pseudo-black body stuff with it. Eventually I want to try rendering this out with my pathtracer too!

Smoke Simulation Basics!

For CIS563 (Physically Based Animation), our current assignment is to write a fluid simulator capable of simulating smoke inside of a box. For this assignment, we’re using a semi-Lagrangian approach based on Robert Bridson’s 2007 SIGGRAPH Course Notes on Fluid Simulation.

I won’t go into the nitty-gritty details of the math behind the simulation (for that, consult the Bridson notes), but I’ll give a quick summary. Basically, we start with a specialized grid structure called the MAC (marker and cell) grid, where each grid cell stores information relevant to the point in space the cell represents, such as density, velocity, temperature, etc. To update a cell, we pretend an imaginary particle carried the cell’s values into it: we use the velocity field to trace that particle backwards in time to the position it came from, and look up the values stored in the grid at that previous position. We then use that information to perform advection and projection, solving the resulting system with a preconditioned conjugate gradient solver.
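In code, the backtrace-and-sample step looks roughly like this (Grid, Vec3, cellCenter, sampleVelocity, and interpolate are illustrative names, not the framework’s actual API):

```cpp
// Semi-Lagrangian advection for one cell: trace an imaginary particle backwards
// from the cell center through the velocity field and sample the old grid there.
// Grid, Vec3, cellCenter, sampleVelocity, and interpolate are illustrative names.
float advectCellValue(const Grid& oldValues, const Grid& velocity,
                      int i, int j, int k, float dt) {
    Vec3 x = cellCenter(i, j, k);
    Vec3 u = sampleVelocity(velocity, x);
    Vec3 xPrev = x - dt * u;                 // where the particle came from
    return interpolate(oldValues, xPrev);    // trilinearly sample the old values there
}
```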

So far I have implemented density advection, projection, buoyancy (via temperature advection), and vorticity. For the integration scheme I’m just using basic forward Euler, which was the default for the framework we started with. Forward Euler seems stable enough for the smoke sim, but I might try to go ahead and implement RK4 later anyway, since I suspect RK4 won’t smooth out details as much as basic Euler.
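If I do swap in RK4, the change would essentially be in how the backtrace steps through the velocity field. A sketch of what that might look like, using the same illustrative helpers as the advection snippet above:

```cpp
// RK4 backtrace through the velocity field (stepping backwards in time),
// in place of the single Euler step in the advection sketch above.
Vec3 traceBackRK4(const Grid& velocity, const Vec3& x, float dt) {
    Vec3 k1 = sampleVelocity(velocity, x);
    Vec3 k2 = sampleVelocity(velocity, x - 0.5f * dt * k1);
    Vec3 k3 = sampleVelocity(velocity, x - 0.5f * dt * k2);
    Vec3 k4 = sampleVelocity(velocity, x - dt * k3);
    return x - (dt / 6.0f) * (k1 + 2.0f * k2 + 2.0f * k3 + k4);
}
```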

I’m still missing the actual preconditioner, so for now I’m only testing the simulation on a 2D grid; otherwise the simulation times would be really, really long.

Here is a test on a 100x100 2D grid!

Jello Sim Maya Integration

I ported my jello simulation to Maya!

Well, sort of.

Instead of building a full Maya plugin like my good friend Dan Knowlton did, I opted for a simpler approach: I write out the vertex positions for each jello cube for each time step to a giant text file, and then use a custom Python script in Maya to read the vertex positions from the text file and animate a cube inside of Maya. It is a bit hacky and not nearly as elegant as the full-Maya-plugin approach, but it works in a pinch.
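The export side of that hack is about as simple as it sounds: once per time step, the simulator dumps every jello vertex position to the text file that the Maya script later reads. A sketch (JelloVertex and the one-line-per-vertex layout are just for illustration, not my exact format):

```cpp
#include <cstdio>
#include <vector>

// Dump one simulation frame's worth of jello vertex positions to the shared text file.
// JelloVertex and the "frame N" / one-line-per-vertex layout are illustrative.
void writeFrame(std::FILE* file, int frame, const std::vector<JelloVertex>& vertices) {
    std::fprintf(file, "frame %d\n", frame);
    for (const JelloVertex& v : vertices) {
        std::fprintf(file, "%f %f %f\n", v.x, v.y, v.z);
    }
}
```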

I think being able to integrate my coding projects into artistic projects is very important, since at the end of the day, the main point of computer graphics is to be able to produce a good-looking image. As such, I thought putting some jello into my kitchen scene would be fun, so here is the result, rendered out with Vray (some day I want to replace Vray with my own renderer though!):

The rendering process I’m using isn’t perfect yet… the fact that the jello cubes are being simulated with relatively few vertices is extremely apparent in the above video, as can be seen in how angular the edges of the jello become when it wiggles. At the moment, I can think of two possible fixes: one, simply run the simulation with a higher vertex count, or two, render the jello as a subdivision surface with creased edges. Since the second option should in theory allow for better-looking renders without impacting simulation time, I think I will try the subdivision method first.

But for now, here are some pretty still frames:

Multijello Simulation

The first assignment of the semester for CIS563 is to write a jello simulator using a particle-mass-spring system. The basic jello system involves building a particle grid where all of the particles are connected using a variety of springs, such as bend and shear springs, and then applying forces across the spring grid. In order to step the entire simulation forward in time, we also have to implement a stable integration scheme, such as RK4. For each step forward in time, we have to do intersection tests for each particle against solid objects in the simulation, such as the ground plane or boxes or spheres.

The particle-mass-spring system we used is based directly on the Baraff/Witkin 2001 SIGGRAPH Physically Based Animation Course Notes.
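The heart of the whole system is just a damped spring force between pairs of particles; the same form is used for every spring type, just with different rest lengths and constants. A sketch (Particle, Vec3, and dot are illustrative placeholders, not my actual classes):

```cpp
// Damped spring force on particle a, from the spring connecting a and b.
// The same form applies to every spring type; only rest length and constants change.
// Particle, Vec3, and dot are illustrative placeholders.
Vec3 springForce(const Particle& a, const Particle& b,
                 float restLength, float stiffness, float damping) {
    Vec3 delta = b.position - a.position;
    float length = delta.length();
    Vec3 dir = delta / length;
    float stretch = length - restLength;                      // Hooke term
    float relVel = dot(b.velocity - a.velocity, dir);         // damping along the spring
    return (stiffness * stretch + damping * relVel) * dir;    // equal and opposite force acts on b
}
```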

For the actual assignment, we were only required to support a single jello interacting against boxes, spheres, cylinders, and the ground. However, I think basic primitives are a tad boring… so I went ahead and integrated mesh collisions as well. The mesh collision stuff is actually using the same OBJ mesh system and KD-tree system that I am using for my pathtracer! I am planning on cleaning up my OBJ/KD-tree system and releasing it on GitHub or something soon, as I think I will find even more uses for it in future graphics projects.

Of course, a natural extension of mesh support is jello-on-jello interaction, which is why I call my simulator “multijello” instead of just singular jello. For jello-on-jello, my approach is to update one jello at a time, and for each jello, treat all other jellos in the simulation as just more OBJ meshes. This solution yields pretty good results, although some interpenetration happens if the time step is too large or if jello meshes are too sparse.
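In pseudocode-ish C++, the update loop is roughly shaped like this (Jello, Mesh, and all member/helper names are illustrative stand-ins for my actual classes):

```cpp
#include <vector>

// Rough shape of the multijello step: each jello is updated in turn, treating
// every other jello's surface as just another static OBJ mesh for this step.
// Jello, Mesh, staticColliders, and the member functions are illustrative stand-ins.
void stepSimulation(std::vector<Jello>& jellos, float dt) {
    for (size_t i = 0; i < jellos.size(); ++i) {
        std::vector<const Mesh*> colliders = staticColliders();   // ground, boxes, spheres, OBJs
        for (size_t j = 0; j < jellos.size(); ++j) {
            if (j != i) colliders.push_back(&jellos[j].surfaceMesh());
        }
        jellos[i].integrateAndResolveCollisions(dt, colliders);
    }
}
```

Since each jello only sees the other jellos’ positions from the start of the step, a large time step can let them sink into each other before collisions get resolved, which is exactly the interpenetration issue mentioned above.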

Here’s a video showcasing some things my jello simulator can do:

Pathtracer Time

This semester I am setting out on an independent study under the direction of Joe Kider to build a pathtracer (obviously inspired by my friend and fellow DMD student Peter Kutz). Global illumination rendering techniques are becoming more and more relevant in industry today, as hardware performance in the past few years has begun to reach a point where GI in commercial productions is suddenly no longer unfeasibly expensive. Some houses like Sony Imageworks have already moved to full GI renderers like Arnold, while other studios like Pixar are in the process of adopting GI-based renderers or extending their existing renderers to support GI lighting. This industry move, coupled with the fact that GI quite simply produces very pretty results, sparked my initial interest in GI techniques like pathtracing. Having built a basic raytracer last semester, I decided in typical over-confident style: “how hard could it be?”

Here’s my project abstract:

Both path tracing and bidirectional scatter distribution functions (BSDFs) are ideas that have existed within the field of computer graphics for many years and have seen numerous implementations in a variety of rendering packages. Similarly, creating images of convincing plant life is a technical challenge that a host of solutions now exist for. However, achieving dynamic plant effects such as the change of a plant’s coloring during the transition from summer to fall is a task that to date has mostly been accomplished using procedural techniques and various compositing tricks.

The goal of this project is to build a path tracing based renderer that is designed specifically with the intent to facilitate achieving dynamic plant effects with a more physically based approach by introducing a new time component to the existing bidirectional scatter distribution model. By allowing BSDFs to vary over not only space but also over time, plant effects such as leaf decay could be achieved through shaders with appearances that are driven through physically based mathematical models instead of procedural techniques. In other words, this project has two main prongs: develop a robust path tracer with at least basic functionality, and then develop and implement a time-dependent BSDF model within the path tracer.

…and here’s some background that I wrote up for my proposal…

1. INTRODUCTION

Efficiently rendering convincing images with direct and indirect lighting has been a major problem in the field of computer graphics since the field’s very inception, as convincingly realistic graphics in games and movies depend upon lighting that can accurately mimic that of reality. Known generally as global illumination, the indirect lighting problem has in the past decade seen a number of solutions such as path tracing and photon mapping that can generate convincingly realistic images with reasonable computational resource consumption and efficiency. One of the key discoveries that enabled the development of modern global illumination techniques is the concept of Bidirectional Scattering Distribution Functions, or BSDFs. Developed as a superset and generalization of two other concepts known as bidirectional reflectance distribution functions (BRDFs) and bidirectional transmittance distribution functions (BTDFs), a BSDF is a general mathematical function that describes how light is scattered by a certain surface, given the material properties of the surface. BSDFs are useful today for representing the material properties of an object at a single point in time; however, in reality material properties can change and morph over time, as exemplified by the natural phenomenon of leaf color changes from summer to fall. This project will attempt to build a prototype of a path tracing renderer with a BSDF model modified to include an additional time component to allow for material properties to change over time in a way representative of how material properties change over time in reality. The hope is that such a renderer will prove to be useful in future attempts to recreate natural phenomena using physically based models, such as leaf decay.

…and the actual goal of the project…

1.1 Design Goals

The project’s goal is to develop a reasonably robust and efficient path tracing renderer with a BSDF model modified to include an additional time component. In order to prove the feasibility of such a modified BSDF model, the end goal is to be able to use the renderer to produce images of plant life with changing surface material properties, in addition to standard test images such as Cornell box tests that validate the functionality of the underlying basic path tracer.

…and finally, what I’m hoping I’ll actually be able to produce at the end of this independent study:

1.2 Project’s Proposed Features and Functionality

The proposed renderer should allow a user to load a scene with an arbitrary number of lights, materials, and objects and render out a realistic, global illumination based render. The renderer should be able to render implicitly defined objects such as spheres and cubes in addition to meshes defined in the .obj format. The renderer should also allow users to specify changes in object/light/camera transformations over time in addition to changes in materials and BSDFs over time and render out a series of frames showing the scene at various points in time. A graphical interface would be a nice additional feature, but is not a priority of this project.

I’ll be posting at least weekly updates to this blog showing my progress. In my next post, I’ll go over some of the papers and sources Joe gave me to look over and explain some of the basic mechanics of how a pathtracer works. Apologies to the casual reader for this particular post being extremely text heavy; I shall have images to show soon!