April 1st CIS563 Progress Summary: Framework Improvements and Bounding Volumes

Here’s the first progress update/blog digest for the MultiFluids project!

Dan and I started by tearing our starting framework down to its core, leaving just the core solver intact, and then rebuilt the base code with our own custom additions. From there, we started building some of the basic features our project will require!

Here are the posts for this update:

  1. Framework Improvements and Particles with Properties: Tearing the base code down to the ground and rebuilding it better, faster, and with more features
  2. Bounding Volumes & Lesson 1: Don’t just assume base code is perfect: Dan discovers some flaws in the base code!
  3. Multiple Arbitrary Bounding Volumes: All-important object interaction
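
For a rough sense of what the bounding volume work in item 3 involves, here’s a generic sketch of testing a particle against several solid boundaries at once (this illustrates the general idea only, not our actual MultiFluids code; every name here is hypothetical):

    #include <vector>
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    // A bounding volume exposed through a signed distance: negative
    // inside the solid, positive outside in the fluid region. Boxes,
    // spheres, and mesh-based volumes would each implement this.
    struct BoundingVolume {
        virtual float signedDistance(const Vec3& p) const = 0;
        virtual ~BoundingVolume() = default;
    };

    // With multiple arbitrary volumes, a particle collides if it is
    // inside ANY of them, so take the minimum signed distance (a union).
    float unionSignedDistance(const std::vector<const BoundingVolume*>& volumes,
                              const Vec3& p) {
        float d = 1e30f;
        for (const BoundingVolume* v : volumes)
            d = std::min(d, v->signedDistance(p));
        return d;   // negative => the particle needs a collision response
    }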

A frame from one of our test videos:

Check the posts for details and videos!

April 1st CIS565 Progress Summary: Camera and Pathtracing

Here’s the first progress summary/blog digest for the GPU Pathtracer project!

Over the past few days, Peter and I established our framework, got random number generation working on the GPU, built an accumulator, worked out parallelized camera ray projection, implemented sphere intersection tests, and produced a basic path-traced image!
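
As a taste of the sphere intersection work, here’s a minimal C++ sketch of the standard quadratic ray-sphere test (the actual project code is CUDA and lives in the repo; the names here are hypothetical):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Returns the nearest positive ray parameter t at which
    // origin + t*dir hits the sphere, or a negative value on a miss.
    float intersectSphere(const Vec3& origin, const Vec3& dir,
                          const Vec3& center, float radius) {
        Vec3 oc = sub(origin, center);
        float a = dot(dir, dir);
        float b = 2.0f * dot(oc, dir);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - 4.0f * a * c;
        if (disc < 0.0f) return -1.0f;           // no real roots: miss
        float sqrtDisc = std::sqrt(disc);
        float t = (-b - sqrtDisc) / (2.0f * a);  // nearer root first
        if (t > 0.0f) return t;
        t = (-b + sqrtDisc) / (2.0f * a);        // origin may be inside
        return (t > 0.0f) ? t : -1.0f;
    }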

Here are the posts for this update:

  1. Random Number Generation: Fun with parallelized random number generators and seeding
  2. First Rays on the GPU: Parallel raycasting!
  3. Accumulating Iterations: The heart of any Monte Carlo based renderer
  4. We Have Path Tracing: First working renders!

Here’s an image from our very first working render! More soon!

CIS563/CIS565 Final Project GitHub Repos!

For both MultiFluids and the GPU Pathtracer, we will be making our source code accessible to all on GitHub!

Of course, commercial coding projects and whatnot have very good reasons for keeping their source code locked down and proprietary, but open source is something I very strongly believe in. Open code lets other people see what one is doing and offer feedback and suggestions for improvement, and it also lets people interested in similar projects learn from and build on the work. Everybody wins!

The MultiFluids repository can be found here: https://github.com/betajippity/MultiFluids

The GPU Pathtracer repository can be found here: https://github.com/peterkutz/GPUPathTracer/

…and of course, the relevant blog posts:

GPU Pathtracer: http://gpupathtracer.blogspot.com/2012/03/github-repository.html

MultiFluids: http://chocolatefudgesyrup.blogspot.com/2012/03/github-and-windowsosx.html

CIS563/CIS565 Final Projects: Multiple Interacting Fluids and GPU Pathtracing

Over the next month and a half, I will be working on a pair of final projects for two of my classes: CIS565 (GPU Programming, taught by Patrick Cozzi) and CIS563 (Physically Based Animation, taught by Joe Kider).

For CIS563, I will be teaming up with my fellow classmate and good friend Dan Knowlton to develop a liquid simulator capable of simulating multiple fluids interacting with each other. Dan is without a doubt one of the best in our class and easily my equal or superior in all things graphics, so working with him should be a lot of fun. Our project will be based primarily on the paper “Multiple Interacting Fluids” by Losasso et al., and as a starting point we will be using Chris Batty’s Fluid 3D framework.

For CIS565, I will be working with my fellow Pixarian and friend Peter Kutz, who is something of a physically based rendering titan at Penn. Working with Peter should be a very interesting and exciting learning experience. Peter and I will be developing a CUDA-based GPU pathtracer with the goal of generating convincing photorealistic images extremely rapidly. We will be developing our pathtracer from scratch, although we will obviously draw inspiration from both Peter’s Photorealizer project and my own CPU pathtracer project.

For both projects, we will be keeping blogs where we will post development updates, so I won’t post too many development details to this personal blog. Instead, I’m planning to post a weekly digest of progress on both projects with links to interesting highlights on the project blogs.

Dan and I will be blogging at http://chocolatefudgesyrup.blogspot.com/. We’ve titled our project “Chocolate Syrup” for two reasons: firstly, Dan likes to codename his projects after types of confections, and secondly, chocolate syrup is one type of highly viscous fluid we aim for our simulator to be able to handle!

Peter and I will be blogging at http://gpupathtracer.blogspot.com/. For now we have decided to call our project “Peter and Karl’s GPU Pathtracer”, for obvious reasons.

Details for each project can be found in the first post of each blog, which is the project proposal.

Multiple Interacting Fluids Proposal: http://chocolatefudgesyrup.blogspot.com/2012/03/project-proposal.html

GPU Pathtracer Proposal: http://gpupathtracer.blogspot.com/2012/03/project-proposal.html

Both of these projects should be very very cool, and I’ll be posting often to both development blogs!

Pathtracer with KD-Tree

I have finished my KD-tree rewrite! My new KD-tree implements the surface area heuristic (SAH) for finding optimal splitting planes, and stops splitting once a node either falls below a sufficiently small surface area or contains a sufficiently small number of elements. Basically, very standard KD-tree stuff, but this time, properly implemented. As a result, I can now render meshes much more quickly than before.
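
For the curious, the termination logic looks roughly like the following (a minimal C++ sketch; the names and threshold values are hypothetical, not the actual numbers in my renderer). The SAH itself scores each candidate splitting plane by the children’s surface areas weighted by their triangle counts, and the build recurses until a node qualifies as a leaf:

    #include <cstddef>
    #include <vector>

    struct BoundingBox {
        float minX, minY, minZ, maxX, maxY, maxZ;
        float surfaceArea() const {
            float w = maxX - minX, h = maxY - minY, d = maxZ - minZ;
            return 2.0f * (w * h + w * d + h * d);
        }
    };

    struct KdNode {
        BoundingBox bounds;
        std::vector<int> triangleIndices;   // triangles overlapping this node
    };

    // Hypothetical thresholds; in practice these are tuning parameters.
    const float kMinSurfaceArea = 0.01f;
    const std::size_t kMaxLeafTriangles = 8;

    // Stop subdividing when a node is small enough or sparse enough.
    // Otherwise a SAH builder picks the splitting plane minimizing:
    //   traversalCost
    //     + (leftArea / parentArea) * leftCount * intersectCost
    //     + (rightArea / parentArea) * rightCount * intersectCost
    bool shouldMakeLeaf(const KdNode& node) {
        return node.bounds.surfaceArea() < kMinSurfaceArea ||
               node.triangleIndices.size() <= kMaxLeafTriangles;
    }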

Here’s a cow in a Cornell box. Each iteration of the cow render took about 3 minutes, which is a huge improvement over my old raytracer but still leaves a lot of room to improve further:

…and of course, the obligatory Stanford Dragon test. Each iteration took about 4 minutes for both of these images (I let the second one converge a bit longer than the first), and I made these renders a bit larger than the cow one:

So! Of course the KD-tree could still use even more work, but for now it works well enough that I think I’m going to start focusing on other things, such as more interesting BSDFs and other performance enhancements.

First Pathtraced Image!

Behold, the very first image produced using my pathtracer!

Granted, the actual image is not terribly interesting: just a cube inside a standard Cornell box setup. But it was rendered entirely using my own pathtracer! Aside from being converted from a BMP file to a PNG, this render has not been modified in any way whatsoever outside of my renderer (which I have yet to name). This render is the result of a thousand iterations. Here are some comparisons of the variance in the render at various iteration levels (click through to the full size versions to get an actual sense of the variance levels):

Upper Left: 1 iteration. Upper Right: 5 iterations. Lower Left: 10 iterations. Lower Right: 15 iterations.

Upper Left: 1 iteration. Upper Right: 250 iterations. Lower Left: 500 iterations. Lower Right: 750 iterations.
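
As a point of reference (this is standard Monte Carlo convergence behavior, not a measurement from my renderer), the noise in an unbiased pathtracer falls off as

    error ∝ 1/√N

where N is the number of accumulated iterations, so going from 1 to 250 iterations should reduce noise by roughly a factor of 16, which matches how slowly the later comparisons clean up.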

Each iteration took about 15 seconds to finish.

Unfortunately, I have not been able to move as quickly with this project as I would like, due to other schoolwork and TAing for CIS277. Nonetheless, here’s where I am right now:

Currently the renderer is in a very basic, primitive state. Instead of extending my raytracer, I’ve opted to start completely from scratch. The only piece of code brought over from the raytracer was the OBJ mesh system I wrote, since that was written to be fairly modular anyway. Right now my pathtracer works entirely through indirect lighting and only supports diffuse surfaces… like I said, very basic! Adding direct lighting should speed up render convergence, especially for scenes with small light sources. Also, right now the pathtracer only does unidirectional pathtracing from the camera into the scene… adding bidirectional pathtracing should lead to another performance boost.
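
To make “entirely indirect lighting with diffuse surfaces only” concrete, here is a minimal C++ sketch of that kind of radiance estimate (the names are hypothetical and intersectScene is left as a stand-in for the scene intersection routine; this illustrates the technique, not my actual code). With cosine-weighted hemisphere sampling, the cosine term and the 1/π in the diffuse BRDF cancel against the sampling pdf, leaving just the albedo as the per-bounce throughput:

    #include <cmath>
    #include <cstdlib>

    struct Vec3 { float x, y, z; };
    struct Ray { Vec3 origin, dir; };
    struct Hit { bool valid; Vec3 point, normal, albedo, emitted; };

    // Stand-in for the scene intersection routine.
    Hit intersectScene(const Ray& r);

    // Random direction on the unit sphere via rejection sampling.
    Vec3 randomUnitVector() {
        float x, y, z, len2;
        do {
            x = 2.0f * std::rand() / RAND_MAX - 1.0f;
            y = 2.0f * std::rand() / RAND_MAX - 1.0f;
            z = 2.0f * std::rand() / RAND_MAX - 1.0f;
            len2 = x * x + y * y + z * z;
        } while (len2 > 1.0f || len2 < 1e-6f);
        float len = std::sqrt(len2);
        return { x / len, y / len, z / len };
    }

    // Cosine-weighted direction about the normal: the normal plus a
    // random unit vector, renormalized (the classic Lambertian trick).
    Vec3 sampleHemisphere(const Vec3& n) {
        Vec3 u = randomUnitVector();
        Vec3 d = { n.x + u.x, n.y + u.y, n.z + u.z };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return { d.x / len, d.y / len, d.z / len };
    }

    // Indirect-only, diffuse-only: light is picked up solely when a path
    // happens to run into an emitter, which is exactly why small lights
    // converge slowly without direct light sampling.
    Vec3 trace(const Ray& r, int depth, int maxDepth) {
        if (depth >= maxDepth) return {0.0f, 0.0f, 0.0f};
        Hit h = intersectScene(r);
        if (!h.valid) return {0.0f, 0.0f, 0.0f};
        Vec3 bounced = trace({h.point, sampleHemisphere(h.normal)},
                             depth + 1, maxDepth);
        return { h.emitted.x + h.albedo.x * bounced.x,
                 h.emitted.y + h.albedo.y * bounced.y,
                 h.emitted.z + h.albedo.z * bounced.z };
    }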

I’m still working on rewriting my KD-tree system; that should be finished within the next few days.

Something that is fairly high on my list of things to do right now is redesigning the architecture of my renderer. Right now, for each iteration, the renderer traces a path through a pixel all the way to its recursion depth before moving on to the next pixel. As soon as possible I want to move the renderer to an iterative (as opposed to recursive) accumulated approach for each iteration (slightly confusing terminology; here I mean iteration as in each render pass), which, oddly enough, is something my old raytracer already does. I’ve already started moving towards the accumulated approach; right now, I store the first set of raycasts from the camera and reuse those rays in each iteration.
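
The accumulation half of that plan is conceptually simple: keep a running per-pixel sum of radiance across passes and display the average, so noise visibly melts away over time without storing every pass. A minimal sketch (hypothetical names, not my actual code):

    #include <cstddef>
    #include <vector>

    struct AccumulationBuffer {
        std::vector<float> sum;   // running per-channel radiance sum
        int passes = 0;

        explicit AccumulationBuffer(std::size_t channels)
            : sum(channels, 0.0f) {}

        // Fold in one render pass worth of radiance estimates.
        void addPass(const std::vector<float>& pass) {
            for (std::size_t i = 0; i < sum.size(); ++i)
                sum[i] += pass[i];
            ++passes;
        }

        // The converged-so-far value for one channel.
        float average(std::size_t i) const { return sum[i] / passes; }
    };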

One cool thing that storing the initial raycasts allows me to do is generate a z-depth version of the render for “free”:
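
Since the primary-hit distances are already cached, the z-depth pass is just a remapping of those distances to grayscale. A tiny hypothetical sketch (nearClip and farClip are made-up parameters):

    #include <algorithm>

    // Map a cached primary-hit distance t to grayscale in [0, 1], with
    // nearer hits brighter. A negative t marks a primary ray that missed.
    float depthToGray(float t, float nearClip, float farClip) {
        if (t < 0.0f) return 0.0f;
        float g = (farClip - t) / (farClip - nearClip);
        return std::min(1.0f, std::max(0.0f, g));
    }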

Okay, hopefully by my next post I’ll have the KD-tree rewrite done!