Subsurface Scattering and a New Name

I implemented subsurface scattering in my renderer!

Here’s a Stanford Dragon in a totally empty environment with just one light source providing illumination. The dragon is made up of a translucent purple jelly-like material, showing off the subsurface scattering effect:

Subsurface scattering is an important behavior that light exhibits upon hitting certain translucent materials; a normal transmissive material simply transports light through itself and out the other side, but a subsurface scattering material attenuates and scatters light inside itself before releasing it at a point that is not necessarily along a straight line from the entry point. This is what gives skin, translucent fruit, marble, and a whole host of other materials their distinctive look.

There are currently a number of methods for rapidly approximating subsurface scattering, including some screen-space techniques that are actually fast enough for use in realtime renderers. However, my implementation at the moment is purely brute-force Monte Carlo: while extremely physically accurate, it is also very, very slow. In my implementation, when a ray enters a subsurface scattering material, I generate a random scatter direction via isotropic scattering, and then attenuate the accumulated light based on an absorption coefficient defined for the material. This approach is very similar to the one Peter and I took in our GPU pathtracer.
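To make that concrete, here is a minimal Python sketch of the two pieces involved: isotropic direction sampling and absorption-based attenuation. It assumes a scalar throughput and a Beer-Lambert-style exponential falloff, which is the standard way to apply an absorption coefficient; the names and structure are illustrative rather than my renderer's actual code:

```python
import math
import random

def random_isotropic_direction():
    """Sample a direction uniformly over the unit sphere (isotropic scattering)."""
    z = random.uniform(-1.0, 1.0)             # cos(theta), uniform in [-1, 1]
    phi = random.uniform(0.0, 2.0 * math.pi)  # azimuth, uniform in [0, 2*pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def scatter_step(throughput, absorption_coefficient, distance_traveled):
    """Attenuate the accumulated light via Beer-Lambert absorption, then pick
    a new random scatter direction for the next bounce inside the medium."""
    attenuation = math.exp(-absorption_coefficient * distance_traveled)
    return throughput * attenuation, random_isotropic_direction()
```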

At some point in the future I might try out a faster approximation method, but for the time being, I’m pretty happy with the visual result that brute-force Monte Carlo scattering produces.

Here’s the same subsurface scattering dragon from above, but now in the Cornell Box. Note the cool colored soft shadows beneath the dragon:

Also, I’ve finally settled on a name for my renderer project: Takua Render! So, that is what I shall be calling my renderer from now on!

More Fun with Jello

At Joe’s request, I made another jello video! Joe suggested I make a video showing the simulation both in the actual simulator’s GL view and rendered out from Maya, so this video does just that. The starting portion of the video shows what the simulation looks like in the simulator’s GL view, and then shifts to the final render (done with Vray; my pathtracer still isn’t ready yet!). The GL and final render views don’t quite line up with each other perfectly, but it’s close enough that you get the idea.

There is a slight change in the tech involved too: I’ve upgraded my jello simulator’s spring array so that simulations should be more stable now. The change isn’t terribly dramatic; all I did was add more bend and shear springs to my simulation, so jello cubes now “try” harder to return to a perfect cube shape.
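For anyone curious what that looks like in practice, here is a hedged sketch of how structural, bend, and shear springs might be wired into a jello cube’s particle lattice; the indexing scheme and the particular choice of diagonals are illustrative, not my simulator’s actual code:

```python
def build_springs(n):
    """Return (i, j, kind) spring index pairs for an n x n x n particle lattice."""
    def idx(x, y, z):
        return x + n * (y + n * z)

    springs = []
    for x in range(n):
        for y in range(n):
            for z in range(n):
                for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
                    # Structural springs connect immediate axis neighbors.
                    if x + dx < n and y + dy < n and z + dz < n:
                        springs.append((idx(x, y, z), idx(x + dx, y + dy, z + dz), "structural"))
                    # Bend springs skip one particle along each axis; they resist folding.
                    if x + 2 * dx < n and y + 2 * dy < n and z + 2 * dz < n:
                        springs.append((idx(x, y, z), idx(x + 2 * dx, y + 2 * dy, z + 2 * dz), "bend"))
                # Shear springs connect diagonal neighbors; they resist skewing,
                # which is what pulls a deformed cube back toward a cube shape.
                if x + 1 < n and y + 1 < n:
                    springs.append((idx(x, y, z), idx(x + 1, y + 1, z), "shear"))
                if y + 1 < n and z + 1 < n:
                    springs.append((idx(x, y, z), idx(x, y + 1, z + 1), "shear"))
                if x + 1 < n and z + 1 < n:
                    springs.append((idx(x, y, z), idx(x + 1, y, z + 1), "shear"))
    return springs
```

(A full implementation would also include the opposite face diagonals and cube-interior diagonals; this sketch just shows the three spring categories.)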

This video makes use of my Vray white-backdrop studio setup! The pitcher was just a quick 5-minute model; nothing terribly interesting there.

…and of course, some stills:

Smoke Sim + Volumetric Renderer

Something I’ve had on my list of things to do for a few weeks now is mashing up my volumetric renderer from CIS460 with my smoke simulator from CIS563.

Now I can cross that off of my list! Here is a 100x100x100 grid smoke simulation rendered out with pseudo Monte-Carlo black body lighting (described in my volumetric renderer post):

My approach to integrating the two was simply to pipeline them rather than merge the codebases. I added a small extension to the smoke simulator that outputs the smoke grid to the same voxel file format the volumetric renderer reads, and then wrote a small Python script that iterates over all the voxel files in a folder and calls the volumetric renderer on each one.
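The glue script really is tiny; a hedged sketch of the idea follows, assuming a hypothetical volumerenderer executable name, a .voxel extension, and illustrative arguments (the actual file format and command line differ):

```python
import subprocess
from pathlib import Path

VOXEL_DIR = Path("smoke_sim_output")  # folder of voxel files exported by the smoke sim
RENDERER = "./volumerenderer"         # hypothetical renderer executable name

# Call the volumetric renderer once per exported frame, in order.
for voxel_file in sorted(VOXEL_DIR.glob("*.voxel")):
    output_image = voxel_file.with_suffix(".png")
    subprocess.run([RENDERER, str(voxel_file), str(output_image)], check=True)
```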

I’m actually not entirely happy with the render… I don’t think I picked very good settings for the pseudo-black body, so a lot of the render is overexposed and too bright. I’ll probably tinker with that some more later and re-render the whole thing, but before I do that I want to move the volumetric renderer onto the GPU with CUDA. Even with multithreading via OpenMP, the render times per frame are still too high for my liking… Anyway, here are some stills!

April 23rd CIS565 Progress Summary: Speed and Refraction

This post is the third update for the GPU Pathtracer project Peter and I are working on!

Over the past few weeks, the GPU Pathtracer has gained two huge improvements: refraction and major speed gains! In just 15 seconds on Peter’s NVIDIA GTX 530 (on a more powerful card in the lab, we get even better speeds), we can now get something like this:

Admittedly, Peter has been contributing more interesting code than I have, which makes sense: in this project Peter is clearly the veteran rendering expert and I am the newcomer. But I am learning a lot, and Peter is getting more cool stuff done since I can take care of the other tasks and keep them out of the way!

The posts for this update are:

  1. Performance Optimization: Speed boosts through zero-weight ray elimination (see the sketch after this list)
  2. Cool Error Render: Fun debug images from getting refraction to work
  3. Transmission: Glass spheres!
  4. Fast Convergence: Tricks for getting more raw speed
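For context on the first item: zero-weight ray elimination just means culling rays whose accumulated throughput has hit zero, so later bounces never waste work on rays that can no longer contribute to the image. On the GPU this is typically done with stream compaction; here is a hedged, CPU-side Python sketch of the idea with illustrative names:

```python
def compact_rays(rays):
    """Keep only rays that can still contribute color to the image.

    Assumes each ray carries a per-channel 'throughput'; a ray whose
    throughput is all zeros is dead weight and can be dropped before
    launching the next bounce.
    """
    return [ray for ray in rays if any(channel > 0.0 for channel in ray.throughput)]
```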

As always, check the posts for details and images!

April 14th CIS563 Progress Summary: Meshes and Meshes and Meshes

This post is the second update for the MultiFluids project!

The past week for Dan and me has been all about meshes: mesh loading, mesh interactions, and mesh reconstruction! We integrated an OBJ-to-signed-distance-field converter, which then allowed us to implement liquid-against-mesh interactions and use meshes to define starting liquid volumes. We also figured out how to run marching cubes on signed distance fields, allowing us to export OBJ mesh sequences of our fluid simulations and bring our sims into Maya for rendering!
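As a rough illustration of the level-set-to-OBJ step, here is a hedged sketch built on scikit-image’s marching cubes rather than our own implementation; it assumes the signed distance field is already a 3D NumPy array with the liquid surface at distance zero:

```python
import numpy as np
from skimage import measure

def sdf_to_obj(sdf, path):
    """Extract the zero isosurface of a signed distance field and write an OBJ."""
    verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
    with open(path, "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for face in faces:
            f.write(f"f {face[0] + 1} {face[1] + 1} {face[2] + 1}\n")  # OBJ is 1-indexed

# Quick test: a sphere of radius 20 voxels in a 64^3 grid.
x, y, z = np.mgrid[:64, :64, :64]
sphere_sdf = np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2 + (z - 32.0) ** 2) - 20.0
sdf_to_obj(sphere_sdf, "sphere.obj")
```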

Here is a really cool render from this week:

The posts for this week are:

  1. Surface Reconstruction via Marching Cubes: Level set goes in, OBJ comes out
  2. Mesh Interactions: Using meshes as interactable objects
  3. Meshes as Starting Liquid Volumes and Maya Integration: Cool tests with a liquid Stanford Dragon

Check out the posts for details, images, and videos!

April 5th CIS565 Progress Summary: Interactivity, Alpha Review, Fresnel Reflections, Antialiasing

This post is the second update for the GPU Pathtracer project!

Since the last update, Peter and I added an interactive camera to the renderer to allow realtime movement around the scene! We also had our Alpha Review, which went quite well, and Peter implemented a reflection model. Initially the reflection model used was Schlick’s Approximation, but later Peter replaced that with the full Fresnel equations. I also added super-sampled anti-aliasing for a smoother image.
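For reference, the difference between the two reflectance models looks roughly like this in a hedged Python sketch (generic textbook formulas for an unpolarized dielectric, not our actual kernel code):

```python
import math

def schlick_reflectance(cos_i, n1, n2):
    """Schlick's approximation to Fresnel reflectance at a dielectric interface."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5

def fresnel_reflectance(cos_i, n1, n2):
    """Full Fresnel reflectance for unpolarized light; returns 1.0 at total
    internal reflection."""
    sin_t_sq = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)  # Snell's law, squared
    if sin_t_sq > 1.0:
        return 1.0
    cos_t = math.sqrt(1.0 - sin_t_sq)
    r_s = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    r_p = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (r_s + r_p)  # average the two polarizations
```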

The posts for this update:

  1. Interactivity and Moveable Camera: We can move around the scene!
  2. Alpha Review Presentation: Slides and other stuff from our Alpha Review
  3. Specular Reflection Test: The first test with Schlick’s Approximation
  4. Fresnel Reflections: Some details on our reflection model
  5. Abstract Art: Some fun buggy renders Peter produced while debugging
  6. Anti-Aliasing: Super-sampled anti-aliasing!

A nice image from the last post:

Check the posts for tons of details, images, and even some video!