New PIC/FLIP Simulator

Over the past month or so, I’ve been writing a brand new fluid simulator from scratch. It started as a project for a course/seminar type thing I’ve been taking with Professor Doug James, but I’ve kept working on it since the course ended, just for fun. I wanted to try out implementing the PIC/FLIP method from Zhu and Bridson; in industry, PIC/FLIP has more or less become the de facto standard method for fluid simulation. Houdini and Naiad both use PIC/FLIP implementations as their core fluid solvers, and I’m aware that Double Negative’s in-house simulator is also a PIC/FLIP implementation.

I’ve named my simulator “Ariel”, since I like Disney movies and the name seemed appropriate for a project related to water. Here’s what a “dambreak” type simulation looks like:

That “dambreak” test was run with approximately a million particles, with a 128x64x64 grid for the projection step.

PIC/FLIP stands for Particle-In-Cell/Fluid-Implicit Particle. PIC and FLIP are actually two separate methods that each have certain shortcomings, but when used together in a weighted sum, they produce a very stable fluid solver (my own solver uses approximately a 90% FLIP to 10% PIC ratio). PIC/FLIP is similar to SPH in that it’s fundamentally a particle based method, but instead of attempting to use external forces to maintain fluid volume, PIC/FLIP splats particle velocities onto a grid, calculates a velocity field using a projection step, and then copies the new velocities back onto the particles for each step. This difference means PIC/FLIP doesn’t suffer from the volume conservation problems SPH has. In this sense, PIC/FLIP can almost be thought of as a hybridization of SPH and semi-Lagrangian level-set based methods. From this point forward, I’ll refer to the method as just FLIP for simplicity, even though it’s actually PIC/FLIP.
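To make the weighted sum concrete, here’s a minimal sketch of the per-step particle velocity update. It assumes a hypothetical sampleGrid() helper that trilinearly interpolates a velocity grid at a particle position, and grids holding velocities from before and after the pressure projection; this is just an illustration of the general technique, not Ariel’s actual code:

```cpp
#include <vector>
#include <openvdb/openvdb.h> // only used here for openvdb::Vec3f; any 3D vector type would do

struct Particle {
    openvdb::Vec3f position;
    openvdb::Vec3f velocity;
};

// Blend the PIC and FLIP velocity estimates for every particle after the projection step.
// sampleGrid(grid, position) is assumed to interpolate the grid's velocity at a position;
// oldGrid/newGrid hold the grid velocities before/after the projection.
// picBlend ~= 0.1f corresponds to the 90% FLIP / 10% PIC ratio mentioned above.
template <typename GridT, typename SamplerFn>
void updateParticleVelocities(std::vector<Particle>& particles,
                              const GridT& oldGrid, const GridT& newGrid,
                              SamplerFn sampleGrid, float picBlend) {
    for (Particle& p : particles) {
        // PIC: replace the particle velocity with the projected grid velocity (stable, but dissipative).
        openvdb::Vec3f vPic = sampleGrid(newGrid, p.position);
        // FLIP: add only the *change* in grid velocity to the particle velocity (detailed, but noisier).
        openvdb::Vec3f vFlip = p.velocity + (sampleGrid(newGrid, p.position) -
                                             sampleGrid(oldGrid, p.position));
        // The weighted sum of the two estimates gives the stable-but-lively behavior described above.
        p.velocity = picBlend * vPic + (1.0f - picBlend) * vFlip;
    }
}
```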

I also wanted to experiment with OpenVDB, so I built my FLIP solver on top of it. OpenVDB is a sparse volumetric data structure library open sourced by Dreamworks Animation, and it is now integrated into a whole bunch of systems such as Houdini, Arnold, and Renderman. I played with it two years ago during my summer at Dreamworks, but didn’t really get too much experience with it, so I figured this would be a good opportunity to give it a more detailed look.

My simulator uses OpenVDB’s mesh-to-levelset toolkit for constructing the initial fluid volume and solid obstacles, meaning any OBJ meshes can be used to build the starting state of the simulation. For the actual simulation grid, things get a little bit more complicated; I started out using OpenVDB to store the grid for the projection step, with the idea that storing the projection grid sparsely should allow the simulator to scale to really, really large scenes. However, I quickly ran into the ever-present memory-speed tradeoff of computer science. I found that while the memory footprint of the simulator stayed very small for large sims, it ran almost ten times slower than when the grid was stored using raw floats. The reason is that OpenVDB is a B+tree under the hood, so constant read/write operations against a VDB grid end up being really expensive, especially if the grid is not very sparse. The fact that VDB enforces single-threaded writes due to the need to rebalance the B+tree does not help at all. As a result, I’ve left in a switch that allows my simulator to run in either raw float or VDB mode; VDB mode allows for much larger simulations, but raw float mode allows for faster, multithreaded sims.
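As a rough illustration of what that switch means in practice, here’s a minimal sketch of the two kinds of grid backends behind a simple value-per-voxel interface. The class names here are made up for this example and are not Ariel’s actual code:

```cpp
#include <vector>
#include <openvdb/openvdb.h>

// Dense backend: contiguous float storage. Memory-hungry, but reads/writes are cheap,
// and writes to distinct cells are safe to do from multiple threads.
class DenseFloatGrid {
public:
    DenseFloatGrid(int x, int y, int z) : mX(x), mY(y), mZ(z), mData(x * y * z, 0.0f) {}
    float get(int i, int j, int k) const { return mData[(k * mY + j) * mX + i]; }
    void set(int i, int j, int k, float v) { mData[(k * mY + j) * mX + i] = v; }
private:
    int mX, mY, mZ;
    std::vector<float> mData;
};

// Sparse backend: an OpenVDB FloatGrid. Tiny footprint for mostly-empty domains,
// but every access walks the B+tree, and tree writes are not thread-safe.
class VdbFloatGrid {
public:
    VdbFloatGrid() : mGrid(openvdb::FloatGrid::create(0.0f)) {}
    float get(int i, int j, int k) const {
        return mGrid->tree().getValue(openvdb::Coord(i, j, k));
    }
    void set(int i, int j, int k, float v) {
        mGrid->tree().setValue(openvdb::Coord(i, j, k), v);
    }
private:
    openvdb::FloatGrid::Ptr mGrid;
};
```

The dense version trades memory for fast, thread-friendly access, while the VDB version only pays for voxels that are actually active, which is the tradeoff described above.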

Here’s a video of another test scene, this time patterned after a “waterfall” type scenario. This test was done earlier in the development process, so it doesn’t have the wireframe outlines of the solid boundaries:

In the above videos and stills, blue indicates higher density/lower velocity, and white indicates lower density/higher velocity.

Writing the core PIC/FLIP solver actually turned out to be pretty straightforward, and I’m fairly certain that my implementation is correct, since it closely matches the result I get out of Houdini’s FLIP solver for a similar scene with similar parameters (although not exactly, since there are bound to be some differences in how I handle certain details, such as slightly jittering particle positions to prevent artifacting between steps). Figuring out a good meshing and rendering pipeline turned out to be more difficult; I’ll write about that in my next post.
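For what it’s worth, the jitter mentioned above can be as simple as the following sketch, which nudges each particle by a small random fraction of the grid cell size each step; the magnitude and RNG here are purely illustrative, not what Ariel actually uses:

```cpp
#include <random>
#include <vector>

// Nudge every particle by a tiny random offset to break up grid-aligned clumping between steps.
// Reuses the Particle struct from the earlier sketch; magnitude is a small fraction of cellSize.
void jitterParticles(std::vector<Particle>& particles, float cellSize, float magnitude = 0.01f) {
    static std::mt19937 rng(1337);
    std::uniform_real_distribution<float> offset(-magnitude * cellSize, magnitude * cellSize);
    for (Particle& p : particles) {
        p.position += openvdb::Vec3f(offset(rng), offset(rng), offset(rng));
    }
}
```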

Takua Chair Renders

A while back, I did some test renders with Takua a0.4 to test out the material system. The test model was an Eames Lounge Chair Wood, and the materials were glossy wood and aluminum. Each render was done with a single large, importance sampled area light and took about two minutes to complete.

These renders were the last tests I did with Takua a0.4 before starting the new version. More on that soon!

Throwback: Holiday Card 2011

Two years ago, I was asked to create CG@Penn’s 2011 Holiday Card. Shortly after finishing that particular project, I started writing a breakdown post but for some reason never finished/posted it. While going through old content for the move to Github Pages, I found some of my old unfinished posts, and I’ve decided to finish up some of them and post them over time as sort of a series of throwback posts.

This project is particularly interesting because today I wouldn’t bother with almost any of the approaches I took two years ago to finish it. But it’s still interesting to look back on!

Amy and Joe wanted something wintery and nonreligious for the card, since it would be sent to a very wide and diverse audience. They suggested some sort of snowy landscape piece, so I decided to make a snow-covered forest. This particular idea meant I had to figure out three key elements:

  • Conifer trees
  • Modeling snow ON the trees
  • Rendering snow

Since the holiday card had to be just a single still frame and had to be done in just a few days, I knew right away that I could (and would have to!) cheat heavily with compositing, so I was willing to try more unknown elements than I normally would throw into a single project. Also, since the shot I had in mind would be a wide, far shot, I knew that I could get away with less up-close detail for the trees.

I started by creating a handful of different base conifer tree models in OnyxTree and throwing them directly into Maya/Vray (this was before I had even started working on Takua Render) just to see how they would look. Normally models directly out of OnyxTree need some hand-sculpting and tweaking to add detail for up-close shots, but here I figured if they looked good enough, I could skip those steps. The result looked okay enough to move on:

The textures for the bark and leaves were super simple. To make the bark texture’s diffuse layer, I pulled a photograph of bark off of Google, modified it to tile in Photoshop, and adjusted the contrast and levels until it was the color I wanted. The displacement layer was simply the diffuse layer converted to black and white, with contrast and brightness adjusted. Normally this method won’t work well for up-close shots, but again, since I knew the shot would be far away, I could get away with some cheating. Here’s a crop from the bark textures:

The pine needles were also super cheatey. I pulled a photo out of one of my reference libraries, dropped an opacity mask on top, and that was all for the diffuse color. Everything else was hacked into the leaf material’s shader; since the tree would be far away, I could get away with basic transparency instead of true subsurface scattering. The diffuse map with opacity flattened to black looks like this:

With the trees roughed in, the next problem to tackle was getting snow onto the trees. Today, I would immediately spin up Houdini to create this effect, but back then, I didn’t have a Houdini license and hadn’t played with Houdini enough to realize how quickly it could be done. Not knowing better back then, I used 3dsmax and a plugin called Snowflow (I used the demo version since this project was a one-off). To speed up the process, I used a simplified, decimated version of the tree mesh for Snowflow. Any inaccuracies between the resultant snow layer and the full tree mesh were acceptable, since they would look just like branches and leaves poking through the snow:

I tried a couple of different variations on snow thickness, which looked decent enough to move on with:

The next step was a snow material that would look reasonably okay from a distance and render quickly. I wasn’t sure if the snow should have a more powdery, almost diffuse look, or if it should have a more refractive, frozen, icy look. I wound up trying both and going with a 50-50 blend of the two:

From left to right: refractive frozen ice, powdery diffuse, 50-50 blend

The next step was to compose a shot, make a very quick, simple lighting setup, and do some test renders. After some iterating, I settled on this render as a base for comp work:

The base render is very blueish since the lighting setup was a simple, grey-blueish dome light over the whole scene. The shadows are blotchy since I turned Vray’s irradiance cache settings all the way down for faster rendertimes; I decided that I would rather deal with the blotchy shadows in post and have a shot at making the deadline rather than wait for a very long rendertime. I wound up going with the thinner snow at the time since I wanted the trees to be more recognizable as trees, but in retrospect, that choice was probably a mistake.

The final step was some basic compositing. In After Effects, I applied post-processed DOF using a z-depth layer and Frischluft, color corrected the image, cranked up the exposure, and added vignetting to get the final result:

Looking back on this project two years later, I don’t think the final result looks really great. The image looks okay for two days of rushed work, but there is enormous room for improvement. If I could go back and change one thing, I would choose the much heavier snow cover version of the trees for the final composition. Also, today I would approach this project very, very differently; instead of ping-ponging between multiple programs for each component, I would favor an almost pure-Houdini pipeline. The trees could be modeled as L-systems in Houdini, perhaps with some base work done in Maya. The snow could absolutely be simmed in Houdini. For rendering and lighting, I would use either my own Takua Render or some other fast physically based renderer (Octane, or perhaps Renderman 18’s iterative pathtracing mode) to iterate extremely quickly without having to compromise on quality.

So that’s the throwback breakdown of the CG@Penn Holiday 2011 card! I learned a lot from this project, and looking back and comparing how I worked two years ago to how I work today is always a good thing to do.

Code and Visuals Version 4.0

I’d like to introduce the newest version of my computer graphics blog, Code and Visuals! On the surface, everything has been redesigned with a new layer of polish; the site is now simpler and cleaner, and the layout is fully responsive. Under the hood, I’ve moved from Blogger to Jekyll, hosted on Github Pages.

As part of the move to Jekyll, I’ve opted to clean up a lot of old posts as well. This blog started as some combination of a devblog, doodleblog, and photoblog, but quickly evolved into a pure computer graphics blog. In the interest of keeping historical context intact, I’ve ported over most of my older non-computer graphics posts, with minor edits and touchups here and there. I’ve chosen to leave behind a handful of posts I didn’t really like, but they can still be found on the old Blogger-based version of this blog.

The Atom feed URL for Code and Visuals is still the same as before, so that should transition over smoothly.

Why the move from Blogger to Jekyll/Github Pages? Here are the main reasons:

  • Markdown/Github support. Blogger’s posting interface is all kinds of terrible. With Jekyll/Github Pages, writing a new post is super nice: simply write a new post in a Markdown file, push to Github, and done. I love Markdown and I love Github, so it’s a great combo for me.
  • Significantly faster site. Previous versions of this blog have always been a bit pokey speed-wise, since they relied on dynamic page generators (originally my hand-rolled PHP/MySQL CMS, then Wordpress, and then Blogger). However, Jekyll is a static page generator; the site is converted from Markdown and template code into static HTML/CSS once at generation time, and then simply served as pure HTML/CSS.
  • Easier templating system. Jekyll’s templating system is built on Liquid, which made building this new theme really fast and easy.
  • Transparency. This entire blog’s source is now available on Github, and the theme is separately available here.

I’ve been looking to replace Blogger for some time now. Before trying out Jekyll, I was tinkering with Ghost, and even fully built out a working version of Code and Visuals on a self-hosted Ghost instance. In fact, this current theme was originally built for Ghost and then ported to Jekyll after I decided to use Jekyll (both the Ghost and Jekyll versions of this theme are in the Github repo). However, Ghost as a platform is still extremely new and isn’t quite ready for primetime yet; while Ghost’s Markdown support and Node.js underpinnings are nice, Ghost is still missing crucial features like the ability to have an archive page. Plus, at the end of the day, Jekyll is just plain simpler; Ghost is still a CMS, while Jekyll is just a collection of text files.

I intend to stay on a Jekyll/Github Pages based solution for a long time; I am very, very happy with this system. Over time, I’ll be moving my couple of other non-computer graphics blogs over to Jekyll as well. I’m still not sure if my main website needs to move to Jekyll though, since it already is coded up as a series of static pages and requires a slightly more complex layout on certain pages.

Over the past few months I haven’t posted much, since over the summer almost all of my Pixar-related work was under heavy NDA (and still is and will be for the foreseeable future, with the exception of our SIGGRAPH demo), and a good deal of my work at Cornell’s Program for Computer Graphics is under wraps as well while we work towards paper submissions. However, I have some new personal projects I’ll write up soon, in addition to some older projects that I never posted about.

With that, welcome to Code and Visuals Version 4.0!

Pixar Optix Lighting Preview Demo

For the past two months or so, I’ve been working at Pixar Animation Studios as a summer intern with Pixar’s Research Group. The project I’m on for the summer is a realtime, GPU-based lighting preview tool implemented on top of NVIDIA’s OptiX framework, entirely inside of The Foundry’s Katana. I’m incredibly pleased to be able to say that our project was demoed at SIGGRAPH 2013 at the NVIDIA booth, and that NVIDIA has a recording of the entire demo online!

The demo was done by our project’s lead, Danny Nahmias, and got an overwhelmingly positive reception. Check out the recording here:

FXGuide also did a podcast about our demo! Check it out here.

I’m just an intern, and the vast majority of the cool work being done on this project is from Danny Nahmias, Phillip Rideout, Mark Meyer, and others, but I’m very very proud, and consider myself extraordinarily lucky, to be part of this team!

Edit: I’ve replaced the original Ustream embed with a Vimeo mirror since the Ustream embed was crashing Chrome for some people. The original Ustream link is here.

Giant Mesh Test

My friend/schoolmate Zia Zhu is an amazing modeler, and recently she was kind enough to lend me a ZBrush sculpt she did for use as a high-poly test model for Takua Render. The model is a sculpture of Venus, and is made up of slightly over a million quads, or about two million triangles once triangulated inside of Takua Render.

Here are some nice, pretty test renders I did. As usual, everything was rendered with Takua Render, and there has been absolutely zero post-processing:

Each one of these renders was lit using a single, large area light (with importance sampled direct lighting, of course). The material on the model is just standard lambert diffuse white; I’ll do another set of test renders once I’ve finished rewriting my subsurface scattering system. Each render was set to 2800 samples per pixel and took about 20 minutes to render on a single GTX480. In other words, not spectacular, but not bad either.

The key takeaway from this series of tests was that Takua’s performance still suffers significantly when datasets become extremely large; while the render took about 20 minutes, setup time (including memory transfer, etc.) took nearly 5 minutes, which I’m not happy about. I’ll be taking some time to rework Takua’s memory manager.

On a happier note, KD-tree construction performed well! The KD-tree for the Venus sculpt was built out to a depth of 30 and took less than a second to build.

Here’s a bonus image of what the sculpt looks like in the GL preview mode:

Again, all credit for the actual model goes to the incredibly talented Zia Zhu!