Sep 9 2009

Point-based Indirect Diffuse/Colour Bleeding

If you read my previous post about baking occlusion, you’ll be comfortable with this subject.

Much like baking occlusion into a point cloud, baking indirect diffuse also involves two steps. First we need to actually bake it into a point cloud, and then we need to read it back from the same point cloud.

First step:

Instead of adding a surface shader that computes indirect diffuse to every object, in this approach we use a light shader that computes it for us. This way we can leave our surface shaders, etc., untouched, and only use a simple light with this shader I wrote. Just create an ambient light and apply the shader to it. In the shader you can also specify the number of samples and the intensity of the indirect diffuse being calculated.
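The shader itself is linked above; as a rough illustration of what the bake step does (this is my own sketch, not the author’s shader, and it assumes the bake pass stores the directly lit surface colour, often called _radiosity, via the bake3d shadeop; I write it as a surface shader because surface shaders have direct access to P and N):

surface bake_radiosity(string filename = "indirect.ptc")
{
    normal Nn = normalize(N);
    /* gather the direct illumination falling on this point */
    color direct = 0;
    illuminance(P, Nn, PI/2) {
        direct += Cl * (normalize(L) . Nn);
    }
    /* store the lit surface colour into the point cloud;
       requires DisplayChannel "color _radiosity" in the RIB */
    color rad = Cs * direct;
    bake3d(filename, "_radiosity", P, Nn, "_radiosity", rad);
    Ci = rad;
    Oi = Os;
}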
To bake it, we need to turn off the culling operations so that backfacing and hidden faces are taken into account by the renderer when creating the indirect diffuse pass (the second step of this approach), and we also need to use a dicing method that is independent of the camera view:

Attribute "cull" "hidden" [0]
Attribute "cull" "backfacing" [0]
Attribute "dice" "rasterorient" [0]

Again, we can use low pixel samples because we are not interested in the quality of this render; we only want to create the point cloud at this stage, so its creation is faster:

PixelSamples 1 1

If we want a very dense point cloud, we need to use a low shading rate; 0.5 should be enough. Just play around a bit to get this value right.

Second step:

In this step we only have to read back the information stored in the point cloud. To do this, we apply another shader I wrote to the same light; it is written specifically to read the data stored in the point cloud created in the previous step.
You can increase the pixel samples, lower the shading rate if needed, and turn culling and camera-dependent dicing back on.
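The read-back shader is linked above; as a hedged sketch of what it likely does, here is the equivalent logic written as a surface shader (the point-based parameters of the indirectdiffuse shadeop are from my recollection and may differ slightly between renderers; the file name is a placeholder):

surface read_indirect(
    string filename = "indirect.ptc";
    float samples = 64;
    float intensity = 1;
)
{
    normal Nn = normalize(N);
    /* evaluate colour bleeding from the baked point cloud */
    color indirect = indirectdiffuse(P, Nn, samples,
        "pointbased", 1,
        "filename", filename,
        "maxsolidangle", 0.1,
        "clamp", 1);
    Ci = Cs * intensity * indirect;
    Oi = Os;
}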


To render these images, I used 3Delight. The shaders that work with PRMan are here. I haven’t tested them, but they should work.

Let me know what you guys think,


Mar 7 2009

Point-Based Ambient Occlusion

Hello, this is my first post and I’m going to talk about ambient occlusion, also known as geometric exposure, with Renderman. The approach I’m going to describe is called Point-Based Ambient Occlusion and uses a point cloud to compute the occlusion. If you read my “About” you know I use 3Delight to do my personal research and tests.

This approach is done in two passes. In the first pass we need to convert the scene’s geometry to a point cloud, and in the second pass the point cloud is used to compute the occlusion effect.

Why use this?

Well, it’s much faster than ray-traced occlusion, doesn’t produce any noise, and uses less memory. Also, displacements, which are usually a very costly operation with ray tracing, don’t have any additional cost with the point-based approach.

There are also disadvantages. For example, as it is a two-pass approach, it can become a bit daunting to set up in a pipeline. Also, it is not as numerically accurate as the ray-tracing approach; but we are not aiming for numerically accurate results, we are aiming for visually accurate ones. In other words, we want it to look as good as the ray-tracing approach, and if it takes less time and fewer resources, we’ve achieved our goal.

Now, the steps.

First we need to create a point cloud. This pass is very simple. We only need to use a shader that will convert the geometry into a point cloud. This shader will use the bake3d shadeop to convert each micropolygon to a point, thus creating the point cloud. I use a shader that comes with 3Delight, it is in the $DELIGHT/examples/ptc_occlusion/ folder and is called ptc_write.
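The real source is in that 3Delight folder; the core of such a bake shader is just a bake3d call per shading point. Roughly (my own sketch, not the shipped ptc_write; the _area channel is an assumption about what gets stored, and the shipped shader may differ):

surface my_ptc_write(string filename = "occlusion.ptc")
{
    normal Nn = normalize(N);
    /* bake3d writes P, the normal and the point's radius;
       here the micropolygon area is baked as _area too */
    bake3d(filename, "_area", P, Nn, "_area", area(P));
    Ci = Cs;
    Oi = Os;
}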

In order to get this right, we must do some things first. We need to turn off the culling operations so that backfacing and hidden faces are taken into account by the renderer when creating the occlusion pass (the second pass of this approach). We also need to use a dicing method that is independent of the camera view:

Attribute "cull" "hidden" [0]
Attribute "cull" "backfacing" [0]
Attribute "dice" "rasterorient" [0]

As in this pass we only care about creating the point cloud, not about a high-quality result, we can use low Pixel Samples:

PixelSamples 1 1

This way the creation of the point cloud will be faster. However, if we are aiming for a very dense point cloud, we should lower the Shading Rate. A value of 0.5 should be enough to get a very dense point cloud, but its creation will be slower.
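Putting the bake-pass settings together, a minimal RIB fragment might look like this (a sketch only; the shader parameter declaration and the point cloud file name are placeholders):

PixelSamples 1 1
ShadingRate 0.5
WorldBegin
    Attribute "cull" "hidden" [0]
    Attribute "cull" "backfacing" [0]
    Attribute "dice" "rasterorient" [0]
    Surface "ptc_write" "string filename" ["occlusion.ptc"]
    # scene geometry goes here
WorldEnd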

Now, to the occlusion creation:

Here we use a shader that reads back the point cloud file to compute the occlusion. It can be a bit tricky to get good and fast results. Regarding the occlusion shadeop used in this shader, my experience, and the opinions I’ve heard from others, is that we need a good balance between the bias and the max solid angle. A bias value of 0.1 should be enough. As for the max solid angle, higher values render faster but with lower quality, and vice versa, so a good range is 0.01 to 0.5.

We should also set “hitsides” to “both” so that each side of the point cloud’s samples will produce occlusion.

Usually we’ll also want to set “clamp” to 1, as this creates results similar to the ray-tracing approach, but at the cost of speed.

The shader that comes with 3Delight to read the point cloud and create the occlusion is very simple, so I made some modifications to be able to control the parameters of the occlusion shadeop. You can get it here if you’d like.
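For illustration, a minimal read-back shader exposing the parameters discussed above might look like this (my own sketch under the assumption that the occlusion shadeop is called in point-based mode; the parameter names bias, maxsolidangle, hitsides and clamp are the ones mentioned in the text, the rest are placeholders):

surface read_occlusion(
    string filename = "occlusion.ptc";
    float samples = 64;
    float maxsolidangle = 0.1;
    float bias = 0.1;
)
{
    normal Nn = normalize(N);
    /* point-based occlusion from the baked cloud */
    float occ = occlusion(P, Nn, samples,
        "pointbased", 1,
        "filename", filename,
        "maxsolidangle", maxsolidangle,
        "bias", bias,
        "hitsides", "both",
        "clamp", 1);
    Ci = 1 - occ;   /* white where unoccluded */
    Oi = Os;
}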

Finally, we set the render quality settings, such as Pixel Samples, back to whatever we want.

When you render it, you’ll notice it is much faster than the ray-tracing approach. Here you can see two renders I did to show the render times. A higher-resolution image is here. The ray-tracing approach took 13 minutes and the point-based one only 4 minutes and 40 seconds: less than half the time.


I hope you find this useful, and please, leave me a comment if you think I did something wrong or if you have something to add.