Volumetric Path Tracing

Yifan (York) Liu, yil173@ucsd.edu

Zhongyi Wang, zhw039@ucsd.edu

TL;DR

In this project, we studied mathematical models that represent participating media and implemented methods to render scenes with volumetric effects.

The render folder contains high-quality images before JPG compression.

Math: Volume Rendering Equation

Why do we see volumetric effects such as fog and god rays in real life? There are tiny particles between objects that alter the trajectory of light. Therefore, one way to model volumetric effects in a physically accurate way is to simulate every particle and how it reflects and refracts light. This is prohibitively expensive, so CG researchers have developed a mathematical model to describe those tiny things in between objects.

$$L(x, \omega) = \underbrace{T_r(x, x_z)\, L(x_z, \omega)}_{\text{reduced background radiance}} + \underbrace{\int_0^z T_r(x, x_t)\, \sigma_s(x_t)\, L_s(x_t, \omega)\, dt}_{\text{accumulated in-scattered radiance}}$$

Fear not, let us explain all these terms in detail, together with this illustration from CMU's slides.

Let us assume our path tracer recursively calculates $L(x, \omega)$: the radiance seen from point $x$ looking in direction $\omega$. Without any participating media in between, we would intersect this eye ray with the scene, find the closest hit point $x_z$, and compute the direct and indirect lighting $L(x_z, \omega)$.

Let us look at the reduced background radiance first. With a participating medium present, $L(x_z, \omega)$ is attenuated (this is the fog effect), because particles in the medium scatter the light in all directions. $T_r(x, x_z)$ is a factor in $(0, 1)$ meaning "what portion of light is left".

The second term, the accumulated in-scattered radiance, is not that daunting once we realize that at every point along this light path, light from other directions can join the path. The integral is over the distance domain: at every distance $t$, it adds the in-scattered radiance $L_s(x_t, \omega)$ at $t$, attenuated by $T_r(x, x_t)$. There is also a scattering coefficient $\sigma_s(x_t)$, indicating the rate of accumulation from other light paths. This coefficient can be spatially varying.

We then made several assumptions.

Homogeneous Media

In this project, we assume there is a giant participating medium that shrouds the entire scene and this medium has the same volumetric properties at every point in space. These properties are the absorption coefficient $\sigma_a$, the scattering coefficient $\sigma_s$, and their sum, the extinction coefficient $\sigma_t = \sigma_a + \sigma_s$.

This lets us simplify one more term in the rendering equation: for a homogeneous medium, the transmittance reduces to $T_r(x, x_t) = e^{-\sigma_t \lVert x - x_t \rVert}$.
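In code this is a one-liner; here is a minimal sketch (our own helper name, with $\sigma_t$ passed in as the constant extinction coefficient):

```cpp
#include <cmath>

// Beer-Lambert transmittance in a homogeneous medium: the fraction of light
// that survives a path of length dist under constant extinction sigma_t.
float transmittance(float dist, float sigmaT) {
    return std::exp(-sigmaT * dist);
}
```

For example, $\sigma_t = 0.1$ over a distance of 5 gives $e^{-0.5} \approx 0.61$: about 61% of the background radiance survives.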

Single Scattering

The in-scattered radiance is assumed to come from lights directly. In this sense, single scattering is similar to direct lighting.

Even with this assumption, this is still an integration over the spherical domain.

$$L_s(x_t, \omega) = \int_{S^2} \mathrm{phase}(x_t, \omega, \omega')\, L_i(x_t, \omega')\, d\omega', \qquad L_i(x_t, \omega') = \text{radiance arriving directly from a light, or } 0 \text{ if none}$$

This spherical integral is analogous to BRDF sampling in direct lighting, and just as there, it can be replaced by next event estimation (enumerating the lights directly). We explored two methods to do this; see the next section for more details.
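To make the substitution concrete, here is a sketch of the NEE form of this integral for point lights. All names here, including `phase` and `visible`, are our stand-ins for the tracer's own routines, not the project's actual code:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 pos; float phi; }; // point light with intensity Phi

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Next event estimation form of the spherical integral: instead of sampling
// directions w', sum each light's direct contribution, weighted by the phase
// function and inverse-square falloff, and gated by a shadow ray.
float singleScatterNEE(Vec3 xt, Vec3 w, const std::vector<Light>& lights,
                       float (*phase)(Vec3, Vec3),
                       bool (*visible)(Vec3, Vec3)) {
    float Ls = 0.0f;
    for (const Light& L : lights) {
        Vec3 toL = sub(L.pos, xt);
        float dist2 = dot(toL, toL);
        float inv = 1.0f / std::sqrt(dist2);
        Vec3 wi = {toL.x * inv, toL.y * inv, toL.z * inv};
        if (visible(xt, L.pos))
            Ls += phase(w, wi) * L.phi / dist2;
    }
    return Ls;
}
```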

Implementation Workflow

Free-path sampling

Recall that there are two integrations in the rendering equation, the first over the distance domain, and the second over the spherical domain.

Free-path sampling avoids both integrations by letting the light ray randomly scatter inside the participating medium before it hits a surface.

We first chose to implement this algorithm because the CMU slides contain very clear pseudo-code. Although we eventually moved away from this algorithm, the implementation is preserved in the freepath branch of the repo.
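For reference, a minimal sketch of the core distance-sampling step in a homogeneous medium (our own naming, not the CMU pseudo-code): sample a tentative scattering distance with pdf $\sigma_t e^{-\sigma_t t}$ and compare it against the surface hit.

```cpp
#include <cmath>
#include <random>

struct FreePathEvent {
    bool  scatteredInMedium; // true: handle in-scattering at distance t
    float t;                 // distance along the ray
    float weight;            // throughput factor: transmittance / pdf
};

// Sample a free path against a surface hit at distance tHit. If the sampled
// distance falls short, the ray scatters inside the medium; otherwise it
// reaches the surface. Dividing by the pdf of each event keeps the estimator
// unbiased, and the exponential terms cancel neatly.
FreePathEvent sampleFreePath(float tHit, float sigmaT, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float t = -std::log(1.0f - uni(rng)) / sigmaT;
    if (t < tHit) {
        // pdf of scattering at t is sigma_t * exp(-sigma_t * t), so
        // transmittance / pdf = 1 / sigma_t.
        return {true, t, 1.0f / sigmaT};
    }
    // pdf of reaching the surface is exp(-sigma_t * tHit), which cancels
    // the transmittance exactly.
    return {false, tHit, 1.0f};
}
```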

Then we produced our first render.

We then tried toggling the volumetric effect on and off on a classroom scene.

The effects are pretty cool, but we were not sure whether our implementation was unbiased, and there is noticeable noise. With more digging, we realized that free-path sampling is capable of rendering multiple scattering, which was not our focus at that point. It is more than what we need, so we decided to switch to another method.

Equi-angular sampling

This method is designed for single scattering and is expected to have lower noise.

Again, recall that there are two integrations in the rendering equation, the first over the distance domain, and the second over the spherical domain.

Equi-angular sampling avoids the spherical sampling but still samples the distance. Recall the accumulated in-scattered radiance term in the volume rendering equation. Assuming there is only one spotlight with intensity $\Phi$ at position $K$:

$$\int_0^z T_r(x, x_t)\, \sigma_s(x_t)\, L_s(x_t, \omega)\, dt = \int_0^z T_r(x, x_t)\, \sigma_s(x_t)\, \frac{\Phi}{\lVert K - x_t \rVert^2}\, dt$$

It reduces noise by importance sampling the geometry term.

It does this by reparameterizing the integral around the light: let $D$ be the distance from the light position $K$ to the closest point on the ray, and measure $t$ from that closest point, so the integration bounds $[a, b]$ correspond to angles $\theta_a$ and $\theta_b$ seen from the light:

$$\begin{aligned}
\text{accumulated in-scattered radiance} &= \sigma_s \int_a^b T_r(x, x_t)\, T_r(x_t, K)\, \mathrm{Vis}(x_t, K)\, \frac{\Phi}{D^2 + t^2}\, dt \\
\text{one-sample estimate} &= \sigma_s\, T_r(x, x_t)\, T_r(x_t, K)\, \mathrm{Vis}(x_t, K)\, \frac{\Phi}{D^2 + t^2} \Big/ \mathrm{pdf}(t) \\
\xi &= \mathrm{rand01}() \\
t &= D \tan\!\big((1 - \xi)\,\theta_a + \xi\,\theta_b\big) \\
\mathrm{pdf}(t) &= \frac{D}{(\theta_b - \theta_a)(D^2 + t^2)}
\end{aligned}$$
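Here is a minimal sketch of one equi-angular sample in this notation (the `Vec3` type and function names are stand-ins, not the project's actual code). It returns the sampled distance along the ray and its pdf, which the integrator plugs into the one-sample estimate above:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One equi-angular sample along the ray (o, d) for a point light at K.
// Integration range is [0, tMax] along the ray; d must be normalized and
// the light must not lie exactly on the ray (D > 0).
void equiAngularSample(Vec3 o, Vec3 d, Vec3 K, float tMax,
                       std::mt19937& rng, float& tRay, float& pdf) {
    // Signed distance from the origin to the point on the ray closest to K,
    // and the perpendicular distance D from that point to the light.
    float delta = dot(sub(K, o), d);
    Vec3 closest = {o.x + delta * d.x, o.y + delta * d.y, o.z + delta * d.z};
    Vec3 off = sub(K, closest);
    float D = std::sqrt(dot(off, off));

    // Angles subtended by the integration bounds, seen from the light.
    float thetaA = std::atan2(0.0f - delta, D);
    float thetaB = std::atan2(tMax - delta, D);

    // Sample an angle uniformly, then map back to a distance t from the
    // closest point; tRay is measured from the ray origin.
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float xi = uni(rng);
    float t = D * std::tan((1.0f - xi) * thetaA + xi * thetaB);
    tRay = delta + t;
    pdf = D / ((thetaB - thetaA) * (D * D + t * t));
}
```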

I added a parameter spr (samples per ray) to indicate how many such samples to use. During testing, spr around 5 generally yielded much better results than free-path sampling, with comparable runtime. To deal with multiple lights, just add a for-loop over the lights (similar to NEE) outside of this sampling procedure.

I also implemented the Henyey-Greenstein phase function, adding an anisotropy parameter (from -1 to 1) to our path tracer. The higher the anisotropy, the more light is scattered forward.
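A sketch of the Henyey-Greenstein evaluation and its standard closed-form sampling (function names are ours, not the project's):

```cpp
#include <cmath>
#include <random>

// Henyey-Greenstein phase function. g in (-1, 1) is the anisotropy:
// g > 0 scatters forward, g < 0 backward, g = 0 is isotropic.
// cosTheta is the cosine between the incoming and outgoing directions.
float phaseHG(float cosTheta, float g) {
    const float inv4Pi = 0.07957747f; // 1 / (4 * pi)
    float denom = 1.0f + g * g - 2.0f * g * cosTheta;
    return inv4Pi * (1.0f - g * g) / (denom * std::sqrt(denom));
}

// Sample cosTheta proportionally to phaseHG via the closed-form inverse CDF.
float sampleHGCosTheta(float g, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float xi = uni(rng);
    if (std::fabs(g) < 1e-3f)          // near-isotropic: avoid dividing by g
        return 1.0f - 2.0f * xi;
    float sqr = (1.0f - g * g) / (1.0f - g + 2.0f * g * xi);
    return (1.0f + g * g - sqr * sqr) / (2.0f * g);
}
```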

Here are some images we rendered with all the techniques in this section:

With equi-angular sampling, each image rendered in less than 10 minutes with spp < 100 and spr = 4 (thanks also to embree3).

Conclusion

With volumetric effects, the empty space between objects no longer feels empty.

Appendix: more implementation details

References