Yifan (York) Liu, yil173@ucsd.edu
Zhongyi Wang, zhw039@ucsd.edu
In this project, we studied mathematical models of participating media and implemented methods to render scenes with volumetric effects.
The `render` folder contains high-quality images before JPG compression.
Why do we see volumetric effects such as fog and god rays in real life? There are tiny particles between objects that alter the trajectory of light. Therefore, one way to model volumetric effects in a physically accurate way is to simulate every particle and how it reflects and refracts light. This is prohibitively expensive, so CG researchers have developed a mathematical model to describe those tiny things in between objects.
Fear not, let us explain all these terms in detail, together with this illustration from CMU's slides.
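For reference, here is the volume rendering equation the illustration accompanies, written in our notation (the slides' symbols may differ slightly). For a ray leaving $\mathbf{x}$ in direction $\omega$ and hitting the nearest surface at $\mathbf{x}_z$, a distance $z$ away:

$$
L(\mathbf{x}, \omega) = \underbrace{T(\mathbf{x}, \mathbf{x}_z)\, L(\mathbf{x}_z, \omega)}_{\text{reduced background radiance}} + \underbrace{\int_0^z T(\mathbf{x}, \mathbf{x}_t)\, \sigma_s\, L_i(\mathbf{x}_t, \omega)\, \mathrm{d}t}_{\text{accumulated in-scattered radiance}}
$$

where $T$ is the transmittance, $\sigma_s$ the scattering coefficient, and $L_i$ the in-scattered radiance.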
Let us assume our path tracer recursively calculates the radiance $L(\mathbf{x}, \omega)$ arriving at a point $\mathbf{x}$ from direction $\omega$. The equation splits this radiance into two terms.
Let us look at the reduced background radiance first. With the participating media in the way, the radiance $L(\mathbf{x}_z, \omega)$ coming off the nearest surface no longer arrives intact: it is attenuated by the transmittance $T(\mathbf{x}, \mathbf{x}_z)$ along the way.
The second term, the accumulated in-scattered radiance, is not that daunting once we realize that at every point on this light path there could be light from other directions joining the path. This integral is over the distance domain: at every distance $t$, the radiance scattered into the ray, $\sigma_s L_i(\mathbf{x}_t, \omega)$, is attenuated by the transmittance $T(\mathbf{x}, \mathbf{x}_t)$ on its way back to $\mathbf{x}$.
We then made several assumptions. In this project, we assume there is a single giant participating medium that shrouds the entire scene, and that this medium has the same volumetric properties at every point in space, i.e., it is homogeneous. These properties are the absorption coefficient $\sigma_a$, the scattering coefficient $\sigma_s$ (together giving the extinction coefficient $\sigma_t = \sigma_a + \sigma_s$), and the phase function.
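One immediate payoff of homogeneity (a standard simplification, written here in our notation) is that the transmittance integral collapses into a plain exponential:

$$
T(\mathbf{x}, \mathbf{x}_t) = \exp\!\Big(-\int_0^t \sigma_t \,\mathrm{d}s\Big) = e^{-\sigma_t t}.
$$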
We can simplify one more term in the rendering equation: the in-scattered radiance. It is assumed to come from lights directly, with no further bounces inside the medium; this is the single-scattering assumption. In this sense, single scattering is similar to direct lighting.
Even with this assumption, the in-scattered radiance is still an integration over the spherical domain.
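Concretely, under the single-scattering assumption the in-scattered radiance at a point $\mathbf{x}_t$ is (again in our notation):

$$
L_i(\mathbf{x}_t, \omega) = \int_{S^2} f_p(\omega, \omega')\, L_d(\mathbf{x}_t, \omega')\, \mathrm{d}\omega'
$$

where $f_p$ is the phase function and $L_d$ is the radiance arriving directly from the lights.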
This spherical integration is similar to BRDF sampling in direct lighting, and it can likewise be replaced by next event estimation (directly enumerating all lights). We explored two methods to do this; see the next sections for more details.
Recall that there are two integrations in the rendering equation, the first over the distance domain, and the second over the spherical domain.
Free-path sampling avoids both integrations by letting the light ray randomly scatter off somewhere in the middle of the participating medium before it hits a surface.
```
tmax = distance to the closest surface
t = sample free-flight distance based on the extinction coefficient

if t < tmax:  # volume interaction
    # for each light: do NEE
    # add indirect lighting based on phase function sampling
else:  # surface interaction
    # just normal path tracer code:
    # NEE or BRDF sampling or both (MIS)
    # add indirect lighting
```
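The free-flight distance sampling step has a closed form for a homogeneous medium. Below is a minimal C++ sketch of that one step, assuming inverse-transform sampling of the exponential distribution; the function names are ours for illustration, not the actual API of our repo.

```cpp
#include <cmath>
#include <random>

// Sample a free-flight distance t with pdf sigmaT * exp(-sigmaT * t)
// via inverse-transform sampling of the exponential distribution.
// Assumes a homogeneous medium with extinction coefficient sigmaT.
float sampleFreeFlight(float sigmaT, std::mt19937 &rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    return -std::log(1.0f - uniform(rng)) / sigmaT;
}

// The pdf of the sample above, used to weight the volume-interaction
// estimate; the surface case (t >= tmax) occurs with probability
// exp(-sigmaT * tmax).
float freeFlightPdf(float sigmaT, float t) {
    return sigmaT * std::exp(-sigmaT * t);
}
```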
We first chose to implement this algorithm because the CMU slides contain very clear pseudo-code. Although we eventually moved away from this algorithm, we kept it in the `freepath` branch of the repo.
Then we produced our first render.
We then tried toggling the volumetric effect on and off on a classroom scene.
The effects are pretty cool, but we were not sure whether our implementation was unbiased, and there is noticeable noise. With more digging, we realized that free-path sampling is capable of rendering multiple scattering, which was not our focus at that point. It is more than what we need, so we decided to switch to another method.
This method is designed for single scattering and is expected to have lower noise.
Again, recall that there are two integrations in the rendering equation, the first over the distance domain, and the second over the spherical domain.
Equi-angular sampling avoids the spherical sampling but still samples the distance. Recall the accumulated in-scattered radiance term in the volume rendering equation. Assuming there is only one spot light with intensity $I$, the radiance in-scattered at distance $t$ falls off with the squared distance $d^2$ between the sample point and the light; this $1/d^2$ falloff is the geometry term. Equi-angular sampling reduces noise by importance sampling the geometry term. It does this by parameterizing the sample point by the angle it subtends at the light rather than by its distance along the ray, which yields a PDF proportional to $1/d^2$ and cancels the geometry term exactly, as shown in the sketch below.
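Here is a minimal C++ sketch of drawing one such sample, following Kulla and Fajardo's equi-angular formulation; the `Vec3` helpers and all names are ours for illustration, not our repo's actual API.

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(Vec3 v) { return std::sqrt(dot(v, v)); }

struct EquiAngularSample { float t; float pdf; };

// Draw one distance t in [0, tMax] along the ray, with pdf proportional
// to 1 / (squared distance to the light), so the geometry term cancels.
EquiAngularSample sampleEquiAngular(Vec3 rayOrig, Vec3 rayDir /* unit */,
                                    Vec3 lightPos, float tMax,
                                    std::mt19937 &rng) {
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    // Signed distance along the ray to the point closest to the light,
    // and the light's distance D to that closest point.
    float delta = dot(lightPos - rayOrig, rayDir);
    float D = length(lightPos - (rayOrig + delta * rayDir));

    // Angles subtended at the light by the segment endpoints 0 and tMax.
    float thetaA = std::atan2(-delta, D);
    float thetaB = std::atan2(tMax - delta, D);

    // Sample the angle uniformly, then map it back to a distance.
    float theta = thetaA + uniform(rng) * (thetaB - thetaA);
    float t = delta + D * std::tan(theta);

    // pdf in distance space; D^2 + (t - delta)^2 is exactly the squared
    // distance from the sample point to the light.
    float pdf = D / ((thetaB - thetaA) * (D * D + (t - delta) * (t - delta)));
    return {t, pdf};
}
```

Averaging several of these samples (each weighted by transmittance, phase function, and light intensity, and divided by `pdf`) estimates the distance integral; the degenerate case where the light lies exactly on the ray ($D = 0$) needs special handling in practice.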
We added a parameter `spr` (samples per ray) to indicate how many such samples should be used. During testing, `spr` around 5 generally yielded much better results than free-path sampling, with comparable runtime. To deal with multiple lights, just add a for-loop (similar to NEE) outside of this sampling procedure.
We also implemented the Henyey-Greenstein phase function, with an `anisotropy` (-1 to 1) parameter added to our path tracer. The higher the anisotropy, the more light is scattered forward.
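For reference, the Henyey-Greenstein phase function with anisotropy $g \in (-1, 1)$ is

$$
f_p(\theta) = \frac{1}{4\pi}\,\frac{1 - g^2}{\big(1 + g^2 - 2g\cos\theta\big)^{3/2}}
$$

where $\theta$ is the angle between the incoming and outgoing directions; $g = 0$ recovers isotropic scattering. A one-function C++ sketch of evaluating it (the function name is ours):

```cpp
#include <cmath>

// Henyey-Greenstein phase function, evaluated for the cosine of the angle
// between incoming and outgoing directions. g in (-1, 1); g = 0 is isotropic.
float henyeyGreenstein(float cosTheta, float g) {
    const float inv4Pi = 0.25f / 3.14159265f;
    float denom = 1.0f + g * g - 2.0f * g * cosTheta;
    return inv4Pi * (1.0f - g * g) / (denom * std::sqrt(denom));
}
```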
Here are some images we rendered with all the techniques in this section:
With equi-angular sampling, each image was rendered in less than 10 minutes with spp < 100 and spr = 4 (thanks also to Embree 3).
With volumetric effects, the empty space between objects no longer feels empty.
Additional commands in the scene file (see the example snippet after this list):

- Volume properties (assuming one big homogeneous volume):
  - `anisotropy`: -1 to 1 (the g parameter in the Henyey-Greenstein phase function)
  - `volumeAbsorption`: 0 to 1
  - `volumeScattering`: 0 to 1
- Path tracer parameters:
  - `spr`: samples per ray for equi-angular sampling
- Geometries:
  - `f` and `vn`: similar to OBJ
- Lights:
  - `quadLight2`: a spot light implementation using a quad light (lookat, lookfrom, angle, scale, color)
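For illustration, a scene file using these commands might contain something like the following; the exact argument order for `quadLight2` is an assumption on our part based on the parameter list above, so check the parser before copying.

```
# homogeneous volume covering the whole scene (illustrative values)
volumeAbsorption 0.05
volumeScattering 0.10
anisotropy 0.3

# spot light built on a quad light; argument order here is illustrative
# (lookat, lookfrom, angle, scale, color)
quadLight2 0 0 0  0 5 0  30  1  1 1 1
```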
Acceleration structure: embree

Compiler support: any C++20 compiler with parallel algorithms (i.e., Apple Clang is not supported).