I am a software developer who is enthusiastic about many subjects, including algorithms, computer graphics, analog/digital electronics, science, and engineering.
I use this space to organize my thoughts on various subjects. These notes may be incomplete or inaccurate, but they may still have value for those willing to exercise a bit of skepticism.
Source code for all demos can be found here.
In my earlier post, I outlined the process for generating an environment map for radiance due to atmospheric scatter (i.e. drawing skies). This isn’t particularly useful on its own, so in the next few posts I’ll explore spherical harmonics lighting.
The primary advantage of spherical harmonics is that they greatly simplify the evaluation of environmental lighting: SH lighting reduces the lighting integral (the convolution of radiance with a BRDF) to a single dot product. Spherical harmonics are also a very compact storage format for radiance, irradiance, occlusion, distribution functions, etc., and an essential building block for more sophisticated indirect lighting algorithms, such as Precomputed Radiance Transfer (PRT) or Light Propagation Volumes.
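To make the dot-product claim concrete, here is a minimal CPU-side sketch in Python (my own illustration, not the demo code; the function names and NumPy dependency are assumptions). It uses the 9-coefficient real SH basis and the clamped-cosine band factors from Ramamoorthi and Hanrahan's irradiance environment map paper:

```python
import numpy as np

def sh_basis(n):
    """Evaluate the 9 real SH basis functions (bands 0-2) at unit vector n."""
    x, y, z = n
    return np.array([
        0.282095,                        # Y_{0,0}
        0.488603 * y,                    # Y_{1,-1}
        0.488603 * z,                    # Y_{1,0}
        0.488603 * x,                    # Y_{1,1}
        1.092548 * x * y,                # Y_{2,-2}
        1.092548 * y * z,                # Y_{2,-1}
        0.315392 * (3.0 * z * z - 1.0),  # Y_{2,0}
        1.092548 * x * z,                # Y_{2,1}
        0.546274 * (x * x - y * y),      # Y_{2,2}
    ])

# Per-band factors for convolving radiance with a clamped cosine lobe
# (Ramamoorthi & Hanrahan, "An Efficient Representation for Irradiance
# Environment Maps").
A = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3 + [np.pi / 4.0] * 5)

def irradiance(L, n):
    """Diffuse irradiance at normal n from SH radiance coefficients L:
    the lighting integral collapses to a single dot product."""
    return float(np.dot(A * L, sh_basis(n)))
```

For a constant (ambient-only) environment, only the first coefficient is nonzero and the resulting irradiance is identical for every normal, as expected.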
The trade-off is a loss of precision for low-order SH representations, along with ringing artifacts. Some common operations also become more difficult in spherical harmonics. Multiplication, for example, becomes considerably more expensive, making it harder to apply visibility to an SH environment or gather occlusion from multiple SH sources.
SH coefficients are obtained by projecting the function onto each basis function (from Wikipedia)
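The original equation did not survive extraction; reconstructed from the standard definition, the projection is

f_l^m = \int_\Omega f\left(\theta,\phi\right) Y_l^{m*}\left(\theta,\phi\right) d\Omega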
Because we are only concerned with real values, we can replace the complex conjugate of the spherical harmonics basis Y_l^{m*}\left(\theta,\phi\right) with the real spherical harmonics Y_{l,m}\left(\theta,\phi\right) to obtain an SH expansion of radiance, L,
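(reconstructed here from the standard real-SH definitions, since the original display equation was lost)

L_{l,m} = \int_\Omega L\left(\theta,\phi\right) Y_{l,m}\left(\theta,\phi\right) d\Omega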
which can be converted to a discrete form and rewritten in terms of a normalized view vector, \boldsymbol{\hat{\omega}},
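A plausible reconstruction of the missing discrete form, as a sum over environment-map pixels, is

L_{l,m} \approx \sum_{\text{pixels}} L\left(\boldsymbol{\hat{\omega}}\right) Y_{l,m}\left(\boldsymbol{\hat{\omega}}\right) \Delta\Omega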
The differential solid angle, d\Omega, is generally defined in spherical coordinates as \sin\theta \, d\theta \, d\phi. However, since we're not working in spherical coordinates, we need an equivalent expression in Cartesian coordinates.
d\Omega represents the area on the surface of a unit sphere subtended by a single pixel. We can approximate the arcs subtended by a pixel in the x- and y-directions by the change in the normalized view vector across that pixel, \boldsymbol{\omega}_x and \boldsymbol{\omega}_y. The solid angle is then approximated by the area of the parallelogram they span,
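Here is a small CPU-side sketch of this approximation in Python (my own illustration; it assumes a +Z cube-map face parameterized by (u, v) in [-1, 1]^2). The change in the normalized view vector across a pixel is estimated with finite differences, and the magnitude of their cross product gives the parallelogram area:

```python
import numpy as np

def view_vector(u, v):
    """Normalized view vector for the +Z cube-map face, (u, v) in [-1, 1]."""
    w = np.array([u, v, 1.0])
    return w / np.linalg.norm(w)

def solid_angle(u, v, du, dv):
    """Approximate the solid angle of the pixel centered at (u, v) by the
    area of the parallelogram spanned by the change in the normalized view
    vector across the pixel in each direction."""
    omega_x = view_vector(u + 0.5 * du, v) - view_vector(u - 0.5 * du, v)
    omega_y = view_vector(u, v + 0.5 * dv) - view_vector(u, v - 0.5 * dv)
    return float(np.linalg.norm(np.cross(omega_x, omega_y)))
```

A useful sanity check: summing solid_angle over every pixel center of a single face should come out close to one sixth of the full sphere, 2\pi/3.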
which makes the full equation
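Substituting the parallelogram area for \Delta\Omega, a reconstruction of the full discrete equation is

L_{l,m} \approx \sum_{\text{pixels}} L\left(\boldsymbol{\hat{\omega}}\right) Y_{l,m}\left(\boldsymbol{\hat{\omega}}\right) \left| \boldsymbol{\omega}_x \times \boldsymbol{\omega}_y \right|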
To implement this on the GPU, we perform the integration for each SH basis function separately.
It may be possible to optimize steps 2 and 3 on some hardware by using generateMipmap() and reading back the highest mip level, multiplying the result by the total number of pixels. Computing the total manually is safer in the general case and allows us to use arbitrary encodings (such as RGBE).
Note: The ‘harmonics’ uniform array will contain a coefficient of one for the current basis function and a value of zero for all others. It may be more efficient to compile a separate program for each basis function.
Additionally, manual calculation of derivatives may be replaced with dFdx() and dFdy() when the OES_standard_derivatives extension is supported.
Potential optimization: If bilinear interpolation is available, we may sample four texels at once by choosing our coordinates carefully and multiplying the sampled value by four (see Efficient Gaussian Blur with Linear Sampling).
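As a rough CPU illustration of why this works (my own sketch, with integer coordinates taken to lie on texel centers): a single bilinear fetch placed exactly between four texels weights each by 1/4, so multiplying the result by four recovers their sum.

```python
import numpy as np

def bilinear(tex, x, y):
    """Bilinear sample of a 2D texture at continuous texel coordinates
    (x, y), where integer coordinates land on texel centers."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * tex[y0, x0] +
            fx * (1 - fy) * tex[y0, x0 + 1] +
            (1 - fx) * fy * tex[y0 + 1, x0] +
            fx * fy * tex[y0 + 1, x0 + 1])

# Sampling at the shared corner of four texels weights each by 1/4,
# so one fetch times four equals the sum of all four texels.
tex = np.arange(16, dtype=float).reshape(4, 4)
four_sum = 4.0 * bilinear(tex, 0.5, 0.5)
```

In a reduction pass this replaces four texture fetches with one, at the cost of requiring bilinear filtering on the source texture's format.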
The following demo shows the radiance from an environment map (above) and its spherical harmonics representation (below). Second order (9-coefficient) spherical harmonics are used. While high-frequency features are lost, the reconstructed lighting should still be sufficient for diffuse illumination.