GDC 2000: Advanced OpenGL Game Development
A Practical and Robust Bump-mapping Technique for Today’s GPUs
are needed to effectively hide the memory latency of the second
dependent texture access. Because the perturbed normals and
the specular cube map will not be oriented in the same
coordinate system, a normal coordinate system transformation is
required. Ideally, this transform should be updated per-pixel to
avoid artifacts. The specular cube map assumes the specular
illumination comes from an infinite environment. This means
local lights, attenuation, and spotlight effects are not possible,
though an unlimited number of directional light sources can be
encoded into a specular cube map. Good pre-filtering of a
specular cube map, particularly for dull surfaces, is expensive.
Likewise, changing the lighting configuration or environment
requires regenerating the cube map. Each material with a
unique shininess exponent requires a differently pre-filtered
specular cube map. Saving the expense of renormalizing the
perturbed normal is only a true advantage if the diffuse
illumination can also be computed with an unnormalized
perturbed normal. The obvious way to accomplish this is with a
second diffuse cube map, particularly if the specular cube map
encodes multiple directional lights. Unfortunately, a diffuse
cube map requires either an additional dependent cube map
texture fetch unit or a second rendering pass.
2.5.3 Bump Map Filtering for Specular Lighting
Both models compute the diffuse contribution based on a
Lambertian model identical to the previous subsection.
Therefore, the previous analysis to justify pre-filtering of
perturbed normals for the diffuse contribution still applies.
However, the exponentiation in the specular contribution makes
a similar attempt to factor L outside the specular dot product
rather dubious. Assuming a discrete collection of n perturbed
normals within a given pixel footprint and equal weighting of
the samples, the perceived specular intensity should be
$$I_{specular} = \frac{1}{n} \sum_{i=1}^{n} \max\left(0,\; \mathbf{N}'_i \bullet \mathbf{H}\right)^{shininess}$$

Equation 19
The conventional interpretation of the exponentiation is that it
models an isotropic Gaussian micro-distribution of normals.
The exponent is often referred to as a measure of the surface’s
roughness or shininess. In the context of bump mapping, this
supposes some statistical bumpiness on a scale smaller than
even the bump map.
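To see concretely why the exponentiation resists factoring, here is a minimal Python sketch of Equation 19, comparing the footprint average of exponentiated dot products against naively exponentiating the averaged, renormalized normal. The sample normals, halfway vector, and shininess value are hypothetical, chosen only for illustration:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

# Hypothetical footprint: two normals tilted opposite ways in x.
normals = [normalize((0.5, 0.0, 1.0)), normalize((-0.5, 0.0, 1.0))]
H = (0.0, 0.0, 1.0)
shininess = 50.0

# Equation 19: average the exponentiated dot products per sample.
eq19 = sum(max(0.0, dot(N, H)) ** shininess for N in normals) / len(normals)

# Naive pre-filtering: average and renormalize first, then exponentiate.
avg = normalize(tuple(sum(c) / len(normals) for c in zip(*normals)))
naive = max(0.0, dot(avg, H)) ** shininess
```

Here the averaged normal aligns exactly with H, so the naive result is a full-strength highlight even though every actual sample's contribution is tiny; exponentiation and averaging simply do not commute.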
Fournier [10] proposed a pre-filtering method for normal maps
that substitutes a small set of “Phong peaks” from a larger
sample of perturbed normals to be filtered. The perceived
specular contribution is then approximated as a weighted sum of
the smaller number of Phong peaks. The approximation is
$$\frac{1}{n} \sum_{i=1}^{n} \max\left(0,\; \mathbf{N}'_i \bullet \mathbf{H}\right)^{shininess} \cong \sum_{j=1}^{m} w_j \max\left(0,\; \mathbf{N}_j \bullet \mathbf{H}\right)^{e_j}$$

Equation 20
where $m$ is the number of Phong peaks, $w_j$ is the weighting
for peak $j$, $\mathbf{N}_j$ is the normalized direction of peak
$j$, and $e_j$ is the exponent for peak $j$. Fournier suggests
fitting the Phong peaks
using an expensive non-linear least-squares approach.
Fournier’s approach is poorly suited for hardware
implementation. Each peak consists of at least four parameters: a
weight, an exponent, and a direction (θ,φ). Fournier also
suggests that as many as seven peaks may be required for
adequate reconstruction and multiple sets of peaks may need to
be averaged for each pixel.
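Evaluating the right-hand side of Equation 20 is straightforward once peaks are fitted; the expense lies in the fitting itself. A minimal Python sketch, using hypothetical peak parameters rather than an actual least-squares fit:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

# Hypothetical fitted Phong peaks (w_j, N_j, e_j); in Fournier's method
# these would come from an expensive non-linear least-squares fit.
peaks = [
    (0.6, normalize((0.2, 0.0, 1.0)), 40.0),
    (0.4, normalize((-0.3, 0.1, 1.0)), 25.0),
]

def specular_from_peaks(peaks, H):
    # Right-hand side of Equation 20: weighted sum over the m peaks.
    return sum(w * max(0.0, dot(N, H)) ** e for w, N, e in peaks)

intensity = specular_from_peaks(peaks, (0.0, 0.0, 1.0))
```

Note that even this cheap reconstruction needs per-texel storage for every (weight, exponent, direction) tuple, which is the hardware cost the text describes.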
Schilling [23] presented another approach that is more amenable
to hardware implementation yet still out of reach for today’s
available hardware resources. Schilling proposes the
construction of a roughness map that encodes the statistical
covariance
of perturbed normals within a given pixel footprint. Schilling’s
approach proposes a considerably more tractable representation
of the normal distribution than Fournier’s more expensive and
comprehensive approach.
The covariance provides enough information to model
anisotropic reflection effects. During rendering, information from the
roughness map is used for anisotropic filtering of a specular
cube map or as inputs to a modified Blinn lighting model. A
more compact roughness scheme is possible by limiting the
roughness map to isotropic roughness.
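The statistic a roughness map stores can be sketched directly; a minimal Python version, assuming the footprint's perturbed normals are available at pre-filtering time (the function name is hypothetical):

```python
def roughness_covariance(normals):
    """3x3 covariance matrix of the perturbed-normal samples within one
    pixel footprint: the statistic Schilling's roughness map encodes.
    An isotropic variant would collapse this to a single scalar."""
    n = len(normals)
    mean = [sum(N[k] for N in normals) / n for k in range(3)]
    return [[sum((N[a] - mean[a]) * (N[b] - mean[b]) for N in normals) / n
             for b in range(3)] for a in range(3)]
```

A flat footprint (identical normals) yields a zero matrix, while normals spread along one axis produce variance only along that axis, which is exactly the directional information that enables anisotropic effects.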
Without a tractable approach to pre-filtering bump maps for
specular illumination given today’s available hardware
resources, we are left with few options. One tractable though
still expensive option is simply to apply Equation 19 for some
fixed number of samples using multiple passes, perhaps using an
accumulation buffer to accumulate and weight each distinct
perturbed normal’s specular contribution.
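The multipass option can be sketched numerically; a minimal Python version, using a scalar accumulator as a stand-in for the accumulation buffer and hypothetical sample normals:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def multipass_specular(perturbed_normals, H, shininess):
    """Evaluate Equation 19 one pass at a time: each pass renders the
    specular term for a single distinct perturbed normal, and the
    accumulation step weights it by 1/n (the analogue of accumulating
    each pass with glAccum(GL_ACCUM, 1.0/n))."""
    n = len(perturbed_normals)
    accum = 0.0
    for N in perturbed_normals:   # one rendering pass per sample
        accum += max(0.0, dot(N, H)) ** shininess / n
    return accum

# Three hypothetical unit-length perturbed normals in one footprint.
samples = [(0.6, 0.0, 0.8), (0.0, 0.6, 0.8), (-0.6, 0.0, 0.8)]
result = multipass_specular(samples, (0.0, 0.0, 1.0), 8.0)
```

The cost is one full rendering pass per distinct perturbed normal, which is why the text calls this tractable but still expensive.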
With no more effective option available, we again consider
pre-filtering the bump map by simply averaging the perturbed
normals within each pixel’s footprint and renormalizing. This is
precisely the approach we rejected for diffuse illumination, but
for specular illumination, evaluating Equation 16 or Equation 17
with an unnormalized normal must be avoided if there is to be
any chance of a bright specular highlight. We can at least
observe that this approach is the degenerate case of Equation 20
where m equals 1, though there is no guarantee that the average
perturbed normal is a remotely reasonable reconstruction of the
true distribution of normals.
The single good thing about this approach is that there is an
opportunity for sharing the same bump map encoding between
the diffuse and specular illumination computations. The diffuse
computation assumes an averaged perturbed normal that is not
normalized while the specular computation requires the same
normal normalized. Both needs are met by storing the
normalized perturbed normal and a descaling factor. The
specular computation uses the normalized normal directly, while
the diffuse computation uses the normalized normal and then
multiplies by the descaling factor to recover the unnormalized
vector needed for the proper diffuse illumination. If
$\mathbf{N}'_{filtered}$ is computed according to Equation 14,
then the normalized version is
$\mathbf{N}'_{filtered} / \left\|\mathbf{N}'_{filtered}\right\|$
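The shared encoding can be sketched in Python; the function names and test vectors below are hypothetical stand-ins for the per-texel storage described above:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(x / m for x in v)

def encode(filtered):
    """Store the normalized averaged perturbed normal plus a descaling
    factor equal to its pre-normalization length."""
    length = math.sqrt(dot(filtered, filtered))
    return normalize(filtered), length

def diffuse(N_norm, descale, L):
    # Multiplying by the descale factor recovers the unnormalized
    # averaged normal that proper filtered diffuse lighting requires.
    return descale * max(0.0, dot(N_norm, L))

def specular(N_norm, H, shininess):
    # The specular term uses the normalized normal directly.
    return max(0.0, dot(N_norm, H)) ** shininess
```

One stored normal thus serves both terms: the diffuse path descales it, the specular path exponentiates it as-is.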