Appeared in Machine Graphics and Vision, Vol. 7, No. 3, 1998
Light Field Rendering of Dynamic Scene
Wei Li, Qi Ke, Xiaohu Huang, and Nanning Zheng
Institute of Artificial Intelligence and Robotics
Xi'an Jiaotong University
Abstract:
Image-based rendering has demonstrated a speed advantage over traditional geometry-based rendering
algorithms. With a four-dimensional light field description, views of a static scene can be generated at interactive
rates on an ordinary computer. The limitation of the scheme is that it can only handle static scenes under fixed
illumination. This paper proposes decomposing the light field into sub-light-fields that do not change as the scene
changes, extending the advantage of light field rendering to dynamic scenes in which the positions and orientations
of objects, lights and the viewpoint can be modified arbitrarily.
The sub-light-fields are the ambient light field and the spot light field. The latter is in fact an eight-dimensional
space; because diffuse reflection is independent of view direction, this paper presents a four-dimensional
representation of the spot light field. Exploiting the linearity of diffuse reflection with respect to different spot
lights, the spot light fields of an object can be represented by its reflection light field under a pure-color light of
unit intensity, which reduces storage and preprocessing. Owing to the coherence of their data structures, the data
of corresponding points in the ambient light field, diffuse light field and depth field are combined into a five-
dimensional vector that can be compressed efficiently with vector quantization. The algorithm given in this
paper accurately computes typical characteristics of a dynamic scene, such as changes in surface color and shadows.
Keywords:
Image-Based Rendering, Ambient Light Field, Spot Light Field, Diffuse Light Field, Depth Field.
1. Introduction:
Traditional geometry-based rendering algorithms must process boundary descriptions of the decomposed
scene (polygon rendering) or samples of spatial functions (volume rendering), together with various lighting
models. When the geometry model is very complicated, rendering an image takes a long time if the scene is to
be displayed as realistically as possible.
In recent years, researchers have become increasingly interested in image-based rendering, which takes
rendered or natural images as input and generates new views through a series of simple operations (such as
memory copies or linear interpolation). These algorithms reduce rendering time compared with traditional
methods, and the rendering time is generally unrelated to the complexity of the scene.
The environment map is representative of early image-based rendering algorithms. An environment map
records the colors of the rays arriving at a point from all directions. QuickTime VR [4], released by Apple Inc.,
is a commercial system based on exactly this idea, in which users can look in any direction in a virtual
environment from a fixed viewpoint.
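As a concrete illustration (ours, not from the paper), an environment map lookup amounts to converting a view direction into image coordinates and reading out the stored color; the lat-long (equirectangular) layout used below is one common choice of parameterization:

```python
import math

def direction_to_latlong(d):
    """Map a unit view direction d = (x, y, z) to (u, v) texture
    coordinates in a lat-long environment map. The layout (azimuth
    along u, polar angle along v, y as 'up') is an illustrative
    assumption, not a convention fixed by the paper."""
    x, y, z = d
    u = (math.atan2(x, -z) / (2.0 * math.pi)) + 0.5    # azimuth -> [0, 1)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi    # polar angle -> [0, 1]
    return u, v

def sample_env_map(env, d):
    """Nearest-neighbour lookup: the color seen from the fixed
    viewpoint along direction d is simply read out of the stored
    image (a list of rows of color tuples)."""
    h, w = len(env), len(env[0])
    u, v = direction_to_latlong(d)
    col = min(w - 1, int(u * w))
    row = min(h - 1, int(v * h))
    return env[row][col]
```

Note that the lookup involves no geometry at all, which is why such methods are fast but confined to a single viewpoint.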
The most serious problem with environment-map-based algorithms is that the viewpoint is fixed. Of the
five degrees of freedom available when a viewer interacts with a scene, only three remain: only the viewing
direction can be changed. To overcome this shortcoming, image composition methods based on image warping
and view interpolation have been put forward [5], [3]. These algorithms require either the depth of each pixel or
predetermined point correspondences between images.
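The role per-pixel depth plays in these methods can be sketched as follows. The pinhole model and the pure-translation camera motion are simplifying assumptions for illustration only; the actual formulations of [3] and [5] differ in detail:

```python
def warp_pixel(u, v, depth, f, t):
    """Forward-warp one pixel from a source view to a new view whose
    camera is translated by t = (tx, ty, tz) relative to the source.
    A simple pinhole camera with focal length f (in pixels) is
    assumed, with image coordinates centered on the principal point."""
    # Back-project the pixel to a 3D point using its stored depth.
    x = u * depth / f
    y = v * depth / f
    z = depth
    # Express the point in the new camera's frame (translation only).
    x2, y2, z2 = x - t[0], y - t[1], z - t[2]
    # Re-project onto the new image plane.
    return f * x2 / z2, f * y2 / z2
```

Without the depth value there is no unique 3D point to reproject, which is why these methods need either depth maps or precomputed correspondences.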
Recently, Marc Levoy and Pat Hanrahan proposed a light field rendering algorithm [1]. At about the same
time, a similar algorithm based on the so-called Lumigraph was presented by Steven Gortler et al. [2]. The
light field is a function that describes the variation of radiance with spatial position and ray direction. It
contains information about the scene from any viewpoint in any direction and makes pixel matching between
images unnecessary. Light field rendering can generate views of static scenes of arbitrary complexity at
interactive rates on low-cost workstations or PCs.
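Under the common two-plane parameterization (one possible choice; the sketch below is ours, not code from [1] or [2]), evaluating the light field for a viewing ray reduces to two ray-plane intersections and a table lookup:

```python
def ray_plane_params(o, d, z_plane):
    """Intersect the ray o + s*d with the plane z = z_plane and
    return the (x, y) coordinates of the hit point."""
    s = (z_plane - o[2]) / d[2]
    return o[0] + s * d[0], o[1] + s * d[1]

def sample_light_field(L, o, d, z_uv=0.0, z_st=1.0):
    """Nearest-neighbour lookup in a discretized two-plane light
    field. L[i][j][k][l] stores the radiance of the ray through grid
    point (i, j) on the uv-plane (z = z_uv) and (k, l) on the
    st-plane (z = z_st); both planes are assumed sampled over the
    unit square. Real systems interpolate quadrilinearly between the
    16 nearest samples; nearest-neighbour keeps the sketch short."""
    u, v = ray_plane_params(o, d, z_uv)
    s, t = ray_plane_params(o, d, z_st)
    def idx(x, n):  # clamp a coordinate in [0, 1] to a grid index
        return max(0, min(n - 1, int(round(x * (n - 1)))))
    i = idx(u, len(L)); j = idx(v, len(L[0]))
    k = idx(s, len(L[0][0])); l = idx(t, len(L[0][0][0]))
    return L[i][j][k][l]
```

Since rendering a view is nothing more than one such lookup per pixel, the cost is independent of scene complexity, which is the source of the speed advantage noted above.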
The obvious shortcoming of the light field rendering methods mentioned above is that only static scenes can
be handled; that is, the positions and orientations of objects and lights are not allowed to change. In a dynamic
scene, however, the rays reflected from an object's surface and the shadow cast on one object by another vary
with the relative positions of objects and lights. In this paper, we present a light field rendering algorithm for
dynamic scenes: the positions and orientations of objects, lights and the viewpoint, as well as the colors of the
lights, can all be changed by users or application programs. The basic idea of the algorithm is to divide the light
field of the whole scene into several sub-light fields that do not change with the scene but only translate or rotate
as a whole.
We propose two kinds of sub-light fields: the ambient light field and the spot light field that are respectively