3/15/03 Page 0.2
3D world coordinate system
The 3D coordinate system that is shared by all the objects in the scene is called the world
coordinate system. By placing every component of the scene in this single shared world, we can
treat the scene uniformly as we develop the presentation of the scene through the graphics display
device to the user. The scene is a master design element that contains both the geometry of the
objects placed in it and the geometry of the lights that illuminate it. Note that the world coordinate
system is often taken to represent the actual dimensions of a scene, because it may be used to
model a real-world environment. Like any real-world scene, this coordinate system exists without
any reference to a viewer; to create an image from the scene, the viewer is added at the next stage.
3D eye coordinate system
Once the 3D world has been created, an application programmer would like the freedom to allow
an audience to view it from any location. But graphics viewing models typically require a specific
position and orientation for the eye at this stage. For example, a system might require that the
eye be at the origin, looking in the –Z direction (or, in some systems, +Z). So the next step in the geometry
pipeline is the viewing transformation, in which the coordinate system for the scene is changed to
satisfy this requirement. The result is the 3D eye coordinate system. We can think of this process
as grabbing the arbitrary eye location and all the 3D world objects and sliding them around to
realign the spaces so that the eye ends up at the proper place and looking in the proper direction.
The relative positions between the eye and the other objects have not been changed; all the parts of
the scene are simply anchored in a different spot in 3D space. Because standard viewing models
may also specify a standard distance from the eyepoint to some fixed “look-at” point in the scene,
there may also be some scaling involved in the viewing transformation. The viewing
transformation is just a transformation in the same sense as modeling transformations, although it
can be specified in a variety of ways depending on the graphics API. Because the viewing
transformation changes the coordinates of the entire world space in order to move the eye to the
standard position and orientation, we can consider the viewing transformation to be the inverse of
whatever transformation placed the eye point in the position and orientation defined for the view.
We will take advantage of this observation in the modeling chapter when we consider how to place
the eye in the scene’s geometry.
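The change of coordinates described above can be sketched directly: given an eye position, a look-at point, and an up direction, re-expressing a world point in eye coordinates amounts to a translation (moving the eye to the origin) followed by a rotation onto the eye's axes. The following is a minimal pure-Python sketch; all the function names are illustrative rather than taken from any particular graphics API.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

def to_eye_coords(p, eye, center, up):
    """Re-express world point p in eye coordinates: the eye sits at the
    origin, looking down -Z, as in the convention described in the text."""
    f = normalize(sub(center, eye))   # forward direction, eye toward look-at
    s = normalize(cross(f, up))       # side (right) direction
    u = cross(s, f)                   # recomputed "true" up direction
    d = sub(p, eye)                   # translate so the eye is at the origin...
    return [dot(s, d), dot(u, d), -dot(f, d)]   # ...then rotate onto eye axes

eye, center, up = [4.0, 3.0, 8.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0]

print(to_eye_coords(eye, eye, center, up))     # the eye itself -> the origin
print(to_eye_coords(center, eye, center, up))  # the look-at point -> on the -Z axis
```

Note how the eye lands at the origin and the look-at point lands on the –Z axis at its original distance from the eye; relative positions are preserved, exactly as described above.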
Clipping
At this point, we are ready to clip the object against the 3D viewing volume. The viewing volume
is the 3D volume that is determined by the projection to be used (see below) and that declares what
portion of the 3D universe the viewer wants to be able to see. The volume is specified by stating
how much of the scene should be visible: the left, right, bottom, top, near, and far
boundaries of that space. Any portions of the scene that lie outside the defined viewing volume
are clipped and discarded. All portions that are inside are retained and passed along to the
projection step. In Figure 0.2, it is clear that some of the world and some of the helicopter lie
outside the viewable space to the left, right, top, or bottom, but note how the front of the image of
the ground in the figure is clipped (made invisible in the scene) because it is too close to the
viewer’s eye. This is a bit difficult to see, but if you look at the cliffs at the upper left of the scene
you will see a clipped edge.
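The accept/reject part of clipping can be illustrated with a simple containment test of a point in eye coordinates against the six boundaries named above. This is only a sketch under the assumption of an axis-aligned volume, not a full clipper, which must also split polygons that straddle a boundary (as the ground in Figure 0.2 is split at the near boundary).

```python
def inside_view_volume(p, left, right, bottom, top, near, far):
    """Trivial accept/reject test for a point in eye coordinates.
    The boundary names follow the text; depth is measured along -Z,
    so a point at z = -5 lies at distance 5 in front of the eye."""
    x, y, z = p
    return (left <= x <= right and
            bottom <= y <= top and
            near <= -z <= far)

# A point squarely in front of the eye is retained...
print(inside_view_volume((0.0, 0.0, -5.0), -1, 1, -1, 1, 1, 10))   # True
# ...but one closer than the near boundary is clipped away,
# like the too-close ground in Figure 0.2:
print(inside_view_volume((0.0, 0.0, -0.5), -1, 1, -1, 1, 1, 10))   # False
```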
Clipping is done as the scene is projected to 2D eye coordinates, as described next under
projections. Besides ensuring that the view includes only the things that should be visible, clipping also
increases the efficiency of image creation because it eliminates some parts of the geometry from the
rest of the display process.