Introduction to 3D Game Programming with DirectX 11
In this part, we study fundamental Direct3D concepts and techniques that are used throughout the rest of this book. With these
fundamentals mastered, we can move on to writing more interesting applications. A brief description of the chapters in this part follows.
Chapter 4, Direct3D Initialization: In this chapter, we learn what Direct3D is about and how to initialize it in preparation for 3D
drawing. Basic Direct3D topics are also introduced, such as surfaces, pixel formats, page flipping, depth buffering, and multisampling.
We also learn how to measure time with the performance counter, which we use to compute the frames rendered per second. In
addition, we give some tips on debugging Direct3D applications. We develop and use our own application framework–not the SDK's framework.
Chapter 5, The Rendering Pipeline: In this long chapter, we provide a thorough introduction to the rendering pipeline, which is the
sequence of steps necessary to generate a 2D image of the world based on what the virtual camera sees. We learn how to define 3D
worlds, control the virtual camera, and project 3D geometry onto a 2D image plane.
Chapter 6, Drawing in Direct3D: This chapter focuses on the Direct3D API interfaces and methods needed to configure the rendering
pipeline, define vertex and pixel shaders, and submit geometry to the rendering pipeline for drawing. The effects framework is also
introduced. By the end of this chapter, you will be able to draw grids, boxes, spheres, and cylinders.
Chapter 7, Lighting: This chapter shows how to create light sources and define the interaction between light and surfaces via
materials. In particular, we show how to implement directional lights, point lights, and spotlights with vertex and pixel shaders.
Chapter 8, Texturing: This chapter describes texture mapping, which is a technique used to increase the realism of the scene by
mapping 2D image data onto a 3D primitive. For example, using texture mapping, we can model a brick wall by applying a 2D brick
wall image onto a 3D rectangle. Other key texturing topics covered include texture tiling and animated texture transformations.
Chapter 9, Blending: Blending allows us to implement a number of special effects like transparency. In addition, we discuss the
intrinsic clip function, which enables us to mask out certain parts of an image from showing up; this can be used to implement fences
and gates, for example. We also show how to implement a fog effect.
Chapter 10, Stenciling: This chapter describes the stencil buffer, which, like a stencil, allows us to block pixels from being drawn.
Masking out pixels is a useful tool for a variety of situations. To illustrate the ideas of this chapter, we include a thorough discussion on
implementing planar reflections and planar shadows using the stencil buffer.
Chapter 11, The Geometry Shader: This chapter shows how to program geometry shaders, which are special because they can create
or destroy entire geometric primitives. Some applications include billboards, fur rendering, subdivisions, and particle systems. In
addition, this chapter explains primitive IDs and texture arrays.
Chapter 12, The Compute Shader: The Compute Shader is a programmable shader Direct3D exposes that is not directly part of the
rendering pipeline. It enables applications to use the graphics processing unit (GPU) for general purpose computation. For example, an
imaging application can take advantage of the GPU to speed up image processing algorithms by implementing them with the compute
shader. Because the Compute Shader is part of Direct3D, it reads from and writes to Direct3D resources, which enables us to integrate
results directly into the rendering pipeline. Therefore, in addition to general purpose computation, the compute shader is still applicable
for 3D rendering.
Chapter 13, The Tessellation Stages: This chapter explores the tessellation stages of the rendering pipeline. Tessellation refers to
subdividing geometry into smaller triangles and then offsetting the newly generated vertices in some way. The motivation to increase
the triangle count is to add detail to the mesh. To illustrate the ideas of this chapter, we show how to tessellate a quad patch based on
distance, and we show how to render cubic Bézier quad patch surfaces.
The initialization process of Direct3D requires us to be familiar with some basic Direct3D types and basic graphics concepts; the first
section of this chapter addresses these requirements. We then detail the necessary steps to initialize Direct3D. After that, a small
detour is taken to introduce accurate timing and the time measurements needed for real-time graphics applications. Finally, we explore
the sample framework code, which is used to provide a consistent interface that all demo applications in this book follow.
Objectives:
1. To obtain a basic understanding of Direct3D’s role in programming 3D hardware.
2. To understand the role COM plays with Direct3D.
3. To learn fundamental graphics concepts, such as how 2D images are stored, page flipping, depth buffering, and multisampling.
4. To learn how to use the performance counter functions for obtaining high-resolution timer readings.
5. To find out how to initialize Direct3D.
6. To become familiar with the general structure of the application framework that all the demos of this book employ.
4.1 Preliminaries
The Direct3D initialization process requires us to be familiar with some basic graphics concepts and Direct3D types. We introduce
these ideas and types in this section, so that we do not have to digress in the next section.
4.1.1 Direct3D Overview
Direct3D is a low-level graphics API (application programming interface) that enables us to render 3D worlds using 3D hardware
acceleration. Essentially, Direct3D provides the software interfaces through which we control the graphics hardware. For example, to
instruct the graphics hardware to clear the render target (e.g., the screen), we would call the Direct3D method
ID3D11DeviceContext::ClearRenderTargetView. Having the Direct3D layer between the application and the graphics hardware
means we do not have to worry about the specifics of the 3D hardware, so long as it is a Direct3D 11 capable device.
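The effect of a render-target clear can be sketched without touching the API at all. The type and function names below are illustrative stand-ins, not Direct3D's; conceptually, a render target is a grid of color elements, and a clear writes one color into every element, which is what ID3D11DeviceContext::ClearRenderTargetView asks the hardware to do.

```cpp
#include <vector>

// Illustrative stand-in for a render target: a width x height grid of RGBA colors.
struct Color { float r, g, b, a; };

struct RenderTarget {
    int width, height;
    std::vector<Color> texels;
    RenderTarget(int w, int h) : width(w), height(h), texels(w * h) {}
};

// Analogous in spirit to ID3D11DeviceContext::ClearRenderTargetView:
// write the clear color into every element of the target.
void ClearRenderTarget(RenderTarget& rt, const Color& clearColor) {
    for (Color& c : rt.texels)
        c = clearColor;
}
```

In the real API, of course, the loop runs on the GPU and the target is a resource view rather than a `std::vector`; the sketch only shows what the operation means.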
A Direct3D 11 capable graphics device must support the entire Direct3D 11 capability set, with few exceptions (some things like
the multisampling count still need to be queried, as they can vary between Direct3D 11 hardware). This is in contrast to Direct3D 9,
where a device only had to support a subset of Direct3D 9 capabilities; consequently, if a Direct3D 9 application wanted to use a
certain feature, it was necessary to first check if the available hardware supported that feature, as calling a Direct3D function not
implemented by the hardware resulted in failure. In Direct3D 11, device capability checking is no longer necessary because it is now a
strict requirement that a Direct3D 11 device implement the entire Direct3D 11 capability set.
4.1.2 COM
Component Object Model (COM) is the technology that allows DirectX to be programming language independent and have backwards
compatibility. We usually refer to a COM object as an interface, which for our purposes can be thought of and used as a C++ class.
Most of the details of COM are hidden to us when programming DirectX with C++. The only thing that we must know is that we
obtain pointers to COM interfaces through special functions or by the methods of another COM interface–we do not create a COM
interface with the C++ new keyword. In addition, when we are done with an interface we call its Release method (all COM interfaces
inherit functionality from the IUnknown COM interface, which provides the Release method) rather than delete it–COM objects
perform their own memory management.
There is, of course, much more to COM, but more detail is not necessary for using DirectX effectively.
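The reference-counting contract behind Release can be illustrated with a toy class. This is not real COM (real interfaces inherit AddRef and Release from IUnknown, and are obtained through factory functions rather than created directly); it is only a sketch of why client code calls Release rather than delete.

```cpp
// A toy class mimicking the COM reference-counting contract (not real COM).
// The object deletes itself when its count reaches zero, which is why
// client code calls Release() instead of using the C++ delete keyword.
class ToyUnknown {
public:
    unsigned long AddRef()  { return ++m_refCount; }
    unsigned long Release() {
        unsigned long count = --m_refCount;
        if (count == 0)
            delete this;            // the object manages its own lifetime
        return count;
    }
private:
    ~ToyUnknown() = default;        // private: an external 'delete p;' will not compile
    unsigned long m_refCount = 1;   // a newly created object starts with one reference
};

// A common idiom in Direct3D code: release a pointer and null it in one step.
template <typename T>
void SafeRelease(T*& p) {
    if (p) { p->Release(); p = nullptr; }
}
```

Making the destructor private enforces the contract at compile time: the only way to destroy the object is through its own Release.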
COM interfaces are prefixed with a capital I. For example, the COM interface that represents a 2D texture is called ID3D11Texture2D.
4.1.3 Textures and Data Resource Formats
A 2D texture is a matrix of data elements. One use for 2D textures is to store 2D image data, where each element in the texture
stores the color of a pixel. However, this is not the only usage; for example, in an advanced technique called normal mapping, each
element in the texture stores a 3D vector instead of a color. Therefore, although it is common to think of textures as storing image
data, they are really more general purpose than that. A 1D texture is like a 1D array of data elements, and a 3D texture is like a 3D
array of data elements. As will be discussed in later chapters, textures are actually more than just arrays of data; they can have
mipmap levels, and the GPU can do special operations on them, such as apply filters and multisampling. In addition, a texture cannot
store arbitrary kinds of data; it can only store certain kinds of data formats, which are described by the DXGI_FORMAT enumerated
type. Some example formats are:
1. DXGI_FORMAT_R32G32B32_FLOAT: Each element has three 32-bit floating-point components.
2. DXGI_FORMAT_R16G16B16A16_UNORM: Each element has four 16-bit components mapped to the [0, 1] range.
3. DXGI_FORMAT_R32G32_UINT: Each element has two 32-bit unsigned integer components.
4. DXGI_FORMAT_R8G8B8A8_UNORM: Each element has four 8-bit unsigned components mapped to the [0, 1] range.
5. DXGI_FORMAT_R8G8B8A8_SNORM: Each element has four 8-bit signed components mapped to the [−1, 1] range.
6. DXGI_FORMAT_R8G8B8A8_SINT: Each element has four 8-bit signed integer components mapped to the [−128, 127] range.
7. DXGI_FORMAT_R8G8B8A8_UINT: Each element has four 8-bit unsigned integer components mapped to the [0, 255] range.
Note that the R, G, B, A letters are used to stand for red, green, blue, and alpha, respectively. Colors are formed as combinations
of the basis colors red, green, and blue (e.g., equal red and equal green makes yellow). The alpha channel or alpha component is
generally used to control transparency. However, as we said earlier, textures need not store color information; for example, the
DXGI_FORMAT_R32G32B32_FLOAT format has three floating-point components and can therefore store a 3D vector with floating-point
coordinates. There are also typeless formats, where we just reserve memory and then specify how to reinterpret the data at a later time
(sort of like a C++ reinterpret_cast) when the texture is bound to the pipeline; for example, the typeless format
DXGI_FORMAT_R8G8B8A8_TYPELESS reserves elements with four 8-bit components, but does not specify the data type (e.g., integer,
floating-point, unsigned integer).
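The UNORM and SNORM mappings above are simple arithmetic and can be sketched directly. The helper functions below are illustrative, not part of DXGI: an 8-bit UNORM component storing the byte b is read back as b/255, while an 8-bit SNORM component storing the signed byte s is read back as max(s/127, −1), so that both −128 and −127 map to −1.0.

```cpp
#include <cstdint>

// How an 8-bit UNORM component (e.g., in DXGI_FORMAT_R8G8B8A8_UNORM)
// maps its raw byte to the [0, 1] range.
float UnormToFloat(std::uint8_t b) {
    return b / 255.0f;
}

// How an 8-bit SNORM component (e.g., in DXGI_FORMAT_R8G8B8A8_SNORM)
// maps its raw signed byte to the [-1, 1] range; -128 and -127 both
// map to -1.0 so the representable range stays symmetric about zero.
float SnormToFloat(std::int8_t s) {
    float v = s / 127.0f;
    return v < -1.0f ? -1.0f : v;
}
```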
4.1.4 The Swap Chain and Page Flipping
To avoid flickering in animation, it is best to draw an entire frame of animation into an off-screen texture called the back buffer. Once
the entire scene has been drawn to the back buffer for the given frame of animation, it is presented to the screen as one complete
frame; in this way, the viewer does not watch as the frame gets drawn–the viewer only sees complete frames. To implement this, two
texture buffers are maintained by the hardware, one called the front buffer and a second called the back buffer. The front buffer
stores the image data currently being displayed on the monitor, while the next frame of animation is being drawn to the back buffer.
After the frame has been drawn to the back buffer, the roles of the back buffer and front buffer are reversed: the back buffer
becomes the front buffer and the front buffer becomes the back buffer for the next frame of animation. Swapping the roles of the
back and front buffers is called presenting. Presenting is an efficient operation, as the pointer to the current front buffer and the
pointer to the current back buffer just need to be swapped. Figure 4.1 illustrates the process.
The front and back buffer form a swap chain. In Direct3D, a swap chain is represented by the IDXGISwapChain interface. This
interface stores the front and back buffer textures, and provides methods for resizing the buffers
(IDXGISwapChain::ResizeBuffers) and presenting (IDXGISwapChain::Present). We will discuss these methods in detail in §4.4.
Using two buffers (front and back) is called double buffering. More than two buffers can be employed; using three buffers is
called triple buffering. Two buffers are usually sufficient, however.
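Why presenting is so cheap can be seen in a small simulation. The names below are illustrative (the real work happens inside IDXGISwapChain::Present); the point is that only two pointers change hands, and no texel data is copied.

```cpp
#include <utility>
#include <vector>

// A minimal simulation of double buffering (names are illustrative).
// "Presenting" swaps the front/back pointers rather than copying texels,
// which is why the operation is efficient.
struct Buffer { std::vector<int> texels; };

struct SwapChainSim {
    Buffer bufferA, bufferB;
    Buffer* front = &bufferA;   // currently displayed on the monitor
    Buffer* back  = &bufferB;   // currently being drawn to

    void Present() { std::swap(front, back); }
};
```

After two presents, the pointers are back where they started, matching the Buffer A/Buffer B cycle described in Figure 4.1.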
Even though the back buffer is a texture (so an element should be called a texel), we often call an element a pixel
because, in the case of the back buffer, it stores color information. Sometimes people will call an element of a
texture a pixel, even if it doesn’t store color information (e.g., “the pixels of a normal map”).
Figure 4.1. From top-to-bottom, we first render to Buffer B, which is serving as the current back buffer. Once the frame is completed, the pointers are
swapped and Buffer B becomes the front buffer and Buffer A becomes the new back buffer. We then render the next frame to Buffer A. Once the frame
is completed, the pointers are swapped and Buffer A becomes the front buffer and Buffer B becomes the back buffer again.
4.1.5 Depth Buffering
The depth buffer is an example of a texture that does not contain image data, but rather depth information about a particular pixel.
The possible depth values range from 0.0 to 1.0, where 0.0 denotes the closest an object can be to the viewer and 1.0 denotes the
farthest an object can be from the viewer. There is a one-to-one correspondence between each element in the depth buffer and each
pixel in the back buffer (i.e., the ijth element in the back buffer corresponds to the ijth element in the depth buffer). So if the back
buffer had a resolution of 1280 × 1024, there would be 1280 × 1024 depth entries.
Figure 4.2. A group of objects that partially obscure each other.
Figure 4.2 shows a simple scene, where some objects partially obscure the objects behind them. In order for Direct3D to
determine which pixels of an object are in front of another, it uses a technique called depth buffering or z-buffering. Let us
emphasize that with depth buffering, the order in which we draw the objects does not matter.
To handle the depth problem, one might suggest drawing the objects in the scene in the order of farthest to nearest.
In this way, near objects will be painted over far objects, and the correct results should be rendered. This is how a
painter would draw a scene. However, this method has its own problems: sorting a large data set in back-to-front
order every frame can be expensive, and it cannot correctly handle intersecting geometry. Besides, the graphics
hardware gives us depth buffering for free.
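The per-pixel rule behind depth buffering can be sketched in a few lines. This simulation is illustrative, not Direct3D code (on real hardware the test runs as part of the pipeline): each pixel keeps the nearest depth seen so far, and a new fragment wins only if it is closer, which is why draw order does not matter.

```cpp
#include <vector>

// A minimal sketch of the depth test (illustrative, not Direct3D code).
// The depth buffer starts at 1.0 (farthest possible depth); a fragment
// overwrites a pixel only if its depth is smaller than the stored depth.
struct FrameBufferSim {
    int width, height;
    std::vector<int>   color;  // back-buffer stand-in
    std::vector<float> depth;  // one depth entry per back-buffer pixel

    FrameBufferSim(int w, int h)
        : width(w), height(h), color(w * h, 0), depth(w * h, 1.0f) {}

    void WriteFragment(int x, int y, float z, int c) {
        int i = y * width + x;
        if (z < depth[i]) {    // depth test: the nearer fragment wins
            depth[i] = z;
            color[i] = c;
        }
    }
};
```

Note that the one-to-one correspondence from the text appears directly in the code: `color` and `depth` have the same size and are indexed identically.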
To illustrate how depth buffering works, let us look at an example. Consider Figure 4.3, which shows the volume the viewer sees and a
2D side view of that volume. From the figure, we observe that three different pixels compete to be rendered onto the pixel P on the
view window. (Of course, we know the closest pixel should be rendered to P because it obscures the ones behind it, but the computer