A 3D Modeller
Erick Dransch
Erick is a software developer and 2D and 3D computer graphics enthusiast. He has worked on video games, 3D
special effects software, and computer aided design tools. If it involves simulating reality, chances are he'd like to
learn more about it. You can find him online at erickdransch.com.
Introduction
Humans are innately creative. We continuously design and build novel, useful, and interesting things. In modern
times, we write software to assist in the design and creation process. Computer-aided design (CAD) software allows
creators to design buildings, bridges, video game art, film monsters, 3D printable objects, and many other things
before building a physical version of the design.
At their core, CAD tools are a method of abstracting the 3-dimensional design into something that can be viewed and
edited on a 2-dimensional screen. To fulfill that definition, CAD tools must offer three basic pieces of functionality.
Firstly, they must have a data structure to represent the object that's being designed: this is the computer's
understanding of the 3-dimensional world that the user is building. Secondly, the CAD tool must offer some way to
display the design on the user's screen. The user is designing a physical object with 3 dimensions, but the computer
screen has only 2 dimensions. The CAD tool must model how we perceive objects, and draw them to the screen in a
way that the user can understand all 3 dimensions of the object. Thirdly, the CAD tool must offer a way to interact with
the object being designed. The user must be able to add to and modify the design in order to produce the desired
result. Additionally, all tools would need a way to save and load designs from disk so that users can collaborate,
share, and save their work.
A domain-specific CAD tool offers many additional features for the specific requirements of the domain. For example,
an architecture CAD tool would offer physics simulations to test climate stresses on the building, a 3D printing tool
would have features that check whether the object is actually valid to print, an electrical CAD tool would simulate the
physics of electricity running through copper, and a film special effects suite would include features to accurately
simulate pyrokinetics.
However, all CAD tools must include at least the three features discussed above: a data structure to represent the
design, the ability to display it to the screen, and a method to interact with the design.
With that in mind, let's explore how we can represent a 3D design, display it to the screen, and interact with it, in 500
lines of Python.
Rendering as a Guide
The driving force behind many of the design decisions in a 3D modeller is the rendering process. We want to be able
to store and render complex objects in our design, but we want to keep the complexity of the rendering code low. Let
us examine the rendering process, and explore the data structure for the design that allows us to store and draw
arbitrarily complex objects with simple rendering logic.
Managing Interfaces and the Main Loop
Before we begin rendering, there are a few things we need to set up. First, we need to create a window to display our
design in. Secondly, we want to communicate with graphics drivers to render to the screen. We would rather not
communicate directly with graphics drivers, so we use a cross-platform abstraction layer called OpenGL, and a library
called GLUT (the OpenGL Utility Toolkit) to manage our window.
A Note About OpenGL
OpenGL is a graphical application programming interface for cross-platform development. It's the standard API for
developing graphics applications across platforms. OpenGL has two major variants: Legacy OpenGL and Modern
OpenGL.
Rendering in OpenGL is based on polygons defined by vertices and normals. For example, to render one side of a
cube, we specify the 4 vertices and the normal of the side.
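For instance (a small numpy sketch, not code from this chapter), the outward normal of a planar face can be computed from its vertices with a cross product; in Legacy OpenGL we would then pass these values through calls like glNormal3f and glVertex3f:

```python
import numpy

def face_normal(vertices):
    """ Unit normal of a planar face, computed from its first three
        vertices, assuming counter-clockwise winding. """
    v0, v1, v2 = [numpy.array(v, dtype=float) for v in vertices[:3]]
    n = numpy.cross(v1 - v0, v2 - v0)
    return n / numpy.linalg.norm(n)

# The front (+z) face of a cube spanning [-1, 1]^3:
front = [(-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]
print(face_normal(front))  # -> [0. 0. 1.]
```

The winding order matters: listing the same vertices clockwise would flip the sign of the normal, which is also why OpenGL can use winding to cull back faces.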
Legacy OpenGL provides a "fixed function pipeline". By setting global variables, the programmer can enable and
disable automated implementations of features such as lighting, coloring, face culling, etc. OpenGL then
automatically renders the scene with the enabled functionality. This functionality is deprecated.
Modern OpenGL, on the other hand, features a programmable rendering pipeline where the programmer writes small
programs called "shaders" that run on dedicated graphics hardware (GPUs). The programmable pipeline of Modern
OpenGL has replaced Legacy OpenGL.
In this project, despite the fact that it is deprecated, we use Legacy OpenGL. The fixed functionality provided by
Legacy OpenGL is very useful for keeping code size small. It reduces the amount of linear algebra knowledge
required, and it simplifies the code we will write.
About GLUT
GLUT, which is bundled with OpenGL, allows us to create operating system windows and to register user interface
callbacks. This basic functionality is sufficient for our purposes. If we wanted a more full-featured library for window
management and user interaction, we would consider using a full windowing toolkit like GTK or Qt.
The Viewer
To manage the setting up of GLUT and OpenGL, and to drive the rest of the modeller, we create a class called Viewer. We use a single Viewer instance, which manages window creation and rendering, and contains the main loop for our program. In the initialization process for Viewer, we create the GUI window and initialize OpenGL.
The function init_interface creates the window that the modeller will be rendered into and specifies the function to be called when the design needs to be rendered. The init_opengl function sets up the OpenGL state needed for the project. It sets the matrices, enables backface culling, registers a light to illuminate the scene, and tells OpenGL that we would like objects to be colored. The init_scene function creates the Scene object and places some initial nodes to get the user started. We will see more about the Scene data structure shortly. Finally, init_interaction registers callbacks for user interaction, as we'll discuss later.
After initializing Viewer, we call glutMainLoop to transfer program execution to GLUT. This function never returns. The callbacks we have registered on GLUT events will be called when those events occur.
# Dependencies from the chapter's full source: PyOpenGL and numpy.
# Scene, Cube, Sphere, SnowFigure, Interaction, and init_primitives
# are all defined later in the chapter.
from OpenGL.GL import *
from OpenGL.GLUT import *
from OpenGL.constants import GLfloat_3, GLfloat_4
import numpy

class Viewer(object):
    def __init__(self):
        """ Initialize the viewer. """
        self.init_interface()
        self.init_opengl()
        self.init_scene()
        self.init_interaction()
        init_primitives()

    def init_interface(self):
        """ initialize the window and register the render function """
        glutInit()
        glutInitWindowSize(640, 480)
        glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
        glutCreateWindow("3D Modeller")
        glutDisplayFunc(self.render)

    def init_opengl(self):
        """ initialize the opengl settings to render the scene """
        self.inverseModelView = numpy.identity(4)
        self.modelView = numpy.identity(4)

        glEnable(GL_CULL_FACE)
        glCullFace(GL_BACK)
        glEnable(GL_DEPTH_TEST)
        glDepthFunc(GL_LESS)

        glEnable(GL_LIGHT0)
        glLightfv(GL_LIGHT0, GL_POSITION, GLfloat_4(0, 0, 1, 0))
        glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, GLfloat_3(0, 0, -1))

        glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE)
        glEnable(GL_COLOR_MATERIAL)
        glClearColor(0.4, 0.4, 0.4, 0.0)

    def init_scene(self):
        """ initialize the scene object and initial scene """
        self.scene = Scene()
        self.create_sample_scene()

    def create_sample_scene(self):
        cube_node = Cube()
        cube_node.translate(2, 0, 2)
        cube_node.color_index = 2
        self.scene.add_node(cube_node)

        sphere_node = Sphere()
        sphere_node.translate(-2, 0, 2)
        sphere_node.color_index = 3
        self.scene.add_node(sphere_node)

        hierarchical_node = SnowFigure()
        hierarchical_node.translate(-2, 0, -2)
        self.scene.add_node(hierarchical_node)

    def init_interaction(self):
        """ init user interaction and callbacks """
        self.interaction = Interaction()
        self.interaction.register_callback('pick', self.pick)
        self.interaction.register_callback('move', self.move)
        self.interaction.register_callback('place', self.place)
        self.interaction.register_callback('rotate_color', self.rotate_color)
        self.interaction.register_callback('scale', self.scale)

    def main_loop(self):
        glutMainLoop()

if __name__ == "__main__":
    viewer = Viewer()
    viewer.main_loop()
Before we dive into the render function, we should discuss a little bit of linear algebra.
Coordinate Space
For our purposes, a Coordinate Space is an origin point and a set of 3 basis vectors, usually the x, y, and z axes.
Point
Any point in 3 dimensions can be represented as an offset in the x, y, and z directions from the origin point. The
representation of a point is relative to the coordinate space that the point is in. The same point has different
representations in different coordinate spaces. Any point in 3 dimensions can be represented in any 3-dimensional
coordinate space.
Vector
A vector is an x, y, and z value representing the difference between two points in the x, y, and z axes, respectively.
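As a quick numpy illustration (not code from this chapter), a vector is the componentwise difference of two points, and adding that vector back to the first point recovers the second:

```python
import numpy

p = numpy.array([1.0, 2.0, 3.0])   # a point
q = numpy.array([4.0, 0.0, 3.0])   # another point

v = q - p   # the vector from p to q

# Adding the vector to the first point recovers the second point.
assert numpy.array_equal(p + v, q)
```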
Transformation Matrix
In computer graphics, it is convenient to use multiple different coordinate spaces for different types of points.
Transformation matrices convert points from one coordinate space to another coordinate space. To convert a vector v from one coordinate space to another, we multiply by a transformation matrix M: v′ = Mv. Some common transformation matrices are translations, scaling, and rotations.
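As a small numpy sketch (the helper name here is an assumption, not part of the chapter's code), a translation can be written as a 4×4 matrix acting on a point in homogeneous coordinates, so that v′ = Mv:

```python
import numpy

def translation_matrix(x, y, z):
    """ 4x4 matrix that translates a homogeneous point by (x, y, z). """
    M = numpy.identity(4)
    M[:3, 3] = [x, y, z]
    return M

# The same translate(2, 0, 2) applied to the sample cube above:
M = translation_matrix(2, 0, 2)
origin = numpy.array([0.0, 0.0, 0.0, 1.0])  # homogeneous point
print(M.dot(origin))  # -> [2. 0. 2. 1.]
```

The homogeneous fourth coordinate is what lets a single matrix multiplication express translation; a plain 3×3 matrix can only scale and rotate.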
Model, World, View, and Projection Coordinate Spaces
Figure 13.1 - Transformation Pipeline
To draw an item to the screen, we need to convert between a few different coordinate spaces.
The right-hand side of Figure 13.1, comprising all of the transformations from Eye Space to Viewport Space, is handled for us by OpenGL.
Conversion from eye space to homogeneous clip space is handled by gluPerspective, and conversion to normalized device space and viewport space is handled by glViewport. These two matrices are multiplied together and stored as the GL_PROJECTION matrix. We don't need to know the terminology or the details of how these matrices work for this project.
We do, however, need to manage the left-hand side of the diagram ourselves. We define a matrix, called the model matrix, which converts points in the model (also called a mesh) from model space into world space. We also define the view matrix, which converts from world space into eye space. In this project, we combine these two matrices to obtain the ModelView matrix.
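To illustrate the composition (a numpy sketch with assumed names; the project itself builds its ModelView matrix through OpenGL calls), multiplying the view matrix by the model matrix yields a single ModelView matrix that maps model-space points directly into eye space:

```python
import numpy

def translation_matrix(x, y, z):
    """ 4x4 homogeneous translation by (x, y, z). """
    M = numpy.identity(4)
    M[:3, 3] = [x, y, z]
    return M

model = translation_matrix(2, 0, 2)    # model space -> world space
view = translation_matrix(0, 0, -10)   # world space -> eye space

modelview = view.dot(model)            # model space -> eye space

p = numpy.array([0.0, 0.0, 0.0, 1.0])
# Applying modelview is the same as applying model, then view:
assert numpy.allclose(modelview.dot(p), view.dot(model.dot(p)))
```

Note the order: because points are multiplied on the right, the model matrix is applied first even though it appears last in the product.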
To learn more about the full graphics rendering pipeline, and the coordinate spaces involved, refer to chapter 2 of
Real Time Rendering, or another introductory computer graphics book.
Rendering with the Viewer
The render function begins by setting up any of the OpenGL state that needs to be done at render time. It initializes
the projection matrix via init_view and uses data from the interaction member to initialize the ModelView matrix
with the transformation matrix that converts from the scene space to world space. We will see more about the
Interaction class below. It clears the screen with glClear, tells the scene to render itself, and then renders the unit grid.
We disable OpenGL's lighting before rendering the grid. With lighting disabled, OpenGL renders items with solid
colors, rather than simulating a light source. This way, the grid has visual differentiation from the scene. Finally,
glFlush signals to the graphics driver that we are ready for the buffer to be flushed and displayed to the screen.
# class Viewer
def render(self):
    """ The render pass for the scene """