Permission to make digital or hard copies of part or all of this work for personal or
classroom use is granted without fee provided that copies are not made or distributed
for commercial advantage and that copies bear this notice and the full citation on the
first page. Copyrights for components of this work owned by others than ACM must be
honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on
servers, or to redistribute to lists, requires prior specific permission and/or a fee.
Request permissions from permissions@acm.org.
VRCAI 2013, November 17 – 19, 2013, Hong Kong.
Copyright © ACM 978-1-4503-2590-5/13/11 $15.00
Layered Depth-of-Field Rendering Using Color Spreading
Kai Yu∗§   Shang Wu§   Bin Sheng§‡   Lizhuang Ma†§
§Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
‡State Key Lab. of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Figure 1: Depth-of-field effects. The left image is the original scene, the middle image shows the DoF effect focused on the foreground, and the right image shows the effect focused on the background.
Abstract
Depth of field (DoF) is the depth range outside of which objects appear blurred. DoF has been widely used by artists in photography and film, and is expected to be applied to realistic image rendering and virtual reality applications in computer graphics. This paper presents a framework that produces DoF effects with high quality and performance. It is a layered method that renders the scene into layers and uses a spreading filter to produce the output directly, reducing memory usage and improving the result compared with other layered methods that use a gathering filter. Depth peeling is used to solve the partial occlusion problem, and is modified to discard insignificant occluded pixels so as to reduce the number of occluded layers. The framework also eliminates artifacts such as intensity leakage and depth discontinuity, and is GPU-friendly.
CR Categories: I.3.3 [Computer Graphics]: Image Generation—
Display algorithms;
Keywords: depth of field, layer, post-processing, GPU
1 Introduction
In most computer applications, renderers use a single camera to render
the whole scene, so all objects in the scene appear sharp because of the
pinhole camera model, which assumes that all light rays travel through
one point before hitting the image plane [Schedl and Wimmer 2012].
But in the real world, cameras, and also human eyes, have finite-aperture
lenses, which allow light rays to travel along more paths than the one
through the center of the lens. This results in a phenomenon called
focal blur, meaning that the
∗email: wsrlyk@gmail.com
†email: ma-lz@cs.sjtu.edu.cn
objects in the scene appear sharp only when they lie within a certain
range of depth, called the depth of field (DoF), while other objects
are blurred, as shown in Figure 1. Focal blur has been widely used
in photography and film, and has become a key tool for directing
viewers' attention and hiding the objects artists do not want to show.
In order to render scenes with DoF effect in real-time applications
such as games and virtual reality (VR) systems, a fast method is
needed. The main idea of most real-time approaches is to blur
the rendered scene with different radii according to the circle
of confusion (CoC) [Potmesil and Chakravarty 1981], calculated
from the depth map. These methods are usually called post-processing
methods. But many such methods suffer from artifacts, for instance
intensity leakage and partial occlusion. Intensity leakage occurs
when the background is simply blurred using in-focus pixels, while
partial occlusion is caused by the fact that a lens with a finite
aperture allows more of the scene to be visible than would be seen
through an infinitesimal pinhole [Barsky et al. 2005].
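The per-pixel CoC mentioned above follows from the thin-lens model; a minimal sketch of one common form of the computation (function and parameter names are illustrative assumptions, not from the paper):

```python
def coc_diameter(z, z_focus, f, aperture):
    """Thin-lens circle-of-confusion diameter for an object at depth z,
    given focal length f, aperture diameter `aperture`, and focus depth
    z_focus; all distances are in the same units."""
    return aperture * f * abs(z - z_focus) / (z * (z_focus - f))
```

A pixel at the focus depth gets a zero-diameter CoC and stays sharp; the diameter, and hence the blur radius, grows as the depth moves away from the focal plane.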
This paper proposes an approach to simulate the DoF effect in
3D scenes with GPU acceleration. In this approach, the scene is
rendered into layers, which are generated using a spreading
filter [Kosloff et al. 2009]. To obtain the occluded pixels, depth
peeling [Everitt 2001] is used when rendering the scene, thus
removing partial occlusion artifacts. Additional measures are taken
to overcome the problems caused by splitting one object into two
or more layers. Finally, all layers are blended from back to front.
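The distinction between spreading and gathering can be illustrated with a small CPU sketch: each source pixel scatters its color over its own CoC footprint, and accumulated weights normalize the result. This is a simplified grayscale illustration of the general scatter idea, not the paper's GPU implementation:

```python
def spread_blur(image, radii):
    """Scatter ("spreading") blur: every source pixel distributes its
    color over a square of half-width radii[y][x]; accumulated weights
    normalize the result. image: 2-D list of floats (grayscale)."""
    h, w = len(image), len(image[0])
    acc = [[0.0] * w for _ in range(h)]
    wgt = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r = radii[y][x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            share = 1.0 / ((y1 - y0) * (x1 - x0))  # energy-conserving split
            for yy in range(y0, y1):
                for xx in range(x0, x1):
                    acc[yy][xx] += image[y][x] * share
                    wgt[yy][xx] += share
    return [[acc[y][x] / wgt[y][x] for x in range(w)] for y in range(h)]
```

Note that the blur radius is driven by the *source* pixel's CoC, which is what distinguishes spreading from gathering, where each destination pixel averages neighbors using its own radius.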
The contribution of this paper is a layered DoF framework
that achieves high quality and performance by spreading
fragments to build the scene layers, leading to lower memory cost
than methods that store the layers for blurring. In addition, a
modified depth peeling is proposed to reduce the number of depth
layers.
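As a rough CPU analogue of the modified depth peeling summarized above, one can sort a pixel's fragments by depth and keep only the nearest few layers, discarding deeper occluded fragments. The paper's actual GPU significance test is more involved; `max_layers` is an assumed parameter for illustration:

```python
def peel_fragments(fragments, max_layers=3):
    """fragments: unsorted list of (depth, color) covering one pixel.
    Return the max_layers nearest fragments in front-to-back order,
    discarding deeper occluded fragments as insignificant."""
    return sorted(fragments, key=lambda f: f[0])[:max_layers]
```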
2 Previous work
DoF algorithms can be divided into two categories: object-space
algorithms and image-space algorithms [Barsky and Kosloff 2008].
Object-space algorithms operate directly on the 3D scene, and build
DoF effects directly into the rendering pipeline. The most classic
object-space method is distributed ray tracing [Cook et al. 1984],
which traces several rays per pixel to