Large-field 3D Imaging Using an Array of Gradually-changed FOV Cameras
Yan Wang, Lidong Chen* and Liang Bai
Science and Technology on Information Systems Engineering Laboratory
National University of Defense Technology
Changsha, China
nudtdong11@163.com
Abstract—Large-field image stitching and 3D reconstruction are two hot topics in computer vision. Generally, only a small overlapped area is required for large-field image stitching. For multi-view 3D reconstruction, on the other hand, the cameras usually share almost the same field of view, which guarantees enough overlap to compute the 3D information of the target scene. To achieve both purposes simultaneously with a single imaging device, we propose a novel camera-array structure that combines multiple cameras with gradually-changed fields of view. Based on the proposed camera array, we first calculate the depth of points in 3D space, and then generate a panoramic image with a large field of view that takes the depth information into account. A prototype imaging system based on the novel camera-array structure is built, and real-scene imaging experiments are conducted to demonstrate the feasibility and effectiveness of our method.
Keywords—camera array with gradually-changed FOV; image
stitching; multi-view 3D reconstruction; light-field imaging
I. INTRODUCTION
Since the emergence of cameras, camcorders and other imaging equipment, higher resolution and a larger field of view have long been two main objectives. Unfortunately, these two goals often contradict each other. A single camera has a limited field of view and cannot capture enough imaging information in many realistic applications [1]. Panoramic imaging is an effective solution, which stitches multiple images to generate a panorama with a much larger field of view [2]. Moreover, with the development of digital image processing technology, innovations in imaging-structure design and the emergence of light-field imaging, acquiring and displaying only two-dimensional information no longer satisfies many applications; three-dimensional (3D) information is usually expected. Compared with conventional imaging in two-dimensional space, the 3D structure of the spatial scene is much more useful, enabling 3D object recognition, robot visual navigation and virtual-space reconstruction.
To break the limitations on the physical resolution and field of view of a single image sensor, a common solution is to combine multiple cameras into a camera array with a specific structure [3]. There are two main imaging methods using camera arrays, according to their application backgrounds:
1) Multi-camera stitching imaging
Multi-camera stitching imaging uses a number of cameras oriented in different directions, in which adjacent cameras share only a small overlapped field of view, to capture scene information from different directions and produce a panoramic image with a large FOV [4]. Typical devices include the PARASCAN panoramic camera launched by Honeywell of the United States, and the ARGUS-IS high-altitude aerial ground-reconnaissance imaging system developed by the United States Defense Advanced Research Projects Agency (DARPA) [5]. However, because the camera viewpoints in a multi-camera mosaic imaging system deviate from one another, the optical centers of the cameras cannot coincide completely [6]. When obvious depth changes exist, the assumption that the entire scene lies in a plane of relatively fixed depth no longer holds. No matter how precise the camera calibration and image registration are, stitching cracks are theoretically inevitable.
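The depth dependence of these stitching cracks can be made concrete with a simple parallax calculation (an illustrative sketch, not part of the paper's method; the focal length, baseline, and depths below are assumed values). Under the pinhole model, if two cameras whose optical centers are separated by a baseline b are aligned with a homography calibrated for a reference depth Z0, a point at any other depth Z is left with a residual misalignment of roughly f·b·(1/Z − 1/Z0) pixels, where f is the focal length in pixels:

```python
# Illustrative sketch: residual parallax between two cameras whose
# optical centers are a baseline b apart, after alignment with a
# homography calibrated for a reference depth Z0. The misalignment
# vanishes only when b = 0 (coincident centers) or Z == Z0 (planar
# scene at the reference depth).

def parallax_error_px(f_px: float, baseline_m: float,
                      z_ref_m: float, z_m: float) -> float:
    """Pixel misalignment of a point at depth z_m after planar
    alignment calibrated for depth z_ref_m (pinhole model)."""
    return f_px * baseline_m * (1.0 / z_m - 1.0 / z_ref_m)

if __name__ == "__main__":
    f_px = 1000.0   # focal length in pixels (assumed)
    b = 0.05        # 5 cm baseline between optical centers (assumed)
    z0 = 10.0       # homography calibrated for a 10 m scene depth
    for z in (2.0, 5.0, 10.0, 50.0):
        print(f"depth {z:5.1f} m -> misalignment "
              f"{parallax_error_px(f_px, b, z0, z):+6.1f} px")
```

With these numbers, a foreground object at 2 m is displaced by 20 pixels while the 10 m background aligns perfectly, which is why no single homography can stitch a scene with obvious depth variation without cracks.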
2) Light-field imaging based on camera array
A common method of light-field imaging is to take multiple images of the same scene to obtain abundant light-field information [7][8][9]. There are two main approaches: one is to design a compact arrangement of cameras, all of which have almost fully overlapped fields of view; the other places all the cameras on a large spherical distribution surrounding the target, capturing different perspectives of the target from different directions. B. Wilburn studied light-field imaging based on camera arrays and proposed a variety of spatial arrangements for them [10][11]. Moreover, the MIT 64-camera array [12], the 3D Room [13] in the Carnegie Mellon Virtual Reality Project, and the multi-view dynamic-scene video capture platform developed by Li and Dai [14] are also successful prototypes. However, all of the above imaging systems have a very limited 3D field of view and cannot capture light-field information of large-scale scenes.
A novel camera-array structure is proposed in this paper, which realizes large-field 3D reconstruction and high-resolution panoramic imaging simultaneously. Multiple cameras with gradually-changed fields of view are combined in the array and regularly fixed in both the horizontal and vertical directions. The overlapped fields of view between adjacent cameras in the two directions mean that the same spatial point may
2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Banff Center, Banff, Canada, October 5-8, 2017
978-1-5386-1644-4/17/$31.00 ©2017 IEEE 630