Navigation for Autonomous MAVs in Complex GNSS-denied Environments
navigate safely in the vicinity of obstacles. Our work extends this safety layer with a
deliberative planning layer based on local maps and navigation targets.
Heng et al. (2014) use a multiresolution grid map to represent the surroundings
of a quadrotor. A feasible plan is generated with a vector field histogram. Schmid
et al. (2014) autonomously navigate to user-specified waypoints in a mine. The map
used for planning is created by an onboard stereo camera system. By using rapidly
exploring random belief trees (RRBT), Achtelik et al. (2014) plan paths that not only
avoid obstacles, but also minimize the uncertainty of the state estimate.
Recent search-based methods for obstacle-free navigation include the work of MacAllister
et al. (2013), who use A* search to find a feasible path in a four-dimensional grid
map incorporating the asymmetric shape of the MAV. Cover et al. (2013) also use
a search-based method. These methods assume complete knowledge of the scene
geometry—an assumption that we do not make here.
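The core of such grid-based search can be illustrated compactly. The following sketch runs A* on a 2D occupancy grid; it is only an illustration of the search step, not the cited implementation, which searches a four-dimensional state space that accounts for the MAV's asymmetric footprint.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied).

    Illustrative only: the cited work searches a 4D grid and checks the
    vehicle footprint; here a cell is either free or occupied.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tie-breaker so heap entries never compare positions/parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a cheaper cost
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
    return None  # goal unreachable
```

Extending this to the four-dimensional case mainly changes the state tuple, the neighbor set, and the collision check, not the search loop itself.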
Creating 3D environment representations through SLAM was first investigated with
mobile ground robots (Nuechter et al., 2005; Magnusson et al., 2007). A central
research topic in SLAM with 3D laser scanners is how to simultaneously maintain
high run-time performance and low memory consumption. While many methods
assume that the robot stands still during 3D scan acquisition, some approaches also
integrate scan lines of a continuously rotating laser scanner into 3D maps while the
robot is moving (Bosse and Zlot, 2009; Elseberg et al., 2012; Stoyanov and Lilienthal,
2009; Maddern et al., 2012; Anderson and Barfoot, 2013).
Takahashi et al. (2008) also build environment maps with a moving 3D laser
scanner. They localize the robot using GPS and IMU sensors. Thrun et al. (2003)
realized a 3D mapping system with a rigidly mounted 2D laser scanner on a heli-
copter. The laser scanner measures in a vertical plane perpendicular to the flight
direction. In order to localize the helicopter, measurements from GPS and IMU are
fused and consecutive 2D scans are registered, assuming scan consistency in flight
direction. Our approach does not rely on such a consistency assumption.
Shen et al. (2014) estimate the MAV state in combined indoor and outdoor flights by
fusing different sensor modalities. A 2D laser scanner is used for indoor localization.
In contrast, our laser-based localization can estimate the complete 6D pose relative
to a map coordinate frame.
Many approaches to 3D mapping use voxels as the smallest map element, e.g.,
Ryde and Hu (2010), who use voxel lists for efficient neighbor queries.
Similar to our approach, the 3D-NDT (Magnusson et al., 2007) represents point
clouds as Gaussian distributions in voxels at multiple resolutions. Our local mul-
tiresolution surfel grids adapt the maximum resolution with distance to the sensor
to match the measurement characteristics. Moreover, our registration method aligns
3D scans on all resolutions concurrently.
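The Gaussian-per-voxel idea underlying both 3D-NDT and surfel grids can be sketched briefly. The snippet below summarizes a point cloud as per-voxel means and covariances at a single fixed resolution; the function name and binning scheme are illustrative, and the distance-dependent multiresolution scheme of the actual maps is omitted for brevity.

```python
import numpy as np
from collections import defaultdict

def build_surfel_grid(points, resolution):
    """Summarize a point cloud as per-voxel Gaussians ("surfels").

    points: (N, 3) array of 3D points; resolution: voxel edge length.
    Returns {voxel_index: (count, mean, covariance)}.
    Illustrative sketch only: a single fixed resolution is used, whereas
    local multiresolution maps coarsen the grid with sensor distance.
    """
    bins = defaultdict(list)
    for p in points:
        idx = tuple(np.floor(p / resolution).astype(int))
        bins[idx].append(p)

    surfels = {}
    for idx, pts in bins.items():
        pts = np.asarray(pts)
        mean = pts.mean(axis=0)
        # The covariance captures the local surface shape inside the voxel;
        # it needs a few points to be meaningful.
        cov = np.cov(pts, rowvar=False) if len(pts) >= 3 else np.zeros((3, 3))
        surfels[idx] = (len(pts), mean, cov)
    return surfels

# Example: noisy samples of a horizontal plane z ≈ 0
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 2, 500),
                       rng.uniform(0, 2, 500),
                       rng.normal(0, 0.01, 500)])
grid = build_surfel_grid(pts, resolution=0.5)
```

For registration, two such maps can be aligned by matching surfels and minimizing a distance between the associated Gaussians, which is far cheaper than matching raw points.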
3 System Setup and Overview
Our MAV is an octorotor with a coaxial arrangement of rotors (see Fig. 1). This
yields a compact flying platform that can carry multiple sensors and a fast onboard
computer (Intel Core i7-3820QM, 2.7 GHz). For sensor data processing and navigation
planning, we employ the Robot Operating System (ROS) by Quigley et al. (2009) as
middleware. For low-level velocity and attitude control, the MAV is equipped with
a PIXHAWK Autopilot flight control unit (Meier et al., 2012). To allow for safe