Edge Computing in 5G for Drone Navigation: What to Offload?

Samira Hayat¹, Roland Jung², Hermann Hellwagner¹,², Christian Bettstetter²,³, Driton Emini⁴, and Dominik Schnieders⁴
Abstract—Small drones that navigate using cameras may be limited in their speed and agility by low onboard computing power. We evaluate the role of edge computing in 5G for such autonomous navigation. The offloading of image processing tasks to an edge server is studied with a vision-based navigation algorithm. Three computation modes are compared: onboard, fully offloaded to the edge, and partially offloaded. Partial offloading is expected to pose lower demands on the communication network in terms of transfer rate than full offloading but requires some onboard processing. Our results on computation time help select the most suitable mode for image processing, i.e., whether and what to offload, based on the network conditions.
Keywords—Aerial Systems: Perception and Autonomy, Autonomous Vehicle Navigation, Vision-Based Navigation.

Manuscript received October 15, 2020; revised January 11, 2021; accepted February 8, 2021. This paper was recommended for publication by Editor Pauline Pounds upon evaluation of the Associate Editor and Reviewers' comments. This work was partially supported by Magenta Telekom (T-Mobile Austria GmbH) and Deutsche Telekom AG, Germany, as well as by the University of Klagenfurt (Karl Popper Kolleg NAV), Austria.

¹Samira Hayat (samira.hayat@aau.at) and Hermann Hellwagner are with the Institute of Information Technology (ITEC), University of Klagenfurt.
²Hermann Hellwagner, Roland Jung, and Christian Bettstetter are members of the Karl Popper Kolleg on Networked Autonomous Aerial Vehicles (NAV), University of Klagenfurt.
³Christian Bettstetter is with the Institute of Networked and Embedded Systems (NES), University of Klagenfurt.
⁴Driton Emini and Dominik Schnieders are with Deutsche Telekom AG.
I. INTRODUCTION

Autonomy is desirable in many robot systems. In terms of navigation, it requires onboard perception and computation. The concept of self-driving cars has spurred research in autonomous navigation, with improvements in sensor accuracy, perception algorithms, high onboard processing power, and low-latency communication being topics of interest. The existing solutions remain infeasible for unmanned aerial vehicles (UAVs), or drones, as explained in the following.
Navigation autonomy in drone systems is essential in applications employing multiple drones, such as disaster response [1]. Small multicopter drones weighing less than a kilogram are desirable candidate platforms due to their high maneuverability; both moments of inertia and angular acceleration scale with the characteristic dimension [2]. This maneuverability opens up opportunities in the exploration and mapping of obstacle-ridden environments (e.g., flight through narrow vertical gaps [3]), but the small size and payload capacity limit the onboard compute power. High-speed vision-based
navigation techniques targeting autonomous driving fail to meet the challenges posed by small drone systems: high acceleration and 3D mobility coupled with low capabilities in terms of payload and onboard computation [4]. Existing state estimation solutions cannot yet handle the six-degree-of-freedom trajectories with high linear and angular velocities that arise in high-speed navigation, such as drone racing. The state-of-the-art methods deployed in practice use simulations to learn control and perception policies for navigation [5], [6]. Such learning-based solutions require extensive simulations and, in most cases, can only tackle static scenarios. For accurate trajectory generation in a specific scene with moving obstacles, training on multiple static scenes (with different obstacle placements) is needed to capture the dynamics. Higher speed compromises navigation accuracy due to the limited onboard computation capabilities [6].
This letter explores the role of edge computing in facilitating high-speed vision-based autonomous navigation. We study offloading the computationally intensive estimation of poses from sensor measurements to an edge server. Such offloading requires a communication link with low latency and high throughput to achieve real-time operation. New possibilities arise with standardization activities in the Third Generation Partnership Project (3GPP) that tackle the integration of drones into cellular networks [7]. The promised latency and throughput figures are sufficient to counter the mentioned challenges. A drone may respond to its environment in near real-time by sending the onboard-generated sensor information to an edge server and receiving the corresponding state estimates and control commands.
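To make the link requirements concrete, the sketch below estimates the per-frame uplink volume of each offloading mode and picks the least network-demanding mode that a given link can sustain. This is a minimal illustration, not the selection procedure evaluated in this letter: the frame size, feature count, bytes per feature, frame rate, and latency budget are all assumed values chosen for the example.

```python
# Minimal sketch (illustrative assumptions, not the paper's method):
# estimate per-frame uplink volume of each offloading mode and pick
# the least network-demanding mode the current link can sustain.

from dataclasses import dataclass


@dataclass
class NetworkState:
    uplink_mbps: float  # measured uplink throughput (Mbit/s)
    rtt_ms: float       # measured round-trip time (ms)


IMG_BYTES = 640 * 480 * 1     # one raw grayscale VGA frame (assumption)
FEATURES_PER_FRAME = 200      # tracked features per frame (assumption)
BYTES_PER_FEATURE = 16        # id + 2D coordinates + metadata (assumption)
FPS = 30                      # camera frame rate (assumption)
DEADLINE_MS = 33              # per-frame control latency budget (assumption)


def required_uplink_mbps(payload_bytes: int) -> float:
    """Uplink rate needed to ship one payload of this size per frame."""
    return payload_bytes * 8 * FPS / 1e6


def select_mode(net: NetworkState) -> str:
    """Prefer the mode with the least onboard work that the link supports."""
    if net.rtt_ms > DEADLINE_MS:
        return "no_offloading"       # edge round trip misses the deadline
    if net.uplink_mbps >= required_uplink_mbps(IMG_BYTES):
        return "full_offloading"     # ship raw frames; minimal onboard work
    if net.uplink_mbps >= required_uplink_mbps(
            FEATURES_PER_FRAME * BYTES_PER_FEATURE):
        return "partial_offloading"  # track features onboard; ship features
    return "no_offloading"


print(select_mode(NetworkState(uplink_mbps=100.0, rtt_ms=10.0)))  # full_offloading
print(select_mode(NetworkState(uplink_mbps=5.0, rtt_ms=10.0)))    # partial_offloading
```

Under these assumed numbers, full offloading of raw VGA frames at 30 fps requires roughly 74 Mbit/s of uplink, whereas shipping 200 tracked features per frame requires less than 1 Mbit/s; this is why partial offloading relaxes the network requirements at the cost of onboard feature tracking.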
We evaluate the benefits of using edge servers to enable real-time vision-based autonomous navigation of drones. To this end, we use experimental results on 5G drone connectivity [8] and profiling results of a standard monocular Visual-Inertial Odometry (VIO) algorithm. Three modes are compared: no offloading, partial offloading, and full offloading of the image processing tasks to the edge server. With full offloading, the edge server executes the entire image processing pipeline, which requires the transfer of the full images to the server, resulting in a low onboard computation burden and high communication demands. With partial offloading, image features are detected and tracked onboard; the drone then transfers the features to the edge server, thus offloading the remaining image processing. This requires more onboard computation than full offloading but places lower demands on communication. In the no-offloading mode, all computations are performed onboard the drone. The three modes are compared with