Design for Efficient and Robust Autonomous Operation: The OKVIS Visual-Inertial SLAM Algorithm

This doctoral thesis, "Design and Algorithms for Efficient and Robust Autonomous Operation", addresses visual-inertial (VI) simultaneous localization and mapping (SLAM) in the context of unmanned solar airplanes. Its core contribution is OKVIS, an open keyframe-based visual-inertial SLAM estimator that fuses visual cues with inertial measurement unit (IMU) data to deliver accurate state estimation and environment perception.

OKVIS integrates landmark reprojection errors and inertial measurements through non-linear optimization, bounded by marginalization. The marginalization strategy relies on partial linearization and variable elimination, which keeps the optimization problem at a manageable size even when the time intervals being covered are long. This is essential for efficient and robust autonomous operation: it continuously provides stable, accurate position and attitude estimates in dynamic environments, which is critical for the navigation and obstacle avoidance of unmanned aerial vehicles.

The author, Stefan Leutenegger, holds a master's degree in mechanical engineering with a focus on robotics from ETH Zurich, where he submitted this doctoral thesis in 2014. He was supervised and supported by Prof. Roland Siegwart, Prof. Gerd Hirzinger, and Dr. Kurt Konolige. The thesis demonstrates both theoretical work in autonomous systems and a deep understanding of practical applications, in particular innovative work on autonomous UAV operation.

The significance of this research lies in pushing the state of the art of visual-inertial SLAM for unmanned aerial vehicles, providing key algorithms and technical support for efficient, reliable autonomous flight. For subsequent researchers and practitioners it offers valuable methodology and practical experience, advancing the ability of UAVs to explore and operate autonomously in complex environments.
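The "partial linearization and variable elimination" behind marginalization can be sketched via the Schur complement on the Gauss-Newton normal equations: eliminating the old states from the linearized system leaves a smaller system over the remaining states that encodes the same information. The following numpy sketch is illustrative only (the function name, the dense-matrix setting, and the assumption that the marginalized states occupy the leading block are all simplifications, not OKVIS's actual implementation):

```python
import numpy as np

def marginalize(H, b, m):
    """Eliminate the first m states from a Gauss-Newton system H x = -b
    using the Schur complement, returning the reduced system (H', b')
    over the remaining states."""
    Hmm, Hmr = H[:m, :m], H[:m, m:]
    Hrm, Hrr = H[m:, :m], H[m:, m:]
    bm, br = b[:m], b[m:]
    Hmm_inv = np.linalg.inv(Hmm)        # block to be eliminated must be invertible
    H_red = Hrr - Hrm @ Hmm_inv @ Hmr   # Schur complement of Hmm in H
    b_red = br - Hrm @ Hmm_inv @ bm
    return H_red, b_red
```

Solving the reduced system yields exactly the same estimate for the remaining states as solving the full linearized system, which is why marginalization keeps the window bounded without discarding the information contributed by old states.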
Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate Visual-Inertial Odometry or Simultaneous Localization and Mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that non-linear optimization offers superior accuracy, while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly-coupled filtering-based visual-inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
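The probabilistic cost function described in the abstract, reprojection errors of landmarks plus linearized inertial terms between keyframes, each weighted by its information matrix, amounts to a sum of Mahalanobis-weighted squared residuals. A minimal sketch of evaluating such a cost, with illustrative names and a dense representation that are assumptions rather than the paper's code:

```python
import numpy as np

def combined_cost(reproj_terms, inertial_terms):
    """Evaluate J(x): the sum of Mahalanobis-weighted squared residuals of
    visual reprojection errors and inertial error terms between consecutive
    keyframes. Each term is a pair (e, W): residual vector and information
    (inverse-covariance) matrix."""
    return (sum(e @ W @ e for e, W in reproj_terms)
            + sum(e @ W @ e for e, W in inertial_terms))
```

A non-linear least-squares solver (e.g. Gauss-Newton or Levenberg-Marquardt) would minimize this cost over the keyframe poses, speeds and IMU biases, and landmark positions within the bounded optimization window.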