Nonlinear Optimization for Keyframe-Based Visual-Inertial SLAM

This document, titled "Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization", was published in the February 2014 issue of The International Journal of Robotics Research. The work presents an advanced visual-inertial simultaneous localization and mapping (Visual-Inertial SLAM) algorithm that uses nonlinear optimization to improve the robustness and accuracy of localization with both monocular and stereo cameras.

Core points:

1. **Keyframe-based SLAM:** New keyframes are created only when the sensor data changes significantly (e.g., through motion or parallax). This saves computational resources and reduces the influence of noise, while nonlinear optimization lets the system handle more complex mathematical models and provides more accurate estimates.

2. **Data fusion:** OKVIS (Open Keyframe-based Visual-Inertial SLAM) adopts a tightly coupled fusion strategy: measurements from the visual sensor and the inertial measurement unit (IMU) are integrated jointly, which reduces the limitations of linearization and improves overall performance.

3. **Cost-function design:** In the batch optimization stage, the system combines IMU error terms and visual reprojection errors into a single unified nonlinear cost function and solves the state-estimation problem by minimizing it. This improves accuracy, respects the continuity of the inertial information, and helps avoid poor local minima (a sketch of this cost function is given after this list).

4. **Bounding computational complexity:** Old states are folded into the optimization through marginalization, an effective pruning strategy that limits the system's memory and computational requirements and preserves real-time operation; a minimal numerical illustration appears after the abstract below.

5. **Authors:** The paper was written by Stefan Leutenegger, Paul Furgale, and colleagues at ETH Zurich, who have applied related techniques in long-term human-robot collaboration projects such as TRADR and ExoMars. Their work points to the method's potential value in areas such as disaster response and Mars exploration.

6. **Impact:** The paper has accumulated 701 citations, reflecting its academic influence in the SLAM field and its role in inspiring follow-up research.

"Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization" is an important technical contribution: by combining visual and inertial information it achieves efficient, accurate autonomous robot navigation and provides a solid theoretical foundation and practical guidance for solving localization problems in real-world applications.
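As a rough sketch of the unified cost from point 3 (the notation here is simplified and the index sets are illustrative; a full formulation sums reprojection terms over cameras, keyframes, and visible landmarks):

```latex
% Unified visual-inertial cost over a window of K keyframes:
% weighted reprojection errors plus weighted IMU error terms.
J(\mathbf{x}) =
  \underbrace{\sum_{i}\sum_{k=1}^{K}\sum_{j \in \mathcal{J}(i,k)}
    {\mathbf{e}_r^{i,j,k}}^{\top}\,\mathbf{W}_r^{i,j,k}\,
    \mathbf{e}_r^{i,j,k}}_{\text{reprojection errors}}
  +
  \underbrace{\sum_{k=1}^{K-1}
    {\mathbf{e}_s^{k}}^{\top}\,\mathbf{W}_s^{k}\,
    \mathbf{e}_s^{k}}_{\text{IMU error terms}}
```

Here e_r^{i,j,k} is the reprojection error of landmark j observed by camera i at keyframe k, e_s^k the inertial error between consecutive keyframe states, and the W matrices are the corresponding information (inverse-covariance) weights; minimizing J jointly over poses, landmarks, and IMU biases yields the batch state estimate described above.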
**Original abstract:** Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual-inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual-inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. Both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, are compared against ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter, a competitive reference implementation that performs tightly coupled filtering-based visual-inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
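The bounded keyframe window described in the abstract relies on marginalization: when old states drop out of the window, their information is folded into a Gaussian prior on the remaining states via the Schur complement rather than simply discarded. The sketch below is a minimal, self-contained illustration of that single step, assuming the normal equations H·dx = b have already been built; the function name, the toy dimensions, and the synthetic information matrix are all hypothetical stand-ins, and a real system assembles H and b from reprojection and IMU error Jacobians.

```python
# Illustrative sketch: marginalizing old states out of Gauss-Newton
# normal equations via the Schur complement (hypothetical example,
# not the paper's implementation).
import numpy as np

def marginalize(H, b, m):
    """Remove the first m variables from H @ dx = b, folding their
    information into a prior on the remaining variables.

    Partition:  H = [[H_mm, H_mr],    b = [b_m,
                     [H_rm, H_rr]]         b_r]
    Schur complement:
        H' = H_rr - H_rm @ H_mm^{-1} @ H_mr
        b' = b_r  - H_rm @ H_mm^{-1} @ b_m
    """
    H_mm, H_mr = H[:m, :m], H[:m, m:]
    H_rm, H_rr = H[m:, :m], H[m:, m:]
    b_m, b_r = b[:m], b[m:]
    # Solve instead of explicitly inverting H_mm for numerical stability.
    H_prior = H_rr - H_rm @ np.linalg.solve(H_mm, H_mr)
    b_prior = b_r - H_rm @ np.linalg.solve(H_mm, b_m)
    return H_prior, b_prior

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 8, 3                       # 8 state variables; marginalize the oldest 3
    A = rng.standard_normal((n, n))
    H = A @ A.T + n * np.eye(n)       # synthetic positive-definite information matrix
    b = rng.standard_normal(n)
    H_prior, b_prior = marginalize(H, b, m)
    # The marginalized system yields the same update for the kept states.
    dx_kept_full = np.linalg.solve(H, b)[m:]
    dx_kept_marg = np.linalg.solve(H_prior, b_prior)
    print(np.allclose(dx_kept_full, dx_kept_marg))  # True
```

The key property, verified by the final check, is that marginalization changes the representation but not the estimate: solving the reduced system gives exactly the same update for the retained states as solving the full one, while the oldest states (and the cost terms attached to them) never need to be revisited.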