Robust Visual-Inertial Odometry Using a Direct EKF-Based Approach

This article covers a robust visual-inertial odometry technique implemented with a direct EKF (extended Kalman filter) approach. The paper, "Robust Visual Inertial Odometry Using a Direct EKF-Based Approach," was written by Michael Bloesch, Sammy Omari, Marco Hutter, and Roland Siegwart of the Autonomous Systems Lab at ETH Zurich and published in 2015. Its key innovation is the use of pixel intensity errors to achieve accurate tracking performance and high robustness.

The paper presents a monocular visual-inertial odometry algorithm that directly exploits the pixel intensity errors of image patches, achieving accurate tracking and strong robustness. After feature detection, the tracking of multilevel patch features is tightly coupled with the underlying extended Kalman filter: the pixel intensity errors are used directly as the innovation term in the filter's update step.

Conventional visual-inertial odometry combines data from a visual sensor (such as a camera) and an inertial measurement unit (IMU) to estimate motion states such as position and attitude. The extended Kalman filter is a probabilistic filtering method for handling nonlinearity and uncertainty; in this work, the EKF serves as the tool for fusing visual information with IMU data.

The direct method described in the paper differs from traditional feature-point methods, which rely on detecting and matching salient visual features. The direct method does not extract such features; instead, it operates on pixel-level information, which reduces the complexity and potential errors of feature matching, especially under illumination changes or in dynamic environments.

By making the pixel intensity error part of the EKF update, the algorithm can adapt quickly to environmental changes while maintaining stable tracking. This allows the system to deliver reliable position and attitude estimates even under challenges such as occlusion, lighting changes, and repetitive textures.

Furthermore, because the EKF considers visual and inertial data jointly, the fusion strategy lets the two modalities complement each other: visual information constrains the estimate over long time spans, while the IMU provides high-frequency motion information at short timescales. By fusing these two sources effectively, the algorithm achieves more accurate and robust localization.

This work is a substantial improvement to visual-inertial navigation systems: the direct EKF approach raises both real-time performance and robustness, which matters for autonomous driving, UAV navigation, and other applications that require precise motion estimation.
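To make the coupling concrete, the following is a minimal sketch (not the ROVIO implementation) of an EKF update step in which the innovation is a photometric error on a tracked patch, as the article describes. All function names, state layout, and dimensions are illustrative assumptions.

```python
import numpy as np

def ekf_photometric_update(x, P, patch_ref, image, project, jacobian, R_pix):
    """One EKF update using patch intensity differences as the innovation.

    x         : state estimate (e.g. pose, velocity, biases), shape (n,)
    P         : state covariance, shape (n, n)
    patch_ref : reference intensities of the tracked patch, shape (m,)
    image     : callable mapping a pixel coordinate -> current intensity
    project   : callable mapping the state -> predicted pixel coords, shape (m, 2)
    jacobian  : callable mapping the state -> d(intensity)/d(state), shape (m, n)
    R_pix     : intensity measurement-noise covariance, shape (m, m)
    """
    coords = project(x)                               # where the patch lands now
    pred = np.array([image(c) for c in coords])       # predicted intensities
    y = patch_ref - pred                              # photometric innovation
    H = jacobian(x)                                   # measurement Jacobian
    S = H @ P @ H.T + R_pix                           # innovation covariance
    K = P @ H.T @ np.linalg.solve(S, np.eye(len(y)))  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

The point of the sketch is the innovation term: rather than a reprojection error between matched feature coordinates, `y` is a vector of raw intensity differences, so tracking and filtering are handled in a single tightly coupled update.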
In this paper, we focus on the problem of motion tracking in unknown environments using visual and inertial sensors. We term this estimation task visual-inertial odometry (VIO), in analogy to the well-known visual-odometry problem. We present a detailed study of EKF-based VIO algorithms, by comparing both their theoretical properties and empirical performance. We show that an EKF formulation where the state vector comprises a sliding window of poses (the MSCKF algorithm) attains better accuracy, consistency, and computational efficiency than the SLAM formulation of the EKF, in which the state vector contains the current pose and the features seen by the camera. Moreover, we prove that both types of EKF approaches are inconsistent, due to the way in which Jacobians are computed. Specifically, we show that the observability properties of the EKF's linearized system models do not match those of the underlying system, which causes the filters to underestimate the uncertainty in the state estimates. Based on our analysis, we propose a novel, real-time EKF-based VIO algorithm, which achieves consistent estimation by (i) ensuring the correct observability properties of its linearized system model, and (ii) performing online estimation of the camera-to-IMU calibration parameters. This algorithm, which we term MSCKF 2.0, is shown to achieve accuracy and consistency higher than even an iterative, sliding-window fixed-lag smoother, in both Monte-Carlo simulations and real-world testing.
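The sliding-window-of-poses formulation mentioned in this abstract rests on stochastic cloning: the current IMU pose is copied into the state (fully correlated with its source), and the oldest clone is marginalized once the window is full. The sketch below illustrates only that bookkeeping; the pose parameterization, block sizes, and window length are assumptions, not the MSCKF implementation.

```python
import numpy as np

POSE_DIM = 6    # e.g. position + minimal orientation error (illustrative)
IMU_DIM = 15    # e.g. pose, velocity, gyro/accel biases (illustrative)

def augment_pose(x, P):
    """Clone the current IMU pose into the state (stochastic cloning).

    The clone starts identical to, and fully correlated with, the IMU pose
    block it copies; the covariance augmentation below encodes that.
    """
    n = len(x)
    J = np.zeros((POSE_DIM, n))
    J[:, :POSE_DIM] = np.eye(POSE_DIM)      # clone = current pose block
    x_aug = np.concatenate([x, J @ x])
    P_aug = np.block([[P,     P @ J.T],
                      [J @ P, J @ P @ J.T]])
    return x_aug, P_aug

def marginalize_oldest_pose(x, P):
    """Drop the oldest cloned pose by deleting its rows and columns."""
    keep = np.r_[0:IMU_DIM, IMU_DIM + POSE_DIM:len(x)]
    return x[keep], P[np.ix_(keep, keep)]
```

Because the clones are fixed past poses, feature measurements can constrain them without the features themselves ever entering the state, which is the structural difference from the EKF-SLAM formulation that the abstract compares against.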