Machine Learning Aided GPS-Denied Navigation Using Uncertainty Estimation through Deep Neural Networks

Citation

Han-Pang Chiu, Angel Daruna, Yunye Gong, Abhinav Rajvanshi, Zhiwei Zhu, Yi Yao, Supun Samarasekera, and Rakesh Kumar, “Machine Learning Aided GPS-Denied Navigation Using Uncertainty Estimation through Deep Neural Networks.” 2024 Joint Navigation Conference, Institute of Navigation.

Abstract

Accurate navigation using a set of small, low-cost sensors, such as a commercial-grade IMU (inertial measurement unit) and a camera, in GPS-denied environments (including indoors and underground tunnels) is a critical capability for warfighters and autonomous platforms in military missions. In the absence of GPS signals, warfighters and autonomous platforms (such as UGVs and drones) must rely on these on-board sensors to compute odometry as they navigate through these environments. The typical approach is to use a statistics-based estimator (such as an Extended Kalman Filter) to optimally combine information from multiple sensors based on their respective uncertainties (e.g., covariance matrices). However, the performance of this sensor-fusion approach is still hindered by the limitations of each sensor. For example, the accumulated error of traditional 6-DOF (degrees of freedom) motion estimation methods using IMU readings alone grows quickly in GPS-denied conditions, while camera-based measurements cannot provide reliable 6-DOF motion in featureless or dark places.
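To make the fusion idea concrete, here is a minimal sketch (not the system's actual estimator) of covariance-weighted fusion of two Gaussian estimates of the same state, the principle underlying an Extended Kalman Filter update. All names and values are illustrative.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Fuse two Gaussian estimates (mean, covariance) of the same state.

    The fused mean weights each estimate by the inverse of its
    covariance, so the more certain sensor dominates.
    """
    K = P1 @ np.linalg.inv(P1 + P2)        # gain pulling toward estimate 2
    x = x1 + K @ (x2 - x1)                 # fused mean
    P = (np.eye(len(x1)) - K) @ P1         # fused (smaller) covariance
    return x, P

# Illustrative example: a drifting IMU estimate fused with a noisier camera fix.
x_imu, P_imu = np.array([10.0, 5.0, 0.0]), np.diag([4.0, 4.0, 1.0])
x_cam, P_cam = np.array([9.2, 5.5, 0.1]), np.diag([1.0, 1.0, 9.0])
x_fused, P_fused = fuse(x_imu, P_imu, x_cam, P_cam)
```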

Recent machine learning (ML) techniques using a pre-trained deep neural network (DNN) have shown great promise in estimating more accurate 6-DOF motion over time from a single sensor (either an IMU or a camera), compared to traditional techniques. However, current DNNs cannot provide reliable uncertainty estimates for their output. It is therefore difficult to fuse a single-sensor DNN with other sensors using a statistics-based estimator for accurate and trustworthy motion estimation in mission-critical applications. As a result, navigation accuracy cannot be improved via sensor fusion, as typical multi-sensor navigation systems achieve in GPS-denied environments. In addition, it is difficult to trust the output of a DNN due to the lack of interpretability and transparency of its inner workings.

In this presentation, we describe and demonstrate a novel approach for generating accurate and interpretable uncertainty estimates for the outputs of a DNN in real time. The key innovation of our approach is a factor graph formulation that models DNN uncertainty propagation as a nonlinear optimization problem, treating DNN layers as discrete time steps and their values as states. A factor graph is a probabilistic Bayesian graphical model involving state variables and factors among those states; factor graphs have been widely used in large-scale real-time estimation systems. By using a factor graph to model the DNN, connections among DNN layers are formulated as different factors across states. The Jacobian matrix and the noise matrix for each factor come from the weights and biases of the corresponding network layers. The output covariance of the DNN can then be accurately estimated by propagating input uncertainty (i.e., aleatoric uncertainty) through the DNN states/factors.
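The sketch below illustrates the covariance-propagation idea under simplifying assumptions (small fully connected ReLU layers with random weights; the function and variable names are ours, not from the paper). Each layer is linearized and the input covariance is pushed through layer by layer as Sigma_{k+1} = J_k Sigma_k J_k^T (+ an optional per-layer noise term), where J_k is the layer Jacobian derived from its weights. This is an illustration of the principle, not the authors' factor-graph implementation.

```python
import numpy as np

def relu_jacobian(z):
    """Diagonal Jacobian of elementwise ReLU at pre-activation z."""
    return np.diag((z > 0).astype(float))

def propagate_covariance(x, sigma_in, layers, layer_noise=0.0):
    """Propagate mean x and covariance sigma through affine+ReLU layers.

    layers: list of (W, b) tuples (assumed weights and biases).
    """
    sigma = sigma_in
    for W, b in layers:
        z = W @ x + b
        J = relu_jacobian(z) @ W           # Jacobian of ReLU(Wx + b) w.r.t. x
        sigma = J @ sigma @ J.T + layer_noise * np.eye(len(z))
        x = np.maximum(z, 0.0)             # forward pass continues
    return x, sigma

# Toy two-layer network with random weights, purely for illustration.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(6, 8)), np.zeros(6))]
y, sigma_out = propagate_covariance(rng.normal(size=4),
                                    0.01 * np.eye(4), layers)
```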

We first show how this approach works on two individual single-sensor tasks: visual odometry computation and IMU motion estimation. For each task, we train a DNN and demonstrate that its navigation accuracy outperforms traditional techniques in challenging GPS-denied environments by a large margin. For instance, our IMU motion estimation DNN, using a low-cost ($5) BOSCH IMU, achieves 1.25-meter 3D GPS-denied open-loop navigation accuracy over a 797.8-meter travel distance inside an office building. We also validate the reliability of our estimated uncertainty for the DNN's outputs, achieving only 3.6% error in estimating uncertainty relative to ground truth. In addition, our uncertainty estimation method achieves real-time capability by leveraging several computation optimization techniques we developed, which together improved computational efficiency by 10x.
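The abstract does not specify the metric behind the 3.6% figure; one plausible consistency check, shown below as a hypothetical sketch, compares the DNN's predicted standard deviations against the empirical spread of its errors relative to ground truth.

```python
import numpy as np

def uncertainty_error(pred, truth, pred_std):
    """Relative gap between predicted and empirical error std, per axis."""
    emp_std = np.std(pred - truth, axis=0)          # observed error spread
    return np.abs(pred_std.mean(axis=0) - emp_std) / emp_std

# Hypothetical data: N position predictions with DNN-reported sigmas.
rng = np.random.default_rng(1)
N = 1000
truth = rng.normal(size=(N, 3))
pred_std = np.full((N, 3), 0.5)                     # sigma reported by the DNN
pred = truth + rng.normal(scale=0.5, size=(N, 3))   # errors consistent w/ sigma
print(uncertainty_error(pred, truth, pred_std))     # near 0 when sigma is reliable
```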

We then describe an ML-aided multi-sensor GPS-denied navigation system that uses a state-of-the-art statistical estimator to fuse our visual-odometry DNN outputs with measurements from other sensors. We show that this system improves overall navigation accuracy by combining the visual-odometry DNN with traditional (non-ML) sensor systems for uncertainty fusion. The uncertainty of the DNN's output, estimated using our approach, enables optimal fusion with the other sensors in this system.
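As one concrete, assumed realization (the abstract does not name the estimator), the sketch below uses GTSAM, a widely used factor-graph library, to feed the VO-DNN's relative pose into the back-end with its estimated covariance as the noise model, so the optimizer weights it appropriately against other sensors.

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.PriorFactorPose3(
    X(0), gtsam.Pose3(),
    gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))))

# DNN visual-odometry output: relative pose plus its estimated 6x6 covariance
# (illustrative values; GTSAM's Pose3 tangent order is rotation, then translation).
vo_pose = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.5, 0.0, 0.0))
vo_cov = np.diag([1e-4] * 3 + [1e-3] * 3)
graph.add(gtsam.BetweenFactorPose3(
    X(0), X(1), vo_pose, gtsam.noiseModel.Gaussian.Covariance(vo_cov)))

initial = gtsam.Values()
initial.insert(X(0), gtsam.Pose3())
initial.insert(X(1), vo_pose)
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```

Other sensors (IMU preintegration, wheel odometry) would contribute additional factors to the same graph, each weighted by its own covariance.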

We integrate the entire ML-aided multi-sensor system into a four-wheel Rover Zero ground robot platform (62 cm × 39 cm × 25.4 cm), equipped with a set of small sensors (IMU, cameras, and wheel odometry) and an embedded processor (NVIDIA Orin). We show experimental results and demonstrations using our robot in different GPS-denied environments. Ground truth was obtained using a highly accurate OptiTrack motion tracking system. The results include 6-DOF GPS-denied navigation pose estimation and real-time uncertainty analysis, with autonomy applications. Our ML-aided GPS-denied navigation system achieves 0.1% to 1% open-loop drift (the error grows to 1-10 meters after 1 km of navigation), even in featureless indoor places where traditional visual-inertial navigation systems typically fail.

