We perform both fundamental and applied research on robotics, computer vision, and state estimation, focusing on probabilistic sensing, estimation, mapping, localization, perception, prediction, and planning for autonomous vehicles, with applications to autonomous driving, augmented reality (AR), and virtual reality (VR).
Please check out our open-source codebases on GitHub and our demo videos on YouTube.
The following is a selection of our research projects [to be updated]:
Consistent state estimation for robot localization and mapping
We investigate in depth the estimator inconsistency problem of robot localization, including simultaneous localization and mapping (SLAM), multi-robot cooperative localization (CL), and multi-robot cooperative SLAM (C-SLAM). Previously, many different, often ad hoc, methods were used to alleviate the symptoms of inconsistency, without knowing the root causes of the problem. In contrast, we have shown for the first time that one fundamental cause of inconsistency is the mismatch between the observability properties of the underlying nonlinear system and those of the linearized system used by a nonlinear estimator. Specifically, by performing observability analysis, we have proven that the linearized error-state system used by the standard filtering or smoothing algorithms – the extended Kalman filter (EKF), the unscented Kalman filter (UKF), and the sliding-window filter (SWF) – has an observable subspace of higher dimension than that of the underlying nonlinear system. This implies that the standard estimators gain spurious information from the measurements, which unjustifiably reduces the uncertainty of the state estimates and causes the inconsistency. Based on this key insight, we have proposed a novel methodology for designing consistent linearized estimators and developed a class of observability-constrained (OC) estimators – including the FEJ-EKF [ICRA08, ISER08], the OC-EKF [IJRR10], the OC-UKF [ICRA09, TRO13], and the OC-SWF [IROS11] for SLAM, and the OC-EKFs for CL [RSS09, AURO11] – that compute Jacobians such that the linearized system model used by the estimator has an observable subspace of the same dimension as that of the underlying nonlinear system. Note that this methodology can be generalized to a broad family of nonlinear systems [ACC13, SCL17].
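To make the first-estimates-Jacobian (FEJ) idea concrete, the following is a minimal 2D range-bearing sketch: the measurement Jacobian for a landmark is evaluated at the landmark's first estimate rather than its latest refined estimate, so that the products of Jacobians along the trajectory preserve the unobservable directions (global translation and rotation) of the nonlinear SLAM system. All numbers and function names here are illustrative, not from the papers.

```python
import numpy as np

def range_bearing_jacobians(p_robot, p_landmark):
    """Jacobians of a 2D range-bearing measurement h(x) = [range, bearing]
    with respect to the robot position and the landmark position."""
    d = p_landmark - p_robot
    r2 = d @ d
    r = np.sqrt(r2)
    # Rows: d(range)/d(landmark) and d(bearing)/d(landmark).
    H_l = np.array([[ d[0] / r,  d[1] / r],
                    [-d[1] / r2, d[0] / r2]])
    return -H_l, H_l  # robot-position block is the negative of the landmark block

# Standard EKF: linearize at the *latest* landmark estimate at each update.
# FEJ-EKF: always linearize at the landmark's *first* estimate, keeping the
# linearized system's observable subspace at the correct dimension.
first_estimate = np.array([4.0, 3.0])   # landmark position when first initialized
latest_estimate = np.array([4.2, 2.9])  # refined by subsequent updates

p_robot = np.array([0.0, 0.0])
H_fej = range_bearing_jacobians(p_robot, first_estimate)[1]
H_std = range_bearing_jacobians(p_robot, latest_estimate)[1]
```

The two Jacobians differ slightly, and it is exactly this difference, accumulated over many updates, that lets the standard EKF gain spurious information along the unobservable directions.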
Secure state estimation
Due to the increasing proliferation of autonomous vehicles, securing perception and navigation against malicious attacks has become a matter of urgent societal interest: attackers can fool these vehicles by manipulating their sensors, exposing us to unprecedented vulnerabilities. From an estimation perspective, we have developed a novel Weighted Maximum Correntropy Criterion (WMCC)-EKF [ISRR17] that systematically, rather than in an ad hoc way, inflates the noise covariance of compromised measurements based on each measurement's quality. Moreover, we have designed a secure estimator that first detects attacks via sparse optimization, under the assumption that only a small number of measurements can be attacked, and then employs a sliding-window Kalman filter to update the state estimate and covariance using only the uncompromised measurements – the resulting algorithm is termed the Secure Estimation-EKF (SE-EKF) [ISRR17, RSS18w].
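The covariance-inflation idea behind the WMCC approach can be sketched as follows: each measurement receives a Gaussian-correntropy weight based on its residual, and its noise variance is inflated by the inverse of that weight, so a compromised (large-residual) measurement is softly down-weighted rather than hard-rejected. This is an illustrative simplification, not the exact formulation in [ISRR17]; the kernel bandwidth and residual values are made up.

```python
import numpy as np

def correntropy_inflated_covariance(residuals, R, kernel_sigma=1.0):
    """Inflate each measurement's noise variance by the inverse of its
    Gaussian-correntropy weight. Small residuals keep their nominal
    variance; large (likely attacked) residuals get a huge variance and
    thus contribute almost nothing to the subsequent EKF update."""
    residuals = np.asarray(residuals, dtype=float)
    weights = np.exp(-residuals**2 / (2.0 * kernel_sigma**2))
    weights = np.maximum(weights, 1e-12)  # guard against division by zero
    return R / weights, weights

R = np.array([0.04, 0.04, 0.04])          # nominal measurement variances
residuals = np.array([0.1, -0.05, 5.0])   # last measurement is spoofed
R_infl, w = correntropy_inflated_covariance(residuals, R)
```

The inflated covariance can then be passed unchanged into a standard EKF update, which is what makes the scheme systematic: the filter equations stay the same and only the effective noise model adapts per measurement.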
Visual-inertial navigation
Visual-inertial navigation systems (VINS) have prevailed in part because of their complementary sensing capabilities as well as decreasing cost and size. While most current VINS algorithms suffer from inconsistency, we have developed a novel EKF-based approach toward consistent estimation [ICRA14, IROS17], which imposes both state-transition and observability constraints when computing Jacobians. Moreover, we have introduced a lightweight optimal state constraint (OSC)-EKF that processes visual measurements in a tightly coupled manner while having bounded computational complexity [ISRR15]. Specifically, we first optimally extract all the information the visual measurements contain about the relative camera poses in a sliding window of constant size, without keeping the visual features (corner points) in the state vector. These inferred relative-pose measurements are then used to update the camera poses within the EKF, yielding constant computational complexity for the filter. More recently, we have derived a novel high-accuracy preintegration theory based on continuous-time analytical integration of IMU measurements [WAFR16, IJRR18a] and have successfully integrated it into graph-based direct VINS [ICRA17] and camera-odometer online calibration and localization [ACC17].
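To give a feel for what IMU preintegration accumulates, here is a minimal 2D sketch using plain Euler integration: the relative rotation, velocity, and position increments between two keyframes are built up directly from raw gyro/accelerometer samples, independent of the global state. Note that [WAFR16, IJRR18a] instead derive closed-form continuous-time expressions; this discrete version is only for intuition, and all sample values are invented.

```python
import numpy as np

def preintegrate_imu(omegas, accels, dt):
    """Accumulate relative rotation (yaw), velocity, and position increments
    between two keyframes from raw IMU samples (2D case: omega is yaw rate,
    accel is a 2-vector in the body frame). Euler integration for clarity."""
    dtheta, dv, dp = 0.0, np.zeros(2), np.zeros(2)
    for w, a in zip(omegas, accels):
        c, s = np.cos(dtheta), np.sin(dtheta)
        R = np.array([[c, -s], [s, c]])        # rotation accumulated so far
        dp += dv * dt + 0.5 * (R @ a) * dt**2  # position increment
        dv += (R @ a) * dt                     # velocity increment
        dtheta += w * dt                       # rotation increment
    return dtheta, dv, dp

# 1 s of straight-line motion: no rotation, constant 1 m/s^2 forward accel.
n, dt = 100, 0.01
dtheta, dv, dp = preintegrate_imu([0.0] * n, [np.array([1.0, 0.0])] * n, dt)
```

Because these increments depend only on the IMU samples (and, in the full theory, the bias estimates), they can be computed once per keyframe pair and reused across optimization iterations without re-integrating.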
Underwater navigation
Over the years, demand for autonomous underwater vehicles (AUVs) has grown across a wide range of applications, such as seabed mapping, deep-ocean exploration, routine harbor monitoring, and oil-pipeline maintenance. To accomplish these tasks, AUVs require an efficient and accurate localization solution. This is challenging underwater, in part because GPS signals cannot be received and acoustic beacons require tedious and costly installation before use. Although high-accuracy navigation sensors, such as Doppler velocity logs (DVLs) and fiber-optic gyroscopes, can provide good localization, their high cost limits widespread deployment. To reduce the cost and latency of current underwater navigation systems, we have been working on a novel acoustic-inertial navigation system (AINS) [ICRA17]. Moreover, we have studied underwater cooperative SLAM, which is challenging due to the poor acoustic communication channel [ICRA15]. In particular, we developed an efficient, robust, and scalable pose-graph compression scheme, so that AUVs have to surface less often and can complete missions more efficiently.
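The core of any pose-graph compression scheme is replacing many odometry edges with a few composed keyframe-to-keyframe edges, shrinking the message an AUV must push over the bandwidth-limited acoustic channel. The sketch below shows only that composition step for 2D poses; the actual scheme in [ICRA15] also handles loop closures and marginalizes information consistently, and the chunk size here is an arbitrary illustrative choice.

```python
import numpy as np

def compose(T1, T2):
    """Compose two relative 2D poses (x, y, theta)."""
    x1, y1, t1 = T1
    x2, y2, t2 = T2
    c, s = np.cos(t1), np.sin(t1)
    return (x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, t1 + t2)

def compress_chain(edges, keep_every=5):
    """Collapse a chain of odometry edges into coarser edges by composing
    every `keep_every` consecutive relative transforms."""
    compressed, acc = [], (0.0, 0.0, 0.0)
    for i, e in enumerate(edges, 1):
        acc = compose(acc, e)
        if i % keep_every == 0:
            compressed.append(acc)
            acc = (0.0, 0.0, 0.0)
    if acc != (0.0, 0.0, 0.0):
        compressed.append(acc)  # leftover partial chunk
    return compressed

edges = [(1.0, 0.0, 0.0)] * 10           # ten 1 m forward steps
coarse = compress_chain(edges, keep_every=5)
```

Ten edges collapse into two, i.e. a 5x reduction in what must be transmitted, at the cost of losing the intermediate poses.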
Autonomous driving
While autonomous driving has made great strides in recent years, fully autonomous cars remain a distant goal, primarily due to a lack of robustness. For example, current self-driving cars cannot drive on new roads, on roads that have changed substantially, or during a GPS or data outage, such as in parking garages, dense urban areas, and tunnels. In contrast, humans handle all of this well: we can drive without detailed maps or high-precision GPS/IMU sensors, typically require only a small amount of sparse information for guidance, and our performance improves over time through learning. Thus, we are developing cognitive SLAM that can perceive, understand, and make predictions about a scene in real time with measurable confidence, particularly as the scene gets closer to the car. To this end, we have worked on optimally fusing multiple heterogeneous and asynchronous sensors for 3D mapping and localization of autonomous vehicles, and we have designed a modular sensor-fusion system that efficiently and accurately incorporates multiple navigation sensors operating at different sampling rates. We also developed a general method of out-of-sequence (asynchronous) measurement alignment that incorporates heterogeneous sensors into a factor graph for 3D mapping and localization without requiring new graph nodes, thus reducing the overall complexity of the graph [PPNIV17, ICRA18].
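The key trick in aligning an out-of-sequence measurement without adding a graph node is to express the pose at the measurement's timestamp as a function of the two existing nodes that bracket it. The following is a simplified 2D sketch of that idea using linear pose interpolation; the full method in [PPNIV17, ICRA18] operates on 3D poses, and the timestamps and poses below are invented for illustration.

```python
import numpy as np

def align_async_measurement(t, node_a, node_b):
    """Express a measurement taken at time t via linear interpolation
    between the two existing factor-graph nodes that bracket it, so the
    measurement constrains those nodes directly and no new node is added.
    Each node is a (timestamp, pose) pair with pose = [x, y, yaw]."""
    ta, pa = node_a
    tb, pb = node_b
    lam = (t - ta) / (tb - ta)           # interpolation factor in [0, 1]
    return (1.0 - lam) * pa + lam * pb   # interpolated pose at time t

node_a = (0.0, np.array([0.0, 0.0, 0.0]))
node_b = (1.0, np.array([2.0, 0.0, 0.2]))
pose = align_async_measurement(0.25, node_a, node_b)
```

In a factor graph this interpolation becomes part of the measurement model, so the asynchronous factor connects the two bracketing nodes and the graph keeps its original node count regardless of how many sensors report off-schedule.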