Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation

IEEE Robotics and Automation Letters

1Department of Computer Science and Engineering, University of Connecticut
2Tandon School of Engineering, New York University

Video




Difference in data association for multi-object tracking (MOT) with and without considering uncertainty. Ground truth bounding boxes are in green, detected bounding boxes in orange, and tracklets' bounding boxes in red, labeled with object IDs. Shadow ellipses indicate the uncertainty of the detected bounding boxes. SORT, which does not consider uncertainty, is shown on the left side of the figure, while our MOT-CUP framework, which incorporates uncertainty, is on the right. At time t-1, both MOT algorithms output tracklet ID 186. At time t, however, SORT fails to associate the low-quality detected object with tracklet 186 due to a large IoU distance and therefore removes the tracklet. In contrast, our MOT-CUP framework quantifies the COD uncertainty, shown as a larger shadow ellipse around the bounding box of tracklet 186, and successfully associates the low-quality detected object by taking this uncertainty into account.

Abstract

Object detection and multiple object tracking (MOT) are essential components of self-driving systems. Accurate detection and uncertainty quantification are both critical for onboard modules, such as perception, prediction, and planning, to improve the safety and robustness of autonomous vehicles. Collaborative object detection (COD) has been proposed to improve detection accuracy and reduce uncertainty by leveraging the viewpoints of multiple agents. However, little attention has been paid to how the uncertainty quantification from COD can be leveraged to enhance MOT performance. In this paper, as the first attempt to address this challenge, we design an uncertainty propagation framework called MOT-CUP. Our framework first quantifies the uncertainty of COD through direct modeling and conformal prediction, and then propagates this uncertainty information through the motion prediction and association steps. MOT-CUP is designed to work with different collaborative object detectors and baseline MOT algorithms. We evaluate MOT-CUP on V2X-Sim, a comprehensive collaborative perception dataset, and demonstrate a 2% improvement in accuracy and a 2.67X reduction in uncertainty compared to baselines such as SORT and ByteTrack. MOT-CUP demonstrates the importance of uncertainty quantification in both COD and MOT, and provides the first attempt to improve accuracy and reduce uncertainty in MOT based on COD through uncertainty propagation.
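
As a rough illustration of the conformal prediction step mentioned in the abstract, the sketch below calibrates per-coordinate prediction intervals on a held-out calibration set by normalizing absolute errors with the detector's predicted standard deviations. The function names and the choice of nonconformity score are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def conformal_quantile(pred_coords, pred_stds, true_coords, alpha=0.1):
        """Split conformal calibration for box-coordinate intervals.

        pred_coords, pred_stds, true_coords: (N, 4) arrays over a held-out
        calibration set (e.g., x, y, w, h per detection).
        Returns the finite-sample (1 - alpha) quantile of the normalized errors.
        """
        # Nonconformity score: absolute error scaled by the detector's own
        # predicted standard deviation (from direct modeling).
        scores = np.abs(true_coords - pred_coords) / pred_stds
        n = scores.shape[0]
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        return np.quantile(scores, level, axis=0)

    def prediction_interval(pred_coord, pred_std, q_hat):
        """Coverage-calibrated interval for a new detection's coordinates."""
        return pred_coord - q_hat * pred_std, pred_coord + q_hat * pred_std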

Contribution

  • To the best of our knowledge, our MOT-CUP framework is the first attempt to leverage quantified uncertainty from collaborative object detection to improve MOT performance. The framework can be applied to different object detection models and MOT algorithms.
  • In the collaborative object detection stage, we employ direct modeling and conformal prediction techniques to rigorously quantify the detection uncertainty.
  • For MOT, we further improve the baseline MOT algorithm by designing two novel methods that effectively leverage the uncertainty information in both the Kalman Filter and the association step (see the sketch after this list).
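
A minimal sketch of the uncertainty-aware Kalman Filter update referenced in the last bullet is given below. It assumes the detector's per-coordinate standard deviations are placed on the diagonal of the measurement-noise covariance R; the variable names are illustrative and this is not necessarily the exact SDKF formulation used in the paper.

    import numpy as np

    def kf_update(x, P, z, z_std, H):
        """One Kalman Filter measurement update in which the measurement-noise
        covariance R is built from the detection's predicted standard deviations,
        so that noisier detections are automatically down-weighted.

        x: (n,) state mean, P: (n, n) state covariance,
        z: (m,) measured box coordinates, z_std: (m,) their standard deviations,
        H: (m, n) observation matrix.
        """
        R = np.diag(z_std ** 2)             # detection-dependent measurement noise
        y = z - H @ x                       # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P
        return x_new, P_new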

Method



Overview of our MOT-CUP framework. The red color highlights the novelties and key techniques in MOT-CUP. In the collaborative object detection (COD) stage, we rigorously perform uncertainty quantification (UQ) for each detected object via direct modeling (DM) and conformal prediction (CP). In the motion prediction stage of MOT, we adopt a Standard Deviation-based Kalman Filter (SDKF) to enhance the Kalman Filter process, which leverages the UQ results and predicts the locations of the objects in the next time step with higher precision. In the association step, we first apply the baseline association method and then associate the remaining unmatched detections and tracklets with a Negative Log Likelihood similarity metric, called NLLAI.
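
A minimal sketch of the NLL-based second-stage association described above is shown below, assuming an independent Gaussian per box coordinate centered at the tracklet's predicted box with the detection's predicted standard deviation. The helper names, the threshold max_nll, and the Hungarian matching via SciPy are illustrative choices rather than the paper's exact NLLAI implementation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def nll_distance(track_box, det_box, det_std):
        """Negative log-likelihood of a detection under per-coordinate Gaussians
        centered at the tracklet's predicted box; smaller means more similar."""
        var = det_std ** 2
        return 0.5 * np.sum((det_box - track_box) ** 2 / var
                            + np.log(2 * np.pi * var))

    def associate_by_nll(track_boxes, det_boxes, det_stds, max_nll=20.0):
        """Match remaining unmatched tracklets and detections by minimum NLL."""
        cost = np.array([[nll_distance(t, d, s)
                          for d, s in zip(det_boxes, det_stds)]
                         for t in track_boxes])
        rows, cols = linear_sum_assignment(cost)
        # Keep only pairs whose NLL falls below a (tunable) threshold.
        return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_nll]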


Qualitative Results

     


Visualization of the results of the detector, the original SORT, and our MOT-CUP framework over three consecutive frames. The collaborative object detector used here is Upper-bound. In this visualization, green boxes are ground truth bounding boxes, orange boxes are detected bounding boxes, and red boxes are tracklets' bounding boxes output by MOT. The numbers beside the red boxes indicate object IDs. We observe that MOT-CUP outperforms the original SORT algorithm in tracking object 332, as indicated by the red arrow. Furthermore, MOT-CUP improves the localization accuracy compared with the object detector, e.g., for object 332 in frame 60. Overall, these results demonstrate the importance of considering uncertainty in MOT.

BibTeX

    @article{Su2023mot_cup,
      author  = {Su, Sanbao and Han, Songyang and Li, Yiming and Zhang, Zhili and Feng, Chen and Ding, Caiwen and Miao, Fei},
      title   = {Collaborative Multi-Object Tracking with Conformal Uncertainty Propagation},
      journal = {IEEE Robotics and Automation Letters},
      year    = {2023},
    }