Comprehending 3D scenes is paramount for tasks such as planning and mapping in autonomous driving and robotics. Camera-based 3D Semantic Occupancy Prediction (OCC) aims to infer scene geometry and semantics from limited visual observations. While it has gained popularity due to the affordability of cameras and their rich visual cues, existing methods often neglect the inherent uncertainty in their models. To address this, we propose an uncertainty-aware OCC method (α-OCC). We first introduce Depth-UP, an uncertainty propagation framework that improves geometry completion by up to 11.58% and semantic segmentation by up to 12.95% across various OCC models. For uncertainty quantification (UQ), we propose a hierarchical conformal prediction (HCP) method that effectively handles the severe class imbalance in OCC datasets. At the geometry level, a novel KL-based score function significantly improves the occupied recall of safety-critical classes (by 45%) with minimal performance overhead (a 3.4% reduction). For UQ, HCP achieves smaller prediction set sizes while maintaining the specified coverage guarantee: compared with baselines, it reduces set size by up to 90%, with a further 18% reduction when integrated with Depth-UP. Our contributions advance the accuracy and robustness of OCC, marking a noteworthy step forward for autonomous perception systems.
Overview of our α-OCC method. Non-black colors highlight the novelties and key techniques in our method. C denotes the concatenation of the depth feature FD and the image feature FI. In the Depth-UP part, we quantify the uncertainty of depth estimation through direct modeling. When retraining the depth model, we train only the additional standard-deviation head while keeping the rest of the model frozen. We then propagate this uncertainty through depth feature extraction (for semantic segmentation) and through probabilistic geometry projection, which builds a probabilistic voxel grid map Mp (for geometry completion). Each element of Mp is the occupied probability of the corresponding voxel, computed from the depth distributions of all rays crossing that voxel.
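The probabilistic geometry projection above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes a Gaussian depth distribution per ray (mean and std from the depth head) and combines rays with a max, which is an assumption we introduce for clarity.

```python
import math


def gaussian_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) evaluated at x."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))


def voxel_occupied_prob(rays):
    """Occupied probability of one voxel from the depth distributions
    of all rays crossing it.

    Each ray is (mu, sigma, d_near, d_far): the predicted depth mean/std
    and the entry/exit depths of the ray segment inside the voxel. The
    per-ray probability that the true surface lies inside the voxel is
    the Gaussian mass on [d_near, d_far]; taking the max over rays is an
    illustrative aggregation choice.
    """
    probs = [gaussian_cdf(d_far, mu, sigma) - gaussian_cdf(d_near, mu, sigma)
             for (mu, sigma, d_near, d_far) in rays]
    return max(probs) if probs else 0.0
```

For example, a single ray with predicted depth 5.0 m, std 0.5 m, crossing a voxel spanning depths 4.5 m to 5.5 m yields an occupied probability of about 0.68 (one Gaussian standard deviation on each side).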
Overview of our HCP method. We predict each voxel's occupied state by comparing its KL-based score against a calibrated quantile, which improves the occupied recall of rare classes, and then generate semantic prediction sets only for the voxels predicted as occupied. The occupied quantile qoy and the semantic quantile qsy are computed during the calibration step of HCP.
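HCP builds on split conformal prediction. The sketch below shows the two generic building blocks, calibration-quantile computation and prediction-set construction, using a standard 1 − softmax nonconformity score for illustration rather than the paper's KL-based score or its hierarchical occupied/semantic structure.

```python
import math


def conformal_quantile(cal_scores, alpha):
    """Split-conformal calibration: the ceil((n+1)(1-alpha))/n empirical
    quantile of the nonconformity scores on a held-out calibration set."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1.0 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]


def prediction_set(softmax_probs, q):
    """All classes whose nonconformity score (1 - softmax probability)
    falls below the calibrated quantile q. Split conformal prediction
    guarantees at least 1 - alpha marginal coverage for such sets."""
    return [c for c, p in enumerate(softmax_probs) if 1.0 - p <= q]
```

With nine calibration scores 0.1, 0.2, ..., 0.9 and alpha = 0.2, the quantile is 0.8, so a voxel with softmax probabilities (0.7, 0.25, 0.05) receives the prediction set {0, 1}.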
 
Qualitative results of the base VoxFormer model and of VoxFormer equipped with our Depth-UP.
@article{Su2026alpha_occ,
author = {Su, Sanbao and Chen, Nuo and Lin, Chenchen and Juefei-Xu, Felix and Feng, Chen and Miao, Fei},
title = {$\alpha$-OCC: Uncertainty-Aware Camera-based 3D Semantic Occupancy Prediction},
year = {2026},
journal = {Transactions on Machine Learning Research}
}