Hierarchical Context Alignment with Disentangled Geometric and Temporal Modeling for Semantic Occupancy Prediction

1. Shanghai Jiao Tong University, 2. Eastern Institute of Technology, 3. University of Adelaide, 4. Tokyo Institute of Technology, 5. Chinese Academy of Sciences

Abstract

Camera-based 3D Semantic Occupancy Prediction (SOP) is crucial for understanding complex 3D scenes from limited 2D image observations. Existing SOP methods typically aggregate contextual features to assist occupancy representation learning, alleviating issues such as occlusion and ambiguity. However, these solutions often suffer from misalignment: features at the same position across different frames may carry different semantic meanings during aggregation, which leads to unreliable contextual fusion and an unstable representation learning process. To address this problem, we introduce a new Hierarchical context alignment paradigm for more accurate SOP (Hi-SOP). Hi-SOP first disentangles the geometric and temporal context for separate alignment, and the two branches are then composed to enhance the reliability of semantic occupancy prediction. This parsing of the visual input into a local-global alignment hierarchy comprises: (I) disentangled geometric and temporal alignment, which leverages depth confidence and camera pose, respectively, as priors for relevant feature matching; (II) global alignment and composition of the transformed geometric and temporal volumes based on semantic consistency. Our method outperforms state-of-the-art approaches on semantic scene completion on the SemanticKITTI and nuScenes-Occupancy datasets, and on LiDAR semantic segmentation on the nuScenes dataset.
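To make step (I) concrete, below is a minimal PyTorch-style sketch of the two disentangled alignment priors: depth-confidence-weighted lifting for the geometric branch and pose-based warping for the temporal branch. All tensor shapes, function names, and the coordinate-normalization convention here are illustrative assumptions, not the paper's released implementation.

    import torch
    import torch.nn.functional as F

    def lift_with_depth_confidence(feat2d, depth_prob):
        """Geometric prior: lift 2D features into a frustum volume, weighting
        each depth hypothesis by its predicted confidence (LSS-style sketch).
        feat2d:     (B, C, H, W) image features
        depth_prob: (B, D, H, W) per-pixel depth distribution (softmaxed)
        returns:    (B, C, D, H, W) confidence-weighted frustum features
        """
        return feat2d.unsqueeze(2) * depth_prob.unsqueeze(1)

    def warp_previous_volume(prev_vol, rel_pose, voxel_grid):
        """Temporal prior: warp the previous frame's voxel features into the
        current frame using the relative camera pose.
        prev_vol:   (B, C, X, Y, Z) previous-frame voxel features
        rel_pose:   (B, 4, 4) current-to-previous rigid transform
        voxel_grid: (X, Y, Z, 3) voxel centers, pre-normalized to [-1, 1]
        """
        B = prev_vol.shape[0]
        X, Y, Z, _ = voxel_grid.shape
        pts = voxel_grid.reshape(-1, 3)                          # (N, 3)
        homog = torch.cat([pts, pts.new_ones(len(pts), 1)], -1)  # (N, 4)
        coords = torch.einsum('bij,nj->bni', rel_pose, homog)[..., :3]
        # grid_sample expects (x, y, z) coordinates in [-1, 1]; out-of-view
        # voxels are clamped here, whereas a real implementation would mask them.
        grid = coords.reshape(B, X, Y, Z, 3).clamp(-1.0, 1.0)
        return F.grid_sample(prev_vol, grid, align_corners=False)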


Teaser


Our hierarchical context alignment method versus previous geometric-modeling (e.g., OccFormer) and temporal-modeling (e.g., VoxFormer-T) methods for semantic occupancy prediction.


The effect of hierarchical context alignment on the SemanticKITTI validation set. For the 'w/o align' setting, we remove both the temporal and the geometric alignment. The proposed hierarchical context alignment strategy captures more reliable and comprehensive semantic scenes and leads to more stable representation learning.


Overview


Overall framework of our proposed hierarchical context alignment scheme, which is composed of the Geometric Alignment, the Temporal Alignment, and the Global Composition. The Geometric Alignment is achieved with the Geometric Confidence-awareness Lifting (GCL) module. The Temporal Alignment is realized with the Cross-frame Pattern Affinity (CPA) measurement and the Affinity-based Dynamic Refinement (ADR) module. Afterward, the Global Composition with the Depth-Hypothesis-Based Transformation (DHBT) module aggregates the disentangled relevant content for reliable, fine-grained semantic occupancy prediction.
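As a rough illustration of the Global Composition step, the sketch below gates the temporally aligned volume by a per-voxel semantic-consistency score before fusing it with the geometric volume. The module name, the shared semantic head, and the overlap-based consistency measure are assumptions made for illustration; the actual DHBT-based composition may differ.

    import torch
    import torch.nn as nn

    class GlobalComposition(nn.Module):
        """Fuse the aligned geometric and temporal volumes, gated by how much
        their per-voxel semantic predictions agree."""

        def __init__(self, channels, num_classes):
            super().__init__()
            # One shared semantic head keeps the two class distributions
            # comparable, so their overlap is a meaningful consistency score.
            self.sem_head = nn.Conv3d(channels, num_classes, kernel_size=1)
            self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

        def forward(self, geo_vol, temp_vol):
            # geo_vol, temp_vol: (B, C, X, Y, Z) aligned feature volumes.
            geo_sem = self.sem_head(geo_vol).softmax(dim=1)
            temp_sem = self.sem_head(temp_vol).softmax(dim=1)
            # Per-voxel consistency in (0, 1): overlap of the two distributions.
            consistency = (geo_sem * temp_sem).sum(dim=1, keepdim=True)
            # Down-weight temporal cues wherever the two branches disagree.
            return self.fuse(torch.cat([geo_vol, consistency * temp_vol], dim=1))

Gating only the temporal volume reflects the intuition that the geometric branch is anchored to the current observation, while temporal evidence is useful only where it remains semantically consistent with it.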


Experimental Results

Quantitative Results

Quantitative results on the SemanticKITTI validation set, compared with state-of-the-art camera-based semantic scene completion methods. "S-T", "S", and "M" denote temporal stereo images, single-frame stereo images, and single-frame monocular images, respectively. The top two performers are marked in bold and underlined.
Quantitative results on the SemanticKITTI test set, compared with state-of-the-art semantic scene completion methods. "S-T", "S", and "M" denote temporal stereo images, single-frame stereo images, and single-frame monocular images, respectively. The top two performers are marked in bold and underlined.
Quantitative results on the nuScenes-Occupancy validation set, compared with state-of-the-art semantic scene completion methods. The top two performers are marked in bold and underlined. "L", "M", "M-D", and "M-T" denote LiDAR inputs, monocular images, monocular images with depth maps, and temporal monocular images, respectively. The LiDAR points are projected and densified to generate the depth maps.
Quantitative results on the nuScenes validation set, compared with state-of-the-art LiDAR semantic segmentation methods. The top two performers are marked in bold and underlined. "L", "M", and "M-T" denote LiDAR inputs, monocular images, and temporal monocular images, respectively.

Qualitative Results

Qualitative results on the SemanticKITTI validation set. Our proposed Hi-SOP captures more complete and accurate scene layouts than VoxFormer, and hallucinates more plausible scenery beyond the camera's field of view.
Qualitative results on the nuScenes-Occupancy validation set. Our proposed Hi-SOP generates semantic scenes that are even more complete and comprehensive than the ground-truth annotations.
Qualitative results on the nuScenes validation set. Our proposed Hi-SOP generates more accurate semantic labels than TPVFormer.

Ablation Study

Ablation study for different architectural components on the SemanticKITTI validation set.
Ablation studies on the quantity settings for the Multi-group Context Generation and the Multi-level Deformable Block.