Rotation-Invariant Local-to-Global Representation
Learning for 3D Point Cloud

Seohyun Kim Jaeyoo Park Bohyung Han

ECE & ASRI, Seoul National University, Korea

Abstract

We propose a local-to-global representation learning algorithm for 3D point cloud data, which handles various geometric transformations, especially rotation, without explicit data augmentation with respect to those transformations. Our model takes advantage of multi-level abstraction based on graph convolutional neural networks, constructing a descriptor hierarchy that encodes rotation-invariant shape information of an input object in a bottom-up manner. The descriptors at each level are obtained from graph-based neural networks via stochastic sampling of 3D points, which makes the learned representations robust to variations of the input data. The proposed algorithm achieves state-of-the-art performance on rotation-augmented 3D object recognition benchmarks, and we further analyze its characteristics through comprehensive ablative experiments.
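To make the notion of rotation invariance concrete, the snippet below is a minimal NumPy sketch, not the paper's exact descriptor: each point is described by distances and angles within its local neighborhood, quantities that depend only on relative geometry and therefore cannot change under rotation.

import numpy as np

def local_rotation_invariant_features(points, k=8):
    # For each point, build features from distances and angles to its
    # k nearest neighbors; both quantities depend only on relative
    # geometry, so rotating the whole cloud leaves them unchanged.
    n = points.shape[0]
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]   # nearest neighbors, self excluded
    feats = []
    for i in range(n):
        rel = points[knn[i]] - points[i]        # neighbor offsets, (k, 3)
        dist = np.linalg.norm(rel, axis=1)      # rotation-invariant lengths
        axis = rel.mean(axis=0)
        axis /= np.linalg.norm(axis) + 1e-9     # local reference direction
        cosang = rel @ axis / (dist + 1e-9)     # rotation-invariant angles
        feats.append(np.concatenate([np.sort(dist), np.sort(cosang)]))
    return np.stack(feats)

# Sanity check: the features are identical after an arbitrary rotation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
if np.linalg.det(q) < 0:
    q[:, 0] *= -1                               # make it a proper rotation
assert np.allclose(local_rotation_invariant_features(pts),
                   local_rotation_invariant_features(pts @ q.T), atol=1e-6)

Because only relative geometry enters the features, the assertion passes for any rotation; the paper builds its learned descriptors on this same principle rather than on augmenting the training data with rotated copies.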


Overall Framework

The proposed network architecture for rotation-invariant 3D object classification. The descriptor extension stage stacks multiple descriptor extraction modules, which expand the scope of descriptors by grouping local features while maintaining rotation invariance. The graph-based abstraction stage aggregates local descriptors with graph convolutional neural networks by constructing their topological structure at each level of the hierarchy.
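The sketch below is a hedged illustration of such a local-to-global hierarchy, assuming a PointNet++-style sample-group-pool loop; function names, sampling strategy, and parameters are placeholders for exposition, not the released implementation.

import numpy as np

def farthest_point_sampling(points, m, rng):
    # Pick m well-spread points: repeatedly take the point farthest
    # from everything chosen so far, starting from a random seed point.
    dist = np.full(points.shape[0], np.inf)
    chosen = [int(rng.integers(points.shape[0]))]
    for _ in range(m - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(np.argmax(dist)))
    return np.array(chosen)

def abstraction_level(points, feats, m, k, rng):
    # One level of the hierarchy: group k neighbors around each of m
    # sampled centers and max-pool their descriptors, so every surviving
    # descriptor summarizes a larger region than the ones below it.
    centers = farthest_point_sampling(points, m, rng)
    d = np.linalg.norm(points[centers][:, None] - points[None], axis=-1)
    knn = np.argsort(d, axis=1)[:, :k]
    return points[centers], feats[knn].max(axis=1)   # permutation-invariant pooling

# Toy run: 1024 points with 32-dim local descriptors -> 256 -> 64 coarser ones.
rng = np.random.default_rng(0)
pts, f = rng.normal(size=(1024, 3)), rng.normal(size=(1024, 32))
for m in (256, 64):
    pts, f = abstraction_level(pts, f, m=m, k=16, rng=rng)
    print(pts.shape, f.shape)

Stacking such levels is what lets the network pass rotation-invariant local evidence upward into a global shape descriptor; in the paper the pooling is replaced by learned graph convolutions over the grouped descriptors.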

Results

Table 1: 3D object classification results on ModelNet40. The notation train/test denotes the rotations applied during training and testing (z: rotations about the vertical axis only; SO(3): arbitrary 3D rotations). The last column, Drop, shows the performance difference between the SO(3)/SO(3) and z/SO(3) settings.
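For readers unfamiliar with the train/test notation, the following sketch generates the two rotation regimes; these samplers follow the common convention in the rotation-robustness literature and are not code from this project.

import numpy as np

def random_z_rotation(rng):
    # Rotation about the vertical (z) axis only: the "z" setting.
    t = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def random_so3_rotation(rng):
    # An arbitrary 3D rotation: the "SO(3)" setting.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1            # flip a column to get determinant +1
    return q

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))
train_pts = pts @ random_z_rotation(rng).T    # e.g. training under "z"
test_pts = pts @ random_so3_rotation(rng).T   # e.g. testing under "SO(3)"

Under this convention, z/SO(3) is the hardest setting for augmentation-based methods, since the test rotations are never seen during training; a small Drop value therefore indicates genuine rotation invariance rather than memorized augmentation.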

Paper

NeurIPS 2020 paper. (pdf, 2.4MB)

Citation

Seohyun Kim, Jaeyoo Park, and Bohyung Han. Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
Bibtex

Code

GitHub: Link