Project author: QingyongHu

Project description:
Generalized 3D Surface Descriptor (CVPR 2021)
Language: Python
Project address: git://github.com/QingyongHu/SpinNet.git
Created: 2020-11-18T09:18:56Z
Project community: https://github.com/QingyongHu/SpinNet

License: MIT License


SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration (CVPR 2021)

This is the official repository of SpinNet, a conceptually simple neural architecture to extract local
features which are rotationally invariant whilst sufficiently informative to enable accurate registration. For technical details, please refer to:

SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration

Sheng Ao*, Qingyong Hu*, Bo Yang, Andrew Markham, Yulan Guo.

(* indicates equal contribution)

[Paper] [Video] [Project page]

(1) Overview

(2) Setup

This code has been tested with Python 3.6, PyTorch 1.6.0, and CUDA 10.2 on Ubuntu 18.04.

  • Clone the repository
    1. git clone https://github.com/QingyongHu/SpinNet && cd SpinNet
  • Setup conda virtual environment
    1. conda create -n spinnet python=3.6
    2. source activate spinnet
    3. conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
    4. conda install -c open3d-admin open3d==0.11.1
    5. pip install "git+git://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
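
To confirm the environment is set up correctly, a quick sanity check along these lines can help. This is a minimal sketch; the expected version numbers are the ones listed above, and pointnet2_ops is the package installed by the last command:

    # Minimal environment sanity check (run inside the spinnet environment).
    import torch
    import open3d as o3d
    import pointnet2_ops

    print("PyTorch:", torch.__version__)        # expect 1.6.0
    print("CUDA available:", torch.cuda.is_available())
    print("Open3D:", o3d.__version__)           # expect 0.11.1
    print("pointnet2_ops imported successfully")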

(3) 3DMatch

Download the processed dataset from Google Drive or Baidu Yun (verification code: d1vn) and put the folder into the data directory.
Then the structure should be as follows:

  --data--3DMatch--fragments
                 |--intermediate-files-real
                 |--patches
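
Before training, it can be worth verifying that the extracted folders sit where the scripts expect them. A small sketch, assuming the layout above:

    # Check the processed 3DMatch data layout (folder names taken from the tree above).
    import os

    root = os.path.join("data", "3DMatch")
    for sub in ("fragments", "intermediate-files-real", "patches"):
        path = os.path.join(root, sub)
        print(path, "found" if os.path.isdir(path) else "MISSING")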

Training

Training SpinNet on the 3DMatch dataset:

  1. cd ./ThreeDMatch/Train
  2. python train.py

Testing

Evaluate the performance of the trained models on the 3DMatch dataset:

  1. cd ./ThreeDMatch/Test
  2. python preparation.py

The learned descriptors for each point will be saved in the ThreeDMatch/Test/SpinNet_{timestr}/ folder.
The Feature Matching Recall (FMR) and Inlier Ratio (IR) can then be calculated by running:

  1. python evaluate.py [timestr]

The ground-truth poses are provided in the ThreeDMatch/Test/gt_result folder.
The Registration Recall can be calculated by running evaluate.m in ThreeDMatch/Test/3dmatch, which is provided by 3DMatch.
Note that you need to modify descriptorName to SpinNet_{timestr} in the ThreeDMatch/Test/3dmatch/evaluate.m file.
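
For intuition, the two feature-matching metrics can be sketched as follows. This is a conceptual illustration using thresholds commonly adopted for 3DMatch (10 cm inlier distance, 5% FMR threshold), not the repository's evaluate.py; the array shapes and the inlier_ratio / feature_matching_recall helpers are assumptions made for the example.

    # Conceptual sketch of Inlier Ratio (IR) and Feature Matching Recall (FMR).
    # Assumes per-fragment keypoints (N, 3), descriptors (N, D), and a ground-truth
    # 4x4 pose that aligns the source fragment to the target fragment.
    import numpy as np
    from scipy.spatial import cKDTree

    def inlier_ratio(src_kpts, src_desc, tgt_kpts, tgt_desc, gt_pose, inlier_dist=0.10):
        # Match each source descriptor to its nearest target descriptor.
        nn_idx = cKDTree(tgt_desc).query(src_desc, k=1)[1]
        # Bring the source keypoints into the target frame with the ground-truth pose.
        src_h = np.concatenate([src_kpts, np.ones((len(src_kpts), 1))], axis=1)
        src_in_tgt = (gt_pose @ src_h.T).T[:, :3]
        # A match counts as an inlier if the aligned points lie within inlier_dist (meters).
        dists = np.linalg.norm(src_in_tgt - tgt_kpts[nn_idx], axis=1)
        return float(np.mean(dists < inlier_dist))

    def feature_matching_recall(per_pair_inlier_ratios, tau2=0.05):
        # FMR: fraction of fragment pairs whose inlier ratio exceeds tau2.
        return float(np.mean(np.asarray(per_pair_inlier_ratios) > tau2))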

(4) KITTI

Download the processed dataset from Google Drive or Baidu Yun (verification code: d1vn) and put the folder into the data directory.
Then the structure is as follows:

  --data--KITTI--dataset
               |--icp
               |--patches

Training

Training SpinNet on the KITTI dataset:

  1. cd ./KITTI/Train/
  2. python train.py

Testing

Evaluate the performance of the trained models on the KITTI dataset:

  1. cd ./KITTI/Test/
  2. python test_kitti.py
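
Registration quality on KITTI is usually summarized by the relative translation and rotation errors between the estimated and ground-truth poses. A minimal sketch of these two errors, assuming 4x4 homogeneous transforms (illustrative only, not the repository's test_kitti.py):

    # Relative translation error (meters) and relative rotation error (degrees)
    # between an estimated and a ground-truth 4x4 pose.
    import numpy as np

    def registration_errors(est_pose, gt_pose):
        # Translation error: Euclidean distance between the translation components.
        rte = np.linalg.norm(est_pose[:3, 3] - gt_pose[:3, 3])
        # Rotation error: angle of the relative rotation R_est^T @ R_gt.
        cos_angle = (np.trace(est_pose[:3, :3].T @ gt_pose[:3, :3]) - 1.0) / 2.0
        rre = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        return rte, rre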

(5) ETH

The test set can be downloaded from here. Put the folder into the data directory; the structure is then as follows:

  --data--ETH--gazebo_summer
             |--gazebo_winter
             |--wood_autmn
             |--wood_summer

(6) Generalization across Unseen Datasets

3DMatch to ETH

Generalization from 3DMatch dataset to ETH dataset:

  1. cd ./generalization/ThreeDMatch-to-ETH
  2. python preparation.py

The descriptors for each point will be generated and saved in the generalization/ThreeDMatch-to-ETH/SpinNet_{timestr}/ folder.
The Feature Matching Recall and Inlier Ratio can then be calculated by running:

  1. python evaluate.py [timestr]

3DMatch to KITTI

Generalization from 3DMatch dataset to KITTI dataset:

  1. cd ./generalization/ThreeDMatch-to-KITTI
  2. python test.py

KITTI to 3DMatch

Generalization from KITTI dataset to 3DMatch dataset:

  1. cd ./generalization/KITTI-to-ThreeDMatch
  2. python preparation.py

The descriptors for each point will be generated and saved in the generalization/KITTI-to-ThreeDMatch/SpinNet_{timestr}/ folder.
The Feature Matching Recall and Inlier Ratio can then be calculated by running:

  1. python evaluate.py [timestr]

Acknowledgement

In this project, we use (parts of) the implementations of the following works:

Citation

If you find our work useful in your research, please consider citing:

  @inproceedings{ao2020SpinNet,
    title={SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration},
    author={Ao, Sheng and Hu, Qingyong and Yang, Bo and Markham, Andrew and Guo, Yulan},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2021}
  }

References

[1] 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions, Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser, CVPR 2017.

Updates

  • 03/04/2021: The code is released!
  • 01/03/2021: This paper has been accepted by CVPR 2021!
  • 25/11/2020: Initial release!

Related Repos

  1. RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds
  2. SoTA-Point-Cloud: Deep Learning for 3D Point Clouds: A Survey
  3. 3D-BoNet: Learning Object Bounding Boxes for 3D Instance Segmentation on Point Clouds
  4. SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
  5. SQN: Weakly-Supervised Semantic Segmentation of Large-Scale 3D Point Clouds with 1000x Fewer Labels