A Convolutional Neural Network for Point Cloud Instance Segmentation in Cluttered Scene Trained by Synthetic Data without Color

Yajun Xu, Shogo Arai, Fuyuki Tokuda, Kazuhiro Kosuge

Research output: Contribution to journal › Article › peer-review


Abstract

3D instance segmentation is a fundamental task in computer vision. Effective segmentation plays an important role in robotic tasks, augmented reality, autonomous driving, and other applications. With the success of convolutional neural networks in 2D image processing, deep learning methods for segmenting 3D point clouds have received much attention. However, good convergence of the training loss usually requires a large amount of human-annotated data, and building such a 3D dataset is time-consuming. This paper proposes a method for training convolutional neural networks to predict instance segmentation results using synthetic data. The proposed method is based on the SGPN framework. We replace the original feature extractor with a dynamic graph convolutional neural network (DGCNN), which learns to extract local geometric features, and we propose a simple and effective loss function that makes the network focus more on hard examples. Experiments show that the proposed method significantly outperforms the state-of-the-art method on both the Stanford 3D Indoor Semantics Dataset and our own datasets.
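The abstract mentions a loss function that weights hard examples more heavily but does not give its exact form. A common way to achieve this effect is a focal-style modulation of the cross-entropy loss, where examples the network already classifies confidently are down-weighted. The sketch below (names, `gamma` value, and the NumPy formulation are illustrative assumptions, not the paper's actual loss) shows the general idea:

```python
import numpy as np

def focal_weighted_bce(p, y, gamma=2.0, eps=1e-7):
    """Binary cross-entropy with a focal-style modulating factor.

    Hard examples (where the predicted probability of the true class
    is low) receive a larger weight via (1 - p_t) ** gamma, so the
    training signal concentrates on points the network gets wrong.
    Note: this is an illustrative stand-in, not the loss from the paper.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)  # probability assigned to the true class
    weight = (1.0 - p_t) ** gamma       # focal modulation: ~0 for easy examples
    return float(np.mean(-weight * np.log(p_t)))
```

With `gamma = 0` this reduces to ordinary binary cross-entropy; increasing `gamma` shifts the average loss toward the hard examples.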

Original language: English
Article number: 9025047
Pages (from-to): 70262-70269
Number of pages: 8
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 2020

Keywords

  • Point cloud
  • deep learning
  • instance segmentation

ASJC Scopus subject areas

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)

