Abstract
3D LiDAR sensors are indispensable for the robust vision of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing properties. Existing studies have tackled this issue by learning inter-domain mapping, but the transferability is constrained by the training configuration, and the training is susceptible to a peculiar lossy noise called ray-drop. To address this issue, this paper proposes a generative model of LiDAR range images applicable to data-level domain transfer. Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial network along with a differentiable ray-drop effect. We demonstrate the fidelity and diversity of our model in comparison with point-based and image-based state-of-the-art generative models. We also showcase upsampling and restoration applications. Furthermore, we introduce a Sim2Real application for LiDAR semantic segmentation. We demonstrate that our method is effective as a realistic ray-drop simulator and outperforms state-of-the-art methods.
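The differentiable ray-drop effect mentioned above can be pictured as a learned per-pixel binary mask applied to a generated range image. Below is a minimal, hypothetical PyTorch sketch assuming a straight-through relaxed-Bernoulli (Gumbel-sigmoid) formulation; the names `ray_drop_mask`, `drop_logits`, and `tau` are illustrative and not taken from the paper.

```python
# Hypothetical sketch of a differentiable ray-drop mask (not the paper's exact code).
import torch

def ray_drop_mask(drop_logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Sample a per-pixel keep/drop mask that stays differentiable,
    via a straight-through relaxed Bernoulli (Gumbel-sigmoid)."""
    u = torch.rand_like(drop_logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log(1 - u)            # logistic noise for reparameterization
    soft = torch.sigmoid((drop_logits + noise) / tau)  # relaxed sample in (0, 1)
    hard = (soft > 0.5).float()                        # discrete {0, 1} mask for the forward pass
    return hard + (soft - soft.detach())               # straight-through: hard forward, soft backward

# Usage: zero out dropped rays in a generated 64x1024 range image
range_image = torch.rand(1, 1, 64, 1024)               # (B, C, H, W) range values
drop_logits = torch.randn(1, 1, 64, 1024, requires_grad=True)
masked = range_image * ray_drop_mask(drop_logits)
masked.sum().backward()                                # gradients reach drop_logits
```

Because the mask remains differentiable, a drop pattern like this can be trained jointly with the generator under the adversarial loss, which is the property that lets such a model serve as a realistic ray-drop simulator.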