TY - CONF
T1 - A Generative Model of Underwater Images for Active Landmark Detection and Docking
AU - Liu, Shuang
AU - Ozay, Mete
AU - Xu, Hongli
AU - Lin, Yang
AU - Okatani, Takayuki
N1 - Funding Information:
This work was partly supported by JST CREST Grant Number JPMJCR14D1.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Underwater active landmarks (UALs) are widely used for short-range underwater navigation in underwater robotics tasks. Detection of UALs is challenging due to the large variance in underwater illumination, water quality, and camera viewpoint. Moreover, improvement of detection accuracy relies on the statistical diversity of the images used to train detection models. We propose a generative adversarial network, called Tank-to-field GAN (T2FGAN), to learn generative models of underwater images, and use the learned models for data augmentation to improve detection accuracy. To this end, a T2FGAN is first trained using images of UALs captured in a tank. The learned model is then used to generate images of UALs under different water quality, illumination, pose, and landmark configurations (WIPCs). In experimental analyses, we first explore statistical properties of images of UALs generated by T2FGAN under various WIPCs for active landmark detection. We then use the generated images to train detection algorithms. Experimental results show that training detection algorithms on the generated images improves detection accuracy. In field experiments, underwater docking tasks were successfully performed in a lake using detection models trained on datasets generated by T2FGAN.
AB - Underwater active landmarks (UALs) are widely used for short-range underwater navigation in underwater robotics tasks. Detection of UALs is challenging due to the large variance in underwater illumination, water quality, and camera viewpoint. Moreover, improvement of detection accuracy relies on the statistical diversity of the images used to train detection models. We propose a generative adversarial network, called Tank-to-field GAN (T2FGAN), to learn generative models of underwater images, and use the learned models for data augmentation to improve detection accuracy. To this end, a T2FGAN is first trained using images of UALs captured in a tank. The learned model is then used to generate images of UALs under different water quality, illumination, pose, and landmark configurations (WIPCs). In experimental analyses, we first explore statistical properties of images of UALs generated by T2FGAN under various WIPCs for active landmark detection. We then use the generated images to train detection algorithms. Experimental results show that training detection algorithms on the generated images improves detection accuracy. In field experiments, underwater docking tasks were successfully performed in a lake using detection models trained on datasets generated by T2FGAN.
UR - http://www.scopus.com/inward/record.url?scp=85081165475&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85081165475&partnerID=8YFLogxK
U2 - 10.1109/IROS40897.2019.8968146
DO - 10.1109/IROS40897.2019.8968146
M3 - Conference contribution
AN - SCOPUS:85081165475
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 8034
EP - 8039
BT - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2019
Y2 - 3 November 2019 through 8 November 2019
ER -