TY - GEN
T1 - Fast object detection for robots in a cluttered indoor environment using integral 3D feature table
AU - Kanezaki, Asako
AU - Suzuki, Takahiro
AU - Harada, Tatsuya
AU - Kuniyoshi, Yasuo
PY - 2011
Y1 - 2011
N2 - Realizing automatic object search by robots in an indoor environment is one of the most important and challenging topics in mobile robot research. If the target object does not exist in a nearby area, the obvious strategy is to go to the area in which it was last observed. We have developed a robot system that collects 3D scene data in an indoor environment during automatic routine crawling and detects objects quickly through a global search of the collected 3D scene data. The 3D scene data are obtained automatically by transforming color images and range images into a set of color voxel data using self-location information. To detect an object, the system slides the bounding box of the target object in fixed steps through the color voxel data, extracts 3D features in each box region, and computes the similarity between these features and the target object's features using an appropriate feature projection learned beforehand. Taking advantage of the additive property of our 3D features, both feature extraction and similarity calculation are considerably accelerated. In the object learning process, the system obtains the feature-projection matrix by weighting features that are unique to the target object more heavily than features it has in common with other objects, thereby reducing object detection errors.
AB - Realizing automatic object search by robots in an indoor environment is one of the most important and challenging topics in mobile robot research. If the target object does not exist in a nearby area, the obvious strategy is to go to the area in which it was last observed. We have developed a robot system that collects 3D scene data in an indoor environment during automatic routine crawling and detects objects quickly through a global search of the collected 3D scene data. The 3D scene data are obtained automatically by transforming color images and range images into a set of color voxel data using self-location information. To detect an object, the system slides the bounding box of the target object in fixed steps through the color voxel data, extracts 3D features in each box region, and computes the similarity between these features and the target object's features using an appropriate feature projection learned beforehand. Taking advantage of the additive property of our 3D features, both feature extraction and similarity calculation are considerably accelerated. In the object learning process, the system obtains the feature-projection matrix by weighting features that are unique to the target object more heavily than features it has in common with other objects, thereby reducing object detection errors.
UR - http://www.scopus.com/inward/record.url?scp=84867949589&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84867949589&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2011.5980129
DO - 10.1109/ICRA.2011.5980129
M3 - Conference contribution
AN - SCOPUS:84867949589
SN - 9781612843865
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 4026
EP - 4033
BT - 2011 IEEE International Conference on Robotics and Automation, ICRA 2011
T2 - 2011 IEEE International Conference on Robotics and Automation, ICRA 2011
Y2 - 9 May 2011 through 13 May 2011
ER -