Extracting a palm region at a fixed location from an input hand image is a crucial task in palmprint recognition for realising reliable person authentication under contactless and unconstrained conditions. A palm region can be extracted at a fixed location by using the gaps between fingers as reference points. An accurate and robust hand segmentation method is therefore indispensable for extracting a palm region from an image with a complex background taken under various environments. In this study, we propose HandSegNet, a hand segmentation method using a Convolutional Neural Network (CNN) for contactless palmprint recognition. HandSegNet employs a new CNN architecture consisting of an encoder–decoder model with a pyramid pooling module. In a performance evaluation using a set of synthesised hand images, HandSegNet exhibited the best segmentation results, with 98.90% accuracy and 93.20% intersection over union. The effectiveness of HandSegNet in contactless palmprint recognition is also demonstrated through experiments using a set of synthesised hand images. Comparing palmprint recognition performance when palm regions are extracted by three conventional methods and by HandSegNet, the proposed method achieves the lowest equal error rate of 4.995%, demonstrating its effectiveness in palm region extraction for contactless palmprint recognition.
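The segmentation metrics reported above (pixel accuracy and intersection over union) can be sketched for binary hand masks as follows; this is a minimal illustration with made-up toy masks, not the paper's evaluation code or data.

```python
def segmentation_metrics(pred, gt):
    """Pixel accuracy and intersection-over-union (IoU) for binary
    segmentation masks given as flat sequences of 0/1 labels,
    where 1 = hand pixel and 0 = background."""
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    tn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 0)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    accuracy = (tp + tn) / len(gt)
    union = tp + fp + fn
    iou = tp / union if union else 1.0  # empty masks match perfectly
    return accuracy, iou

# Toy 1-D example masks (hypothetical data, not from the paper).
pred = [1, 1, 0, 0, 1, 0, 1, 1]
gt   = [1, 0, 0, 0, 1, 1, 1, 1]
acc, iou = segmentation_metrics(pred, gt)  # acc = 0.75, iou = 4/6
```

In practice the same computation is applied per pixel over a full 2-D mask; IoU is the stricter of the two metrics because true negatives (correctly labelled background) do not inflate it.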
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition