TY - GEN
T1 - Understanding convolutional neural networks in terms of category-level attributes
AU - Ozeki, Makoto
AU - Okatani, Takayuki
PY - 2015/1/1
Y1 - 2015/1/1
AB - It has recently been reported that convolutional neural networks (CNNs) perform well on many image recognition tasks. They significantly outperform previous approaches not based on neural networks, particularly for object category recognition. This performance is arguably owing to their ability to discover, through learning, better image features for recognition tasks, resulting in better internal representations of the inputs. However, despite this good performance, it remains an open question why CNNs work so well and how they learn such good representations. In this study, we conjecture that the learned representations can be interpreted as category-level attributes with good properties. To examine this conjecture, we conducted several experiments using the AwA (Animals with Attributes) dataset and a CNN trained for ILSVRC-2012 in a fully supervised setting. We report that there exist units in the CNN that can predict some of the 85 semantic attributes fairly accurately, along with the detailed observation that this is true only for visual attributes and not for non-visual ones. It is natural to think that the CNN may discover not only semantic attributes but also non-semantic ones (or ones that are difficult to express in words). To explore this possibility, we performed zero-shot learning, regarding the activation patterns of upper layers as attributes describing the categories. The results show that this approach outperforms the state of the art by a significant margin.
UR - http://www.scopus.com/inward/record.url?scp=84945972026&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84945972026&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-16808-1_25
DO - 10.1007/978-3-319-16808-1_25
M3 - Conference contribution
AN - SCOPUS:84945972026
SN - 9783319168074
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 362
EP - 375
BT - Computer Vision - ACCV 2014 - 12th Asian Conference on Computer Vision, Revised Selected Papers
A2 - Yang, Ming-Hsuan
A2 - Saito, Hideo
A2 - Cremers, Daniel
A2 - Reid, Ian
PB - Springer-Verlag
T2 - 12th Asian Conference on Computer Vision, ACCV 2014
Y2 - 1 November 2014 through 5 November 2014
ER -