TY - GEN
T1 - Truncating Wide Networks Using Binary Tree Architectures
AU - Zhang, Yan
AU - Ozay, Mete
AU - Li, Shuohao
AU - Okatani, Takayuki
N1 - Funding Information:
This work was partly supported by CREST, JST Grant Number JPMJCR14D1 and JSPS KAKENHI Grant Number JP15H05919.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/12/22
Y1 - 2017/12/22
N2 - In this paper, we propose a binary tree architecture to truncate the architecture of wide networks by reducing their width. More precisely, in the proposed architecture, the width is incrementally reduced from lower layers to higher layers in order to increase the expressive capacity of networks with a smaller increase in parameter size. Also, in order to ease the vanishing gradient problem, features obtained at different layers are concatenated to form the output of our architecture. By employing the proposed architecture on a baseline wide network, we can construct and train a new network with the same depth but a considerably smaller number of parameters. In our experimental analyses, we observe that the proposed architecture enables us to obtain a better trade-off between parameter size and accuracy compared to baseline networks on various benchmark image classification datasets. The results show that our model can decrease the classification error of a baseline from 20.43% to 19.22% on CIFAR-100 using only 28% of the parameters of the baseline. Code is available at https://github.com/ZhangVision/bitnet.
AB - In this paper, we propose a binary tree architecture to truncate the architecture of wide networks by reducing their width. More precisely, in the proposed architecture, the width is incrementally reduced from lower layers to higher layers in order to increase the expressive capacity of networks with a smaller increase in parameter size. Also, in order to ease the vanishing gradient problem, features obtained at different layers are concatenated to form the output of our architecture. By employing the proposed architecture on a baseline wide network, we can construct and train a new network with the same depth but a considerably smaller number of parameters. In our experimental analyses, we observe that the proposed architecture enables us to obtain a better trade-off between parameter size and accuracy compared to baseline networks on various benchmark image classification datasets. The results show that our model can decrease the classification error of a baseline from 20.43% to 19.22% on CIFAR-100 using only 28% of the parameters of the baseline. Code is available at https://github.com/ZhangVision/bitnet.
UR - http://www.scopus.com/inward/record.url?scp=85041892006&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85041892006&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2017.231
DO - 10.1109/ICCV.2017.231
M3 - Conference contribution
AN - SCOPUS:85041892006
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 2116
EP - 2124
BT - Proceedings - 2017 IEEE International Conference on Computer Vision, ICCV 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 16th IEEE International Conference on Computer Vision, ICCV 2017
Y2 - 22 October 2017 through 29 October 2017
ER -