Error measures of the back-propagation learning algorithm

Sumiyoshi Fujiki, Mitsuyuki Nakao, Nahomi M. Fujiki

Research output: Article, peer-reviewed

5 Citations (Scopus)

Abstract

If the conventional quadratic error measure is used, the learning process of the error back-propagation algorithm often becomes trapped in metastable states, and learning consequently suffers serious inefficiency. If the Kullback-Leibler divergence is used instead, it can be shown numerically that the most typical metastable states are removed and that the learning efficiency improves significantly as the number of hidden neurons is increased. This means that the Kullback-Leibler divergence is a superior error measure for the error back-propagation algorithm, one that scales with the redundancy of the hidden neurons. This scalability is a great advantage in applications, since we can simply provide a network that is large enough for a given problem and need not know its optimal network size in advance.
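As an illustration of the contrast between the two error measures (not taken from the paper; a minimal sketch assuming a single sigmoid output unit with target t and output y = sigmoid(a)), the snippet below compares the quadratic error and a Kullback-Leibler-type error together with their gradients with respect to the unit's pre-activation a. With the quadratic measure the gradient carries the sigmoid derivative y(1 - y), which vanishes when the output saturates; with the KL measure that factor cancels, which is one way to see why flat, slowly learning regions of the error surface can be alleviated.

```python
import numpy as np

def sigmoid(a):
    """Logistic output unit."""
    return 1.0 / (1.0 + np.exp(-a))

def quadratic_error(t, y):
    """Conventional quadratic (sum-of-squares) error measure."""
    return 0.5 * np.sum((t - y) ** 2)

def kl_error(t, y, eps=1e-12):
    """Kullback-Leibler-type error between target t and output y, both in (0, 1)."""
    t = np.clip(t, eps, 1.0 - eps)
    y = np.clip(y, eps, 1.0 - eps)
    return np.sum(t * np.log(t / y) + (1.0 - t) * np.log((1.0 - t) / (1.0 - y)))

def grad_quadratic(t, y):
    """dE/da for the quadratic error: the sigmoid derivative y(1 - y)
    appears as a factor and can vanish when the output saturates."""
    return (y - t) * y * (1.0 - y)

def grad_kl(t, y):
    """dE/da for the KL error: the sigmoid derivative cancels, so the
    gradient stays proportional to the output error itself."""
    return y - t

if __name__ == "__main__":
    t = 1.0         # target value
    a = -4.0        # pre-activation of a badly saturated unit
    y = sigmoid(a)  # output close to 0 although the target is 1
    print("quadratic:", quadratic_error(t, y), "grad:", grad_quadratic(t, y))
    print("KL       :", kl_error(t, y), "grad:", grad_kl(t, y))
```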

Original language: English
Pages (from-to): 1091-1095
Number of pages: 5
Journal: Journal of the Korean Physical Society
Volume: 40
Issue number: 6
Publication status: Published - Jun 2002

ASJC Scopus subject areas

  • Physics and Astronomy (all)
