Generalization ability of a perceptron with nonmonotonic transfer function

Jun-ichi Inoue, Hidetoshi Nishimori, Yoshiyuki Kabashima

Research output: Article › peer-review

Abstract

We investigate the generalization ability of a perceptron with a nonmonotonic transfer function of the reversed-wedge type in on-line mode. This network is identical to a parity machine, a multilayer network. We consider several learning algorithms. Under the perceptron algorithm the generalization error is shown to decrease according to the α^{-1/3} law, as in the case of a simple perceptron, within a restricted range of the parameter a characterizing the nonmonotonic transfer function. For other values of a, the perceptron algorithm drives the weight vector of the student to the state exactly opposite to that of the teacher. The Hebbian learning algorithm has a similar property: it works only in a limited range of the parameter. The conventional AdaTron algorithm does not give a vanishing generalization error for any value of a. We therefore introduce a modified AdaTron algorithm that yields good performance for all values of a. We also investigate the effects of optimizing the learning rate as well as of optimizing the learning algorithm itself. Both methods give excellent learning curves proportional to 1/α. The latter optimization is related to Bayes statistics and is shown to yield useful hints for extracting the maximum amount of information needed to accelerate the learning process.
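To make the model concrete, the following is a minimal numerical sketch of on-line learning with a reversed-wedge teacher, written as an illustration rather than the paper's protocol: the output function sgn((v - a) v (v + a)) is the standard reversed-wedge form, while the dimension N, the value a = 1.0, and the error-driven update rule are my assumptions.

```python
# Minimal sketch (an illustration, not code from the paper): on-line
# perceptron-style learning of a teacher with a reversed-wedge transfer
# function applied to its local field v. N, a, and the update are assumed.
import numpy as np

rng = np.random.default_rng(0)
N, a = 500, 1.0                           # input dimension and wedge width (assumed)
teacher = rng.standard_normal(N)
teacher /= np.linalg.norm(teacher)        # unit teacher vector B
student = rng.standard_normal(N)          # student vector J

def reversed_wedge(v, a):
    """Nonmonotonic output: +1 for v > a or -a < v < 0, -1 otherwise."""
    return np.sign((v - a) * v * (v + a))

for _ in range(20 * N):                   # alpha = (examples seen)/N runs up to 20
    x = rng.standard_normal(N)            # random input; local fields are O(1)
    label = reversed_wedge(teacher @ x, a)
    h = student @ x / np.linalg.norm(student)  # student's normalized local field
    if reversed_wedge(h, a) != label:     # update only when the answer is wrong
        student += label * x / np.sqrt(N)

R = teacher @ student / np.linalg.norm(student)
print(f"teacher-student overlap R = {R:.3f}")
# Per the abstract, outside a restricted range of a the perceptron rule can
# instead converge toward R = -1, the anti-teacher state.
```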

Original language: English
Pages (from-to): 849-860
Number of pages: 12
Journal: Physical Review E - Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics
Volume: 58
Issue number: 1
DOI
Publication status: Published - 1998

ASJC Scopus subject areas

  • Statistical and Nonlinear Physics
  • Statistics and Probability
  • Condensed Matter Physics

