Letter on Convergence of In-Parameter-Linear Nonlinear Neural Architectures With Gradient Learnings

Ivo Bukovsky, Gejza Dohnal, Peter M. Benes, Kei Ichiji, Noriyasu Homma

Research output: Contribution to journal › Article › peer-review

Abstract

This letter summarizes and proves the concept of bounded-input bounded-state (BIBS) stability for weight convergence of a broad family of in-parameter-linear nonlinear neural architectures (IPLNAs), as it generally applies to a broad family of incremental gradient learning algorithms. A practical BIBS convergence condition, applicable to every individual learning point or batch in real-time applications, results from the derived proofs.
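To make the setting concrete, the sketch below (an illustration added here, not taken from the letter) shows what an in-parameter-linear nonlinear architecture with incremental gradient learning looks like: the output is linear in the weights, y = wᵀφ(x), with a fixed nonlinear feature map φ (here a second-order polynomial expansion), and each sample triggers one gradient step whose effective step size is kept inside the classical normalized-LMS-style bound 0 < μ_eff·‖φ(x)‖² < 2. The feature map, step size, and bound are illustrative assumptions standing in for the kind of per-sample convergence check the letter formalizes; they are not the specific BIBS condition derived in the paper.

import numpy as np

def poly_features(x):
    """Second-order polynomial feature map (an example IPLNA basis)."""
    x = np.asarray(x, dtype=float)
    quad = np.outer(x, x)[np.triu_indices(len(x))]  # unique second-order terms
    return np.concatenate(([1.0], x, quad))

def incremental_gradient_step(w, x, d, mu=0.5):
    """One incremental gradient update with a per-sample step-size check.

    The bound enforced here is the classical normalized-LMS-style condition
    for linear-in-parameters models (an assumption for illustration only).
    """
    phi = poly_features(x)
    e = d - w @ phi                      # instantaneous prediction error
    norm_sq = phi @ phi
    mu_eff = mu / norm_sq                # normalize the step by ||phi(x)||^2
    assert 0.0 < mu_eff * norm_sq < 2.0, "step size violates convergence bound"
    return w + mu_eff * e * phi

# Toy usage: learn a quadratic target from streaming samples.
rng = np.random.default_rng(0)
w = np.zeros(len(poly_features(np.zeros(2))))
for _ in range(2000):
    x = rng.uniform(-1, 1, size=2)
    d = 1.0 + 2.0 * x[0] - x[1] + 0.5 * x[0] * x[1]  # target, linear in the features
    w = incremental_gradient_step(w, x, d)

Because the target is itself linear in the chosen features, the per-sample updates stay bounded and the weights converge; the letter's contribution is a general proof and practical condition of this kind for the whole IPLNA family and a broad class of incremental gradient learnings.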

Original language: English
Journal: IEEE Transactions on Neural Networks and Learning Systems
Publication status: Accepted/In press - 2021

Keywords

  • Bounded-input bounded-state stability (BIBS)
  • Convergence
  • Extreme learning machines
  • In-parameter-linear nonlinear neural architectures (IPLNAs)
  • Incremental gradient learnings
  • Input-to-state stability (ISS)
  • Kernel
  • Learning systems
  • Loss measurement
  • Mechanical engineering
  • Neural networks
  • Polynomial neural networks
  • Random vector functional link networks
  • Stability criteria
  • Weight convergence

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

