Solving convex optimization problems using recurrent neural networks in finite time

Long Cheng, Zeng Guang Hou, Noriyasu Homma, Min Tan, Madan M. Gupta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

11 Citations (Scopus)

Abstract

A recurrent neural network is proposed for solving convex optimization problems. By employing a specific nonlinear unit, the proposed neural network is proven to converge to the optimal solution in finite time, which dramatically increases computational efficiency. Compared with most existing stability results, i.e., asymptotic stability and exponential stability, the obtained finite-time stability result is more attractive, and can therefore be considered a useful supplement to the current literature. In addition, a switching structure is suggested to further speed up the convergence of the neural network. Moreover, by using the penalty function method, the proposed neural network can be extended straightforwardly to solving constrained optimization problems. Finally, the satisfactory performance of the proposed approach is illustrated by two simulation examples.
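The finite-time convergence idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact network: it assumes a normalized (sign-type) gradient-flow dynamic, one common way a nonlinear unit yields finite-time rather than merely asymptotic convergence, discretized by forward Euler. The objective `f(x) = ||x - c||^2` and target `c` are hypothetical.

```python
import numpy as np

def finite_time_flow(grad_f, x0, step=1e-3, tol=1e-3, max_iter=10000):
    """Euler discretization of the normalized gradient flow
    x' = -grad_f(x) / ||grad_f(x)||.

    Because the state moves at unit speed toward the minimizer,
    a strongly convex objective is reached in finite time, unlike
    plain gradient flow x' = -grad_f(x), which only converges
    asymptotically as the gradient shrinks near the optimum.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm < tol:               # gradient (almost) vanishes: stop
            break
        x = x - step * g / norm      # unit-length descent step
    return x

# Hypothetical example: minimize f(x) = ||x - c||^2, grad_f(x) = 2(x - c).
c = np.array([1.0, -2.0])
sol = finite_time_flow(lambda x: 2.0 * (x - c), x0=np.zeros(2))
```

Here the trajectory covers the distance `||x0 - c||` at unit speed, so the number of Euler steps needed is roughly that distance divided by `step`; a vanilla gradient flow would instead slow down geometrically as it approaches `c`.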

Original language: English
Title of host publication: 2009 International Joint Conference on Neural Networks, IJCNN 2009
Pages: 538-543
Number of pages: 6
DOIs
Publication status: Published - 2009 Nov 18
Event: 2009 International Joint Conference on Neural Networks, IJCNN 2009 - Atlanta, GA, United States
Duration: 2009 Jun 14 - 2009 Jun 19

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks

Other

Other: 2009 International Joint Conference on Neural Networks, IJCNN 2009
Country/Territory: United States
City: Atlanta, GA
Period: 09/6/14 - 09/6/19

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
