Abstract
This paper demonstrates that the Lyapunov exponents of recurrent neural networks can be controlled by our proposed methods. One of the control methods minimizes the squared error e_λ = (λ − λ_obj)²/2 by a gradient method, where λ is the largest Lyapunov exponent of the network and λ_obj is the desired exponent. The exponent λ, which quantifies the dynamical complexity of the network, is computed by observing the state transitions over a long period. This method is, however, computationally expensive for large-scale recurrent networks, and the control is unstable for networks with chaotic dynamics, since the gradient correction propagated through time diverges due to the chaotic instability. We therefore also propose an approximation method that reduces the computational cost and realizes a stable control for chaotic networks. The new method is based on a stochastic relation that allows us to calculate the correction without evolving the network through time. Simulation results show that, under a certain restriction, the approximation method can control the exponent of recurrent networks with chaotic dynamics.
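As a rough illustration of the direct approach described above, the sketch below estimates the largest Lyapunov exponent of a simple recurrent map x_{t+1} = tanh(W x_t) by tangent-vector renormalization (a standard Benettin-style estimator) and takes one gradient step on e_λ toward λ_obj. The network form, the finite-difference gradient, and all parameter names are assumptions for illustration only; the paper's own method propagates the correction through time rather than using finite differences. The brute-force gradient here, requiring one long simulation per weight, also makes plain why the direct method is costly for large networks.

```python
import numpy as np

def largest_lyapunov(W, x0, steps=1000, discard=100, seed=0):
    """Estimate the largest Lyapunov exponent of the assumed map
    x_{t+1} = tanh(W x_t) by evolving a tangent vector alongside the
    state and averaging the log of its per-step growth."""
    x = x0.copy()
    v = np.random.default_rng(seed).normal(size=x.shape)
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for t in range(steps):
        x = np.tanh(W @ x)
        J = (1.0 - x**2)[:, None] * W  # Jacobian of the map at this step
        v = J @ v
        norm = np.linalg.norm(v)
        v /= norm                      # renormalize to avoid over/underflow
        if t >= discard:               # skip the initial transient
            log_growth += np.log(norm)
    return log_growth / (steps - discard)

def control_step(W, x0, lam_obj, eta=0.01, eps=1e-4):
    """One gradient step on e = (lam - lam_obj)^2 / 2, using a crude
    finite-difference estimate of d lam / d W. Re-evaluating lam once
    per weight is what makes this direct approach expensive."""
    lam = largest_lyapunov(W, x0)
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy()
            Wp[i, j] += eps
            grad[i, j] = (largest_lyapunov(Wp, x0) - lam) / eps
    # Chain rule: d e / d W = (lam - lam_obj) * d lam / d W
    return W - eta * (lam - lam_obj) * grad, lam

# Usage: drive a small random network toward lam_obj = 0.0 (edge of chaos).
rng = np.random.default_rng(1)
n = 8
W = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))
x0 = rng.normal(size=n)
for step in range(10):
    W, lam = control_step(W, x0, lam_obj=0.0)
    print(f"step {step:2d}: lambda = {lam:+.4f}")
```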
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the IEEE International Conference on Systems, Man and Cybernetics |
| Publisher | IEEE |
| Volume | 1 |
| Publication status | Published - 1999 |
| Event | 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' - Tokyo, Japan. Duration: 1999 Oct 12 → 1999 Oct 15 |
Other

| Other | 1999 IEEE International Conference on Systems, Man, and Cybernetics 'Human Communication and Cybernetics' |
| --- | --- |
| City | Tokyo, Japan |
| Period | 99/10/12 → 99/10/15 |
ASJC Scopus subject areas
- Hardware and Architecture
- Control and Systems Engineering