Synergy Emergence in Deep Reinforcement Learning for Full-Dimensional Arm Manipulation

Jihui Han, Jiazheng Chai, Mitsuhiro Hayashibe

Research output: Article › peer-review

Abstract

Full-dimensional natural arm manipulation is a challenging task for model-based control because of its high number of degrees of freedom and the unknown dynamics of the system. Deep reinforcement learning (DRL) offers a promising model-free approach to high-dimensional robotics problems. Although impressive results have been reported for arm manipulation tasks, how to create human-like synergetic reaching motion with learning algorithms remains an open problem. In this study, we apply DRL to full-dimensional arm manipulation in simulation and examine the relations among motion error, energy, and synergy emergence to reveal the mechanism behind motor synergy employment. Although synergy information is never encoded into the reward function, synergy naturally emerges alongside feedforward control, a situation similar to human motor learning. To the best of our knowledge, this is a pioneering study demonstrating that an error and energy optimization issue underlies motor synergy employment in DRL for reaching tasks. In addition, our proposed feedback-augmented DRL controller shows better capability than DRL alone in terms of synergy development and the coupled error-energy index. This implies that feedback control can support the learning process under redundancy by avoiding unnecessary random exploration.
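A minimal sketch of the two quantities the abstract pairs: a synergy measure over joint-angle trajectories and a reward built from only error and energy terms (with no synergy term, mirroring the abstract's statement that synergy is never rewarded). The paper's exact formulation is not given here, so the PCA-based index, the function names, the weight w_energy, and the choice of k principal components are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only; the index and reward below are assumptions,
# not the formulas used in the paper.
import numpy as np

def synergy_index(joint_trajectories: np.ndarray, k: int = 2) -> float:
    """Fraction of joint-space motion variance explained by the first k
    principal components. joint_trajectories: (timesteps, num_joints)."""
    centered = joint_trajectories - joint_trajectories.mean(axis=0)
    # Singular values give the variance captured by each principal component.
    _, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
    variance = singular_values ** 2
    return float(variance[:k].sum() / variance.sum())

def reaching_reward(end_effector_pos, target_pos, joint_torques, w_energy=1e-3):
    """Reward with only an error term and an energy penalty; no synergy
    information is encoded."""
    error = np.linalg.norm(np.asarray(end_effector_pos) - np.asarray(target_pos))
    energy = np.sum(np.square(joint_torques))
    return -error - w_energy * energy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 7-DoF trajectory driven mostly by two latent commands (high synergy).
    latents = rng.standard_normal((200, 2))
    mixing = rng.standard_normal((2, 7))
    traj = latents @ mixing + 0.05 * rng.standard_normal((200, 7))
    print("synergy index (k=2):", synergy_index(traj, k=2))
    print("reward:", reaching_reward([0.1, 0.2, 0.3], [0.0, 0.2, 0.3], np.zeros(7)))
```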

Original language: English
Article number: 9345796
Pages (from-to): 498-509
Number of pages: 12
Journal: IEEE Transactions on Medical Robotics and Bionics
Volume: 3
Issue number: 2
DOI
Publication status: Published - May 2021

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence
  • Human-Computer Interaction
  • Control and Optimization
  • Biomedical Engineering
