Full-dimensional natural arm manipulation is a challenging task for model-based control due to the arm's high number of degrees of freedom and the unknown dynamics of the system. Deep reinforcement learning (DRL) offers a promising model-free approach to high-dimensional robotics problems. Although impressive results have been reported for arm manipulation tasks, how to create human-like synergetic reaching motion with learning algorithms remains an open problem. In this study, we apply DRL to full-dimensional arm manipulation in simulation and examine the relations among motion error, energy, and synergy emergence, in order to reveal the mechanism behind the employment of motor synergy. Although no synergy information is encoded into the reward function, synergy naturally emerges along with feedforward control, mirroring human motion learning. To the best of our knowledge, this is a pioneering study demonstrating that error and energy optimization underlies the employment of motor synergy in DRL for reaching tasks. In addition, our proposed feedback-augmented DRL controller outperforms plain DRL in terms of synergy development and the coupled error-energy index. This implies that feedback control can support the learning process under redundancy by avoiding unnecessary random exploration.
ASJC Scopus subject areas
- Computer Science Applications