(T) This is probably the state of the art (or close to it) in physics-based character animation for humanoids generated through reinforcement learning (RL), from a team of researchers at UC Berkeley led by PhD student Jason Peng and Professor Sergey Levine. Given a dataset of video clips, the system automatically generates the humanoid's movements. The system includes two reinforcement learning sub-systems. The first RL system learns the desired behaviors (walks, runs, turns, jumps…) that the humanoid should perform from the video clips. The second RL system uses that knowledge to execute those moves with the simulated humanoid.
The policy of the first RL system is trained with the discriminator network of a GAN, which tries to distinguish the moves produced by the simulated humanoid (the generative model) from the moves in the video clips.
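As a rough illustration of that idea, the sketch below shows how a GAN discriminator can be turned into a "style" reward for the policy: the simulated humanoid is rewarded when the discriminator mistakes its motion for one from the reference clips. The feature dimension, the logistic discriminator, and the function names are all hypothetical simplifications, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: a state transition is summarized as a fixed-size feature vector.
FEATURE_DIM = 8

def discriminator(transition, weights, bias):
    """Logistic discriminator: probability that a transition comes from
    the reference motion clips rather than from the simulated humanoid."""
    logit = transition @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))

def style_reward(transition, weights, bias, eps=1e-8):
    """GAN-style reward for the policy: large when the discriminator
    mistakes the simulated transition for a reference-clip transition."""
    d = discriminator(transition, weights, bias)
    return -np.log(np.maximum(1.0 - d, eps))

# Toy usage with random weights (illustration only).
weights = rng.normal(size=FEATURE_DIM)
bias = 0.0
simulated_transition = rng.normal(size=FEATURE_DIM)
r = style_reward(simulated_transition, weights, bias)
```

In a full system, the discriminator would be a neural network trained alternately with the policy, and this style reward would be fed to a standard RL algorithm in place of (or alongside) a hand-designed tracking reward.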
The system can simulate moves that are not in the initial clips but are close to what it is being asked to do, and it can learn smart transitions between moves, such as slowing down before turning in another direction.
The paper, code, and a blog article can be found at the following link: https://xbpeng.github.io/projects/DeepMimic/index.html
- Professor Sergey Levine, Deep Reinforcement Learning class at UC Berkeley
- Developing Autonomous Decision Making Systems with Deep RL, A Silicon Valley Insider
- How Reinforcement Learning Enables a new Generation of Robots, A Silicon Valley Insider
Note: The picture above is from the YouTube videos of the project.
Copyright © 2005-2021 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com