How Reinforcement Learning Enables a New Generation of Robots

COVARIANT

I had the pleasure of listening to several great talks about reinforcement learning by Professor Pieter Abbeel of UC Berkeley at the ODSC West Conference last fall.

I am particularly following Professor Abbeel’s research in deep reinforcement learning and meta-learning.

Other well-known researchers in that space include Professor Sergey Levine from UC Berkeley and Professor Chelsea Finn from Stanford University.

Professor Abbeel started Covariant with a few colleagues from UC Berkeley.

Following is a quick description of Covariant’s underlying robotic technology, and a video about its partnership with ABB:

Covariant’s approach, which uses a single deep learning system for all objects, enables an arm equipped with a camera and suction gripper to manipulate around 10,000 different items (and counting). The system can share skills with other arms, including those made by other companies.

Training starts with few-shot adaptation: in many cases, the robot can learn from a small number of attempts. For more intensive training, an engineer wearing virtual reality gear uses hand-tracking hardware to control the arm in a simulated environment, and the model learns to mimic the demonstrated motion.
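Learning to mimic teleoperated demonstrations is the essence of behavioral cloning: supervised learning from recorded (state, action) pairs. The following is a minimal sketch under toy assumptions (a linear policy and synthetic demonstration data); it illustrates the idea only and is not Covariant’s actual model:

```python
import numpy as np

# Behavioral-cloning sketch: fit a policy to demonstrated
# (state, action) pairs collected via teleoperation.
# The linear policy and synthetic data are illustrative assumptions.
rng = np.random.default_rng(0)

# Synthetic demonstrations: 4-D states, 2-D actions.
# Assume the demonstrator follows an unknown linear mapping W_true.
W_true = rng.normal(size=(4, 2))
states = rng.normal(size=(100, 4))
actions = states @ W_true + 0.01 * rng.normal(size=(100, 2))

# Behavioral cloning reduces to supervised regression
# from observed states to demonstrated actions.
W_fit, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Cloned policy: imitate the demonstrated actions."""
    return state @ W_fit

# The cloned policy should closely reproduce the demonstrations.
mse = np.mean((policy(states) - actions) ** 2)
print(f"imitation MSE: {mse:.4f}")
```

In practice the linear map would be a deep network trained by gradient descent, but the supervised objective — match the demonstrator’s actions — is the same.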

The model stores basic movements and then hones them using reinforcement learning across a variety of simulated situations. The team then uses behavioral cloning to transfer the robot’s learned skills into the real world.
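Honing a pre-trained motion with reinforcement learning can be sketched with the REINFORCE policy gradient on a toy task. The 1-D reaching task, Gaussian policy, and hyperparameters below are illustrative assumptions, not details of Covariant’s system:

```python
import numpy as np

# REINFORCE sketch: refine a pre-trained motion by trial and error
# in simulation. The 1-D "reach the target" task is an assumption
# made for illustration only.
rng = np.random.default_rng(1)

target = 2.0   # simulated goal position for the gripper
mu = 0.5       # policy mean, e.g. initialized from demonstrations
sigma = 0.2    # fixed exploration noise
lr = 0.05      # learning rate
baseline = 0.0 # running reward average, reduces gradient variance

def reward(action):
    # Higher reward the closer the action lands to the target.
    return -(action - target) ** 2

for step in range(2000):
    # Sample an action and apply the REINFORCE update:
    # grad of log N(a; mu, sigma) w.r.t. mu is (a - mu) / sigma^2.
    a = rng.normal(mu, sigma)
    r = reward(a)
    mu += lr * (r - baseline) * (a - mu) / sigma**2
    baseline += 0.1 * (r - baseline)  # update the running baseline

print(f"refined mean action: {mu:.2f}")  # moves toward the target of 2.0
```

The policy drifts toward actions that earn higher reward without ever seeing a correct answer, which is what lets simulated practice improve on the imitated motions.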

Note: The picture above is from Covariant.

Copyright © 2005-2020 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.