How to Learn Faster? Multi-Task and Meta-Learning to the Rescue

[Image: “Image à La Maison Verte” by René Magritte]

One of the key goals of many data science teams is to develop models faster for new use cases and new applications. To that end, “transfer learning,” which trains a model on one task and then transfers that learning to a new task, has been widely used, particularly for computer vision and language models.
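To make the recipe concrete, here is a minimal transfer-learning sketch in PyTorch (an illustration under assumptions, not code from this post): a ResNet-18 pretrained on ImageNet is reused as a frozen feature extractor, and only a new classification head is trained for a hypothetical 10-class target task; the class count, batch, and learning rate are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone whose weights were learned on the source task (ImageNet).
model = models.resnet18(pretrained=True)

# Freeze the pretrained backbone so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head for the new task (10 classes assumed).
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a placeholder batch standing in for new-task data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```

Because the backbone is frozen, only the small head is learned, which is why transfer learning needs far less data and compute than training from scratch.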

However, two techniques have recently emerged: “multi-task learning,” which trains a system on many tasks and transfers that learning to a new task, and “meta-learning,” which enables a system to learn to learn from many tasks and transfer that learning to similar tasks.
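To make the contrast with transfer learning concrete, here is a minimal multi-task sketch in PyTorch (task count, sizes, and data are all hypothetical): one shared encoder learns features common to every task, each task keeps its own output head, and the summed task losses update the shared trunk jointly.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, task_dims=(10, 5)):
        super().__init__()
        # Trunk shared across all tasks.
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One output head per task.
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in task_dims])

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))

net = MultiTaskNet()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One joint training step over two placeholder classification tasks.
optimizer.zero_grad()
total_loss = torch.zeros(())
for task_id, n_classes in enumerate((10, 5)):
    x = torch.randn(8, 32)                 # placeholder inputs
    y = torch.randint(0, n_classes, (8,))  # placeholder labels
    total_loss = total_loss + loss_fn(net(x, task_id), y)
total_loss.backward()
optimizer.step()
```

Equal loss weights are assumed here for simplicity; in practice the per-task losses are usually weighted or balanced. A new task can then reuse the shared trunk by adding one more head.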

Multi-task and meta-learning seem to have found a sweet spot in deep reinforcement learning applications, particularly robotics and games.
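Meta-learning in deep RL is more than a short snippet can show, but the “learning to learn” loop at the heart of MAML (Finn et al., 2017) can be sketched on a toy supervised problem. In this hypothetical setup, each task is a random sine wave to regress: the inner loop adapts to one task with a single gradient step, and the outer (meta) loop updates the initial weights so that one step of adaptation already works well on held-out data from new tasks.

```python
import math
import torch

def forward(params, x):
    # Tiny two-layer network written functionally so we can
    # differentiate through the inner adaptation step.
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def sample_task():
    # Hypothetical task family: y = a * sin(x + phase), random a and phase.
    a = 1.0 + 4.0 * torch.rand(1)
    phase = math.pi * torch.rand(1)
    def data(n):
        x = 10.0 * torch.rand(n, 1) - 5.0
        return x, a * torch.sin(x + phase)
    return data

params = [(0.5 * torch.randn(1, 40)).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (0.5 * torch.randn(40, 1)).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01  # placeholder step size for task adaptation

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = torch.zeros(())
    for _ in range(4):  # a small batch of tasks per meta-step
        data = sample_task()
        x_s, y_s = data(10)  # support set: adapt on these points
        x_q, y_q = data(10)  # query set: evaluate the adaptation
        # Inner loop: one gradient step toward this task.
        inner_loss = ((forward(params, x_s) - y_s) ** 2).mean()
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loss: how well the adapted weights fit held-out data.
        meta_loss = meta_loss + ((forward(adapted, x_q) - y_q) ** 2).mean()
    meta_loss.backward()  # backpropagate through the adaptation itself
    meta_opt.step()
```

After meta-training, a few gradient steps from `params` on a brand-new sine task should fit it far faster than training from scratch; that fast adaptation is what “learning to learn” means here.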

Well-known researchers pioneering multi-task and meta-learning include Professors Sergey Levine and Pieter Abbeel from UC Berkeley and Professor Chelsea Finn from Stanford University, along with their PhD students.

The following is a gentle introduction to multi-task and meta-learning: Multi-Task and Meta-Learning.

Note: The picture above is “Image à La Maison Verte,” a painting by René Magritte.

Copyright © 2005-2020 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com