Four researchers at UC Berkeley, Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A. Efros, have developed a simple yet effective method based on GANs (Generative Adversarial Networks) to transfer the dancing motion of a professional dancer to you.
As a result, anyone can now appear to be a great dancer, for any dance.
Their paper, titled “Everybody Dance Now”, is on arXiv with the following abstract:
“This paper presents a simple method for “do as I do” motion transfer: given a source video of a person dancing we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject’s appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis.”
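To make the pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the per-frame idea: pose images rendered from the source video are fed to a generator trained to reproduce the target subject, with the previously generated frame concatenated in as a rough stand-in for the paper's temporal smoothing. The toy generator, the 128x128 frame size, and the black bootstrap frame are illustrative assumptions, not the authors' actual architecture (which is a full adversarially trained generator plus a dedicated face GAN).

```python
# A minimal sketch of the "do as I do" transfer loop, assuming the
# generator has already been trained on a few minutes of footage of
# the target subject. ToyPose2Target and the frame size are
# illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

class ToyPose2Target(nn.Module):
    """Toy stand-in for the pose-to-appearance generator."""
    def __init__(self):
        super().__init__()
        # 6 input channels: the pose stick-figure image (3) concatenated
        # with the previously generated frame (3) for rough temporal
        # coherence, echoing the paper's temporally smoothed setup.
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose_img, prev_frame):
        return self.net(torch.cat([pose_img, prev_frame], dim=1))

def transfer(source_pose_frames, generator):
    """Per-frame image-to-image translation: each pose frame from the
    source dancer becomes a frame of the target subject."""
    prev = torch.zeros(1, 3, 128, 128)  # black frame bootstraps t = 0
    outputs = []
    with torch.no_grad():
        for pose_img in source_pose_frames:
            prev = generator(pose_img, prev)
            outputs.append(prev)
    return outputs

# Stand-in "pose images"; real ones come from a pose detector such as
# OpenPose, rendered as stick figures over a blank background.
frames = [torch.rand(1, 3, 128, 128) for _ in range(4)]
video = transfer(frames, ToyPose2Target())
print(len(video), video[0].shape)  # 4 torch.Size([1, 3, 128, 128])
```

Conditioning each frame on the previous output is about the simplest way to coax temporally coherent video out of a per-frame translator; the paper's actual smoothing and realistic face synthesis go well beyond this toy.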
Here is a fun video demonstrating their work:
And here is the website for their project: https://carolineec.github.io/everybody_dance_now/
Note: The picture above is of the San Francisco Ballet dancing at the Stern Grove Festival.
Copyright © 2005-2018 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.
Categories: Machine Learning