Latest Thinking: Scaled Machine Learning


Last Saturday, I had the opportunity to attend the second Scaled Machine Learning conference at Stanford University, organized by Professor Reza Zadeh and his venture Matroid. The speakers were some of the leading minds in the field, yet every talk was quite understandable (at least conceptually). Following is a summary of a few selected talks, with the video of each presentation:

Systems and Machine Learning – Jeff Dean from Google

The first part of Jeff’s talk was about some examples of machine learning applications that Google has recently worked on. The second part was about research on AutoML. And, the last part was about Google’s TPU:


RL Systems – RISELab – Professor Ion Stoica from UC Berkeley

The goal of the RISELab is to develop open source platforms, tools, and algorithms for intelligent real-time decisions on live data. Professor Stoica presented his research on Reinforcement Learning tools and applications:


Meta Learning and Self Play – Ilya Sutskever from OpenAI

Like last year, Ilya’s presentation was on OpenAI’s research in Reinforcement Learning, and in particular Self Play:


Scaling of Machine Learning – Professor Bill Dally from Stanford and NVIDIA

Professor Dally’s presentation was very interesting, focusing mostly on how NVIDIA is improving its GPUs for deep learning applications that require massive scale, given that Moore’s law has ended. Examples of technologies discussed include sparsity, trained quantization, and accelerators.
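To make the sparsity and trained-quantization ideas concrete, here is a minimal sketch of the two techniques: magnitude pruning (zero out the smallest weights) and k-means weight sharing (replace each weight with one of a few learned centroids). This is an illustrative toy, not code from the talk; the function names, parameters, and the choice to quantize after pruning are all my own assumptions.

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude weights
    so the resulting matrix is `sparsity` fraction zeros."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w).ravel())[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_weights(w, n_clusters=4, n_iter=20):
    """Trained quantization via k-means weight sharing: each weight
    is mapped to its nearest of `n_clusters` learned centroids, so
    only the centroid table and small indices need to be stored."""
    flat = w.ravel()
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(flat[:, None] - centroids[None, :]), axis=1)
        # Update each centroid as the mean of its assigned weights.
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = flat[assign == c].mean()
    return centroids[assign].reshape(w.shape)

# Toy weight matrix standing in for one dense layer.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
sparse_w = prune_weights(w, sparsity=0.5)      # half the entries become zero
quant_w = quantize_weights(sparse_w, n_clusters=4)  # at most 4 distinct values
```

In practice (e.g. in Deep Compression-style pipelines) the quantization step would be applied only to the surviving nonzero weights and followed by fine-tuning; the sketch above skips both for brevity.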



Note: The picture above is from the conference.

Copyright © 2005-2018 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.