I had the opportunity to attend the Scaled Machine Learning conference last Saturday at Stanford University, organized by Professor Reza Zadeh and his venture Matroid. The speakers were some of the leading minds in the field, but every talk was quite understandable (at least the concepts). Below is a summary of a few talks, with the speaker slides:
Scaled Machine Learning with TensorFlow – Jeff Dean from Google
The first part of Jeff’s talk was about the applications of deep learning that Google is pursuing with TensorFlow. The second part was about Automated ML, i.e., learning to learn. To that end, Jeff presented two approaches: RL-based architecture search and model architecture evolution. He concluded with how to dynamically learn and grow pathways through a single large, sparsely activated model.
See Presentation: Google_SML
Real-time Intelligent Secure Execution – RISELab – Professor Ion Stoica from UC Berkeley
The goal of the RISELab is to develop open source platforms, tools, and algorithms for intelligent real-time decisions on live-data. Professor Stoica and his team are presently focusing on three research areas:
- Secure Real-time Decisions Stack (SRDS):
  - Open source platform to develop RISE-like apps
  - Reinforcement Learning (RL) as one of the key app patterns
  - Secure from the ground up
- Learning control hierarchies: speed up learning and training
- Shared learning: learn over confidential data
See Presentation: Berkeley_SML
DAWN – Infrastructure for Usable Machine Learning – Professor Matei Zaharia from Stanford
The goal of DAWN is to provide machine learning for everyone via novel techniques and interfaces that span hardware, systems, and algorithms. Professor Zaharia and his team are presently working on the DAWN Stack, which provides data acquisition, feature engineering, model training, and productionizing across the full stack: hardware, systems, algorithms, and interfaces.
See Presentation: Stanford_SML
Evolution Strategies: A Scalable Alternative to Reinforcement Learning – Ilya Sutskever from OpenAI
Evolution strategies (ES), presented by Ilya Sutskever, is an optimization technique that has been known for decades and can rival the performance of standard reinforcement learning (RL). Intuitively, the optimization is a “guess and check” process: you start with some random parameters, and then repeatedly (1) tweak the guess a bit randomly, and (2) move the guess slightly toward whatever tweaks worked better.
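The “guess and check” loop described above can be sketched in a few lines of NumPy. The toy objective, population size, and step sizes below are illustrative choices for the sketch, not values from the talk:

```python
import numpy as np

np.random.seed(0)  # reproducibility for this illustrative run

def evolution_strategies(f, theta, npop=50, sigma=0.1, alpha=0.01, iters=300):
    """Minimal ES loop: sample random tweaks of the parameters, score each
    with f (higher is better), and nudge theta toward the tweaks that
    scored above average."""
    for _ in range(iters):
        noise = np.random.randn(npop, theta.size)             # random tweaks
        rewards = np.array([f(theta + sigma * n) for n in noise])
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + alpha / (npop * sigma) * noise.T @ adv
    return theta

# Hypothetical toy objective: maximize -||theta - target||^2,
# so theta should drift toward target.
target = np.array([0.5, -0.3, 1.2])
theta = evolution_strategies(lambda t: -np.sum((t - target) ** 2),
                             np.zeros(3))
```

Because only scalar rewards are communicated back per perturbation, this loop parallelizes across many workers with very little bandwidth, which is the scalability argument of the talk.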
See Presentation: OpenAI_SML
TensorFlow on Apache Spark – Andy Feng from Yahoo
Andy Feng and his team @ Yahoo have been working on running TensorFlow-based deep learning workloads on large Spark clusters.
See Presentation: Yahoo_SML
Memory Interoperability for Analytics and Machine Learning – Wes McKinney from Two Sigma
Wes is the creator of the very popular Python pandas project, and presented some of his recent ideas and projects on zero-copy memory interfaces.
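The zero-copy idea behind columnar interchange projects such as Apache Arrow can be illustrated with plain NumPy (the buffer and values below are made up for illustration): two consumers read the same bytes through separate views, so handing data between them costs no copy.

```python
import numpy as np

buf = bytearray(8 * 4)                       # one shared buffer: 4 float64 slots
col = np.frombuffer(buf, dtype=np.float64)   # view #1 over the buffer
col[:] = [1.0, 2.0, 3.0, 4.0]
view = np.frombuffer(buf, dtype=np.float64)  # view #2: zero-copy, same memory
view[0] = 42.0                               # the write is visible through col too
```

This is only a sketch of the concept; real interoperability work additionally standardizes the memory layout so that different runtimes (not just two NumPy views) agree on how to interpret the shared bytes.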
See Presentation: McKinnney_SML
Note: The picture above is from the conference.
Copyright © 2005-2017 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.