Baidu Research in Artificial Intelligence

Baidu

(T) Last week, I attended three presentations from the Baidu AI Lab in Sunnyvale, given as part of the SF Big Analytics meet-up in San Francisco. Professor Andrew Ng, Stanford University Professor and Coursera Co-Founder, is leading Baidu’s research in Artificial Intelligence with a great team of scientists.

Following are my notes from the lecture presentations:

Professor Andrew Ng: Why is deep learning taking off?

An analogy to a rocket: engine <=> large neural networks – fuel <=> data (a rough connection-count sketch follows the list below)

  • 2007: CPU -> 1 million connections
  • 2008: GPU -> 10 million connections
  • 2011: Many CPUs = Cloud -> 1 billion connections
  • 2015: Many GPUs = HPC -> 100 billion connections
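To give a feel for what these connection counts mean, here is a rough, illustrative sketch (my own numbers, not from the talk) of how the number of connections in a fully connected network grows with layer width:

```python
# Rough, illustrative estimate of connection (weight) counts in a fully
# connected network: each pair of adjacent layers of widths a and b
# contributes roughly a * b connections.
def connection_count(layer_widths):
    return sum(a * b for a, b in zip(layer_widths, layer_widths[1:]))

print(connection_count([1000, 1000, 1000]))   # 2,000,000 connections
print(connection_count([10000] * 11))         # 1,000,000,000 connections
```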

Bryan Catanzaro: Why is HPC so important to AI?

  • Training deep neural networks is an HPC challenge

  • Using HPC hardware and traditional software approaches reduces training time

  • This lets us scale to large models and data sets (see the data-parallel sketch after this list)

  • Scaling brings AI progress
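As a minimal sketch of why scaling training is a communication problem as much as a compute problem (my own toy example, not Baidu's code): in data-parallel training, each worker computes gradients on its own shard of data, and the gradients are averaged across workers (an all-reduce) at every step.

```python
import numpy as np

# Toy data-parallel training step: each of 4 simulated workers computes a
# gradient on its own data shard, then the gradients are averaged (the
# "all-reduce"). On a real cluster this averaging happens across many GPUs
# every step, so interconnect speed matters as much as raw FLOPS.
rng = np.random.default_rng(0)
num_workers, dim = 4, 8
local_gradients = [rng.normal(size=dim) for _ in range(num_workers)]

averaged_gradient = np.mean(local_gradients, axis=0)
print(averaged_gradient.shape)   # (8,) - the same gradient every worker applies
```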

Many GPUs (scalable parallel processing) versus many CPUs (fast serial processing):

  • HPC (many GPUs): Computing at the limit – FLOPS/memory bandwidth – tightly coupled
  • Cloud (many CPUs): Hardware at the limit – I/O bound – Confederated
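A rough way to read "computing at the limit": whether a kernel is limited by FLOPS or by memory bandwidth depends on its arithmetic intensity (FLOPs per byte moved). The back-of-the-envelope numbers below are illustrative only, not from the talk:

```python
# Back-of-the-envelope arithmetic intensity of a square fp32 matrix multiply:
# roughly 2*n^3 FLOPs over 3*n^2 matrix elements moved (two inputs, one
# output) at 4 bytes each. High intensity favors FLOPS-limited GPU hardware;
# low intensity (or heavy I/O) leaves the hardware waiting on memory or the
# network instead.
def matmul_arithmetic_intensity(n, bytes_per_element=4):
    flops = 2 * n ** 3
    bytes_moved = 3 * n ** 2 * bytes_per_element
    return flops / bytes_moved

for n in (128, 1024, 8192):
    print(n, round(matmul_arithmetic_intensity(n), 1))   # FLOPs per byte
```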

Awni Hannun: Deep learning for speech recognition

Audio -> acoustic model/phonemes -> prediction of words -> language model with word probabilities
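A toy illustration of the last two stages (made-up scores, not from the talk): the decoder keeps the candidate transcription that maximizes the acoustic model's score plus the language model's prior.

```python
# Toy decoding step with made-up log-probabilities: combine the acoustic
# model's score for each candidate transcription with the language model's
# prior and keep the most probable one.
acoustic_log_prob = {"wreck a nice beach": -1.4, "recognize speech": -1.6}
language_log_prob = {"wreck a nice beach": -9.0, "recognize speech": -3.0}

best = max(acoustic_log_prob,
           key=lambda w: acoustic_log_prob[w] + language_log_prob[w])
print(best)   # "recognize speech": the language model breaks the near-tie
```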

Key ingredients:

  • Model: bi-directional recurrent neural network (RNN) with CTC (Connectionist Temporal Classification) – see the sketch after this list

  • Data: over 100,000 hours of synthesized data (speech + noise)

  • Computation: GPUs, with model and data parallelism
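Below is a minimal sketch of the model ingredient: a bidirectional recurrent layer over audio feature frames, trained with CTC loss. It assumes PyTorch, and every size and hyperparameter is an arbitrary placeholder; it is not Baidu's actual architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch (not Baidu's actual system) of a bidirectional RNN acoustic
# model trained with CTC loss; all sizes below are arbitrary placeholders.
class BiRNNAcousticModel(nn.Module):
    def __init__(self, num_features=40, hidden=256, num_chars=29):
        super().__init__()
        # Bidirectional recurrent layer over the audio feature frames.
        self.rnn = nn.RNN(num_features, hidden, batch_first=True,
                          bidirectional=True)
        # Per-frame scores over the characters plus the CTC blank symbol.
        self.proj = nn.Linear(2 * hidden, num_chars + 1)

    def forward(self, features):   # features: (batch, time, num_features)
        outputs, _ = self.rnn(features)
        return self.proj(outputs).log_softmax(dim=-1)

model = BiRNNAcousticModel()
ctc_loss = nn.CTCLoss(blank=0)

# One fake batch: two utterances of 100 frames with transcripts of 12 and 7 characters.
features = torch.randn(2, 100, 40)
targets = torch.randint(1, 30, (2, 12))
input_lengths = torch.tensor([100, 100])
target_lengths = torch.tensor([12, 7])

log_probs = model(features).transpose(0, 1)   # CTCLoss expects (time, batch, classes)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```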

Modeling problems:

  • Must handle variable length input and output
    • Loss function: audio -> letters
    • Inference (see the decoding sketch after this list)
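One simple way to see how CTC handles the variable-length mapping at inference time is greedy decoding: take the best symbol per frame, collapse consecutive repeats, and drop the blanks. This is a simplified sketch; a real decoder uses beam search combined with a language model.

```python
# Simplified greedy CTC decoding (a real decoder would use beam search with a
# language model): pick the best symbol for each audio frame, collapse
# consecutive repeats, then remove the blank symbol. The decoded text can be
# any length up to the number of frames, which is how CTC maps a long audio
# sequence onto a much shorter letter sequence.
BLANK = "_"

def ctc_greedy_decode(frame_symbols):
    decoded, previous = [], None
    for symbol in frame_symbols:
        if symbol != previous and symbol != BLANK:
            decoded.append(symbol)
        previous = symbol
    return "".join(decoded)

print(ctc_greedy_decode(list("__hhe_ll_llo__")))   # -> "hello"
```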

Reference: A Silicon Valley Insider, Deep Dive into Deep Learning

Note: The picture above is from the talk.

Copyright © 2005-2015 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.