Google Brain Research in 2018


As he did last year, Jeff Dean, the tech lead of the Google Brain Team and probably one of the most famous Silicon Valley engineers, shared some of the key achievements of his team for 2018 in a recent blog post.

I would recommend that everyone interested in machine learning and deep learning read that blog post thoroughly.

Below are my key takeaways from the post:

Some areas of focus

  • AI for Social Good: predictions of river floods and earthquake aftershocks
  • Assistive technology: ask “Can you book me a haircut at 4 PM today?”, and a virtual agent will interact on your behalf over the telephone to handle the necessary details
  • Quantum computing: Bristlecone, a new 72-qubit quantum computing device
  • Natural language understanding: BERT, pre-trained using only a plain-text corpus, can then be fine-tuned on a wide variety of natural language tasks via transfer learning
  • Perception: stereo magnification, which enables synthesizing novel photorealistic views of a scene
  • Computational photography: Night Sight, which enables Pixel phone cameras to “see in the dark”

Core technologies

  • Algorithms: new gradient-based optimization methods, and learning with privacy
  • AutoML: evolutionary algorithms to automatically discover state-of-the-art neural network architectures
  • TPUs: free via Colab!
  • Expansion of the TensorFlow ecosystem: TF Lite, TF.js, and TF Probability
  • Robotics: using ML to teach robots how to act
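As a toy illustration of the evolutionary-search idea behind AutoML: candidate architectures are encoded, mutated, and selected by fitness over many generations. Everything below is a hypothetical stand-in (the bit-string encoding, the fitness function, the constants); Google's actual system evolves real neural network graphs and trains each candidate to measure validation accuracy.

```python
import random

ARCH_LEN = 12      # hypothetical number of binary architectural choices
POP_SIZE = 20      # population of candidate "architectures"
GENERATIONS = 30   # evolution steps

def fitness(arch):
    # Stand-in for validation accuracy: here, simply count enabled components.
    return sum(arch)

def mutate(arch):
    # Flip one random architectural choice to produce a child.
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = 1 - child[i]
    return child

def evolve(seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(ARCH_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Tournament selection: sample two candidates, keep the fitter one,
        # mutate it, and retire the oldest member of the population.
        a, b = random.sample(population, 2)
        parent = a if fitness(a) >= fitness(b) else b
        population.append(mutate(parent))
        population.pop(0)
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Retiring the oldest candidate (rather than the worst) is the aging trick used in regularized evolution; it keeps the search from converging too greedily on early winners.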

Other applications

  • Physical and biological science: finding planets, DNA sequences, cell images…
  • Healthcare: work expanded from computer-aided diagnostics to clinical task predictions

Reference: Looking Back at Google’s Research Efforts in 2018

Note: The picture above is a display of Bûches de Noël.

Copyright © 2005-2019 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.