Graphs have recently become “the big thing” in deep learning, and they are a prime example of machine learning in non-Euclidean spaces. Graphs have been used to predict friendships and user communities in social networks, to model relationships between items and users in e-commerce recommender systems, to discover drugs in biology, and to simulate interactions between particles in physics.
Google has notably used graphs to predict protein folding, to estimate your time of arrival in Google Maps, and to design the next generation of TPUs.
Neo4j has implemented many graph algorithms (node similarity, node embeddings, pathfinding and selection…) in its graph database. And Google just released last month a new TensorFlow library, TensorFlow GNN, for developing graph neural networks (GNNs).
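To give a flavor of one of these algorithms, here is a minimal sketch (not Neo4j’s actual implementation) of Jaccard node similarity, a classic measure that scores two nodes by how many neighbors they share; the toy graph and function name are illustrative assumptions.

```python
# Sketch of Jaccard node similarity: two nodes are similar
# when a large fraction of their combined neighbors are shared.

def jaccard_similarity(adj, u, v):
    """adj: dict mapping a node to its set of neighbor nodes (a toy graph)."""
    nu, nv = adj[u], adj[v]
    union = nu | nv
    if not union:  # both nodes isolated: define similarity as 0
        return 0.0
    return len(nu & nv) / len(union)

# Toy undirected graph: nodes "a" and "b" share one neighbor, "c"
graph = {
    "a": {"c", "d"},
    "b": {"c", "e"},
    "c": {"a", "b"},
    "d": {"a"},
    "e": {"b"},
}
print(jaccard_similarity(graph, "a", "b"))  # 1 shared of 3 distinct -> 0.333...
```

The same idea powers link prediction: pairs of nodes with high neighbor overlap are good candidates for a missing edge.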
And last but not least, you can make the argument, as PhD student Chaitanya K. Joshi did (though it is not an easy one to make), that Transformers are Graph Neural Networks.
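The core of Joshi’s observation can be sketched in a few lines of NumPy: a single-head self-attention layer is message passing on a fully connected graph, where every token (node) aggregates messages from all other tokens, weighted by a soft adjacency matrix of attention scores. The shapes and weight matrices below are illustrative assumptions, not any particular Transformer’s configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (n_nodes, d) node features; Wq/Wk/Wv: (d, d) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # A is a soft, learned adjacency matrix over the complete graph:
    # each row sums to 1 and weights the messages from every other node.
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    # Each node's new feature is the weighted sum of its neighbors' messages.
    return A @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # 4 tokens/nodes, 8-dim features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one updated feature vector per node
```

Seen this way, a GNN on a sparse graph simply restricts the attention pattern to the graph’s edges instead of letting every node attend to every other node.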
To dive deeper:
Tutorials and lectures:
Petar Veličković from DeepMind – Introduction to GNNs
Petar Veličković’s Geometric Deep Learning class:
Professor Jure Leskovec’s class CS224W Machine Learning with Graphs at Stanford University:
Note: The picture above is of the funfair (fête foraine) in Rennes, France.
Copyright © 2005-2021 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com
Categories: Algorithms, Artificial Intelligence, Deep Learning, Mathematics