The fundamental representation of any feature in a machine learning model is the vector, along with its multidimensional generalization, the tensor. This has led to machine learning algorithms and data pipelines designed to compute vectors and matrices of real numbers. As a result, all machine learning operations take place in Euclidean space and inherit the mathematics of classical geometry.
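To make this concrete, here is a minimal sketch (using NumPy, with arbitrary example vectors of my own choosing) of the classical-geometry operations that standard vector representations give us for free:

```python
import numpy as np

# Two feature vectors in ordinary (Euclidean) 3-dimensional space.
u = np.array([1.0, 2.0, 2.0])
v = np.array([4.0, 6.0, 2.0])

# The familiar operations of classical geometry apply directly:
euclidean_distance = np.linalg.norm(u - v)  # straight-line distance
inner_product = np.dot(u, v)                # similarity / angle measure

print(euclidean_distance)  # 5.0
print(inner_product)       # 20.0
```

These are exactly the operations that stop being available, or change meaning, once the data lives on a curved surface or a graph rather than in a flat vector space.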

But what if we want to develop machine learning models whose representations for features, embeddings, or model parameters are not constrained by the “flatness” of Euclidean space, and instead live on surfaces, graphs, and many kinds of manifolds? This is what a new field of machine learning, in which features and models operate in non-Euclidean spaces, is attempting to do.

Professor Michael Bronstein from Imperial College London led a 2016 paper, “Geometric deep learning: going beyond Euclidean data”, that started the field:

*“Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.”*
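One of the best-known ways of generalizing neural networks to the graph domains the quote mentions is the graph convolution of Kipf and Welling (2017). As a rough illustration of the idea, not something from Bronstein's paper, here is a single such layer in NumPy, on a toy 4-node graph with random features (the adjacency matrix and shapes are my own assumptions for the example):

```python
import numpy as np

# A tiny 4-node graph given by its adjacency matrix (illustrative only;
# real graphs such as social networks are far larger and sparse).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# One graph-convolution layer: add self-loops, symmetrically normalize
# the adjacency, then mix neighboring features with a weight matrix.
A_hat = A + np.eye(4)                       # self-loops
d = A_hat.sum(axis=1)                       # node degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^(-1/2)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # normalized adjacency

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))             # node features: 4 nodes, 3 dims
W = rng.standard_normal((3, 2))             # learnable weights (assumed shape)
H_next = np.maximum(0.0, A_norm @ H @ W)    # ReLU(Â H W)
print(H_next.shape)  # (4, 2)
```

The key departure from a standard dense layer is that each node's new features are averaged over its graph neighborhood rather than over a fixed Euclidean grid, which is what makes the operation meaningful on irregular, non-Euclidean structure.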

More recently, here are a blog post and a talk from Professor Bronstein, given as the keynote at ICLR 2021:

NeurIPS 2020 had a workshop “Differential Geometry meets Deep Learning (DiffGeo4DL)” that is probably the best overview in terms of the state of the art.

Following is one of the tutorials from DiffGeo4DL, given by Stanford PhD student Ines Chami, on hyperbolic embeddings, along with one of the papers, “From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering”:
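To give a flavor of the hyperbolic setting the tutorial covers, here is a minimal sketch of the geodesic distance in the Poincaré ball, one of the standard models used for hyperbolic embeddings (the function name and example points below are my own, not taken from the tutorial or paper):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincaré ball:

        d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / denom)

origin = np.array([0.0, 0.0])
near_edge = np.array([0.9, 0.0])
print(poincare_distance(origin, near_edge))  # ≈ 2.944, vs 0.9 in Euclidean space
```

Distances blow up as points approach the boundary of the ball, so the space holds exponentially more “room” near its edge, which is why hierarchies and trees embed into hyperbolic space with far less distortion than into flat Euclidean space.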

If you want to study the field further, both Stanford University and the African Master's in Machine Intelligence (taught by Professor Bronstein's research team) offer a class:

**References**

- Quanta Magazine: “An Idea From Physics Helps AI See in Higher Dimensions”
- Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković: Geometric Deep Learning
- Stanford University: “Into the Wild: Machine Learning In Non-Euclidean Spaces”, “Hyperbolic Embeddings with a Hopefully Right Amount of Hyperbole”

Note: The picture above is a representation of non-Euclidean spaces.

*Copyright © 2005-2021 by Serge-Paul Carrasco. All rights reserved.* *Contact Us: asvinsider at gmail dot com*

Categories: Artificial Intelligence, Deep Learning, Machine Learning, Mathematics