{"id":22554,"date":"2021-01-11T11:48:11","date_gmt":"2021-01-11T11:48:11","guid":{"rendered":"https:\/\/www.experfy.com\/blog\/what-2021-holds-graph-ml\/"},"modified":"2023-09-19T14:08:19","modified_gmt":"2023-09-19T14:08:19","slug":"what-2021-holds-graph-ml","status":"publish","type":"post","link":"https:\/\/www.experfy.com\/blog\/ai-ml\/what-2021-holds-graph-ml\/","title":{"rendered":"What 2021 Holds For Graph ML?"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"22554\" class=\"elementor elementor-22554\" data-elementor-post-type=\"post\">\n\t\t\t\t\t\t<section class=\"has_eae_slider elementor-section elementor-top-section elementor-element elementor-element-7c6681b elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"7c6681b\" data-element_type=\"section\" data-e-type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"has_eae_slider elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-72dde3d\" data-id=\"72dde3d\" data-element_type=\"column\" data-e-type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-2f306d0 elementor-widget elementor-widget-text-editor\" data-id=\"2f306d0\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p>The end of the year is a good time to recap and make predictions. 2020 has turned Graph ML into a celebrity of machine learning. 
For this post, I sought the opinion of prominent researchers in the field of graph ML and its applications, asking them to summarise the highlights of the past year and predict what is in store for 2021.<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-cde39b2 elementor-widget elementor-widget-heading\" data-id=\"cde39b2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Beyond message passing<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-df06306 elementor-widget elementor-widget-text-editor\" data-id=\"df06306\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"94e3\"><a href=\"https:\/\/williamleif.github.io\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Will Hamilton<\/strong><\/a>,\u00a0<em>Assistant Professor at McGill University and CIFAR Chair at Mila, author of\u00a0<\/em><a href=\"http:\/\/snap.stanford.edu\/graphsage\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>GraphSAGE<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-13dd946 elementor-widget elementor-widget-text-editor\" data-id=\"13dd946\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201c2020 saw the field of Graph ML come to terms with the fundamental limitations of the message-passing paradigm.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-55ca35d elementor-widget 
elementor-widget-text-editor\" data-id=\"55ca35d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"9516\">These limitations include the so-called \u201cbottleneck\u201d issue [1], problems with over-smoothing [2], and theoretical limits in terms of representational capacity [3,4]. Looking forward, I expect that in 2021 we will be searching for the next big paradigm for Graph ML. I am not sure what exactly the next generation of Graph ML algorithms will look like, but I am confident that progress will require breaking away from the message-passing schemes that dominated the field in 2020 and before.<\/p>\n\n<p id=\"cb28\">I am also hopeful that 2021 will see Graph ML move into more impactful and challenging application domains. Too much recent research focuses on simple, homophilous node-classification tasks. I also hope to see methodological advancements towards tasks that require more complex algorithmic reasoning, such as tasks involving knowledge graphs, reinforcement learning, and combinatorial optimisation.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b3e3cc7 elementor-widget elementor-widget-heading\" data-id=\"b3e3cc7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Algorithmic reasoning<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-12a4f9d elementor-widget elementor-widget-image\" data-id=\"12a4f9d\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img fetchpriority=\"high\" 
decoding=\"async\" width=\"1024\" height=\"432\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-1024x432.png\" class=\"attachment-large size-large wp-image-18411\" alt=\"What 2021 Holds For Graph ML?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-1024x432.png 1024w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-300x127.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-768x324.png 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-1536x648.png 1536w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-2048x864.png 2048w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-610x257.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-750x316.png 750w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1fMuK5PgCZjD8YR1XtjfucQ-1140x481.png 1140w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Pointer Graph Networks incorporate structural inductive biases from classical computer science. Image credit: P. 
Veli\u010dkovi\u0107.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-66be3cb elementor-widget elementor-widget-text-editor\" data-id=\"66be3cb\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"8f24\"><a href=\"https:\/\/petar-v.com\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Petar Veli\u010dkovi\u0107<\/strong><\/a><strong>,\u00a0<\/strong><em>Senior Researcher at DeepMind, author of\u00a0<\/em><a href=\"https:\/\/petar-v.com\/GAT\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Graph Attention Networks<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-230029f elementor-widget elementor-widget-text-editor\" data-id=\"230029f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201c2020 has definitively and irreversibly turned graph representation learning into a first-class citizen in ML.\u201d<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4f218a0 elementor-widget elementor-widget-text-editor\" data-id=\"4f218a0\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"dd23\">The great advances made this year are far too many to enumerate briefly, but I am personally most excited about neural algorithmic reasoning. 
Neural networks are traditionally very powerful in the interpolation regime, but are known to be terrible extrapolators \u2014 and hence inadequate reasoners, as one of the main traits of reasoning is the ability to function out-of-distribution. Reasoning tasks are likely to be ideal for further development of GNNs, not only because GNNs are known to align very well with such tasks [5], but also because many real-world graph tasks exhibit homophily, meaning that the most impactful and scalable approaches will typically be much simpler forms of GNNs [6,7].<\/p>\n\n<p id=\"4b4d\">Building on the historical successes of previous neural executors such as the Neural Turing Machine [8] and the Differentiable Neural Computer [9], and reinforced by the now-omnipresent graph machine learning toolbox, several works published in 2020 explored the theoretical limits of neural executors [5,10,11], derived novel and stronger reasoning architectures based on GNNs [12\u201315], and enabled perfect strong generalisation on neural reasoning tasks [16]. While such architectures could naturally translate into wins for combinatorial optimisation [17] in 2021, I am personally most thrilled about how pre-trained algorithmic executors can allow us to apply classical algorithms to inputs that are too raw or otherwise unsuitable for the algorithm. As one example, our XLVIN agent [18] used exactly these concepts to allow a GNN to execute value-iteration-style algorithms within the reinforcement learning pipeline, even though the specifics of the underlying MDP were not known. 
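For context, the classical procedure such executors learn to imitate is tabular value iteration. The sketch below is a minimal illustration under assumed toy inputs (the two-state MDP, variable names, and sweep count are illustrative, not details of XLVIN or any cited work):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, sweeps=200):
    """Classical tabular value iteration on a fully known MDP.

    P: transition tensor of shape (A, S, S), P[a, s, t] = prob of moving s to t.
    R: reward matrix of shape (A, S), reward for taking action a in state s.
    Returns the (approximately) optimal state values V.
    """
    V = np.zeros(P.shape[1])
    for _ in range(sweeps):
        # Bellman optimality backup: Q(a, s) = R(a, s) + gamma * E[V(next state)]
        Q = R + gamma * np.einsum("ast,t->as", P, V)
        V = Q.max(axis=0)  # greedy over actions
    return V

# Toy two-state MDP: action 0 stays put, action 1 swaps states;
# being in state 1 yields reward 1 regardless of the action taken.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 1.0]])
V = value_iteration(P, R)  # converges to [9.0, 10.0]
```

A neural executor of the kind described in the quote learns to imitate exactly this iterative update, but from raw observations rather than a known P and R.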
I believe 2021 will be ripe with GNN applications to reinforcement learning in general.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-256da18 elementor-widget elementor-widget-heading\" data-id=\"256da18\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Relational structure discovery<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0b7bf3f elementor-widget elementor-widget-image\" data-id=\"0b7bf3f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1024\" height=\"252\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-1024x252.png\" class=\"attachment-large size-large wp-image-18412\" alt=\"What 2021 Holds For Graph ML?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-1024x252.png 1024w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-300x74.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-768x189.png 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-1536x378.png 1536w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-2048x504.png 2048w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-610x150.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-750x185.png 750w, 
https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/17_MMxRHt0gSMsyiXrUWUSA-1140x281.png 1140w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">GNNs allow learning a state transition graph (right) that explains a complex multi-particle system (left). Image credit: T. Kipf.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2cd5380 elementor-widget elementor-widget-text-editor\" data-id=\"2cd5380\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"c6d7\"><a href=\"https:\/\/tkipf.github.io\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Thomas Kipf<\/strong><\/a>,\u00a0<em>Research Scientist at Google Brain, author of\u00a0<\/em><a href=\"https:\/\/tkipf.github.io\/graph-convolutional-networks\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Graph Convolutional Networks<\/em><\/a><em>.<\/em><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8285a9b elementor-widget elementor-widget-text-editor\" data-id=\"8285a9b\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cOne particularly noteworthy trend in the Graph ML community since the recent widespread adoption of GNN-based models is the separation of computation structure from the data structure.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6ec6b4f elementor-widget elementor-widget-text-editor\" data-id=\"6ec6b4f\" data-element_type=\"widget\" data-e-type=\"widget\" 
data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"fc89\">In a recent ICML\u00a0<a href=\"https:\/\/slideslive.com\/38930558\/relational-structure-discovery\" target=\"_blank\" rel=\"noreferrer noopener\">workshop talk<\/a>, I termed this trend\u00a0<em>relational structure discovery<\/em>. Typically, we design graph neural networks to pass messages on a fixed (or temporally evolving) structure provided by the dataset, i.e. the nodes and edges of the dataset are taken as the gold standard for the computation structure or\u00a0<em>message passing structure<\/em>\u00a0of our model.<\/p>\n\n<p id=\"5239\">In 2020, we have seen rising interest in models that are able to adapt the <em>computation structure<\/em>, i.e., which components they use as nodes and over which pairs of nodes they perform message passing, on the fly \u2014 while going beyond simple attention-based models. Influential examples in 2020 include Amortised Causal Discovery [19\u201320], which makes use of Neural Relational Inference to infer (and reason with) causal graphs from time-series data, GNNs with learnable pointer [21,15] and relation mechanisms [22\u201323], learning mesh-based physical simulators with adaptive computation graphs [24], and models that learn to infer abstract nodes over which to perform computations [25\u201326]. This development has widespread implications, as it allows us to effectively utilise symmetries (e.g. node permutation equivariance) and inductive biases (e.g. modeling of pairwise interaction functions) afforded by GNN architectures in other domains, such as text or video processing.<\/p>\n\n<p id=\"9fb0\">Going forward, I expect that we will see many developments in how one can learn the optimal computational graph structure (both in terms of nodes and relations) given some data and tasks without relying on explicit supervision. 
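The core mechanic of such models, scoring candidate node pairs from the node states and keeping only the strongest pairs as message-passing edges, can be sketched in a minimal form. The bilinear scorer and top-k selection below are illustrative simplifications (in practice the scorer is learned end-to-end), not the actual NRI or pointer mechanisms:

```python
import numpy as np

def infer_relations(H, W, k):
    """Pick the k strongest ordered node pairs as message-passing edges.

    H: node states (n, d); W: (d, d) scoring matrix (learned in practice).
    Returns a list of (src, dst) pairs, strongest first.
    """
    S = H @ W @ H.T               # bilinear score for every ordered pair
    np.fill_diagonal(S, -np.inf)  # exclude self-relations
    order = np.argsort(S, axis=None)[::-1][:k]
    n = H.shape[0]
    return [tuple(divmod(int(i), n)) for i in order]

# With identity node states the scores are just W itself, so the
# strongest off-diagonal entries of W become the inferred edges.
W = np.array([[0.0, 5.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
edges = infer_relations(np.eye(3), W, 2)  # -> [(0, 1), (2, 1)]
```

Message passing is then performed only over the inferred pairs instead of the dataset's given edges.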
Inspection of such learned structures will likely be valuable in deriving better explanations and interpretations of the computations that learned models perform to solve a task, and will likely allow us to draw further analogies to causal reasoning.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b6d53ea elementor-widget elementor-widget-heading\" data-id=\"b6d53ea\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Expressive power<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-b147d73 elementor-widget elementor-widget-text-editor\" data-id=\"b147d73\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"1a77\"><a href=\"https:\/\/haggaim.github.io\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Haggai Maron<\/strong><\/a>,\u00a0<em>Research Scientist at Nvidia, author of\u00a0<\/em><a href=\"http:\/\/irregulardeep.org\/How-expressive-are-Invariant-Graph-Networks-(2-2)\/\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\"><em>provably expressive high-dimensional graph neural networks<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8fb71a4 elementor-widget elementor-widget-text-editor\" data-id=\"8fb71a4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cThe expressive power of graph neural networks was one of the central topics in Graph ML in 
2020.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d4acad4 elementor-widget elementor-widget-text-editor\" data-id=\"d4acad4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"4ac2\">There were many excellent papers discussing the expressive power of various GNN architectures [27], showing fundamental expressivity limits of GNNs when their depth and width are restricted [28], describing what kinds of structures can be detected and counted using GNNs [29], and showing that a fixed number of GNN iterations does not make sense for many graph tasks, suggesting an iterative GNN that learns to terminate the message-passing process adaptively [14].<\/p>\n\n<p id=\"d73a\">In 2021, I would be happy to see advancements in principled approaches for generative models for graphs, connections between graph matching with GNNs and the expressive power of GNNs, learning graphs of structured data like images and audio, and developing stronger connections between the GNN community and the computer vision community working on scene graphs.\u201d<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d83cc81 elementor-widget elementor-widget-heading\" data-id=\"d83cc81\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Scalability<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-de36909 elementor-widget elementor-widget-text-editor\" data-id=\"de36909\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"de45\"><a 
href=\"https:\/\/rusty1s.github.io\/#\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Matthias Fey<\/strong><\/a>,\u00a0<em>PhD student at TU Dortmund, developer of\u00a0<\/em><a href=\"https:\/\/pytorch-geometric.readthedocs.io\/en\/latest\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>PyTorch Geometric<\/em><\/a><em>\u00a0and\u00a0<\/em><a href=\"https:\/\/ogb.stanford.edu\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Open Graph Benchmark<\/em><\/a><em>.<\/em><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e792ab2 elementor-widget elementor-widget-text-editor\" data-id=\"e792ab2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cOne of the most trending topics in Graph ML research in 2020 was tackling the scalability issues of GNNs.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-401da1a elementor-widget elementor-widget-text-editor\" data-id=\"401da1a\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"30e8\">Several approaches relied on simplifying the underlying computation by decoupling prediction from propagation. We have seen numerous papers that simply combine a non-trainable propagation scheme with a graph-agnostic module, either as a pre- [30,7] or post-processing [6] step. This leads to superb runtime and, remarkably, mostly on par performance on homophily graphs. 
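The pre-processing flavour of this decoupling can be sketched in a few lines: diffuse the node features along the normalised graph once, offline, and hand the smoothed features to any graph-agnostic model. This is a minimal illustration under simplifying assumptions (dense adjacency, no learned weights), not the exact method of the cited papers:

```python
import numpy as np

def propagate(A, X, k=2):
    """Non-trainable k-step feature propagation (pre-processing step).

    A: dense adjacency matrix (n, n); X: node features (n, d).
    The output can be fed to any graph-agnostic classifier.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalisation
    for _ in range(k):
        X = S @ X                           # diffusion, done once offline
    return X

# Two triangles joined by a single edge; features mark the two clusters.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
X = np.array([[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 3)
Z = propagate(A, X)  # each node stays dominated by its own cluster's signal
```

On a homophilous graph like this one, a plain logistic regression on Z already separates the clusters, which is the gist of why such decoupled methods stay on par with full GNNs.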
With access to increasingly bigger datasets, I am eager to see how to advance from here and how to make use of trainable and expressive propagation in a scalable fashion.\u201d<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-788e5e7 elementor-widget elementor-widget-heading\" data-id=\"788e5e7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Dynamic graphs<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-9433164 elementor-widget elementor-widget-image\" data-id=\"9433164\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img decoding=\"async\" width=\"1024\" height=\"435\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-1024x435.png\" class=\"attachment-large size-large wp-image-18413\" alt=\"What 2021 Holds For Graph ML?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-1024x435.png 1024w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-300x127.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-768x326.png 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-610x259.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-750x318.png 750w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8-1140x484.png 1140w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/01d07Ui25f3OxwCa8.png 1185w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" 
\/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">A dynamic graph.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-e293414 elementor-widget elementor-widget-text-editor\" data-id=\"e293414\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"4dd9\"><a href=\"https:\/\/www.emanuelerossi.co.uk\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Emanuele Rossi<\/strong><\/a>,\u00a0<em>ML Researcher at Twitter and PhD student at Imperial College London, author of\u00a0<\/em><a href=\"https:\/\/towardsdatascience.com\/temporal-graph-networks-ab8f327f2efe\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Temporal Graph Networks<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3bccbe2 elementor-widget elementor-widget-text-editor\" data-id=\"3bccbe2\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cMany interesting Graph ML applications are inherently dynamic, where both the graph topology and the attributes evolve over time.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1e4642c elementor-widget elementor-widget-text-editor\" data-id=\"1e4642c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"5f4a\">This is the case in social networks, financial transaction networks, or user-item interaction networks. 
Until recently, the vast majority of research on Graph ML has focused on static graphs. The few works attempting to deal with dynamic graphs mainly considered <em>discrete-time dynamic graphs<\/em>, a series of graph snapshots at regular intervals. In 2020, we saw an emerging set of works [31\u201334] on a more general category of <em>continuous-time dynamic graphs<\/em>, which can be thought of as an asynchronous stream of timed events. Moreover, the first successful applications of models for dynamic graphs are also starting to emerge: we saw fake account detection [35], fraud detection [36], and controlling the spread of an epidemic [37].<\/p>\n\n<p id=\"a2e5\">I think that we are only scratching the surface of this exciting direction and many interesting questions remain unanswered. Among important open problems are scalability, better theoretical understanding of dynamic models, and combining spatial and temporal diffusion of information in a single framework. We also need more reliable and challenging benchmarks to make sure progress can be better evaluated and tracked. 
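To make the distinction concrete, a continuous-time dynamic graph can be represented as an ordered stream of timestamped interaction events, from which a snapshot at any query time is materialised on demand. The event layout below is an illustrative assumption rather than the data model of any cited system:

```python
from collections import namedtuple

# One timed interaction: source node, destination node, timestamp.
Event = namedtuple("Event", ["src", "dst", "t"])

def snapshot(events, t):
    """Edges that have appeared up to (and including) time t."""
    return {(e.src, e.dst) for e in events if t >= e.t}

def last_seen(events, t):
    """Most recent interaction time per node up to time t (a crude 'memory')."""
    seen = {}
    for e in sorted(events, key=lambda e: e.t):
        if t >= e.t:
            seen[e.src] = e.t
            seen[e.dst] = e.t
    return seen

stream = [Event(0, 1, 0.5), Event(1, 2, 1.2), Event(0, 2, 3.0)]
snapshot(stream, 2.0)   # -> {(0, 1), (1, 2)}
last_seen(stream, 2.0)  # -> {0: 0.5, 1: 1.2, 2: 1.2}
```

A discrete-time model would instead pre-bin this stream into a handful of full snapshots, losing the exact ordering of events within each bin.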
Finally, I hope to see more successful applications of dynamic graph neural architectures, especially in the industry.\u201d<\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ef805ee elementor-widget elementor-widget-heading\" data-id=\"ef805ee\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">New hardware<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-43a9d0f elementor-widget elementor-widget-image\" data-id=\"43a9d0f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"298\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-1024x298.png\" class=\"attachment-large size-large wp-image-18414\" alt=\"What 2021 Holds For Graph ML?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-1024x298.png 1024w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-300x87.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-768x224.png 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-1536x447.png 1536w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-610x178.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-750x218.png 750w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w-1140x332.png 1140w, 
https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1Fx2ZTfHSRnaWS3KdhZAs7w.png 1883w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Graphcore is a semiconductor company developing new hardware for graphs. Image credit: Graphcore<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-7c9905f elementor-widget elementor-widget-text-editor\" data-id=\"7c9905f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"a012\"><strong>Mark Saroufim,\u00a0<\/strong><em>ML Engineer at\u00a0<\/em><a href=\"https:\/\/www.graphcore.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Graphcore<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-be8e95f elementor-widget elementor-widget-text-editor\" data-id=\"be8e95f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cI cannot think of a single customer I have worked with who has not either deployed a Graph Neural Network in production or planned to do so.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a94a8d4 elementor-widget elementor-widget-text-editor\" data-id=\"a94a8d4\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"6495\">Part of this trend is that the natural graph structure in applications such as NLP, protein design, or molecule property prediction has traditionally been ignored, and 
instead the data was treated as sequences amenable to existing and well-established ML models such as Transformers. We know, however, that Transformers are\u00a0<a href=\"https:\/\/www.experfy.com\/blog\/ai-ml\/transformers-are-graph-neural-networks\/\" target=\"_blank\" rel=\"noreferrer noopener\">nothing but GNNs<\/a>\u00a0where attention is used as the neighbourhood aggregation function. In computing, the phenomenon whereby certain algorithms win not because they are ideally suited to solve a certain problem, but because they run well on existing hardware, is called the\u00a0<em>Hardware Lottery<\/em>\u00a0[38] \u2014 and this is the case with Transformers running on GPUs.<\/p>\n\n<p id=\"3f80\">At Graphcore, we have built a new MIMD architecture with 1472 cores that can run a total of 8832 programs in parallel, which we call the Intelligence Processing Unit (IPU). This architecture is ideally suited for accelerating GNNs. Our Poplar software stack takes advantage of sparsity to allocate different nodes of a computational graph to different cores. For models that can fit into the IPU\u2019s 900MB on-chip memory, our architecture offers a substantial improvement in throughput over GPUs; otherwise, with just a few lines of code, it is possible to distribute the model over thousands of IPUs.<\/p>\n\n<p id=\"61cb\">I am excited to see our customers building a\u00a0<a href=\"https:\/\/www.graphcore.ai\/resources\/research-papers\" target=\"_blank\" rel=\"noreferrer noopener\">large body of research<\/a>\u00a0taking advantage of our architecture, including applications such as bundle adjustment for SLAM, training deep networks using local updates, or\u00a0<a href=\"https:\/\/www.graphcore.ai\/mk2-benchmarks\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">speeding up<\/a>\u00a0a variety of problems in particle physics. 
I hope to see more researchers taking advantage of our advanced ML hardware in 2021.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4bb7981 elementor-widget elementor-widget-heading\" data-id=\"4bb7981\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"heading.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t<h2 class=\"elementor-heading-title elementor-size-default\">Applications in the industry, physics, medicine, and beyond<\/h2>\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-797cbe6 elementor-widget elementor-widget-image\" data-id=\"797cbe6\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"512\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-1024x512.png\" class=\"attachment-large size-large wp-image-18415\" alt=\"MagicLeap\u2019s SuperGlue uses GNN\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-1024x512.png 1024w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-300x150.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-768x384.png 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-1536x768.png 1536w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-2048x1024.png 2048w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-610x305.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-360x180.png 360w, 
https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-750x375.png 750w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1MOgFyXYH6R-hx0uMaTlL6g-1140x570.png 1140w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">MagicLeap\u2019s SuperGlue uses GNN to solve a classical computer vision problem of feature matching. Image credit: P.-E. Sarlin et al.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-62a145e elementor-widget elementor-widget-text-editor\" data-id=\"62a145e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"dea7\"><a href=\"https:\/\/ivanovml.com\/\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\"><strong>Sergey Ivanov<\/strong><\/a><strong>,\u00a0<\/strong><em>Research Scientist at Criteo, editor of the\u00a0<\/em><a href=\"https:\/\/graphml.substack.com\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Graph Machine Learning newsletter<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-a7b9fcb elementor-widget elementor-widget-text-editor\" data-id=\"a7b9fcb\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cIt was an astounding year for Graph ML research. 
All major ML conferences had about 10\u201320% of all papers dedicated to this field and at this scale, everyone can find an interesting graph topic to their taste.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-2bdb222 elementor-widget elementor-widget-text-editor\" data-id=\"2bdb222\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"4b5a\">The\u00a0<a href=\"https:\/\/gm-neurips-2020.github.io\/\" target=\"_blank\" rel=\"noreferrer noopener\">Google Graph Mining<\/a>\u00a0team was prominently present at NeurIPS. Looking at the\u00a0<a href=\"https:\/\/gm-neurips-2020.github.io\/master-deck.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">312-page presentation<\/a>, one can say that Google has advanced in utilising graphs in production more than anyone else. The applications they address using Graph ML include modeling COVID-19 with spatio-temporal GNNs, fraud detection, privacy preservation, and more. Furthermore, DeepMind rolled out GNNs in production for travel\u00a0<a href=\"https:\/\/deepmind.com\/blog\/article\/traffic-prediction-with-advanced-graph-neural-networks\" target=\"_blank\" rel=\"noreferrer noopener\">time predictions<\/a>\u00a0globally in Google Maps. An interesting detail of their method is the integration of an RL model to select similar sampled subgraphs into a single batch for training parameters of GNNs. This and advanced hyperparameter tuning brought up to +50% improvement in the accuracy of real-time time-of-arrival estimation.<\/p>\n\n<p id=\"742b\">Another notable application of GNNs was done at Magic Leap, which specialises in 3D computer-generated graphics. Their SuperGlue architecture [39] applies GNNs to feature matching in images \u2014 an important subject for 3D reconstruction, place recognition, localisation, and mapping. 
This end-to-end feature representation, paired with optimal transport optimisation, achieved state-of-the-art results in real-time indoor and outdoor pose estimation. These results just scratch the surface of what has been achieved in 2020.<\/p>\n\n<p id=\"2337\">Next year, I believe we will see further use of Graph ML developments in industrial settings. This would include production pipelines and frameworks, new open-source graph datasets, and deployment of GNNs at scale for e-commerce, engineering design, and the pharmaceutical industry.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-6b6878e elementor-widget elementor-widget-image\" data-id=\"6b6878e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"360\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/16K9UlTiexmr5L0RvJmXyaA.png\" class=\"attachment-large size-large wp-image-18416\" alt=\"What 2021 Holds For Graph ML?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/16K9UlTiexmr5L0RvJmXyaA.png 720w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/16K9UlTiexmr5L0RvJmXyaA-300x150.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/16K9UlTiexmr5L0RvJmXyaA-610x305.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/16K9UlTiexmr5L0RvJmXyaA-360x180.png 360w\" sizes=\"(max-width: 720px) 100vw, 720px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Particle jet represented as a graph. GNNs are being explored to detect events in particle physics.
Image credit: LHC<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-d33fa7f elementor-widget elementor-widget-text-editor\" data-id=\"d33fa7f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"14fb\"><a href=\"http:\/\/theoryandpractice.org\/\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Kyle Cranmer<\/strong><\/a>,\u00a0<em>Professor of Physics at NYU, one of the discoverers of the Higgs boson.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-0fcad52 elementor-widget elementor-widget-text-editor\" data-id=\"0fcad52\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cIt has been amazing to see how in the last two years Graph ML has become very popular in the field of physics.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1056f8c elementor-widget elementor-widget-text-editor\" data-id=\"1056f8c\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"dcf8\">Early work with deep learning in particle physics often forced the data into an image representation to work with CNNs, which was not natural as our data are not natively grid-like and the image representation is very sparse. Graphs are a much more natural representation of our data [40,41]. Researchers on the Large Hadron Collider are now working to integrate Graph ML into the real-time data processing systems that process billions of collisions per second. 
Efforts to achieve this include deploying\u00a0<a href=\"https:\/\/news.fnal.gov\/2020\/09\/the-next-big-thing-the-use-of-graph-neural-networks-to-discover-particles\/\" target=\"_blank\" rel=\"noreferrer noopener\">inference servers<\/a>\u00a0to integrate Graph ML with the real-time data acquisition systems [42] and implementing these algorithms on FPGAs and other specialised hardware [43].<\/p>\n\n<p id=\"cc41\">Another highlight from Graph ML in 2020 is the demonstration that its inductive bias can pair with symbolic approaches. For example, we used a GNN to learn how to predict various dynamical systems, and then we ran symbolic regression on the messages being sent along the edges [44]. Not only were we able to recover the ground-truth force laws for those dynamical systems, but we were also able to extract equations in situations where we don\u2019t have ground truth. Amazingly, the symbolic equations that were extracted could then be re-introduced into the GNN, replacing the original learned components, and we obtained even better generalisation to out-of-distribution data.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-4245809 elementor-widget elementor-widget-image\" data-id=\"4245809\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"187\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1h6swNZLVP2Z8ZXHz6kw3UA.png\" class=\"attachment-large size-large wp-image-18417\" alt=\"What 2021 Holds For Graph ML?\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1h6swNZLVP2Z8ZXHz6kw3UA.png 720w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1h6swNZLVP2Z8ZXHz6kw3UA-300x78.png 300w,
https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1h6swNZLVP2Z8ZXHz6kw3UA-610x158.png 610w\" sizes=\"(max-width: 720px) 100vw, 720px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">GNNs can exploit the population graphs for disease classification. Image credit: S. Parisot.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-586b9d8 elementor-widget elementor-widget-text-editor\" data-id=\"586b9d8\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"6e90\"><a href=\"http:\/\/campar.in.tum.de\/Main\/AneesKazi\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Anees Kazi<\/strong><\/a><strong>,\u00a0<\/strong><em>PhD student at TUM, author of multiple papers on Graph ML in medical imaging.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-28a6d20 elementor-widget elementor-widget-text-editor\" data-id=\"28a6d20\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cIn the medical domain, Graph ML transformed the way of analyzing multimodal data in a way that closely resembles how experts look at the patient\u2019s condition from all the available dimensions in clinical routines.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8ede2db elementor-widget elementor-widget-text-editor\" data-id=\"8ede2db\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"ab96\"><mark>There has recently been a huge 
spike in the research related to Graph ML in medical imaging and healthcare applications [45], including brain segmentation [46], brain structure analysis using MRI\/fMRI data targeted towards disease prediction [47], and drug effect analysis [48].<\/mark><\/p>\n\n<p id=\"c0fd\">Among topics in Graph ML, several stood out in the medical domain in 2020. First, <em>latent graph learning<\/em> [22,49,50]: empirically defining a graph for the given data had long been a bottleneck for optimal outcomes, and methods that learn the latent graph structure automatically now address this. Second, <em>data imputation<\/em> [51]: missing data is a long-standing problem in many medical datasets, and graph-based methods have helped impute data based on relations derived from the graph neighbourhood. Third, the <em>interpretability<\/em> of Graph ML models [52]: it is important for clinical and technical experts to be able to reason about the outcomes of Graph ML models before reliably incorporating them into a CADx system. Another important highlight of 2020 in the medical domain was, of course, the coronavirus pandemic, and Graph ML methods were used for the detection of COVID-19 [53].<\/p>\n\n<p id=\"ff1f\">In 2021, Graph ML could be used to further the interpretability of ML models for better decision making. It has also been observed that Graph ML methods are still sensitive to the graph structure, so robustness to graph perturbations and adversarial attacks is an important topic.
Finally, it would be interesting to see the integration of self-supervised learning with Graph ML applied to the medical domain.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-1949e63 elementor-widget elementor-widget-image\" data-id=\"1949e63\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"453\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-1024x453.png\" class=\"attachment-large size-large wp-image-18418\" alt=\"geometric ML architecture MaSIF\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-1024x453.png 1024w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-300x133.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-768x339.png 768w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-1536x679.png 1536w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-610x270.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-750x332.png 750w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ-1140x504.png 1140w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/1K-y2vJq0a6lJZdYhJLS-oQ.png 2000w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">Different protein binders for an oncological target designed using geometric ML architecture MaSIF. 
Image credit: Pablo Gainza.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-ac68af5 elementor-widget elementor-widget-text-editor\" data-id=\"ac68af5\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"6a8d\"><a href=\"https:\/\/people.epfl.ch\/bruno.correia\/?lang=en\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Bruno Correia<\/strong><\/a><strong>,\u00a0<\/strong><em>Assistant Professor at EPFL, head of the Protein Design and Immunoengineering Laboratory, one of the developers of\u00a0<\/em><a href=\"https:\/\/github.com\/LPDI-EPFL\/masif\" rel=\"noopener\"><em>MaSIF<\/em><\/a><em>.<\/em><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-efd2132 elementor-widget elementor-widget-text-editor\" data-id=\"efd2132\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cIn 2020, exciting progress has been made in protein structure prediction, a key problem in bioinformatics. Yet, ultimately the chemical and geometric patterns displayed at the surface of these molecules are critical for protein function.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-47d1130 elementor-widget elementor-widget-text-editor\" data-id=\"47d1130\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"ddb4\">Surface-based representations of molecules have been used for decades, but they pose challenges for machine learning methods.
Approaches from the realm of Geometric Deep Learning, with their ability to deal with irregular data, are particularly well suited to protein representations and have brought impressive capabilities to the field of protein modeling. In MaSIF [1], we used geometric deep learning on mesh-based molecular surface representations to learn patterns that allow us to predict interactions of proteins with other molecules (proteins and metabolites) and speed up docking calculations by several orders of magnitude. This, in turn, could facilitate the prediction of protein-protein interaction networks at a much larger scale.<\/p>\n\n<p id=\"fa1d\">In a further development of the MaSIF framework [2], we managed to generate surface and chemical features on the fly, avoiding all precomputation stages. I anticipate that such advances will be transformative for protein and small molecule design, and in the long term could speed up the development of biological drugs.\u201d<\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-06587a7 elementor-widget elementor-widget-image\" data-id=\"06587a7\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"image.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t\t\t\t<figure class=\"wp-caption\">\n\t\t\t\t\t\t\t\t\t\t<img loading=\"lazy\" decoding=\"async\" width=\"844\" height=\"485\" src=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/04mE6r7DgCQNSNtHS.png\" class=\"attachment-large size-large wp-image-18419\" alt=\"GNNs were used in Decagon for polypharmacy side effect prediction\" srcset=\"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/04mE6r7DgCQNSNtHS.png 844w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/04mE6r7DgCQNSNtHS-300x172.png 300w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/04mE6r7DgCQNSNtHS-768x441.png 768w,
https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/04mE6r7DgCQNSNtHS-610x351.png 610w, https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2021\/05\/04mE6r7DgCQNSNtHS-750x431.png 750w\" sizes=\"(max-width: 844px) 100vw, 844px\" \/>\t\t\t\t\t\t\t\t\t\t\t<figcaption class=\"widget-image-caption wp-caption-text\">GNNs were used in Decagon for polypharmacy side effect prediction. Image credit: M. Zitnik.<\/figcaption>\n\t\t\t\t\t\t\t\t\t\t<\/figure>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-8ab9877 elementor-widget elementor-widget-text-editor\" data-id=\"8ab9877\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"f955\"><a href=\"https:\/\/dbmi.hms.harvard.edu\/people\/marinka-zitnik\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>Marinka Zitnik<\/strong><\/a>,\u00a0<em>Assistant Professor of Biomedical Informatics at Harvard Medical School, author of\u00a0<\/em><a href=\"http:\/\/snap.stanford.edu\/decagon\/\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Decagon<\/em><\/a><em>.<\/em><\/p>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-491342f elementor-widget elementor-widget-text-editor\" data-id=\"491342f\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<blockquote class=\"wp-block-quote\"><p>\u201cIt was exciting to see how Graph ML entered the fields of life sciences in 2020.<\/p><\/blockquote>\n\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<div class=\"elementor-element elementor-element-3bff96e elementor-widget elementor-widget-text-editor\" data-id=\"3bff96e\" data-element_type=\"widget\" data-e-type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t<div 
class=\"elementor-widget-container\">\n\t\t\t\t\t\t\t\t\t<p id=\"61c7\">We have seen how graph neural networks not only outperform earlier methods on carefully designed benchmark datasets but can open up avenues for developing new medicines to help people and understanding nature at the fundamental level. Highlights include advances in single-cell biology [56], protein and structural biology [54,57], and drug discovery[58] and repositioning [59].<\/p>\n\n<p id=\"52fc\">For centuries, the scientific method \u2014 the fundamental practice of science that scientists use to systematically and logically explain the natural world \u2014 has remained largely the same. I hope that in 2021, we will make substantial progress on using Graph ML to change that. To do that, I think we need to design methods that can optimize and manipulate networked systems and predict their behavior, such as how genomics \u2014 Nature\u2019s experiments on people \u2014 influences human traits in the context of disease. Such methods need to work with perturbational and interventional data (not only ingest observational measurements of our world). Also, I hope we will develop more methods for learning actionable representations that readily lend themselves to actionable hypotheses in science. Such methods can enable decision making in high-stakes settings (e.g., chemistry tests, particle physics, human clinical trials) where we need precise, robust predictions that can be interpreted meaningfully.\u201d<\/p>\n\n<p id=\"89f0\">[1] U. Alon and E. Yahav,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2006.05205.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">On the bottleneck of graph neural networks and its practical implications<\/a>\u00a0(2020) arXiv:2006.05205.<\/p>\n\n<p id=\"5c7b\">[2] Q. Li, Z. Han, X.-M. 
Wu,\u00a0<a href=\"https:\/\/www.aaai.org\/ocs\/index.php\/AAAI\/AAAI18\/paper\/download\/16098\/16553\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">Deeper insights into graph convolutional networks for semi-supervised learning<\/a>\u00a0(2018) Proc. AAAI.<\/p>\n\n<p id=\"7fda\">[3] K. Xu\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1810.00826\" target=\"_blank\" rel=\"noreferrer noopener\">How powerful are graph neural networks?<\/a>\u00a0(2019) Proc. ICLR.<\/p>\n\n<p id=\"d999\">[4] C. Morris\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/aaai.org\/ojs\/index.php\/AAAI\/article\/view\/4384\/4262\" target=\"_blank\" rel=\"noreferrer noopener\">Weisfeiler and Leman go neural: Higher-order graph neural networks<\/a>\u00a0(2019) Proc. AAAI.<\/p>\n\n<p id=\"ae2d\">[5] K. Xu <em>et al.<\/em> What can neural networks reason about? (2019) arXiv:1905.13211.<\/p>\n\n<p id=\"194e\">[6] Q. Huang\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2010.13993.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Combining label propagation and simple models out-performs graph neural networks<\/a>\u00a0(2020) arXiv:2010.13993.<\/p>\n\n<p id=\"dcb3\">[7] F. Frasca\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2004.11198.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">SIGN: Scalable Inception Graph Neural Networks<\/a>\u00a0(2020) arXiv:2004.11198.<\/p>\n\n<p id=\"f18f\">[8] A. Graves, G. Wayne, and I. Danihelka,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1410.5401.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Neural Turing Machines<\/a>\u00a0(2014) arXiv:1410.5401.<\/p>\n\n<p id=\"1647\">[9] A. Graves\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/www.nature.com\/articles\/nature20101\" target=\"_blank\" rel=\"noreferrer noopener\">Hybrid computing using a neural network with dynamic external memory<\/a>\u00a0(2016) Nature 538:471\u2013476.<\/p>\n\n<p id=\"911d\">[10] G. Yehuda, M. Gabel, and A.
Schuster.\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2002.09398.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">It\u2019s not what machines can learn, it\u2019s what we cannot teach<\/a>\u00a0(2020) arXiv:2002.09398.<\/p>\n\n<p id=\"e716\">[11] K. Xu\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2009.11848.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">How neural networks extrapolate: From feedforward to graph neural networks<\/a>\u00a0(2020) arXiv:2009.11848.<\/p>\n\n<p id=\"52e3\">[12] P. Veli\u010dkovi\u0107\u00a0<em>et al.,<\/em>\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1910.10593.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Neural execution of graph algorithms<\/a>\u00a0(2019) arXiv:1910.10593.<\/p>\n\n<p id=\"0605\">[13] O. Richter and R. Wattenhofer,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2005.09561.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Normalized attention without probability cage<\/a>\u00a0(2020) arXiv:2005.09561.<\/p>\n\n<p id=\"0985\">[14] H. Tang\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2010.13547.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Towards scale-invariant graph-related problem solving by iterative homogeneous graph neural networks<\/a>\u00a0(2020) arXiv:2010.13547.<\/p>\n\n<p id=\"e071\">[15] P. Veli\u010dkovi\u0107\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/176bf6219855a6eb1f3a30903e34b6fb-Paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Pointer Graph Networks<\/a>\u00a0(2020) Proc. NeurIPS.<\/p>\n\n<p id=\"33ce\">[16] Y. Yan\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=rJg7BA4YDr\" target=\"_blank\" rel=\"noreferrer noopener\">Neural execution engines: Learning to execute subroutines<\/a>\u00a0(2020) Proc. ICLR.<\/p>\n\n<p id=\"8dea\">[17] C. K. 
Joshi\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2006.07054.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Learning TSP requires rethinking generalization<\/a>\u00a0(2020) arXiv:2006.07054.<\/p>\n\n<p id=\"26c7\">[18] A. Deac\u00a0<em>et al.<\/em>\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=OodqmQT3fir\" target=\"_blank\" rel=\"noreferrer noopener\">XLVIN: eXecuted Latent Value Iteration Nets<\/a>\u00a0(2020) arXiv:2010.13146.<\/p>\n\n<p id=\"9b5c\">[19] S. L\u00f6we\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2006.10833.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Amortized Causal Discovery: Learning to infer causal graphs from time-series data<\/a>\u00a0(2020) arXiv:2006.10833.<\/p>\n\n<p id=\"36fd\">[20] Y. Li\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/papers.nips.cc\/paper\/2020\/file\/6822951732be44edf818dc5a97d32ca6-Paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Causal discovery in physical systems from videos<\/a>\u00a0(2020) Proc. NeurIPS.<\/p>\n\n<p id=\"9030\">[21] D. Bieber\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/62326dc7c4f7b849d6f013ba46489d6c-Paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Learning to execute programs with instruction pointer attention graph neural networks<\/a>\u00a0(2020) Proc. NeurIPS.<\/p>\n\n<p id=\"ba97\">[22] A. Kazi\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2002.04999.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Differentiable Graph Module (DGM) for graph convolutional networks<\/a>\u00a0(2020) arXiv:2002.04999<\/p>\n\n<p id=\"4405\">[23] D. D. Johnson, H. Larochelle, and D. Tarlow<em>.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2007.04929.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Learning graph structure with a finite-state automaton layer<\/a>\u00a0(2020). arXiv:2007.04929.<\/p>\n\n<p id=\"3913\">[24] T. 
Pfaff\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2010.03409.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Learning mesh-based simulation with graph networks<\/a>\u00a0(2020) arXiv:2010.03409.<\/p>\n\n<p id=\"a8c7\">[25] T. Kipf\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=H1gax6VtDB\" target=\"_blank\" rel=\"noreferrer noopener\">Contrastive learning of structured world models<\/a>\u00a0(2020) Proc. ICLR<\/p>\n\n<p id=\"9b80\">[26] F. Locatello\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/papers.nips.cc\/paper\/2020\/file\/8511df98c02ab60aea1b2356c013bc0f-Paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Object-centric learning with slot attention<\/a>\u00a0(2020) Proc. NeurIPS.<\/p>\n\n<p id=\"ba99\">[27] W. Azizian and M. Lelarge,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2006.15646.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Characterizing the expressive power of invariant and equivariant graph neural networks<\/a>\u00a0(2020) arXiv:2006.15646.<\/p>\n\n<p id=\"04aa\">[28] A. Loukas,\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=B1l2bp4YwS\" target=\"_blank\" rel=\"noreferrer noopener\">What graph neural networks cannot learn: depth vs width<\/a>\u00a0(2020) Proc. ICLR.<\/p>\n\n<p id=\"f5bf\">[29] Z. Chen\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/75877cb75154206c4e65e76b88a12712-Paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Can graph neural networks count substructures?<\/a>\u00a0(2020) Proc. NeurIPS.<\/p>\n\n<p id=\"2949\">[30] A. Bojchevski\u00a0<em>et al.<\/em>,<a href=\"https:\/\/arxiv.org\/pdf\/2007.01570.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">\u00a0Scaling graph neural networks with approximate PageRank<\/a>\u00a0(2020) Proc. KDD.<\/p>\n\n<p id=\"b329\">[31] E. 
Rossi\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2006.10637.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Temporal Graph Networks for deep learning on dynamic graphs<\/a>\u00a0(2020) arXiv:2006.10637.<\/p>\n\n<p id=\"1d28\">[32] S. Kumar, X. Zhang, and J. Leskovec,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1908.01207.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Predicting dynamic embedding trajectory in temporal interaction networks<\/a>\u00a0(2019) Proc. KDD.<\/p>\n\n<p id=\"17b6\">[33] R. Trivedi\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=HyePrhR5KX\" target=\"_blank\" rel=\"noreferrer noopener\">DyRep: Learning representations over dynamic graphs<\/a>\u00a0(2019) Proc. ICLR.<\/p>\n\n<p id=\"aa3f\">[34] D. Xu\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/openreview.net\/pdf?id=rJeW1yHYwH\" target=\"_blank\" rel=\"noreferrer noopener\">Inductive representation learning on temporal graphs<\/a>\u00a0(2019) Proc. ICLR.<\/p>\n\n<p id=\"80da\">[35] M. Noorshams, S. Verma, and A. Hofleitner,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2002.07917.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">TIES: Temporal Interaction Embeddings for enhancing social media integrity at Facebook<\/a>\u00a0(2020) arXiv:2002.07917.<\/p>\n\n<p id=\"e5e0\">[36] X. Wang\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2011.11545.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">APAN: Asynchronous Propagation Attention Network for real-time temporal graph embedding<\/a>\u00a0(2020) arXiv:2011.11545.<\/p>\n\n<p id=\"403b\">[37] E. A. Meirom\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2010.05313.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">How to stop epidemics: Controlling graph dynamics with reinforcement learning and graph neural networks<\/a>\u00a0(2020) arXiv:2010.05313.<\/p>\n\n<p id=\"1883\">[38] S. 
Hooker,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2009.06489.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">The hardware lottery<\/a>\u00a0(2020) arXiv:2009.06489.<\/p>\n\n<p id=\"50f0\">[39] P. E. Sarlin\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/openaccess.thecvf.com\/content_CVPR_2020\/papers\/Sarlin_SuperGlue_Learning_Feature_Matching_With_Graph_Neural_Networks_CVPR_2020_paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">SuperGlue: Learning feature matching with graph neural networks<\/a>\u00a0(2020) Proc. CVPR.<\/p>\n\n<p id=\"c7e2\">[40] S. R. Qasim\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/1902.07987.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Learning representations of irregular particle-detector geometry with distance-weighted graph networks<\/a>\u00a0(2019) arXiv:1902.07987.<\/p>\n\n<p id=\"fcf8\">[41] J. Shlomi, P. Battaglia, J.-R. Vlimant,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2007.13681.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Graph Neural Networks in particle physics<\/a>\u00a0(2020) arXiv:2007.13681.<\/p>\n\n<p id=\"bddb\">[42] J. Krupa\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2007.10359.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">GPU coprocessors as a service for deep learning inference in high energy physics<\/a>\u00a0(2020) arXiv:2007.10359.<\/p>\n\n<p id=\"c247\">[43] A. Heintz\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2012.01563.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Accelerated charged particle tracking with graph neural networks on FPGAs<\/a>\u00a0(2020) arXiv:2012.01563.<\/p>\n\n<p id=\"310d\">[44] M. Cranmer\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2006.11287.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Discovering symbolic models from deep learning with inductive biases<\/a>\u00a0(2020) arXiv:2006.11287. Miles Cranmer is unrelated to Kyle Cranmer, though both are co-authors of the paper. 
See also the\u00a0<a href=\"https:\/\/www.youtube.com\/watch?v=LMb5tvW-UoQ\" target=\"_blank\" rel=\"noreferrer noopener\">video presentation<\/a>\u00a0of the paper.<\/p>\n\n<p id=\"4cae\">[45] Q. Cai\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/ieeexplore.ieee.org\/document\/8836450\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">A survey on multimodal data-driven smart healthcare systems: Approaches and applications<\/a>\u00a0(2020)\u00a0<em>IEEE Access<\/em>\u00a0<em>7<\/em>:133583\u2013133599.<\/p>\n\n<p id=\"2348\">[46] K. Gopinath, C. Desrosiers, and H. Lombaert,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2004.00074.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Graph domain adaptation for alignment-invariant brain surface segmentation<\/a>\u00a0(2020) arXiv:2004.00074.<\/p>\n\n<p id=\"e890\">[47] J. Liu\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/bmcbioinformatics.biomedcentral.com\/articles\/10.1186\/s12859-020-3437-6\" target=\"_blank\" rel=\"noreferrer noopener\">Identification of early mild cognitive impairment using multi-modal data and graph convolutional networks<\/a>\u00a0(2020)\u00a0<em>BMC Bioinformatics<\/em>\u00a021(6):1\u201312.<\/p>\n\n<p id=\"15e4\">[48] H. E. Manoochehri and M. Nourani,\u00a0<a href=\"https:\/\/bmcbioinformatics.biomedcentral.com\/articles\/10.1186\/s12859-020-3518-6\" target=\"_blank\" rel=\"noreferrer noopener\">Drug-target interaction prediction using semi-bipartite graph model and deep learning<\/a>\u00a0(2020)\u00a0<em>BMC Bioinformatics<\/em>\u00a0<em>21<\/em>(4):1\u201316.<\/p>\n\n<p id=\"7424\">[49] Y. Huang and A. C. Chung,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2009.02759.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Edge-variational graph convolutional networks for uncertainty-aware disease prediction<\/a>\u00a0(2020) Proc. MICCAI.<\/p>\n\n<p id=\"0f8d\">[50] L. 
Cosmo\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2003.13620.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Latent-graph learning for disease prediction<\/a>\u00a0(2020) Proc. MICCAI.<\/p>\n\n<p id=\"74e4\">[51] G. Vivar\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/pdf\/2005.06935.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">Simultaneous imputation and disease classification in incomplete medical datasets using Multigraph Geometric Matrix Completion<\/a>\u00a0(2020) arXiv:2005.06935.<\/p>\n\n<p id=\"0fba\">[52] X. Li and J. Duncan,\u00a0<a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2020.05.16.100057v1\" target=\"_blank\" rel=\"noreferrer noopener\">BrainGNN: Interpretable brain graph neural network for fMRI analysis<\/a>\u00a0(2020) bioRxiv:2020.05.16.100057.<\/p>\n\n<p id=\"4a17\">[53] X. Yu\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0925231220319184\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">ResGNet-C: A graph convolutional neural network for detection of COVID-19<\/a>\u00a0(2020) Neurocomputing.<\/p>\n\n<p id=\"21eb\">[54] P. Gainza\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/www.nature.com\/articles\/s41592-019-0666-6\" target=\"_blank\" rel=\"noreferrer noopener\">Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning<\/a>\u00a0(2020) Nature Methods 17(2):184\u2013192.<\/p>\n\n<p id=\"e6e4\">[55] F. Sverrisson\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2020.12.28.424589v1\" target=\"_blank\" rel=\"noreferrer noopener\">Fast end-to-end learning on protein surfaces<\/a>\u00a0(2020) bioRxiv:2020.12.28.424589.<\/p>\n\n<p id=\"817a\">[56] A. 
Klimovskaia\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/www.nature.com\/articles\/s41467-020-16822-4\" target=\"_blank\" rel=\"noreferrer noopener\">Poincar\u00e9 maps for analyzing complex hierarchies in single-cell data<\/a>\u00a0(2020) Nature Communications 11.<\/p>\n\n<p id=\"f499\">[57] J. Jumper\u00a0<em>et al.<\/em>, High accuracy protein structure prediction using deep learning (2020) a.k.a.\u00a0<a href=\"https:\/\/deepmind.com\/blog\/article\/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology\" target=\"_blank\" rel=\"noreferrer noopener\">AlphaFold 2.0<\/a>\u00a0(paper not yet available).<\/p>\n\n<p id=\"5c7d\">[58] J. M. Stokes\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/www.cell.com\/cell\/fulltext\/S0092-8674(20)30102-1?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0092867420301021%3Fshowall%3Dtrue\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\">A deep learning approach to antibiotic discovery<\/a>\u00a0(2020) Cell 180(4):688\u2013702.<\/p>\n\n<p id=\"32e9\">[59] D. Morselli Gysi\u00a0<em>et al.<\/em>,\u00a0<a href=\"https:\/\/arxiv.org\/abs\/2004.07229\" target=\"_blank\" rel=\"noreferrer noopener\">Network medicine framework for identifying drug repurposing opportunities for COVID-19<\/a>\u00a0(2020) arXiv:2004.07229.<\/p>\n\n<p id=\"641e\"><em>I am grateful to Bruno Correia, Kyle Cranmer, Matthias Fey, Will Hamilton, Sergey Ivanov, Anees Kazi, Thomas Kipf, Haggai Maron, Emanuele Rossi, Mark Saroufim, Petar Veli\u010dkovi\u0107, and Marinka Zitnik for their inspiring comments and predictions. This is my first experiment with a new format of \u201cscientific journalism\u201d and I appreciate suggestions for improvement. Needless to say, all the credit goes to the aforementioned people, whereas any criticism should be my sole responsibility. 
A\u00a0<\/em><a href=\"https:\/\/zhuanlan.zhihu.com\/p\/342662347\" target=\"_blank\" rel=\"noreferrer noopener\" class=\"broken_link\"><em>Chinese translation<\/em><\/a><em>\u00a0of this post is available courtesy of\u00a0<\/em><a href=\"https:\/\/twitter.com\/zhong_zhiqiang\" target=\"_blank\" rel=\"noreferrer noopener\"><em>Zhiqiang Zhong<\/em><\/a><em>.<\/em><\/p>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":"<p>Here are the opinions of prominent researchers in the field of graph ML and its applications trying to summarise the highlights of the past year and predict what is in store for 2021.<\/p>\n","protected":false},"author":874,"featured_media":18420,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[183],"tags":[97,1227,1228],"ppma_author":[3686],"class_list":["post-22554","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-ml","tag-artificial-intelligence","tag-graph-machine-learning","tag-graph-ml"],"authors":[{"term_id":3686,"user_id":874,"is_guest":0,"slug":"michael-bronstein","display_name":"Michael Bronstein","avatar_url":"https:\/\/www.experfy.com\/blog\/wp-content\/uploads\/2020\/08\/Michael-Bronstein-150x150.jpg","user_url":"https:\/\/www.imperial.ac.uk\/people\/m.bronstein","last_name":"Bronstein","first_name":"Michael","job_title":"","description":"Michael Bronstein is Professor, Chair in Machine Learning and Pattern Recognition at Imperial College, London, besides Head of Graph ML at Twitter \/ ML Lead at ProjectCETI\/ ex Founder &amp; Chief Scientist at Fabula_ai\/ ex at Intel #AI #ML #graphs. 
His main expertise is in theoretical and computational geometric methods for data analysis, and his research encompasses a broad spectrum of applications ranging from machine learning, computer vision, and pattern recognition to geometry processing, computer graphics, and imaging. He has authored over 150 papers, the book Numerical geometry of non-rigid shapes (Springer 2008), and holds over 30 granted patents."}],"_links":{"self":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22554","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/users\/874"}],"replies":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/comments?post=22554"}],"version-history":[{"count":4,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22554\/revisions"}],"predecessor-version":[{"id":32539,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/posts\/22554\/revisions\/32539"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media\/18420"}],"wp:attachment":[{"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/media?parent=22554"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/categories?post=22554"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/tags?post=22554"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.experfy.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=22554"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}