Overview of Graph Representation Learning

  • How do we apply the power of deep learning to more complex domains?

  • Graphs are the new frontier of deep learning (they model how things are connected)

    • GNNs: one of the hottest subfields in ML

      • Flexibility to model complex data

  • Graphs

    • Molecules, knowledge graphs, software

    • Applications: recommender system, neutrino detection, LHC, fake news detection, drug repurposing, chemistry, computer graphics, VR, robotics, autonomous driving, medicine

  • Stanford: the technology has already been deployed

  • Workshop: Representation Learning for Graphs

Discuss

  • Tools and frameworks

  • Short talks about a wide range of application domains

    • Graphics and Vision

    • Fraud and intrusion detection

    • Financial networks

    • Knowledge Graph and Reasoning

    • ...

  • Software tools

Goal: Representation Learning

  • Map nodes to a d-dimensional embedding such that similar nodes in the network are embedded close together (see the sketch below)

    • Feature representation, embedding
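
To make the goal concrete, here is a minimal sketch of what a node embedding and a "closeness" score could look like. This is my own toy example, assuming numpy; the matrix Z, the dimension d = 8, and the dot-product decoder are illustrative choices, not something prescribed by the talk.

```python
import numpy as np

num_nodes, d = 5, 8
rng = np.random.default_rng(0)
Z = rng.normal(size=(num_nodes, d))   # embedding matrix: row u is z_u, the d-dim vector for node u

def similarity(u, v):
    # One common choice of "closeness" in embedding space: the dot product z_u . z_v
    return Z[u] @ Z[v]

# Training adjusts Z so that nodes that are similar in the network get a high score here
print(similarity(0, 1))
```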

  • Deep learning on graphs

    • Input: network

    • Predictions: node labels, new links, generated graphs and subgraphs

  • Why is this hard?

    • Networks are complex

      • Arbitrary size and complex topological structure (no spatial locality, unlike grids)

      • No fixed node ordering or reference point

      • Often dynamic

  • Problem setup

    • Graph G, vertex set V, adjacency matrix A, node features X (toy example sketched below)
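
A minimal toy instance of this setup, assuming numpy and a made-up 4-node undirected graph; the symbols V, A, X match the bullet above, but the values are illustrative only.

```python
import numpy as np

V = [0, 1, 2, 3]                          # vertex set V
A = np.array([[0, 1, 1, 0],               # adjacency matrix A (symmetric: undirected graph)
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                             # node features X (one-hot node IDs when no features are given)
```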

  • Key idea: the network is a computation graph (see the sketch after this list)

    • Learn node representations by propagating and transforming information across the network

    • Each node defines its own computation graph (based on its neighborhood)

    • Train on a set of nodes, i.e. a batch of computation graphs

      • Backpropagate through the computation graphs

      • Scales well (small number of parameters, shared across all nodes)
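
A minimal sketch of one such computation-graph step in PyTorch (my own toy example, not the talk's code): a single mean-aggregation layer over the 4-node graph from the problem setup, a mini-batch of two target nodes, and a backward pass. The names A, X, W, batch, labels and the 2-class loss are illustrative assumptions.

```python
import torch

A = torch.tensor([[0., 1., 1., 0.],           # adjacency matrix of the toy graph
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
X = torch.eye(4)                              # one-hot node features
W = torch.randn(4, 2, requires_grad=True)     # shared weights: parameter count independent of graph size

deg = A.sum(dim=1, keepdim=True)
H = torch.relu((A / deg) @ X @ W)             # each node averages its neighbours' features, then applies W

batch = [0, 2]                                # a batch of target nodes = a batch of computation graphs
labels = torch.tensor([0, 1])
loss = torch.nn.functional.cross_entropy(H[batch], labels)
loss.backward()                               # backprop through the computation graphs fills W.grad
```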

  • Benefits

    • No manual feature engineering is needed

    • End-to-end learning produces features optimized for the task

    • Any graph ML task

      • Node-level, link-level, graph-level

    • Scalable

  • Key idea

    • GNNs adapt to the shape of the data

      • Other deep learning architectures assume a fixed-size input

      • GNNs make no such assumption (see the sketch below)
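
A minimal sketch of that flexibility, assuming numpy: the same weight matrix W (an arbitrary 3-in, 2-out choice of mine) is applied unchanged to a 3-node triangle and a 5-node path; only the number of output rows changes with the graph.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 3, 2
W = rng.normal(size=(d_in, d_out))            # one shared weight matrix, reused for every graph

def gnn_layer(A, X, W):
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum((A / deg) @ X @ W, 0)   # mean-aggregate neighbours, transform, ReLU

A_small = np.ones((3, 3)) - np.eye(3)                          # 3-node triangle
A_large = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)     # 5-node path
print(gnn_layer(A_small, rng.normal(size=(3, d_in)), W).shape) # (3, 2)
print(gnn_layer(A_large, rng.normal(size=(5, d_in)), W).shape) # (5, 2)
```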
