# Overview of Graph Representation Learning

* How do we apply the power of deep learning to more complex domains?
* Graphs are the new frontier of deep learning (they connect things)
  * GNNs: the hottest subfield in ML
    * Flexibility to model complex data
* Graphs
  * Molecules, knowledge graphs, software
  * Applications: recommender systems, neutrino detection, the LHC, fake news detection, drug repurposing, chemistry, computer graphics, VR, robotics, autonomous driving, medicine
* Stanford: deployed technology
* Workshop: representation learning on graphs

Discussion topics

* Tools and frameworks
* Short talks about a wide range of application domains
  * Graphics and vision
  * Fraud and intrusion detection
  * Financial networks
  * Knowledge graphs and reasoning
  * ...
* Software tools

Goal: Representation Learning

* Map nodes to d-dimensional embeddings such that similar nodes in the network are embedded close together
  * Feature representation = embedding
* DL on graphs
  * Input: a network
  * Predictions: node labels, new links, generated graphs and subgraphs
* Why it's hard
  * Networks are complex
    * Arbitrary size and complex topological structure (no spatial locality like grids)
    * No fixed node ordering or reference point
    * Often dynamic
* Problem setup
  * Graph G, vertex set V, adjacency matrix A, node features X
* Key idea: the network is a computation graph
  * Learn how to represent nodes by propagating information through the network
  * Each node defines its own computation graph (its neighborhood)
  * Train on a set of nodes, i.e., a batch of computation graphs
    * Backpropagate through the computation graphs
    * Scales well (small number of parameters, shared across all nodes)
* Benefits
  * No manual feature engineering needed
  * End-to-end learning yields task-optimal features
  * Works for any graph ML task
    * Node-level, link-level, graph-level
  * Scalable
* Key idea
  * A GNN adapts to the shape of the data
    * Other DL architectures assume a fixed input structure (e.g., grids for CNNs, sequences for RNNs)
    * GNNs make no such assumption
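The computation-graph idea above can be sketched in a few lines. This is a minimal numpy illustration (my own, not from the talk) of one mean-aggregation message-passing layer: the function name `gnn_layer` and the example graphs are made up, but it shows how a single shared weight matrix `W` — a small, fixed set of parameters — handles graphs of any size:

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(A, X, W):
    """One mean-aggregation message-passing layer:
    each node averages its own and its neighbors' features,
    then applies a shared linear map W and a ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # per-node degree (>= 1)
    H = (A_hat / deg) @ X @ W               # aggregate, then transform
    return np.maximum(H, 0)                 # ReLU

d_in, d_out = 4, 2
W = rng.normal(size=(d_in, d_out))          # one parameter matrix, shared by all nodes

# The same W handles graphs of different sizes:
A_small = np.array([[0, 1],
                    [1, 0]], dtype=float)   # 2-node graph
A_big = (rng.random((5, 5)) < 0.5).astype(float)
A_big = np.triu(A_big, 1)
A_big = A_big + A_big.T                     # random undirected 5-node graph

H_small = gnn_layer(A_small, rng.normal(size=(2, d_in)), W)
H_big = gnn_layer(A_big, rng.normal(size=(5, d_in)), W)
print(H_small.shape, H_big.shape)           # prints: (2, 2) (5, 2)
```

Note how the parameter count depends only on the feature dimensions, not on the graph size — that is the "small # of parameters" / scalability point from the notes.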
