Graph neural network


A graph neural network (GNN) is a class of artificial neural networks for processing data that can be represented as graphs.[1][2][3][4][5]

File:GNN building blocks.png
Basic building blocks of a graph neural network (GNN). (1) Permutation equivariant layer. (2) Local pooling layer. (3) Global pooling (or readout) layer. Colors indicate features.

In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.[6] Convolutional neural networks, in the context of computer vision, can be seen as a GNN applied to graphs structured as grids of pixels. Transformers, in the context of natural language processing, can be seen as GNNs applied to complete graphs whose nodes are words in a sentence.

The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Since their inception, several different GNN architectures have been proposed,[2][3][7][8][9] which implement different flavors of message passing,[6] starting with recursive[2] or convolutional constructive[3] approaches. As of 2022, whether it is possible to define GNN architectures "going beyond" message passing, or whether every GNN can be built on message passing over suitably defined graphs, is an open research question.[10]

Relevant application domains for GNNs include natural language processing,[11] social networks,[12] citation networks,[13] molecular biology,[14] chemistry,[15] physics[16] and NP-hard combinatorial optimization problems.[17]

Several open source libraries implementing graph neural networks are available, such as PyTorch Geometric[18] (PyTorch), TensorFlow GNN[19] (TensorFlow), and jraph[20] (Google JAX).

Architecture

The architecture of a generic GNN implements the following fundamental layers:[6]

  1. Permutation equivariant: a permutation equivariant layer maps a representation of a graph into an updated representation of the same graph. In the literature, permutation equivariant layers are implemented via pairwise message passing between graph nodes.[6][10] Intuitively, in a message passing layer, nodes update their representations by aggregating the messages received from their immediate neighbours. As such, each message passing layer increases the receptive field of the GNN by one hop.
  2. Local pooling: a local pooling layer coarsens the graph via downsampling. Local pooling is used to increase the receptive field of a GNN, in a similar fashion to pooling layers in convolutional neural networks. Examples include k-nearest neighbours pooling, top-k pooling,[21] and self-attention pooling.[22]
  3. Global pooling: a global pooling layer, also known as readout layer, provides a fixed-size representation of the whole graph. The global pooling layer must be permutation invariant, such that permutations in the ordering of graph nodes and edges do not alter the final output.[23] Examples include element-wise sum, mean or maximum.
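The permutation invariance of a sum readout can be checked directly. The following is a minimal sketch (the function name `global_sum_pool` is illustrative, not from any particular library):

```python
import numpy as np

def global_sum_pool(H):
    """Sum readout: collapse a (num_nodes, d) node-feature matrix into
    a single d-dimensional graph representation."""
    return H.sum(axis=0)

# Node features for a 4-node graph (d = 2).
H = np.array([[1.0, 0.0],
              [0.5, 2.0],
              [3.0, 1.0],
              [0.0, 4.0]])

# Reordering the nodes does not change the readout.
assert np.allclose(global_sum_pool(H), global_sum_pool(H[[2, 0, 3, 1]]))
```

Replacing `sum` with `mean` or `max` along the node axis gives the other common readouts, all equally permutation invariant.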

It has been demonstrated that GNNs cannot be more expressive than the Weisfeiler–Lehman graph isomorphism test.[24][25] In practice, this means that there exist different graph structures (e.g., molecules with the same atoms but different bonds) that cannot be distinguished by GNNs. More powerful GNNs operating on higher-dimensional geometries such as simplicial complexes can be designed.[26] As of 2022, whether or not future architectures will overcome the message passing primitive is an open research question.[10]

File:GNN representational limits.png
Non-isomorphic graphs that cannot be distinguished by a GNN due to the limitations of the Weisfeiler-Lehman Graph Isomorphism Test. Colors indicate node features.

Message passing layers

File:Message Passing Neural Network.png
Node representation update in a Message Passing Neural Network (MPNN) layer. Node \mathbf{x}_0 receives messages sent by all of its immediate neighbours \mathbf{x}_1 to \mathbf{x}_4. Messages are computed via the message function \psi, which accounts for the features of both senders and receiver.

Message passing layers are permutation-equivariant layers mapping a graph into an updated representation of the same graph. Formally, they can be expressed as message passing neural networks (MPNNs).[6]

Let G = (V,E) be a graph, where V is the node set and E is the edge set. Let N_u be the neighbourhood of some node u \in V. Additionally, let \mathbf{x}_u be the features of node u \in V, and \mathbf{e}_{uv} be the features of edge (u, v) \in E. An MPNN layer can be expressed as follows:[6]

\mathbf{h}_u = \phi \left( \mathbf{x}_u, \bigoplus_{v \in N_u} \psi(\mathbf{x}_u, \mathbf{x}_v, \mathbf{e}_{uv}) \right)

where \phi and \psi are differentiable functions (e.g., artificial neural networks), and \bigoplus is a permutation invariant aggregation operator that can accept an arbitrary number of inputs (e.g., element-wise sum, mean, or max). In particular, \phi and \psi are referred to as update and message functions, respectively. Intuitively, in an MPNN computational block, graph nodes update their representations by aggregating the messages received from their neighbours.
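As an illustration, the MPNN layer above can be sketched in a few lines of Python with a sum aggregator. The function names (`mpnn_layer`, `psi`, `phi`) and the edge-list format are assumptions for this sketch, and edge features \mathbf{e}_{uv} are omitted for brevity:

```python
import numpy as np

def mpnn_layer(X, edges, psi, phi):
    """One MPNN layer with sum aggregation (edge features omitted).
    X: (n, d) node features; edges: directed (receiver, sender) pairs."""
    M = np.zeros_like(X)
    for u, v in edges:
        M[u] += psi(X[u], X[v])  # aggregate messages arriving at node u
    return np.stack([phi(X[u], M[u]) for u in range(X.shape[0])])

# Path graph 0-1-2 with one-hot node features, edges in both directions.
X = np.eye(3)
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
psi = lambda x_u, x_v: x_v        # message: the sender's features
phi = lambda x_u, m_u: x_u + m_u  # update: own features plus messages
H = mpnn_layer(X, edges, psi, phi)
```

With these toy choices of \psi and \phi, node 1 ends up with the summed features of itself and both of its neighbours, while node 0 only sees itself and node 1.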

The outputs of one or more MPNN layers are node representations \mathbf{h}_u for each node u \in V in the graph. Node representations can be employed for any downstream task, such as node/graph classification or edge prediction.

Graph nodes in an MPNN update their representation by aggregating information from their immediate neighbours. As such, stacking n MPNN layers means that one node will be able to communicate with nodes that are at most n "hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graph diameter. However, stacking many MPNN layers may cause issues such as oversmoothing[27] and oversquashing.[28] Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck that is created by squeezing long-range dependencies into fixed-size representations. Countermeasures such as skip connections[8][29] (as in residual neural networks), gated update rules[30] and jumping knowledge[31] can mitigate oversmoothing. Modifying the final layer to be a fully-adjacent layer, i.e., by considering the graph as a complete graph, can mitigate oversquashing in problems where long-range dependencies are required.[28]
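The hop-counting behaviour can be verified on a toy path graph. The identity-style message and update step below is an illustrative assumption, chosen so that a nonzero entry simply means "information has arrived":

```python
import numpy as np

# Path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def layer(H):
    """Toy message-passing step: keep own state, add neighbours' states."""
    return H + A @ H

H = np.array([[1.0], [0.0], [0.0], [0.0]])  # signal only at node 0
H2 = layer(layer(H))  # two layers -> information travels two hops
```

After two layers, node 2 (two hops from node 0) carries a nonzero value while node 3 (three hops away) is still zero; a third layer reaches it.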

Other "flavours" of MPNN have been developed in the literature,[6] such as graph convolutional networks[7] and graph attention networks,[9] whose definitions can be expressed in terms of the MPNN formalism.

Graph convolutional network

The graph convolutional network (GCN) was first introduced by Thomas Kipf and Max Welling in 2017.[7]

A GCN layer defines a first-order approximation of a localized spectral filter on graphs. GCNs can be understood as a generalization of convolutional neural networks to graph-structured data.

The formal expression of a GCN layer reads as follows:

\mathbf{H} = \sigma\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X} \mathbf{\Theta}\right)

where \mathbf{H} is the matrix of node representations \mathbf{h}_u, \mathbf{X} is the matrix of node features \mathbf{x}_u, \sigma(\cdot) is an activation function (e.g., ReLU), \tilde{\mathbf{A}} is the graph adjacency matrix with the addition of self-loops, \tilde{\mathbf{D}} is the graph degree matrix with the addition of self-loops, and \mathbf{\Theta} is a matrix of trainable parameters.

In particular, let \mathbf{A} be the graph adjacency matrix: then, one can define \tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I} and \tilde{\mathbf{D}}_{ii} = \sum_{j \in V} \tilde{A}_{ij}, where \mathbf{I} denotes the identity matrix. This normalization ensures that the eigenvalues of \tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}} are bounded in the range [0, 1], avoiding numerical instabilities and exploding/vanishing gradients.
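The propagation rule can be sketched with dense numpy matrices as follows. This is a minimal illustration (names are assumptions; practical implementations such as PyTorch Geometric use sparse operations):

```python
import numpy as np

def gcn_layer(X, A, Theta, act=lambda z: np.maximum(z, 0.0)):
    """Dense sketch of one GCN layer: act(D~^(-1/2) A~ D~^(-1/2) X Theta)."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees incl. self-loops
    D_inv_sqrt = np.diag(d ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # normalized adjacency
    return act(A_hat @ X @ Theta)

# Triangle graph: with self-loops every node has degree 3, so every
# entry of the normalized adjacency is 1/3.
A = np.ones((3, 3)) - np.eye(3)
H = gcn_layer(np.eye(3), A, np.eye(3))
```

With identity features and identity weights, the output is exactly the normalized adjacency matrix, which makes the averaging behaviour of the layer easy to inspect.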

A limitation of GCNs is that they do not allow multidimensional edge features \mathbf{e}_{uv}.[7] It is however possible to associate scalar weights w_{uv} to each edge by imposing A_{uv} = w_{uv}, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge.

Graph attention network

The graph attention network (GAT) was introduced by Petar Veličković et al. in 2018.[9]

A graph attention network combines a graph neural network with an attention mechanism. The attention layer lets the network weight each neighbour's contribution by its learned relevance, rather than treating all neighbours uniformly.

A multi-head GAT layer can be expressed as follows:

\mathbf{h}_u = \overset{K}{\underset{k=1}{\Big\Vert}} \sigma \left(\sum_{v \in N_u} \alpha_{uv} \mathbf{W}^k \mathbf{x}_v\right)


where K is the number of attention heads, \Big\Vert denotes vector concatenation, \sigma(\cdot) is an activation function (e.g., ReLU), \alpha_{uv} are attention coefficients, and \mathbf{W}^k is a matrix of trainable parameters for the k-th attention head.

For the final GAT layer, the outputs from each attention head are averaged before the application of the activation function. Formally, the final GAT layer can be written as:

 \mathbf{h}_u = \sigma \left(\frac{1}{K}\sum_{k=1}^K \sum_{v \in N_u} \alpha_{uv} \mathbf{W}^k \mathbf{x}_v\right)

Attention in machine learning is a technique that mimics cognitive attention. In the context of learning on graphs, the attention coefficient \alpha_{uv} measures how important node v \in V is to node u \in V.

Normalized attention coefficients are computed as follows:

\alpha_{uv} = \frac{\exp(\text{LeakyReLU}\left(\mathbf{a}^T [\mathbf{W} \mathbf{h}_u \Vert \mathbf{W} \mathbf{h}_v \Vert \mathbf{e}_{uv}]\right))}{\sum_{z \in N_u}\exp(\text{LeakyReLU}\left(\mathbf{a}^T [\mathbf{W} \mathbf{h}_u \Vert \mathbf{W} \mathbf{h}_z \Vert \mathbf{e}_{uz}]\right))}


where \mathbf{a} is a vector of learnable weights, \cdot^T indicates transposition, \Vert denotes vector concatenation, and \text{LeakyReLU}(\cdot) is a modified ReLU activation function. Attention coefficients are normalized to make them easily comparable across different nodes.[9]
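The computation of normalized attention coefficients for a single head can be sketched as follows. Edge features are omitted for brevity, and the function names are illustrative assumptions:

```python
import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def gat_attention(H, W, a, neighbours):
    """Normalized attention coefficients alpha_uv for one attention head
    (edge features omitted; names are illustrative)."""
    Z = H @ W.T                                   # W h for every node
    alpha = {}
    for u, nbrs in neighbours.items():
        scores = np.array([leaky_relu(a @ np.concatenate([Z[u], Z[v]]))
                           for v in nbrs])
        e = np.exp(scores - scores.max())         # numerically stable softmax
        alpha[u] = e / e.sum()
    return alpha

# Three nodes; node 0 attends over neighbours 1 and 2.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
alpha = gat_attention(H, np.eye(2), np.ones(4), {0: [1, 2]})
```

The softmax normalization guarantees that the coefficients over each neighbourhood sum to one, which is what makes them comparable across nodes.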

A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weights w_{uv}.

Gated graph sequence neural network

The gated graph sequence neural network (GGS-NN) was introduced by Yujia Li et al. in 2015.[30] The GGS-NN extends the GNN formulation by Scarselli et al.[2] to output sequences. The message passing framework is implemented as an update rule to a gated recurrent unit (GRU) cell.

A GGS-NN can be expressed as follows:

\mathbf{h}_u^{(0)} = \mathbf{x}_u \, \Vert \, \mathbf{0}

\mathbf{m}_u^{(l+1)} = \sum_{v \in N_u} \mathbf{\Theta} \mathbf{h}_v^{(l)}

\mathbf{h}_u^{(l+1)} = \text{GRU}(\mathbf{m}_u^{(l+1)}, \mathbf{h}_u^{(l)})


where \Vert denotes vector concatenation, \mathbf{0} is a vector of zeros, \mathbf{\Theta} is a matrix of learnable parameters, \text{GRU} is a gated recurrent unit cell, and l denotes the sequence index. In a GGS-NN, the node representations are regarded as the hidden states of a GRU cell. The initial node features \mathbf{x}_u are zero-padded up to the hidden state dimension of the GRU cell. The same GRU cell is used to update the representation of every node.
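A minimal numpy sketch of one GGS-NN step follows; the parameter names and the bias-free GRU formulation are simplifying assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_cell(m, h, P):
    """Minimal GRU update; P holds the six weight matrices (no biases)."""
    z = sigmoid(P["Wz"] @ m + P["Uz"] @ h)             # update gate
    r = sigmoid(P["Wr"] @ m + P["Ur"] @ h)             # reset gate
    h_cand = np.tanh(P["Wh"] @ m + P["Uh"] @ (r * h))  # candidate state
    return (1 - z) * h + z * h_cand

def ggsnn_layer(H, neighbours, Theta, P):
    """One GGS-NN step: message m_u = sum of Theta h_v, then GRU update.
    The same GRU parameters P are shared by every node."""
    H_new = np.empty_like(H)
    for u, nbrs in neighbours.items():
        m = sum((Theta @ H[v] for v in nbrs), np.zeros(H.shape[1]))
        H_new[u] = gru_cell(m, H[u], P)
    return H_new

# 1-d input features, hidden size 2: h_u^(0) = x_u || 0 (zero-padding).
X = np.array([[1.0], [2.0]])
H0 = np.hstack([X, np.zeros((2, 1))])
P = {k: np.eye(2) for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}
H1 = ggsnn_layer(H0, {0: [1], 1: [0]}, np.eye(2), P)
```

Repeating `ggsnn_layer` with the same parameters produces the sequence of hidden states that the GGS-NN reads its outputs from.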

Local pooling layers

Local pooling layers coarsen the graph via downsampling. We present here several learnable local pooling strategies that have been proposed.[23] In each case, the input graph is represented by a matrix \mathbf{X} of node features and the graph adjacency matrix \mathbf{A}. The output is a new matrix \mathbf{X}' of node features and a new graph adjacency matrix \mathbf{A}'.

Top-k pooling

We first set

\mathbf{y} = \frac{\mathbf{X}\mathbf{p}}{\Vert\mathbf{p}\Vert}

where \mathbf{p} is a learnable projection vector. The projection vector \mathbf{p} computes a scalar projection value for each graph node.

The top-k pooling layer[21] can then be formalised as follows:

\mathbf{X}' = (\mathbf{X} \odot \text{sigmoid}(\mathbf{y}))_{\mathbf{i}}

\mathbf{A}' = \mathbf{A}_{\mathbf{i}, \mathbf{i}}

where \mathbf{i} = \text{top}_k(\mathbf{y}) is the subset of nodes with the top-k highest projection scores, \odot denotes element-wise matrix multiplication, and \text{sigmoid}(\cdot) is the sigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrix \mathbf{A}'. The \text{sigmoid}(\cdot) operation makes the projection vector \mathbf{p} trainable by backpropagation, which would otherwise produce discrete outputs.[21]
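The selection-and-gating step can be sketched directly from the formulas above (function names are illustrative; in practice \mathbf{p} is learned, here it is fixed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def top_k_pool(X, A, p, k):
    """Top-k pooling: keep the k nodes with the highest projection
    score y = Xp / ||p||, gating their features with sigmoid(y)."""
    y = X @ p / np.linalg.norm(p)
    i = np.argsort(y)[-k:]                  # indices of the k top nodes
    X_new = X[i] * sigmoid(y[i])[:, None]   # gated, selected features
    A_new = A[np.ix_(i, i)]                 # induced subgraph adjacency
    return X_new, A_new

X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 0.0], [3.0, 1.0]])
A = np.ones((4, 4)) - np.eye(4)   # complete graph on four nodes
p = np.array([1.0, 0.0])          # learnable in practice; fixed here
X_new, A_new = top_k_pool(X, A, p, k=2)
```

The pooled adjacency matrix is simply the subgraph induced by the retained nodes, which is why pooling a complete graph yields a smaller complete graph.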

Self-attention pooling

We first set

\mathbf{y} = \text{GNN}(\mathbf{X}, \mathbf{A})

where \text{GNN} is a generic permutation equivariant GNN layer (e.g., GCN, GAT, MPNN).

The self-attention pooling layer[22] can then be formalised as follows:

\mathbf{X}' = (\mathbf{X} \odot \mathbf{y})_{\mathbf{i}}

\mathbf{A}' = \mathbf{A}_{\mathbf{i}, \mathbf{i}}

where \mathbf{i} = \text{top}_k(\mathbf{y}) is the subset of nodes with the top-k highest attention scores, and \odot denotes element-wise matrix multiplication.

The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Differently from top-k pooling, the self-attention scores computed in self-attention pooling account both for the graph features and the graph topology.
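The difference from top-k pooling is only in where the scores come from. The sketch below uses a toy neighbourhood-averaging scorer as a stand-in for the scoring GNN (an illustrative assumption, not the scorer from [22]):

```python
import numpy as np

def self_attention_pool(X, A, gnn_score, k):
    """SAGPool-style pooling: scores come from a GNN over (X, A),
    so they depend on topology as well as node features."""
    y = gnn_score(X, A)            # (n,) attention scores
    i = np.argsort(y)[-k:]         # keep the k highest-scoring nodes
    return X[i] * y[i][:, None], A[np.ix_(i, i)]

def mean_score(X, A):
    """Toy stand-in for the scoring GNN: mean feature mass over the
    self-loop-augmented neighbourhood (illustrative assumption)."""
    A_hat = A + np.eye(len(A))
    return (A_hat @ X).sum(axis=1) / A_hat.sum(axis=1)

X = np.array([[1.0], [2.0], [3.0]])
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # path graph 0-1-2
X_new, A_new = self_attention_pool(X, A, mean_score, k=2)
```

Because the scorer aggregates over neighbourhoods, rewiring the graph changes the scores even when node features are unchanged, which is exactly the property that distinguishes self-attention pooling from top-k pooling.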

Applications

Protein folding


Graph neural networks are one of the main building blocks of AlphaFold, an artificial intelligence program developed by Google's DeepMind for solving the protein folding problem in biology. AlphaFold achieved first place in several CASP competitions.[32][33][31]

Social networks


Social networks are a major application domain for GNNs due to their natural representation as social graphs. GNNs are used to develop recommender systems based on both social relations and item relations.[34][12]

Combinatorial optimization


GNNs are used as fundamental building blocks for several combinatorial optimization algorithms.[35] Examples include computing shortest paths or Eulerian circuits for a given graph,[30] deriving chip placements superior or competitive to handcrafted human solutions,[36] and improving expert-designed branching rules in branch and bound.[37]

Cyber security


When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate to malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes[38] and within paths[39] to detect malicious processes, or on the edge level[40] to detect lateral movement.
