topobench.nn.backbones.graph.identity_gnn module#

This module contains the implementation of identity GNNs: variants of standard GNN backbones (GAT, GCN, GIN, GraphSAGE) that use the identity activation function.

class GAT(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#

Bases: BasicGNN

The Graph Neural Network from the “Graph Attention Networks” or “How Attentive are Graph Attention Networks?” papers, using the GATConv or GATv2Conv operator for message passing, respectively.

Parameters:
  • in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • hidden_channels (int) – Size of each hidden sample.

  • num_layers (int) – Number of message passing layers.

  • out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)

  • v2 (bool, optional) – If set to True, will make use of GATv2Conv rather than GATConv. (default: False)

  • dropout (float, optional) – Dropout probability. (default: 0.)

  • act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")

  • act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)

  • act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)

  • norm (str or Callable, optional) – The normalization function to use. (default: None)

  • norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)

  • jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.GATConv or torch_geometric.nn.conv.GATv2Conv.

init_conv(in_channels, out_channels, **kwargs)#
supports_edge_attr: Final[bool] = True#
supports_edge_weight: Final[bool] = False#
supports_norm_batch: Final[bool]#
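
A minimal usage sketch; this class matches torch_geometric.nn.models.GAT, and the graph size and feature dimensions below are illustrative, not taken from this page:

    import torch
    from torch_geometric.nn.models import GAT

    x = torch.randn(100, 16)                      # 100 nodes, 16 features each
    edge_index = torch.randint(0, 100, (2, 500))  # 500 random edges

    # v2 and heads are forwarded to GATv2Conv / GATConv via **kwargs;
    # setting out_channels adds a final projection to 7 output features.
    model = GAT(in_channels=16, hidden_channels=32, num_layers=2,
                out_channels=7, v2=True, heads=4)
    out = model(x, edge_index)  # shape: [100, 7]
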
class GCN(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#

Bases: BasicGNN

The Graph Neural Network from the “Semi-supervised Classification with Graph Convolutional Networks” paper, using the GCNConv operator for message passing.

Parameters:
  • in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.

  • hidden_channels (int) – Size of each hidden sample.

  • num_layers (int) – Number of message passing layers.

  • out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)

  • dropout (float, optional) – Dropout probability. (default: 0.)

  • act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")

  • act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)

  • act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)

  • norm (str or Callable, optional) – The normalization function to use. (default: None)

  • norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)

  • jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.GCNConv.

init_conv(in_channels, out_channels, **kwargs)#
supports_edge_attr: Final[bool] = False#
supports_edge_weight: Final[bool] = True#
supports_norm_batch: Final[bool]#
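
Because supports_edge_weight is True, per-edge weights can be passed to forward. A minimal sketch with illustrative shapes:

    import torch
    from torch_geometric.nn.models import GCN

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))
    edge_weight = torch.rand(500)  # one scalar weight per edge

    model = GCN(in_channels=16, hidden_channels=32, num_layers=2,
                out_channels=7, dropout=0.5)
    out = model(x, edge_index, edge_weight=edge_weight)  # shape: [100, 7]
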
class GIN(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#

Bases: BasicGNN

The Graph Neural Network from the “How Powerful are Graph Neural Networks?” paper, using the GINConv operator for message passing.

Parameters:
  • in_channels (int) – Size of each input sample.

  • hidden_channels (int) – Size of each hidden sample.

  • num_layers (int) – Number of message passing layers.

  • out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)

  • dropout (float, optional) – Dropout probability. (default: 0.)

  • act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")

  • act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)

  • act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)

  • norm (str or Callable, optional) – The normalization function to use. (default: None)

  • norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)

  • jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.GINConv.

init_conv(in_channels, out_channels, **kwargs)#
supports_edge_attr: Final[bool] = False#
supports_edge_weight: Final[bool] = False#
supports_norm_batch: Final[bool]#
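
A minimal sketch showing the jk option; with jk="cat", the per-layer node embeddings are concatenated before the final linear projection to out_channels (shapes illustrative):

    import torch
    from torch_geometric.nn.models import GIN

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))

    model = GIN(in_channels=16, hidden_channels=32, num_layers=3,
                out_channels=7, jk='cat')
    out = model(x, edge_index)  # shape: [100, 7]
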
class GraphSAGE(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#

Bases: BasicGNN

The Graph Neural Network from the “Inductive Representation Learning on Large Graphs” paper, using the SAGEConv operator for message passing.

Parameters:
  • in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • hidden_channels (int) – Size of each hidden sample.

  • num_layers (int) – Number of message passing layers.

  • out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)

  • dropout (float, optional) – Dropout probability. (default: 0.)

  • act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")

  • act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)

  • act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)

  • norm (str or Callable, optional) – The normalization function to use. (default: None)

  • norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)

  • jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.SAGEConv.

init_conv(in_channels, out_channels, **kwargs)#
supports_edge_attr: Final[bool] = False#
supports_edge_weight: Final[bool] = False#
supports_norm_batch: Final[bool]#
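
A minimal sketch; extra keyword arguments such as aggr are forwarded to SAGEConv (shapes illustrative):

    import torch
    from torch_geometric.nn.models import GraphSAGE

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))

    model = GraphSAGE(in_channels=16, hidden_channels=32, num_layers=2,
                      out_channels=7, aggr='max')
    out = model(x, edge_index)  # shape: [100, 7]
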
class IdentityGAT(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#

Bases: Module

Graph Attention Network (GAT) with identity activation function.

Parameters:
  • in_channels (int) – Number of input features.

  • hidden_channels (int) – Number of hidden units.

  • out_channels (int) – Number of output features.

  • num_layers (int) – Number of layers.

  • norm (torch.nn.Module) – Normalization layer.

  • heads (int, optional) – Number of attention heads. (default: 1)

  • dropout (float, optional) – Dropout rate. (default: 0.0)

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
  • x (torch.Tensor) – Input node features.

  • edge_index (torch.Tensor) – Edge indices.

Returns:
  torch.Tensor – Output node features.
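
A minimal usage sketch based on the signature above. The graph shapes and the choice of torch.nn.Identity() for norm are illustrative; any normalization module matching the model's hidden feature size should work:

    import torch
    from topobench.nn.backbones.graph.identity_gnn import IdentityGAT

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))

    # norm is any torch.nn.Module; Identity() avoids assumptions about
    # the feature size it is applied to.
    model = IdentityGAT(in_channels=16, hidden_channels=32, out_channels=7,
                        num_layers=2, norm=torch.nn.Identity(), heads=4)
    out = model(x, edge_index)  # expected shape: [100, 7]
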

class IdentityGCN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Bases: Module

Graph Convolutional Network (GCN) with identity activation function.

Parameters:
  • in_channels (int) – Number of input features.

  • hidden_channels (int) – Number of hidden units.

  • out_channels (int) – Number of output features.

  • num_layers (int) – Number of layers.

  • norm (torch.nn.Module) – Normalization layer.

  • dropout (float, optional) – Dropout rate. (default: 0.0)

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
  • x (torch.Tensor) – Input node features.

  • edge_index (torch.Tensor) – Edge indices.

Returns:
  torch.Tensor – Output node features.
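
A minimal sketch based on the documented signature. It assumes norm is applied to hidden_channels-sized node embeddings (hence BatchNorm1d(32) below); that sizing is an assumption, not something stated on this page:

    import torch
    from topobench.nn.backbones.graph.identity_gnn import IdentityGCN

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))

    # assumption: norm normalizes hidden_channels-sized embeddings
    model = IdentityGCN(in_channels=16, hidden_channels=32, out_channels=7,
                        num_layers=2, norm=torch.nn.BatchNorm1d(32))
    out = model(x, edge_index)  # expected shape: [100, 7]
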

class IdentityGIN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Bases: Module

Graph Isomorphism Network (GIN) with identity activation function.

Parameters:
  • in_channels (int) – Number of input features.

  • hidden_channels (int) – Number of hidden units.

  • out_channels (int) – Number of output features.

  • num_layers (int) – Number of layers.

  • norm (torch.nn.Module) – Normalization layer.

  • dropout (float, optional) – Dropout rate. (default: 0.0)

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
  • x (torch.Tensor) – Input node features.

  • edge_index (torch.Tensor) – Edge indices.

Returns:
  torch.Tensor – Output node features.
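
A minimal sketch mirroring the signature above; LayerNorm(32) again assumes norm acts on hidden_channels-sized embeddings, and the dropout value is illustrative:

    import torch
    from topobench.nn.backbones.graph.identity_gnn import IdentityGIN

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))

    # assumption: norm normalizes hidden_channels-sized embeddings
    model = IdentityGIN(in_channels=16, hidden_channels=32, out_channels=7,
                        num_layers=2, norm=torch.nn.LayerNorm(32), dropout=0.5)
    out = model(x, edge_index)  # expected shape: [100, 7]
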

class IdentitySAGE(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Bases: Module

GraphSAGE with identity activation function.

Parameters:
  • in_channels (int) – Number of input features.

  • hidden_channels (int) – Number of hidden units.

  • out_channels (int) – Number of output features.

  • num_layers (int) – Number of layers.

  • norm (torch.nn.Module) – Normalization layer.

  • dropout (float, optional) – Dropout rate. (default: 0.0)

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
  • x (torch.Tensor) – Input node features.

  • edge_index (torch.Tensor) – Edge indices.

Returns:
  torch.Tensor – Output node features.
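
A minimal sketch based on the documented signature (shapes and the Identity() norm are illustrative):

    import torch
    from topobench.nn.backbones.graph.identity_gnn import IdentitySAGE

    x = torch.randn(100, 16)
    edge_index = torch.randint(0, 100, (2, 500))

    model = IdentitySAGE(in_channels=16, hidden_channels=32, out_channels=7,
                         num_layers=2, norm=torch.nn.Identity())
    out = model(x, edge_index)  # expected shape: [100, 7]
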