topobench.nn.backbones.graph.identity_gnn module#
This module contains the implementation of identity GNNs.
- class GAT(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#
Bases: BasicGNN

The Graph Neural Network from the “Graph Attention Networks” or “How Attentive are Graph Attention Networks?” papers, using the GATConv or GATv2Conv operator for message passing, respectively.

- Parameters:
  - in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
  - hidden_channels (int) – Size of each hidden sample.
  - num_layers (int) – Number of message passing layers.
  - out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)
  - v2 (bool, optional) – If set to True, will make use of GATv2Conv rather than GATConv. (default: False)
  - dropout (float, optional) – Dropout probability. (default: 0.0)
  - act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")
  - act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)
  - act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)
  - norm (str or Callable, optional) – The normalization function to use. (default: None)
  - norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)
  - jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.GATConv or torch_geometric.nn.conv.GATv2Conv.
- init_conv(in_channels, out_channels, **kwargs)#
- class GCN(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#
Bases: BasicGNN

The Graph Neural Network from the “Semi-supervised Classification with Graph Convolutional Networks” paper, using the GCNConv operator for message passing.

- Parameters:
  - in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
  - hidden_channels (int) – Size of each hidden sample.
  - num_layers (int) – Number of message passing layers.
  - out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)
  - dropout (float, optional) – Dropout probability. (default: 0.0)
  - act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")
  - act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)
  - act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)
  - norm (str or Callable, optional) – The normalization function to use. (default: None)
  - norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)
  - jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality; by default, no such transformation is applied. (None, "last", "cat", "max", "lstm"). (default: None)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.GCNConv.
- init_conv(in_channels, out_channels, **kwargs)#
- class GIN(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#
Bases: BasicGNN

The Graph Neural Network from the “How Powerful are Graph Neural Networks?” paper, using the GINConv operator for message passing.

- Parameters:
  - in_channels (int) – Size of each input sample.
  - hidden_channels (int) – Size of each hidden sample.
  - num_layers (int) – Number of message passing layers.
  - out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)
  - dropout (float, optional) – Dropout probability. (default: 0.0)
  - act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")
  - act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)
  - act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)
  - norm (str or Callable, optional) – The normalization function to use. (default: None)
  - norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)
  - jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.GINConv.
- init_conv(in_channels, out_channels, **kwargs)#
- class GraphSAGE(in_channels, hidden_channels, num_layers, out_channels=None, dropout=0.0, act='relu', act_first=False, act_kwargs=None, norm=None, norm_kwargs=None, jk=None, **kwargs)#
Bases: BasicGNN

The Graph Neural Network from the “Inductive Representation Learning on Large Graphs” paper, using the SAGEConv operator for message passing.

- Parameters:
  - in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
  - hidden_channels (int) – Size of each hidden sample.
  - num_layers (int) – Number of message passing layers.
  - out_channels (int, optional) – If not set to None, will apply a final linear transformation to convert hidden node embeddings to output size out_channels. (default: None)
  - dropout (float, optional) – Dropout probability. (default: 0.0)
  - act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")
  - act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)
  - act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)
  - norm (str or Callable, optional) – The normalization function to use. (default: None)
  - norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)
  - jk (str, optional) – The Jumping Knowledge mode. If specified, the model will additionally apply a final linear transformation to transform node embeddings to the expected output feature dimensionality. (None, "last", "cat", "max", "lstm"). (default: None)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.SAGEConv.
- init_conv(in_channels, out_channels, **kwargs)#
- class IdentityGAT(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#
Bases: Module

Graph Attention Network (GAT) with identity activation function.
- Parameters:
- in_channels (int)
  Number of input features.
- hidden_channels (int)
  Number of hidden units.
- out_channels (int)
  Number of output features.
- num_layers (int)
  Number of layers.
- norm (torch.nn.Module)
  Normalization layer.
- heads (int, optional)
  Number of attention heads. Defaults to 1.
- dropout (float, optional)
  Dropout rate. Defaults to 0.0.
- __init__(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x (torch.Tensor)
  Input node features.
- edge_index (torch.Tensor)
  Edge indices.
- Returns:
- torch.Tensor
Output node features.
- class IdentityGCN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Bases: Module

Graph Convolutional Network (GCN) with identity activation function.
- Parameters:
- in_channels (int)
  Number of input features.
- hidden_channels (int)
  Number of hidden units.
- out_channels (int)
  Number of output features.
- num_layers (int)
  Number of layers.
- norm (torch.nn.Module)
  Normalization layer.
- dropout (float, optional)
  Dropout rate. Defaults to 0.0.
- __init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x (torch.Tensor)
  Input node features.
- edge_index (torch.Tensor)
  Edge indices.
- Returns:
- torch.Tensor
Output node features.
- class IdentityGIN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Bases: Module

Graph Isomorphism Network (GIN) with identity activation function.
- Parameters:
- in_channels (int)
  Number of input features.
- hidden_channels (int)
  Number of hidden units.
- out_channels (int)
  Number of output features.
- num_layers (int)
  Number of layers.
- norm (torch.nn.Module)
  Normalization layer.
- dropout (float, optional)
  Dropout rate. Defaults to 0.0.
- __init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x (torch.Tensor)
  Input node features.
- edge_index (torch.Tensor)
  Edge indices.
- Returns:
- torch.Tensor
Output node features.
- class IdentitySAGE(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Bases: Module

GraphSAGE with identity activation function.
- Parameters:
- in_channels (int)
  Number of input features.
- hidden_channels (int)
  Number of hidden units.
- out_channels (int)
  Number of output features.
- num_layers (int)
  Number of layers.
- norm (torch.nn.Module)
  Normalization layer.
- dropout (float, optional)
  Dropout rate. Defaults to 0.0.
- __init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x (torch.Tensor)
  Input node features.
- edge_index (torch.Tensor)
  Edge indices.
- Returns:
- torch.Tensor
Output node features.
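The Identity* classes above share one pattern: a standard message-passing stack in which the usual nonlinearity between layers is replaced by the identity, so only normalization and dropout act between layers. The pattern can be sketched in plain PyTorch; this is a hypothetical illustration, with nn.Linear standing in for the graph convolution operator:

```python
import copy
import torch
from torch import nn

class IdentityActivationStack(nn.Module):
    """Hypothetical sketch of the identity-activation pattern used by
    the Identity* classes: layer -> norm -> identity -> dropout."""

    def __init__(self, in_channels, hidden_channels, out_channels,
                 num_layers, norm, dropout=0.0):
        super().__init__()
        dims = [in_channels] + [hidden_channels] * (num_layers - 1) + [out_channels]
        # nn.Linear stands in for GCNConv/GATConv/GINConv/SAGEConv here.
        self.layers = nn.ModuleList(
            nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        # One fresh copy of the norm layer per hidden representation.
        self.norms = nn.ModuleList(
            copy.deepcopy(norm) for _ in range(num_layers - 1)
        )
        self.act = nn.Identity()        # the defining choice: no nonlinearity
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        for layer, norm in zip(self.layers[:-1], self.norms):
            x = self.dropout(self.act(norm(layer(x))))
        return self.layers[-1](x)       # no norm/activation after last layer

model = IdentityActivationStack(16, 32, 8, num_layers=3,
                                norm=nn.BatchNorm1d(32))
out = model(torch.randn(10, 16))        # shape: [10, 8]
```

With the activation fixed to nn.Identity, stacked layers compose linearly between normalizations, which is what makes these variants useful as controls when studying the role of nonlinearity in GNNs.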