topobench.nn.backbones.graph package#

Graph backbones with automated exports.

class GPSEncoder(input_dim, hidden_dim, num_layers=4, heads=4, dropout=0.1, attn_type='multihead', local_conv_type='gin', use_edge_attr=False, redraw_interval=None, attn_kwargs=None)#

Bases: Module

GPS Encoder that can be used with the training framework.

Uses the official PyTorch Geometric GPSConv implementation. This encoder combines local message passing with global attention mechanisms for graph representation learning.

Parameters:
input_dim : int

Dimension of input node features.

hidden_dim : int

Dimension of hidden layers.

num_layers : int, optional

Number of GPS layers. Default is 4.

heads : int, optional

Number of attention heads in GPSConv layers. Default is 4.

dropout : float, optional

Dropout rate for GPSConv layers. Default is 0.1.

attn_type : str, optional

Type of attention mechanism to use. Options are ‘multihead’, ‘performer’, etc. Default is ‘multihead’.

local_conv_type : str, optional

Type of local message passing layer. Options are ‘gin’, ‘pna’, etc. Default is ‘gin’.

use_edge_attr : bool, optional

Whether to use edge attributes in GPSConv layers. Default is False.

redraw_interval : int or None, optional

Interval for redrawing random projections in Performer attention. If None, projections are not redrawn. Default is None.

attn_kwargs : dict, optional

Additional keyword arguments for the attention mechanism.

__init__(input_dim, hidden_dim, num_layers=4, heads=4, dropout=0.1, attn_type='multihead', local_conv_type='gin', use_edge_attr=False, redraw_interval=None, attn_kwargs=None)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index, batch=None, edge_attr=None, **kwargs)#

Forward pass of GPS encoder.

Parameters:
x : torch.Tensor

Node feature matrix of shape [num_nodes, input_dim].

edge_index : torch.Tensor

Edge indices of shape [2, num_edges].

batch : torch.Tensor, optional

Batch vector assigning each node to a specific graph. Shape [num_nodes]. Default is None.

edge_attr : torch.Tensor, optional

Edge feature matrix of shape [num_edges, edge_dim]. Default is None.

**kwargs : dict

Additional arguments (not used).

Returns:
torch.Tensor

Output node feature matrix of shape [num_nodes, hidden_dim].
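
A minimal usage sketch, assuming GPSEncoder is importable from this package; the import path, toy graph, and chosen sizes below are illustrative, not part of the documented API:

    import torch
    from topobench.nn.backbones.graph import GPSEncoder

    # Toy graph: 4 nodes with 16-dimensional features and 3 undirected edges.
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                               [1, 0, 2, 1, 3, 2]])
    batch = torch.zeros(4, dtype=torch.long)  # all nodes belong to graph 0

    encoder = GPSEncoder(input_dim=16, hidden_dim=64, num_layers=2, heads=4)
    out = encoder(x, edge_index, batch=batch)  # expected shape: [4, 64]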

class GraphMLP(in_channels, hidden_channels, order=1, dropout=0.0, **kwargs)#

Bases: Module

Graph MLP backbone.

Parameters:
in_channels : int

Number of input features.

hidden_channels : int

Number of hidden units.

order : int, optional

To compute order-th power of adj matrix (default: 1).

dropout : float, optional

Dropout rate (default: 0.0).

**kwargs

Additional arguments.

__init__(in_channels, hidden_channels, order=1, dropout=0.0, **kwargs)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)#

Forward pass.

Parameters:
x : torch.Tensor

Input tensor.

Returns:
torch.Tensor

Output tensor.
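
A minimal usage sketch, assuming GraphMLP is importable from this package. Note that forward takes only the node features; the order-th adjacency power is typically consumed by an auxiliary neighbour-contrastive objective computed outside this module:

    import torch
    from topobench.nn.backbones.graph import GraphMLP

    model = GraphMLP(in_channels=16, hidden_channels=64, order=2, dropout=0.5)
    x = torch.randn(10, 16)  # 10 nodes, 16 input features each
    out = model(x)           # node embeddings; no edge_index is passed here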

class IdentityGAT(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#

Bases: Module

Graph Attention Network (GAT) with identity activation function.

Parameters:
in_channels : int

Number of input features.

hidden_channels : int

Number of hidden units.

out_channels : int

Number of output features.

num_layers : int

Number of layers.

norm : torch.nn.Module

Normalization layer.

heads : int, optional

Number of attention heads. Defaults to 1.

dropout : float, optional

Dropout rate. Defaults to 0.0.

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
x : torch.Tensor

Input node features.

edge_index : torch.Tensor

Edge indices.

Returns:
torch.Tensor

Output node features.
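
A minimal usage sketch, assuming IdentityGAT is importable from this package; IdentityGCN, IdentityGIN, and IdentitySAGE below follow the same pattern but omit the heads argument. The norm instance and its dimension are illustrative assumptions:

    import torch
    from torch.nn import BatchNorm1d
    from topobench.nn.backbones.graph import IdentityGAT

    norm = BatchNorm1d(64)  # normalization layer, assumed to match hidden_channels
    model = IdentityGAT(in_channels=16, hidden_channels=64, out_channels=7,
                        num_layers=3, norm=norm, heads=4, dropout=0.5)

    x = torch.randn(10, 16)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
    out = model(x, edge_index)  # expected shape: [10, 7]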

class IdentityGCN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Bases: Module

Graph Convolutional Network (GCN) with identity activation function.

Parameters:
in_channels : int

Number of input features.

hidden_channels : int

Number of hidden units.

out_channels : int

Number of output features.

num_layers : int

Number of layers.

norm : torch.nn.Module

Normalization layer.

dropout : float, optional

Dropout rate. Defaults to 0.0.

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
x : torch.Tensor

Input node features.

edge_index : torch.Tensor

Edge indices.

Returns:
torch.Tensor

Output node features.

class IdentityGIN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Bases: Module

Graph Isomorphism Network (GIN) with identity activation function.

Parameters:
in_channels : int

Number of input features.

hidden_channels : int

Number of hidden units.

out_channels : int

Number of output features.

num_layers : int

Number of layers.

norm : torch.nn.Module

Normalization layer.

dropout : float, optional

Dropout rate. Defaults to 0.0.

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
x : torch.Tensor

Input node features.

edge_index : torch.Tensor

Edge indices.

Returns:
torch.Tensor

Output node features.

class IdentitySAGE(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Bases: Module

GraphSAGE with identity activation function.

Parameters:
in_channels : int

Number of input features.

hidden_channels : int

Number of hidden units.

out_channels : int

Number of output features.

num_layers : int

Number of layers.

norm : torch.nn.Module

Normalization layer.

dropout : float, optional

Dropout rate. Defaults to 0.0.

__init__(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index)#

Forward pass.

Parameters:
x : torch.Tensor

Input node features.

edge_index : torch.Tensor

Edge indices.

Returns:
torch.Tensor

Output node features.

class Mlp(input_dim, hid_dim, dropout)#

Bases: Module

MLP module.

Parameters:
input_dim : int

Input dimension.

hid_dim : int

Hidden dimension.

dropout : float

Dropout rate.

__init__(input_dim, hid_dim, dropout)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x)#

Forward pass.

Parameters:
x : torch.Tensor

Input tensor.

Returns:
torch.Tensor

Output tensor.
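
A minimal usage sketch, assuming Mlp is importable from this package (it appears to be the feed-forward block used inside GraphMLP):

    import torch
    from topobench.nn.backbones.graph import Mlp

    mlp = Mlp(input_dim=16, hid_dim=64, dropout=0.1)
    x = torch.randn(10, 16)
    out = mlp(x)  # transformed node features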

class NSDEncoder(input_dim, hidden_dim, num_layers=2, sheaf_type='diag', d=2, dropout=0.1, input_dropout=0.1, device='cpu', sheaf_act='tanh', orth='cayley', **kwargs)#

Bases: Module

Neural Sheaf Diffusion Encoder that can be used with the training framework.

This encoder learns representations using sheaf structure with node and edge stalks communicating via transport maps / restriction maps. Supports three types of sheaf structures: diagonal, bundle, and general.

Parameters:
input_dim : int

Dimension of input node features.

hidden_dim : int

Dimension of hidden layers. Must be divisible by d.

num_layers : int, optional

Number of sheaf diffusion layers. Default is 2.

sheaf_type : str, optional

Type of sheaf structure. Options are ‘diag’, ‘bundle’, or ‘general’. Default is ‘diag’.

d : int, optional

Dimension of the stalk space. For ‘diag’, d >= 1. For ‘bundle’ and ‘general’, d > 1. Default is 2.

dropout : float, optional

Dropout rate for hidden layers. Default is 0.1.

input_dropout : float, optional

Dropout rate for input layer. Default is 0.1.

device : str, optional

Device to run the model on (‘cpu’ or ‘cuda’). Default is ‘cpu’.

sheaf_act : str, optional

Activation function for sheaf learning. Options are ‘tanh’, ‘elu’, ‘id’. Default is ‘tanh’.

orth : str, optional

Orthogonalization method for bundle sheaf type. Options are ‘cayley’ or ‘matrix_exp’. Default is ‘cayley’.

**kwargs : dict

Additional keyword arguments (not used).

__init__(input_dim, hidden_dim, num_layers=2, sheaf_type='diag', d=2, dropout=0.1, input_dropout=0.1, device='cpu', sheaf_act='tanh', orth='cayley', **kwargs)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, edge_index, edge_attr=None, edge_weight=None, batch=None, **kwargs)#

Forward pass of Neural Sheaf Diffusion encoder.

Parameters:
x : torch.Tensor

Node feature matrix of shape [num_nodes, input_dim].

edge_index : torch.Tensor

Edge indices of shape [2, num_edges]. Will be automatically converted to undirected.

edge_attr : torch.Tensor, optional

Edge feature matrix (not used). Default is None.

edge_weight : torch.Tensor, optional

Edge weights (not used). Default is None.

batch : torch.Tensor, optional

Batch vector assigning each node to a specific graph (not used). Default is None.

**kwargs : dict

Additional arguments (not used).

Returns:
torch.Tensor

Output node feature matrix of shape [num_nodes, hidden_dim].

get_sheaf_model()#

Get the underlying sheaf model.

Returns:
SheafDiffusion

The sheaf diffusion model instance.
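
A minimal usage sketch, assuming NSDEncoder is importable from this package; sizes and the toy graph are illustrative. Note that hidden_dim must be divisible by the stalk dimension d (here 64 / 2 = 32):

    import torch
    from topobench.nn.backbones.graph import NSDEncoder

    encoder = NSDEncoder(input_dim=16, hidden_dim=64, num_layers=2,
                         sheaf_type='bundle', d=2, device='cpu')

    x = torch.randn(10, 16)
    edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # made undirected internally
    out = encoder(x, edge_index)       # expected shape: [10, 64]

    sheaf = encoder.get_sheaf_model()  # underlying SheafDiffusion instance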

class RedrawProjection(model, redraw_interval=None)#

Bases: object

Helper class to handle redrawing of random projections in Performer attention.

This is crucial for maintaining the quality of the random feature approximation.

Parameters:
model : torch.nn.Module

The model containing PerformerAttention modules.

redraw_interval : int or None, optional

Interval for redrawing random projections. If None, projections are not redrawn. Default is None.

__init__(model, redraw_interval=None)#

redraw_projections()#

Redraw random projections in PerformerAttention modules if needed.

Returns:
None

None.
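
A minimal usage sketch, assuming RedrawProjection is importable from this package and wraps a GPSEncoder built with attn_type='performer':

    from topobench.nn.backbones.graph import GPSEncoder, RedrawProjection

    encoder = GPSEncoder(input_dim=16, hidden_dim=64, attn_type='performer')
    redraw = RedrawProjection(model=encoder, redraw_interval=100)

    # Call once per optimization step inside the training loop; projections
    # are typically only redrawn every redraw_interval steps while the model
    # is in training mode.
    redraw.redraw_projections()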

Subpackages#

Submodules#