topobench.nn.backbones.hypergraph package#

Submodules#

topobench.nn.backbones.hypergraph.edgnn module#

Class implementing the EDGNN model.

class topobench.nn.backbones.hypergraph.edgnn.EDGNN(num_features, input_dropout=0.2, dropout=0.2, activation='relu', MLP_num_layers=2, MLP2_num_layers=-1, MLP3_num_layers=-1, All_num_layers=2, edconv_type='EquivSet', restart_alpha=0.5, aggregate='add', normalization='None', AllSet_input_norm=False)[source]#

Bases: Module

EDGNN model.

Parameters:

num_features : int
    Number of input features.

input_dropout : float, optional
    Dropout rate for input features. Defaults to 0.2.

dropout : float, optional
    Dropout rate for hidden layers. Defaults to 0.2.

activation : str, optional
    Activation function. Defaults to 'relu'.

MLP_num_layers : int, optional
    Number of layers in the MLP. Defaults to 2.

MLP2_num_layers : int, optional
    Number of layers in the second MLP. Defaults to -1.

MLP3_num_layers : int, optional
    Number of layers in the third MLP. Defaults to -1.

All_num_layers : int, optional
    Number of EDConv layers. Defaults to 2.

edconv_type : str, optional
    Type of EDConv. Defaults to 'EquivSet'.

restart_alpha : float, optional
    Restart alpha. Defaults to 0.5.

aggregate : str, optional
    Aggregation method. Defaults to 'add'.

normalization : str, optional
    Normalization method. Defaults to 'None'.

AllSet_input_norm : bool, optional
    Whether to normalize input features. Defaults to False.

forward(x, edge_index)[source]#

Forward pass.

Parameters:

x : Tensor
    Input features.

edge_index : LongTensor
    Edge index.

Returns:

Tensor
    Output features.

None
    None object needed for compatibility.

reset_parameters()[source]#

Reset parameters.
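
A minimal usage sketch, assuming x is a float tensor of shape (num_nodes, num_features) and edge_index is a (2, num_incidences) LongTensor whose first row holds node indices and whose second row holds the hyperedge each node belongs to; the toy hypergraph and this incidence convention are illustrative assumptions, not guarantees of the API.

>>> import torch
>>> from topobench.nn.backbones.hypergraph.edgnn import EDGNN
>>> x = torch.randn(5, 8)  # 5 nodes, 8 input features
>>> edge_index = torch.tensor([[0, 1, 2, 2, 3, 4],
...                            [0, 0, 1, 2, 2, 2]])  # assumed (node, hyperedge) incidences
>>> model = EDGNN(num_features=8, All_num_layers=2, edconv_type='EquivSet')
>>> out, aux = model(x, edge_index)  # aux is None, kept for compatibility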

class topobench.nn.backbones.hypergraph.edgnn.EquivSetConv(in_features, out_features, mlp1_layers=1, mlp2_layers=1, mlp3_layers=1, aggr='add', alpha=0.5, dropout=0.0, normalization='None', input_norm=False)[source]#

Bases: Module

Class implementing the Equivariant Set Convolution.

Parameters:

in_features : int
    Number of input features.

out_features : int
    Number of output features.

mlp1_layers : int, optional
    Number of layers in the first MLP. Defaults to 1.

mlp2_layers : int, optional
    Number of layers in the second MLP. Defaults to 1.

mlp3_layers : int, optional
    Number of layers in the third MLP. Defaults to 1.

aggr : str, optional
    Aggregation method. Defaults to 'add'.

alpha : float, optional
    Alpha value. Defaults to 0.5.

dropout : float, optional
    Dropout rate. Defaults to 0.0.

normalization : str, optional
    Normalization method. Defaults to 'None'.

input_norm : bool, optional
    Whether to normalize input features. Defaults to False.

forward(X, vertex, edges, X0)[source]#

Forward pass.

Parameters:

X : Tensor
    Input features.

vertex : LongTensor
    Vertex index.

edges : LongTensor
    Edge index.

X0 : Tensor
    Initial features.

Returns:

Tensor
    Output features.

reset_parameters()[source]#

Reset parameters.
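
A sketch of calling the convolution directly, assuming vertex and edges are parallel LongTensors listing, for each node-hyperedge incidence, the node index and the hyperedge index, and assuming X0 has the same dimensionality as X; these conventions are inferred from the parameter descriptions above and are not guaranteed.

>>> import torch
>>> from topobench.nn.backbones.hypergraph.edgnn import EquivSetConv
>>> X = torch.randn(5, 16)                     # current node features
>>> X0 = X.clone()                             # initial node features
>>> vertex = torch.tensor([0, 1, 2, 2, 3, 4])  # node index of each incidence
>>> edges = torch.tensor([0, 0, 1, 2, 2, 2])   # hyperedge index of each incidence
>>> conv = EquivSetConv(in_features=16, out_features=16, aggr='add', alpha=0.5)
>>> out = conv(X, vertex, edges, X0)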

class topobench.nn.backbones.hypergraph.edgnn.JumpLinkConv(in_features, out_features, mlp_layers=2, aggr='add', alpha=0.5)[source]#

Bases: Module

Class implementing the JumpLink Convolution.

Parameters:

in_features : int
    Number of input features.

out_features : int
    Number of output features.

mlp_layers : int, optional
    Number of layers in the MLP. Defaults to 2.

aggr : str, optional
    Aggregation method. Defaults to 'add'.

alpha : float, optional
    Alpha value. Defaults to 0.5.

forward(X, vertex, edges, X0, beta=1.0)[source]#

Forward pass.

Parameters:

X : Tensor
    Input features.

vertex : LongTensor
    Vertex index.

edges : LongTensor
    Edge index.

X0 : Tensor
    Initial features.

beta : float, optional
    Beta value. Defaults to 1.0.

Returns:

Tensor
    Output features.

reset_parameters()[source]#

Reset parameters.
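
A sketch under the same assumed incidence-list convention as the EquivSetConv example above:

>>> import torch
>>> from topobench.nn.backbones.hypergraph.edgnn import JumpLinkConv
>>> X = torch.randn(5, 16)
>>> X0 = X.clone()
>>> vertex = torch.tensor([0, 1, 2, 2, 3, 4])
>>> edges = torch.tensor([0, 0, 1, 2, 2, 2])
>>> conv = JumpLinkConv(in_features=16, out_features=16, mlp_layers=2, alpha=0.5)
>>> out = conv(X, vertex, edges, X0, beta=1.0)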

class topobench.nn.backbones.hypergraph.edgnn.MLP(in_channels, hidden_channels, out_channels, num_layers, dropout=0.5, Normalization='bn', InputNorm=False)[source]#

Bases: Module

Class implementing a multi-layer perceptron.

Adapted from CUAI/CorrectAndSmooth.

Parameters:

in_channels : int
    Number of input features.

hidden_channels : int
    Number of hidden features.

out_channels : int
    Number of output features.

num_layers : int
    Number of layers.

dropout : float, optional
    Dropout rate. Defaults to 0.5.

Normalization : str, optional
    Normalization method. Defaults to 'bn'.

InputNorm : bool, optional
    Whether to normalize input features. Defaults to False.

flops(x)[source]#

Calculate FLOPs.

Parameters:

x : Tensor
    Input features.

Returns:

int
    FLOPs.

forward(x)[source]#

Forward pass.

Parameters:

x : Tensor
    Input features.

Returns:

Tensor
    Output features.

reset_parameters()[source]#

Reset parameters.
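
A minimal sketch of constructing and applying the MLP; the toy dimensions are arbitrary.

>>> import torch
>>> from topobench.nn.backbones.hypergraph.edgnn import MLP
>>> mlp = MLP(in_channels=16, hidden_channels=32, out_channels=8,
...           num_layers=2, dropout=0.5, Normalization='bn', InputNorm=False)
>>> y = mlp(torch.randn(5, 16))            # expected output of shape (5, 8)
>>> n_flops = mlp.flops(torch.randn(5, 16))  # integer FLOP estimate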

class topobench.nn.backbones.hypergraph.edgnn.MeanDegConv(in_features, out_features, init_features=None, mlp1_layers=1, mlp2_layers=1, mlp3_layers=2)[source]#

Bases: Module

Class implementing the Mean Degree Convolution.

Parameters:

in_features : int
    Number of input features.

out_features : int
    Number of output features.

init_features : int, optional
    Number of initial features. Defaults to None.

mlp1_layers : int, optional
    Number of layers in the first MLP. Defaults to 1.

mlp2_layers : int, optional
    Number of layers in the second MLP. Defaults to 1.

mlp3_layers : int, optional
    Number of layers in the third MLP. Defaults to 2.

forward(X, vertex, edges, X0)[source]#

Forward pass.

Parameters:

X : Tensor
    Input features.

vertex : LongTensor
    Vertex index.

edges : LongTensor
    Edge index.

X0 : Tensor
    Initial features.

Returns:

Tensor
    Output features.

reset_parameters()[source]#

Reset parameters.
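
A call sketch based only on the signature documented above, under the same assumed incidence-list convention as the earlier examples; whether in_features, out_features, and init_features may all be set equal, as done here, is an assumption.

>>> import torch
>>> from topobench.nn.backbones.hypergraph.edgnn import MeanDegConv
>>> X = torch.randn(5, 16)
>>> X0 = X.clone()
>>> vertex = torch.tensor([0, 1, 2, 2, 3, 4])
>>> edges = torch.tensor([0, 0, 1, 2, 2, 2])
>>> conv = MeanDegConv(in_features=16, out_features=16, init_features=16)
>>> out = conv(X, vertex, edges, X0)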

class topobench.nn.backbones.hypergraph.edgnn.PlainMLP(in_channels, hidden_channels, out_channels, num_layers, dropout=0.5)[source]#

Bases: Module

Class implementing a multi-layer perceptron without normalization.

Adapted from CUAI/CorrectAndSmooth.

Parameters:

in_channels : int
    Number of input features.

hidden_channels : int
    Number of hidden features.

out_channels : int
    Number of output features.

num_layers : int
    Number of layers.

dropout : float, optional
    Dropout rate. Defaults to 0.5.

forward(x)[source]#

Forward pass.

Parameters:

x : Tensor
    Input features.

Returns:

Tensor
    Output features.

reset_parameters()[source]#

Reset parameters.
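
A minimal sketch, analogous to the MLP example but without normalization:

>>> import torch
>>> from topobench.nn.backbones.hypergraph.edgnn import PlainMLP
>>> mlp = PlainMLP(in_channels=16, hidden_channels=32, out_channels=8, num_layers=2)
>>> y = mlp(torch.randn(5, 16))  # expected output of shape (5, 8)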

Module contents#

Hypergraph backbones with automated exports.

The following classes are re-exported at the package level and are documented in full under topobench.nn.backbones.hypergraph.edgnn above:

class topobench.nn.backbones.hypergraph.EDGNN
class topobench.nn.backbones.hypergraph.EquivSetConv
class topobench.nn.backbones.hypergraph.JumpLinkConv
class topobench.nn.backbones.hypergraph.MLP
class topobench.nn.backbones.hypergraph.MeanDegConv
class topobench.nn.backbones.hypergraph.PlainMLP
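
Because of the automated exports, these classes can be imported directly from the package namespace; a minimal sketch:

>>> from topobench.nn.backbones.hypergraph import EDGNN, EquivSetConv
>>> model = EDGNN(num_features=8)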