topobench.nn.backbones.simplicial.sccnn module#

Implementation of the Simplicial Complex Convolutional Neural Network (SCCNN) for complex classification.

class Parameter(data=None, requires_grad=True)#

Bases: Tensor

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they're assigned as Module attributes they are automatically added to the list of its parameters and will appear, e.g., in the parameters() iterator. Assigning a plain Tensor doesn't have that effect. This is because one might want to cache some temporary state, such as the last hidden state of an RNN, in the model. If there were no such class as Parameter, these temporaries would get registered too.

Parameters:
  • data (Tensor) – parameter tensor.

  • requires_grad (bool, optional) – if the parameter requires gradient. Note that the torch.no_grad() context does NOT affect the default behavior of Parameter creation: the Parameter will still have requires_grad=True in no_grad mode. See the notes on locally disabling gradient computation for more details. Default: True
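The registration behavior described above can be demonstrated directly; a minimal sketch (the module and attribute names here are illustrative):

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigned as a Module attribute, a Parameter is registered automatically.
        self.weight = nn.Parameter(torch.ones(3))
        # A plain Tensor attribute (e.g. cached temporary state) is NOT registered.
        self.cache = torch.zeros(3)

m = Scale()
param_names = [name for name, _ in m.named_parameters()]
print(param_names)  # only "weight" is listed; "cache" is absent

# no_grad does not affect Parameter creation:
with torch.no_grad():
    p = nn.Parameter(torch.ones(1))
# p.requires_grad is still True
```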

class SCCNNCustom(in_channels_all, hidden_channels_all, conv_order, sc_order, aggr_norm=False, update_func=None, n_layers=2)#

Bases: Module

SCCNN implementation for complex classification.

Note: For the classification task, the output on simplices of any order can be used; a readout layer can be added to aggregate or select among them.

Parameters:
in_channels_all : tuple of int

Dimension of input features on (nodes, edges, faces).

hidden_channels_all : tuple of int

Dimension of features of hidden layers on (nodes, edges, faces).

conv_order : int

Order of the convolutions; the same order is used for all convolutions.

sc_order : int

Order of the simplicial complex.

aggr_norm : bool, optional

Whether to normalize the aggregation (default: False).

update_func : str, optional

Update function for the simplicial complex convolution (default: None).

n_layers : int, optional

Number of layers (default: 2).

__init__(in_channels_all, hidden_channels_all, conv_order, sc_order, aggr_norm=False, update_func=None, n_layers=2)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x_all, laplacian_all, incidence_all)#

Forward computation.

Parameters:
x_all : tuple of tensors

Tuple of feature tensors (node, edge, face).

laplacian_all : tuple of tensors

Tuple of Laplacian tensors (graph Laplacian L0, down edge Laplacian L1_d, up edge Laplacian L1_u, face Laplacian L2).

incidence_all : tuple of tensors

Tuple of order-1 and order-2 incidence matrices.

Returns:
tuple(tensors)

Tuple of final hidden state tensors (node, edge, face).
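The expected inputs can be assembled from the boundary maps of a toy complex. The following sketch builds the three tuples for a single filled triangle (3 nodes, 3 edges, 1 face); the channel counts and the use of dense tensors are assumptions for illustration, not requirements stated by this API:

```python
import torch

# Toy simplicial complex: a filled triangle with 3 nodes, 3 edges, 1 face.
# Signed incidence (boundary) matrices:
B1 = torch.tensor([[-1., -1.,  0.],   # node 0
                   [ 1.,  0., -1.],   # node 1
                   [ 0.,  1.,  1.]])  # node 2
B2 = torch.tensor([[ 1.], [-1.], [ 1.]])  # boundary of the single face

# Hodge Laplacians assembled from the incidences.
L0   = B1 @ B1.T   # graph Laplacian on nodes
L1_d = B1.T @ B1   # down edge Laplacian
L1_u = B2 @ B2.T   # up edge Laplacian
L2   = B2.T @ B2   # face Laplacian

# Features: 4 channels on each simplex order (hypothetical shapes).
x_all = (torch.randn(3, 4), torch.randn(3, 4), torch.randn(1, 4))
laplacian_all = (L0, L1_d, L1_u, L2)
incidence_all = (B1, B2)
```

With these tuples, a call such as `SCCNNCustom((4, 4, 4), (16, 16, 16), conv_order=2, sc_order=2)(x_all, laplacian_all, incidence_all)` would return the updated node, edge, and face features.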

class SCCNNLayer(in_channels, out_channels, conv_order, sc_order, aggr_norm=False, update_func=None, initialization='xavier_normal')#

Bases: Module

Layer of a Simplicial Complex Convolutional Neural Network.

Parameters:
in_channels : tuple of int

Dimensions of input features on nodes, edges, and faces.

out_channels : tuple of int

Dimensions of output features on nodes, edges, and faces.

conv_order : int

Convolution order of the simplicial filters.

sc_order : int

Order of the simplicial complex.

aggr_norm : bool, optional

Whether to normalize the aggregated message by the neighborhood size (default: False).

update_func : str, optional

Activation function used in aggregation layers (default: None).

initialization : str, optional

Initialization method for the weights (default: "xavier_normal").

__init__(in_channels, out_channels, conv_order, sc_order, aggr_norm=False, update_func=None, initialization='xavier_normal')#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

aggr_norm_func(conv_operator, x)#

Perform aggregation normalization.

Parameters:
conv_operator : torch.sparse

Convolution operator.

x : torch.Tensor

Feature tensor.

Returns:
torch.Tensor

Normalized feature tensor.
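The exact normalization is not spelled out above; a plausible sketch, assuming the aggregated features are divided by each simplex's neighborhood size (here counted as the number of nonzeros per row of the operator — the true implementation may differ):

```python
import torch

def aggr_norm_sketch(conv_operator, x):
    """Divide aggregated features by each row's neighborhood size (a sketch)."""
    dense = conv_operator.to_dense()
    # Count neighbors per simplex; clamp avoids division by zero for isolated rows.
    neighborhood_size = (dense != 0).sum(dim=1, keepdim=True).clamp(min=1)
    return x / neighborhood_size

A = torch.tensor([[1., 1., 0.],
                  [0., 1., 1.],
                  [1., 1., 1.]]).to_sparse()
out = aggr_norm_sketch(A, torch.ones(3, 2))
```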

chebyshev_conv(conv_operator, conv_order, x)#

Perform Chebyshev convolution.

Parameters:
conv_operator : torch.sparse

Convolution operator.

conv_order : int

Order of the convolution.

x : torch.Tensor

Feature tensor.

Returns:
torch.Tensor

Output tensor.
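A minimal sketch of the k-hop diffusion this method's signature suggests: stacking successive applications of the sparse operator up to conv_order (the actual implementation may differ, e.g. by using genuine Chebyshev polynomial recurrences rather than plain powers):

```python
import torch

def chebyshev_conv_sketch(conv_operator, conv_order, x):
    """Stack A x, A^2 x, ..., A^k x along a new hop dimension (a sketch)."""
    hops = []
    xt = x
    for _ in range(conv_order):
        xt = torch.sparse.mm(conv_operator, xt)  # one more diffusion step
        hops.append(xt)
    return torch.stack(hops, dim=1)  # (n_simplices, conv_order, channels)

# Toy operator that doubles features at every hop.
A = (2.0 * torch.eye(3)).to_sparse()
out = chebyshev_conv_sketch(A, conv_order=2, x=torch.ones(3, 1))
```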

forward(x_all, laplacian_all, incidence_all)#

Forward computation.

Parameters:
x_all : tuple of tensors

Tuple of input feature tensors (node, edge, face).

laplacian_all : tuple of tensors

Tuple of Laplacian tensors (graph Laplacian L0, down edge Laplacian L1_d, up edge Laplacian L1_u, face Laplacian L2).

incidence_all : tuple of tensors

Tuple of order-1 and order-2 incidence matrices.

Returns:
torch.Tensor

Output tensor for each 0-cell.

torch.Tensor

Output tensor for each 1-cell.

torch.Tensor

Output tensor for each 2-cell.

reset_parameters(gain=1.414)#

Reset learnable parameters.

Parameters:
gain : float

Gain for the weight initialization.
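The default gain of 1.414 ≈ √2 is the value commonly recommended for ReLU-style activations. A sketch of the kind of initializer call this implies (the tensor shape here is arbitrary):

```python
import torch
import torch.nn as nn

w = torch.empty(16, 8)
# Xavier normal init; gain ≈ sqrt(2) matches nn.init.calculate_gain("relu")
nn.init.xavier_normal_(w, gain=1.414)
```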

update(x)#

Update embeddings on each cell (step 4).

Parameters:
x : torch.Tensor

Input tensor.

Returns:
torch.Tensor

Updated tensor.