topobench.nn.encoders.all_cell_encoder module#

Class to apply BaseEncoder to the features of higher-order structures.

class AbstractFeatureEncoder#

Bases: Module

Abstract class to define a custom feature encoder.

__init__()#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

abstract forward(data)#

Forward pass of the feature encoder model.

Parameters:
data : torch_geometric.data.Data

Input data object, which should contain x features.
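
A minimal subclassing sketch. The class name MyFeatureEncoder is hypothetical, and returning the updated data object is an assumption consistent with AllCellFeatureEncoder.forward below; the import path follows this page:

    import torch
    from torch_geometric.data import Data
    from topobench.nn.encoders.all_cell_encoder import AbstractFeatureEncoder

    class MyFeatureEncoder(AbstractFeatureEncoder):
        """Hypothetical encoder that linearly projects the x features."""

        def __init__(self, in_channels, out_channels):
            super().__init__()
            self.linear = torch.nn.Linear(in_channels, out_channels)

        def forward(self, data: Data) -> Data:
            # Overwrite the node features with their projection.
            data.x = self.linear(data.x)
            return data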

class AllCellFeatureEncoder(in_channels, out_channels, proj_dropout=0, selected_dimensions=None, **kwargs)#

Bases: AbstractFeatureEncoder

Encoder class to apply BaseEncoder.

The BaseEncoder is applied to the features of higher-order structures. The class creates a BaseEncoder for each dimension specified in selected_dimensions; during the forward pass, each BaseEncoder is applied to the features of its corresponding dimension.

Parameters:
in_channels : list[int]

Input dimensions for the features.

out_channels : list[int]

Output dimensions for the features.

proj_dropout : float, optional

Dropout probability for the BaseEncoders (default: 0).

selected_dimensions : list[int], optional

List of dimension indices to apply the BaseEncoders to (default: None).

**kwargs : dict, optional

Additional keyword arguments.

__init__(in_channels, out_channels, proj_dropout=0, selected_dimensions=None, **kwargs)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(data)#

Forward pass.

The method applies the BaseEncoders to the features of the selected_dimensions.

Parameters:
data : torch_geometric.data.Data

Input data object, which should contain x_{i} features for each i in selected_dimensions.

Returns:
torch_geometric.data.Data

Output data object with updated x_{i} features.
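
A usage sketch. The per-dimension features are stored as x_0, x_1, x_2 as documented above; the matching batch_0, batch_1, batch_2 attributes are an assumption inferred from BaseEncoder.forward taking a batch vector, and all shapes and channel sizes are illustrative:

    import torch
    from torch_geometric.data import Data
    from topobench.nn.encoders.all_cell_encoder import AllCellFeatureEncoder

    # Toy complex: features on 0-cells (nodes), 1-cells (edges), 2-cells (faces).
    data = Data(
        x_0=torch.randn(8, 4),
        x_1=torch.randn(12, 6),
        x_2=torch.randn(3, 5),
        # All cells belong to a single example here.
        batch_0=torch.zeros(8, dtype=torch.long),
        batch_1=torch.zeros(12, dtype=torch.long),
        batch_2=torch.zeros(3, dtype=torch.long),
    )

    encoder = AllCellFeatureEncoder(
        in_channels=[4, 6, 5],      # input feature size per dimension
        out_channels=[32, 32, 32],  # output feature size per dimension
        selected_dimensions=[0, 1, 2],
    )

    data = encoder(data)
    print(data.x_0.shape)  # torch.Size([8, 32])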

class BaseEncoder(in_channels, out_channels, dropout=0)#

Bases: Module

Base encoder class used by AllCellFeatureEncoder.

This class uses two linear layers, with GraphNorm, a ReLU activation, and dropout applied between them.

Parameters:
in_channels : int

Dimension of input features.

out_channels : int

Dimension of output features.

dropout : float, optional

Probability of dropping each channel between the two linear layers (default: 0).

__init__(in_channels, out_channels, dropout=0)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, batch)#

Forward pass of the encoder.

It applies two linear layers, with GraphNorm, a ReLU activation, and dropout applied between them.

Parameters:
x : torch.Tensor

Input tensor of shape [N, in_channels].

batch : torch.Tensor

The batch vector, which assigns each element to a specific example.

Returns:
torch.Tensor

Output tensor of shape [N, out_channels].
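
The description above fixes the architecture; a re-implementation sketch under the assumption that GraphNorm, ReLU, and dropout are applied in that order between the two linear layers (the class name TwoLayerEncoder is illustrative, not the library's):

    import torch
    from torch_geometric.nn.norm import GraphNorm

    class TwoLayerEncoder(torch.nn.Module):
        """Sketch of BaseEncoder: Linear -> GraphNorm -> ReLU -> Dropout -> Linear."""

        def __init__(self, in_channels: int, out_channels: int, dropout: float = 0.0):
            super().__init__()
            self.linear1 = torch.nn.Linear(in_channels, out_channels)
            self.norm = GraphNorm(out_channels)
            self.dropout = torch.nn.Dropout(dropout)
            self.linear2 = torch.nn.Linear(out_channels, out_channels)

        def forward(self, x: torch.Tensor, batch: torch.Tensor) -> torch.Tensor:
            # [N, in_channels] -> [N, out_channels], normalized per graph.
            x = self.dropout(torch.relu(self.norm(self.linear1(x), batch)))
            return self.linear2(x)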

class GraphNorm(in_channels, eps=1e-05, device=None)#

Bases: Module

Applies graph normalization over individual graphs as described in the “GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training” paper.

\[\mathbf{x}^{\prime}_i = \frac{\mathbf{x} - \alpha \odot \textrm{E}[\mathbf{x}]} {\sqrt{\textrm{Var}[\mathbf{x} - \alpha \odot \textrm{E}[\mathbf{x}]] + \epsilon}} \odot \gamma + \beta\]

where \(\alpha\) denotes parameters that learn how much information to keep in the mean.

Parameters:
  • in_channels (int) – Size of each input sample.

  • eps (float, optional) – A value added to the denominator for numerical stability. (default: 1e-5)

  • device (torch.device, optional) – The device to use for the module. (default: None)

__init__(in_channels, eps=1e-05, device=None)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

forward(x, batch=None, batch_size=None)#

Forward pass.

Parameters:
  • x (torch.Tensor) – The source tensor.

  • batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. (default: None)

  • batch_size (int, optional) – The number of examples \(B\). Automatically calculated if not given. (default: None)
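
A minimal usage sketch of the forward pass; it uses torch_geometric's GraphNorm, which has the same interface as documented here, and the shapes are illustrative:

    import torch
    from torch_geometric.nn.norm import GraphNorm

    norm = GraphNorm(in_channels=16)
    x = torch.randn(10, 16)                  # 10 nodes with 16 channels
    batch = torch.tensor([0] * 6 + [1] * 4)  # two graphs of 6 and 4 nodes
    out = norm(x, batch)                     # normalized per graph, shape [10, 16]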

reset_parameters()#

Resets all learnable parameters of the module.