topobench.nn.backbones.non_relational.mlp module#

MLP implementation.

class MLP(in_channels, hidden_layers, out_channels, dropout=0.25, norm=None, norm_kwargs=None, act=None, act_kwargs=None, final_act=None, final_act_kwargs=None, num_nodes=None, task_level=None, **kwargs)#

Bases: Module

Multi-Layer Perceptron (MLP).

This class implements a multi-layer perceptron architecture with customizable activation functions and normalization layers.

Parameters:
in_channels : int

The dimensionality of the input features.

hidden_layers : int

The dimensionality of the hidden features.

out_channels : int

The dimensionality of the output features.

dropout : float, optional

The dropout rate (default 0.25).

norm : str, optional

The normalization layer to use (default None).

norm_kwargs : dict, optional

Additional keyword arguments for the normalization layer (default None).

act : str, optional

The activation function to use; if None, it falls back to "relu" (default None).

act_kwargs : dict, optional

Additional keyword arguments for the activation function (default None).

final_act : str, optional

The final activation function to use; if None, it falls back to "sigmoid" (default None).

final_act_kwargs : dict, optional

Additional keyword arguments for the final activation function (default None).

num_nodes : int, optional

The number of nodes in the input graph (default None).

task_level : int, optional

The task level for the model (default None).

**kwargs

Additional keyword arguments.
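
A minimal construction sketch, assuming topobench is installed; the concrete sizes (16 input features, a hidden width of 64, 4 outputs) are hypothetical values chosen purely for illustration:

from topobench.nn.backbones.non_relational.mlp import MLP

# Hypothetical sizes: 16 input features, hidden width 64, 4 outputs.
model = MLP(
    in_channels=16,
    hidden_layers=64,
    out_channels=4,
    dropout=0.25,
)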

__init__(in_channels, hidden_layers, out_channels, dropout=0.25, norm=None, norm_kwargs=None, act=None, act_kwargs=None, final_act=None, final_act_kwargs=None, num_nodes=None, task_level=None, **kwargs)#

Initialize internal Module state, shared by both nn.Module and ScriptModule.

build_mlp_layers()#

Build the MLP layers.

Returns:
nn.Sequential

The MLP layers.
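
As a rough illustration of this contract (an nn.Sequential assembled from linear, activation, and dropout blocks), a stand-in might look like the sketch below. This shows the general pattern only, not the library's actual code:

import torch.nn as nn

def build_mlp_layers_sketch(in_channels, hidden_channels, out_channels, dropout):
    # Illustrative stand-in for MLP.build_mlp_layers().
    return nn.Sequential(
        nn.Linear(in_channels, hidden_channels),
        nn.ReLU(),              # resolved from the `act` argument
        nn.Dropout(dropout),
        nn.Linear(hidden_channels, out_channels),
        nn.Sigmoid(),           # resolved from the `final_act` argument
    )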

build_norm_layers(norm, norm_kwargs)#

Build the normalization layers.

Parameters:
norm : str

The normalization layer to use.

norm_kwargs : dict

Additional keyword arguments for the normalization layer.

Returns:
list

A list of normalization layers.
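
As a sketch of what a builder with this signature might do, one normalization layer can be resolved per hidden layer via the normalization_resolver listed at the bottom of this page; the helper below is hypothetical, not the library's actual implementation:

from torch_geometric.nn.resolver import normalization_resolver

def build_norm_layers_sketch(norm, norm_kwargs, num_layers, hidden_channels):
    # Illustrative stand-in for MLP.build_norm_layers().
    norm_kwargs = norm_kwargs or {}
    if norm is None:
        return [None] * num_layers
    return [
        normalization_resolver(norm, hidden_channels, **norm_kwargs)
        for _ in range(num_layers)
    ]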

forward(x, batch_size)#

Forward pass through the MLP.

Parameters:
x : torch.Tensor

Input tensor.

batch_size : int

Batch size.

Returns:
torch.Tensor

Output tensor.
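
Continuing the construction sketch above, a forward call passes the batch size explicitly alongside the input tensor:

import torch

x = torch.randn(32, 16)   # (batch_size, in_channels)
out = model(x, 32)        # invokes forward(x, batch_size)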

activation_resolver(query='relu', *args, **kwargs)#

Resolve an activation function by name and return the corresponding module instance.

normalization_resolver(query, *args, **kwargs)#

Resolve a normalization layer by name and return the corresponding module instance.
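
These signatures match the resolver helpers from torch_geometric.nn.resolver, so they are presumably re-exported from there (an assumption); each maps a string name, plus optional constructor arguments, to a torch.nn module instance:

from torch_geometric.nn.resolver import activation_resolver

act = activation_resolver("relu")                             # -> torch.nn.ReLU()
leaky = activation_resolver("leaky_relu", negative_slope=0.2) # kwargs forwarded to the class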