topobench.nn.backbones package#
Subpackages#
- topobench.nn.backbones.cell package
- topobench.nn.backbones.combinatorial package
- topobench.nn.backbones.graph package
- topobench.nn.backbones.hypergraph package
- topobench.nn.backbones.simplicial package
Module contents#
Some models implemented for TopoBenchX with automated exports.
- class topobench.nn.backbones.CCCN(in_channels, n_layers=2, dropout=0.0, last_act=False)#
Bases: Module
CCCN model.
- Parameters:
- in_channels : int
Number of input channels.
- n_layers : int, optional
Number of layers (default: 2).
- dropout : float, optional
Dropout rate (default: 0.0).
- last_act : bool, optional
If True, the last activation function is applied (default: False).
- forward(x, Ld, Lu)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input tensor.
- Ld : torch.Tensor
Down (lower) Laplacian matrix.
- Lu : torch.Tensor
Up (upper) Laplacian matrix.
- Returns:
- torch.Tensor
Output tensor.
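A minimal usage sketch based only on the signature above; the edge-level feature shape and the dense Laplacian stand-ins are assumptions (the model may expect sparse tensors):

```python
import torch

from topobench.nn.backbones import CCCN

n_edges, channels = 12, 16
model = CCCN(in_channels=channels, n_layers=2, dropout=0.1)

x = torch.randn(n_edges, channels)   # features on 1-cells (assumed shape)
Ld = torch.rand(n_edges, n_edges)    # down Laplacian (dense stand-in)
Lu = torch.rand(n_edges, n_edges)    # up Laplacian (dense stand-in)

out = model(x, Ld, Lu)               # assumed output shape: [n_edges, channels]
```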
- class topobench.nn.backbones.CW(F_in, F_out)#
Bases: Module
Layer of the CCCN model.
- Parameters:
- F_in : int
Number of input channels.
- F_out : int
Number of output channels.
- forward(xe, Lu, Ld)#
Forward pass.
- Parameters:
- xe : torch.Tensor
Input tensor.
- Lu : torch.Tensor
Up (upper) Laplacian matrix.
- Ld : torch.Tensor
Down (lower) Laplacian matrix.
- Returns:
- torch.Tensor
Output tensor.
- class topobench.nn.backbones.EDGNN(num_features, input_dropout=0.2, dropout=0.2, activation='relu', MLP_num_layers=2, MLP2_num_layers=-1, MLP3_num_layers=-1, All_num_layers=2, edconv_type='EquivSet', restart_alpha=0.5, aggregate='add', normalization='None', AllSet_input_norm=False)#
Bases: Module
EDGNN model.
- Parameters:
- num_features : int
Number of input features.
- input_dropout : float, optional
Dropout rate for input features. Defaults to 0.2.
- dropout : float, optional
Dropout rate for hidden layers. Defaults to 0.2.
- activation : str, optional
Activation function. Defaults to 'relu'.
- MLP_num_layers : int, optional
Number of layers in MLP. Defaults to 2.
- MLP2_num_layers : int, optional
Number of layers in the second MLP. Defaults to -1.
- MLP3_num_layers : int, optional
Number of layers in the third MLP. Defaults to -1.
- All_num_layers : int, optional
Number of layers in the EDConv. Defaults to 2.
- edconv_type : str, optional
Type of EDConv. Defaults to 'EquivSet'.
- restart_alpha : float, optional
Restart alpha. Defaults to 0.5.
- aggregate : str, optional
Aggregation method. Defaults to 'add'.
- normalization : str, optional
Normalization method. Defaults to 'None'.
- AllSet_input_norm : bool, optional
Whether to normalize input features. Defaults to False.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x : Tensor
Input features.
- edge_index : LongTensor
Edge index.
- Returns:
- Tensor
Output features.
- None
None object needed for compatibility.
- reset_parameters()#
Reset parameters.
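A hedged usage sketch for this hypergraph model, assuming edge_index holds vertex-hyperedge incidence pairs in COO layout (row 0: node indices, row 1: hyperedge indices); the exact convention is defined by the topobench hypergraph loaders:

```python
import torch

from topobench.nn.backbones import EDGNN

model = EDGNN(num_features=16, All_num_layers=2, edconv_type='EquivSet')

x = torch.randn(10, 16)  # node features
# assumed incidence layout: nodes 0, 1, 2 in hyperedge 0; nodes 2, 3 in hyperedge 1
edge_index = torch.tensor([[0, 1, 2, 2, 3],
                           [0, 0, 0, 1, 1]])

out, none_obj = model(x, edge_index)  # second value is the compatibility None
```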
- class topobench.nn.backbones.EquivSetConv(in_features, out_features, mlp1_layers=1, mlp2_layers=1, mlp3_layers=1, aggr='add', alpha=0.5, dropout=0.0, normalization='None', input_norm=False)#
Bases: Module
Class implementing the Equivariant Set Convolution.
- Parameters:
- in_features : int
Number of input features.
- out_features : int
Number of output features.
- mlp1_layers : int, optional
Number of layers in the first MLP. Defaults to 1.
- mlp2_layers : int, optional
Number of layers in the second MLP. Defaults to 1.
- mlp3_layers : int, optional
Number of layers in the third MLP. Defaults to 1.
- aggr : str, optional
Aggregation method. Defaults to 'add'.
- alpha : float, optional
Alpha value. Defaults to 0.5.
- dropout : float, optional
Dropout rate. Defaults to 0.0.
- normalization : str, optional
Normalization method. Defaults to 'None'.
- input_norm : bool, optional
Whether to normalize input features. Defaults to False.
- forward(X, vertex, edges, X0)#
Forward pass.
- Parameters:
- X : Tensor
Input features.
- vertex : LongTensor
Vertex index.
- edges : LongTensor
Edge index.
- X0 : Tensor
Initial features.
- Returns:
- Tensor
Output features.
- reset_parameters()#
Reset parameters.
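The vertex/edges arguments mirror the two rows of a hypergraph incidence list. A small sketch under that assumption:

```python
import torch

from topobench.nn.backbones import EquivSetConv

conv = EquivSetConv(in_features=16, out_features=16)

X = torch.randn(10, 16)
vertex = torch.tensor([0, 1, 2, 2, 3])  # node index per incidence pair (assumed)
edges = torch.tensor([0, 0, 0, 1, 1])   # hyperedge index per incidence pair (assumed)

out = conv(X, vertex, edges, X0=X)      # X0: features from the initial layer
```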
- class topobench.nn.backbones.GPSEncoder(input_dim: int, hidden_dim: int, num_layers: int = 4, heads: int = 4, dropout: float = 0.1, attn_type: str = 'multihead', local_conv_type: str = 'gin', use_edge_attr: bool = False, redraw_interval: int | None = None, attn_kwargs: dict[str, Any] | None = None)#
Bases: Module
GPS Encoder that can be used with the training framework.
Uses the official PyTorch Geometric GPSConv implementation. This encoder combines local message passing with global attention mechanisms for powerful graph representation learning.
- Parameters:
- input_dim : int
Dimension of input node features.
- hidden_dim : int
Dimension of hidden layers.
- num_layers : int, optional
Number of GPS layers. Default is 4.
- heads : int, optional
Number of attention heads in GPSConv layers. Default is 4.
- dropout : float, optional
Dropout rate for GPSConv layers. Default is 0.1.
- attn_type : str, optional
Type of attention mechanism to use. Options are 'multihead', 'performer', etc. Default is 'multihead'.
- local_conv_type : str, optional
Type of local message passing layer. Options are 'gin', 'pna', etc. Default is 'gin'.
- use_edge_attr : bool, optional
Whether to use edge attributes in GPSConv layers. Default is False.
- redraw_interval : int or None, optional
Interval for redrawing random projections in Performer attention. If None, projections are not redrawn. Default is None.
- attn_kwargs : dict, optional
Additional keyword arguments for the attention mechanism.
- forward(x: Tensor, edge_index: Tensor, batch: Tensor | None = None, edge_attr: Tensor | None = None, **kwargs) → Tensor#
Forward pass of GPS encoder.
- Parameters:
- x : torch.Tensor
Node feature matrix of shape [num_nodes, input_dim].
- edge_index : torch.Tensor
Edge indices of shape [2, num_edges].
- batch : torch.Tensor, optional
Batch vector assigning each node to a specific graph. Shape [num_nodes]. Default is None.
- edge_attr : torch.Tensor, optional
Edge feature matrix of shape [num_edges, edge_dim]. Default is None.
- **kwargs : dict
Additional arguments (not used).
- Returns:
- torch.Tensor
Output node feature matrix of shape [num_nodes, hidden_dim].
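Since the shapes are documented above, a usage sketch is straightforward; the toy graph itself is made up:

```python
import torch

from topobench.nn.backbones import GPSEncoder

enc = GPSEncoder(input_dim=16, hidden_dim=64, num_layers=2, heads=4)

x = torch.randn(10, 16)                    # [num_nodes, input_dim]
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])  # [2, num_edges]
batch = torch.zeros(10, dtype=torch.long)  # all nodes belong to one graph

h = enc(x, edge_index, batch=batch)        # [num_nodes, hidden_dim]
```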
- class topobench.nn.backbones.GraphMLP(in_channels, hidden_channels, order=1, dropout=0.0, **kwargs)#
Bases: Module
Graph MLP backbone.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden units.
- order : int, optional
Order of the adjacency-matrix power to compute (default: 1).
- dropout : float, optional
Dropout rate (default: 0.0).
- **kwargs
Additional arguments.
- forward(x)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input tensor.
- Returns:
- torch.Tensor
Output tensor.
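A minimal sketch. Note that forward takes only node features; based on the signature, the order-th adjacency power appears to be consumed elsewhere in the pipeline (an assumption, not confirmed by this page):

```python
import torch

from topobench.nn.backbones import GraphMLP

model = GraphMLP(in_channels=16, hidden_channels=64, order=1, dropout=0.1)
out = model(torch.randn(10, 16))  # 10 nodes, 16 input features
```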
- class topobench.nn.backbones.IdentityGAT(in_channels, hidden_channels, out_channels, num_layers, norm, heads=1, dropout=0.0)#
Bases: Module
Graph Attention Network (GAT) with identity activation function.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden units.
- out_channels : int
Number of output features.
- num_layers : int
Number of layers.
- norm : torch.nn.Module
Normalization layer.
- heads : int, optional
Number of attention heads. Defaults to 1.
- dropout : float, optional
Dropout rate. Defaults to 0.0.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input node features.
- edge_index : torch.Tensor
Edge indices.
- Returns:
- torch.Tensor
Output node features.
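The four Identity* backbones below share the same call pattern; a sketch for IdentityGAT (the LayerNorm instance for norm is an assumption, since any torch.nn.Module is accepted per the signature):

```python
import torch

from topobench.nn.backbones import IdentityGAT

model = IdentityGAT(
    in_channels=16, hidden_channels=32, out_channels=7,
    num_layers=2, norm=torch.nn.LayerNorm(32), dropout=0.1,
)

x = torch.randn(10, 16)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])
out = model(x, edge_index)  # assumed shape: [10, 7]
```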
- class topobench.nn.backbones.IdentityGCN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Bases: Module
Graph Convolutional Network (GCN) with identity activation function.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden units.
- out_channels : int
Number of output features.
- num_layers : int
Number of layers.
- norm : torch.nn.Module
Normalization layer.
- dropout : float, optional
Dropout rate. Defaults to 0.0.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input node features.
- edge_index : torch.Tensor
Edge indices.
- Returns:
- torch.Tensor
Output node features.
- class topobench.nn.backbones.IdentityGIN(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Bases: Module
Graph Isomorphism Network (GIN) with identity activation function.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden units.
- out_channels : int
Number of output features.
- num_layers : int
Number of layers.
- norm : torch.nn.Module
Normalization layer.
- dropout : float, optional
Dropout rate. Defaults to 0.0.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input node features.
- edge_index : torch.Tensor
Edge indices.
- Returns:
- torch.Tensor
Output node features.
- class topobench.nn.backbones.IdentitySAGE(in_channels, hidden_channels, out_channels, num_layers, norm, dropout=0.0)#
Bases: Module
GraphSAGE with identity activation function.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden units.
- out_channels : int
Number of output features.
- num_layers : int
Number of layers.
- norm : torch.nn.Module
Normalization layer.
- dropout : float, optional
Dropout rate. Defaults to 0.0.
- forward(x, edge_index)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input node features.
- edge_index : torch.Tensor
Edge indices.
- Returns:
- torch.Tensor
Output node features.
- class topobench.nn.backbones.JumpLinkConv(in_features, out_features, mlp_layers=2, aggr='add', alpha=0.5)#
Bases: Module
Class implementing the JumpLink Convolution.
- Parameters:
- in_features : int
Number of input features.
- out_features : int
Number of output features.
- mlp_layers : int, optional
Number of layers in the MLP. Defaults to 2.
- aggr : str, optional
Aggregation method. Defaults to 'add'.
- alpha : float, optional
Alpha value. Defaults to 0.5.
- forward(X, vertex, edges, X0, beta=1.0)#
Forward pass.
- Parameters:
- X : Tensor
Input features.
- vertex : LongTensor
Vertex index.
- edges : LongTensor
Edge index.
- X0 : Tensor
Initial features.
- beta : float, optional
Beta value. Defaults to 1.0.
- Returns:
- Tensor
Output features.
- reset_parameters()#
Reset parameters.
- class topobench.nn.backbones.MLP(in_channels, hidden_layers, out_channels, dropout=0.25, norm=None, norm_kwargs=None, act=None, act_kwargs=None, final_act=None, final_act_kwargs=None, num_nodes=None, task_level=None, **kwargs)#
Bases: Module
Multi-Layer Perceptron (MLP).
This class implements a multi-layer perceptron architecture with customizable activation functions and normalization layers.
- Parameters:
- in_channels : int
The dimensionality of the input features.
- hidden_layers : int
The dimensionality of the hidden features.
- out_channels : int
The dimensionality of the output features.
- dropout : float, optional
The dropout rate (default 0.25).
- norm : str, optional
The normalization layer to use (default None).
- norm_kwargs : dict, optional
Additional keyword arguments for the normalization layer (default None).
- act : str, optional
The activation function to use (default "relu").
- act_kwargs : dict, optional
Additional keyword arguments for the activation function (default None).
- final_act : str, optional
The final activation function to use (default "sigmoid").
- final_act_kwargs : dict, optional
Additional keyword arguments for the final activation function (default None).
- num_nodes : int, optional
The number of nodes in the input graph (default None).
- task_level : int, optional
The task level for the model (default None).
- **kwargs
Additional keyword arguments.
- build_mlp_layers()#
Build the MLP layers.
- Returns:
- nn.Sequential
The MLP layers.
- build_norm_layers(norm, norm_kwargs)#
Build the normalization layers.
- Parameters:
- norm : str
The normalization layer to use.
- norm_kwargs : dict
Additional keyword arguments for the normalization layer.
- Returns:
- list
A list of normalization layers.
- forward(x, batch_size)#
Forward pass through the MLP.
- Parameters:
- x : torch.Tensor
Input tensor.
- batch_size : int
Batch size.
- Returns:
- torch.Tensor
Output tensor.
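A hedged sketch using only documented arguments; batch_size is required by forward, and the num_nodes/task_level defaults are left untouched:

```python
import torch

from topobench.nn.backbones import MLP

model = MLP(in_channels=16, hidden_layers=64, out_channels=4, dropout=0.25)
out = model(torch.randn(10, 16), batch_size=1)  # shapes are assumptions
```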
- class topobench.nn.backbones.MeanDegConv(in_features, out_features, init_features=None, mlp1_layers=1, mlp2_layers=1, mlp3_layers=2)#
Bases: Module
Class implementing the Mean Degree Convolution.
- Parameters:
- in_features : int
Number of input features.
- out_features : int
Number of output features.
- init_features : int, optional
Number of initial features. Defaults to None.
- mlp1_layers : int, optional
Number of layers in the first MLP. Defaults to 1.
- mlp2_layers : int, optional
Number of layers in the second MLP. Defaults to 1.
- mlp3_layers : int, optional
Number of layers in the third MLP. Defaults to 2.
- forward(X, vertex, edges, X0)#
Forward pass.
- Parameters:
- X : Tensor
Input features.
- vertex : LongTensor
Vertex index.
- edges : LongTensor
Edge index.
- X0 : Tensor
Initial features.
- Returns:
- Tensor
Output features.
- reset_parameters()#
Reset parameters.
- class topobench.nn.backbones.Mlp(input_dim, hid_dim, dropout)#
Bases: Module
MLP module.
- Parameters:
- input_dim : int
Input dimension.
- hid_dim : int
Hidden dimension.
- dropout : float
Dropout rate.
- forward(x)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input tensor.
- Returns:
- torch.Tensor
Output tensor.
- class topobench.nn.backbones.PlainMLP(in_channels, hidden_channels, out_channels, num_layers, dropout=0.5)#
Bases: Module
Class implementing a multi-layer perceptron without normalization.
Adapted from CUAI/CorrectAndSmooth.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden features.
- out_channels : int
Number of output features.
- num_layers : int
Number of layers.
- dropout : float, optional
Dropout rate. Defaults to 0.5.
- forward(x)#
Forward pass.
- Parameters:
- x : Tensor
Input features.
- Returns:
- Tensor
Output features.
- reset_parameters()#
Reset parameters.
- class topobench.nn.backbones.RedrawProjection(model: Module, redraw_interval: int | None = None)#
Bases: object
Helper class to handle redrawing of random projections in Performer attention.
This is crucial for maintaining the quality of the random feature approximation.
- Parameters:
- model : torch.nn.Module
The model containing PerformerAttention modules.
- redraw_interval : int or None, optional
Interval for redrawing random projections. If None, projections are not redrawn. Default is None.
- redraw_projections()#
Redraw random projections in PerformerAttention modules if needed.
- Returns:
- None
None.
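A sketch of the intended training-loop usage, assuming a Performer-based GPSEncoder; calling redraw_projections() once per step lets the helper track progress and redraw at the configured interval (a behavioral assumption based on the description above):

```python
from topobench.nn.backbones import GPSEncoder, RedrawProjection

enc = GPSEncoder(input_dim=16, hidden_dim=64, attn_type='performer')
redraw = RedrawProjection(model=enc, redraw_interval=100)

for step in range(1000):
    # ... forward/backward pass on `enc` would go here ...
    redraw.redraw_projections()  # redraws only when the interval has elapsed
```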
- class topobench.nn.backbones.SCCNNCustom(in_channels_all, hidden_channels_all, conv_order, sc_order, aggr_norm=False, update_func=None, n_layers=2)#
Bases: Module
SCCNN implementation for complex classification.
Note: the output on simplices of any order can be used for the classification task, optionally followed by a readout layer.
- Parameters:
- in_channels_all : tuple of int
Dimension of input features on (nodes, edges, faces).
- hidden_channels_all : tuple of int
Dimension of features of hidden layers on (nodes, edges, faces).
- conv_order : int
Order of the convolutions; the same order is used for all convolutions.
- sc_order : int
Order of the simplicial complex.
- aggr_norm : bool, optional
Whether to normalize the aggregation (default: False).
- update_func : str, optional
Update function for the simplicial complex convolution (default: None).
- n_layers : int, optional
Number of layers (default: 2).
- forward(x_all, laplacian_all, incidence_all)#
Forward computation.
- Parameters:
- x_all : tuple of tensors
Tuple of feature tensors (node, edge, face).
- laplacian_all : tuple of tensors
Tuple of Laplacian tensors (graph Laplacian L0, down edge Laplacian L1_d, upper edge Laplacian L1_u, face Laplacian L2).
- incidence_all : tuple of tensors
Tuple of order-1 and order-2 incidence matrices.
- Returns:
- tuple of tensors
Tuple of final hidden state tensors (node, edge, face).
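A shape-level sketch of the tuple-based interface; the dense random matrices are stand-ins (real Laplacians and incidence matrices are typically sparse and structured):

```python
import torch

from topobench.nn.backbones import SCCNNCustom

n0, n1, n2 = 8, 12, 5  # numbers of nodes, edges, faces (made up)
model = SCCNNCustom(
    in_channels_all=(16, 16, 16),
    hidden_channels_all=(32, 32, 32),
    conv_order=2,
    sc_order=2,
)

x_all = (torch.randn(n0, 16), torch.randn(n1, 16), torch.randn(n2, 16))
laplacian_all = (
    torch.rand(n0, n0),                      # L0
    torch.rand(n1, n1), torch.rand(n1, n1),  # L1_down, L1_up
    torch.rand(n2, n2),                      # L2
)
incidence_all = (torch.rand(n0, n1), torch.rand(n1, n2))  # order-1 and order-2

h0, h1, h2 = model(x_all, laplacian_all, incidence_all)
```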
- class topobench.nn.backbones.SCCNNLayer(in_channels, out_channels, conv_order, sc_order, aggr_norm: bool = False, update_func=None, initialization: str = 'xavier_normal')#
Bases: Module
Layer of a Simplicial Complex Convolutional Neural Network.
- Parameters:
- in_channels : tuple of int
Dimensions of input features on nodes, edges, and faces.
- out_channels : tuple of int
Dimensions of output features on nodes, edges, and faces.
- conv_order : int
Convolution order of the simplicial filters.
- sc_order : int
Order of the simplicial complex.
- aggr_norm : bool, optional
Whether to normalize the aggregated message by the neighborhood size (default: False).
- update_func : str, optional
Activation function used in aggregation layers (default: None).
- initialization : str, optional
Initialization method for the weights (default: "xavier_normal").
- aggr_norm_func(conv_operator, x)#
Perform aggregation normalization.
- Parameters:
- conv_operator : torch.sparse
Convolution operator.
- x : torch.Tensor
Feature tensor.
- Returns:
- torch.Tensor
Normalized feature tensor.
- chebyshev_conv(conv_operator, conv_order, x)#
Perform Chebyshev convolution.
- Parameters:
- conv_operator : torch.sparse
Convolution operator.
- conv_order : int
Order of the convolution.
- x : torch.Tensor
Feature tensor.
- Returns:
- torch.Tensor
Output tensor.
- forward(x_all, laplacian_all, incidence_all)#
Forward computation.
- Parameters:
- x_all : tuple of tensors
Tuple of input feature tensors (node, edge, face).
- laplacian_all : tuple of tensors
Tuple of Laplacian tensors (graph Laplacian L0, down edge Laplacian L1_d, upper edge Laplacian L1_u, face Laplacian L2).
- incidence_all : tuple of tensors
Tuple of order-1 and order-2 incidence matrices.
- Returns:
- torch.Tensor
Output tensor for each 0-cell.
- torch.Tensor
Output tensor for each 1-cell.
- torch.Tensor
Output tensor for each 2-cell.
- reset_parameters(gain: float = 1.414)#
Reset learnable parameters.
- Parameters:
- gain : float
Gain for the weight initialization.
- update(x)#
Update embeddings on each cell (step 4).
- Parameters:
- x : torch.Tensor
Input tensor.
- Returns:
- torch.Tensor
Updated tensor.
- class topobench.nn.backbones.TopoTune(GNN, neighborhoods, layers, use_edge_attr, activation)#
Bases: Module
Tunes a GNN model using higher-order relations.
This class takes a GNN class and its kwargs as inputs, and tunes it with the specified additional relations.
- Parameters:
- GNN : torch.nn.Module
The GNN class to use (a class, not an instance), e.g. GAT or GCN.
- neighborhoods : list of lists
The neighborhoods of interest.
- layers : int
The number of layers to use. Each layer contains one GNN.
- use_edge_attr : bool
Whether to use edge attributes.
- activation : str
The activation function to use, e.g. 'relu', 'tanh', 'sigmoid'.
- aggregate_inter_nbhd(x_out_per_route)#
Aggregate the outputs of the GNN for each rank.
While the GNN handles intra-neighborhood aggregation, this method performs inter-neighborhood aggregation (default: sum).
- Parameters:
- x_out_per_route : dict
The outputs of the GNN for each route.
- Returns:
- dict
The aggregated outputs of the GNN for each rank.
- forward(batch)#
Forward pass of the model.
- Parameters:
- batch : Complex or ComplexBatch(Complex)
The input data.
- Returns:
- dict
The output hidden states of the model per rank.
- generate_membership_vectors(batch: Data)#
Generate membership vectors based on batch.cell_statistics.
- Parameters:
- batch : torch_geometric.data.Data
Batch object containing the batched domain data.
- Returns:
- dict
The batch membership of the graphs per rank.
- get_nbhd_cache(params)#
Cache the neighborhood information into a dict for the complex at hand.
- Parameters:
- params : dict
The parameters of the batch, containing the complex.
- Returns:
- dict
The neighborhood cache.
- interrank_expand(params, src_rank, dst_rank, nbhd_cache, membership)#
Expand the complex into an interrank Hasse graph.
- Parameters:
- params : dict
The parameters of the batch, containing the complex.
- src_rank : int
The source rank.
- dst_rank : int
The destination rank.
- nbhd_cache : dict
The neighborhood cache containing the expanded boundary index and edge attributes.
- membership : dict
The batch membership of the graphs per rank.
- Returns:
- torch_geometric.data.Data
The expanded batch of interrank Hasse graphs for this route.
- interrank_gnn_forward(batch_route, layer_idx, route_index, n_dst_cells)#
Forward pass of the GNN (one layer) for an interrank Hasse graph.
- Parameters:
- batch_route : torch_geometric.data.Data
The batch of interrank Hasse graphs for this route.
- layer_idx : int
The index of the layer.
- route_index : int
The index of the route.
- n_dst_cells : int
The number of destination cells in the whole batch.
- Returns:
- torch.Tensor
The output of the GNN (updated features).
- intrarank_expand(params, src_rank, nbhd)#
Expand the complex into an intrarank Hasse graph.
- Parameters:
- params : dict
The parameters of the batch, containing the complex.
- src_rank : int
The source rank.
- nbhd : str
The neighborhood to use.
- Returns:
- torch_geometric.data.Data
The expanded batch of intrarank Hasse graphs for this route.
- intrarank_gnn_forward(batch_route, layer_idx, route_index)#
Forward pass of the GNN (one layer) for an intrarank Hasse graph.
- Parameters:
- batch_route : torch_geometric.data.Data
The batch of intrarank Hasse graphs for this route.
- layer_idx : int
The index of the TopoTune layer.
- route_index : int
The index of the route.
- Returns:
- torch.Tensor
The output of the GNN (updated features).
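An instantiation-only sketch. The GNN class and the neighborhood route strings are illustrative assumptions; valid routes come from the TopoBench configuration files, and forward expects a batched Complex rather than a plain Data object:

```python
from torch_geometric.nn import GCN

from topobench.nn.backbones import TopoTune

model = TopoTune(
    GNN=GCN,  # pass the class, not an instance
    neighborhoods=[["up_adjacency-0"], ["boundary-1"]],  # hypothetical routes
    layers=2,
    use_edge_attr=False,
    activation="relu",
)
# out = model(batch)  # batch: Complex or ComplexBatch (see forward above)
```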
- class topobench.nn.backbones.TopoTune_OneHasse(GNN, neighborhoods, layers, use_edge_attr, activation)#
Bases: Module
Tunes a GNN model using higher-order relations.
This class takes a GNN class and its kwargs as inputs, and tunes it with the specified additional relations. Unlike TopoTune, this class expects a single Hasse graph as input, where all higher-order neighborhoods are represented as a single adjacency matrix.
- Parameters:
- GNN : torch.nn.Module
The GNN class to use (a class, not an instance), e.g. GAT or GCN.
- neighborhoods : list of lists
The neighborhoods of interest.
- layers : int
The number of layers to use. Each layer contains one GNN.
- use_edge_attr : bool
Whether to use edge attributes.
- activation : str
The activation function to use, e.g. 'relu', 'tanh', 'sigmoid'.
- aggregate_inter_nbhd(x_out)#
Aggregate the outputs of the GNN for each rank.
While the GNN handles intra-neighborhood aggregation, this method performs inter-neighborhood aggregation (default: sum).
- Parameters:
- x_out : torch.Tensor
The output of the GNN, concatenated features of each rank.
- Returns:
- dict
The aggregated outputs of the GNN for each rank.
- all_nbhds_expand(params, membership)#
Expand the complex into a single Hasse graph that contains all ranks and all neighborhoods.
- Parameters:
- params : dict
The parameters of the batch, containing the complex.
- membership : dict
The batch membership of the graphs per rank.
- Returns:
- torch_geometric.data.Data
The expanded Hasse graph.
- all_nbhds_gnn_forward(batch_route, layer_idx)#
Forward pass of the GNN (one layer) on the combined Hasse graph.
- Parameters:
- batch_route : torch_geometric.data.Data
The batched Hasse graph containing all ranks and neighborhoods.
- layer_idx : int
The index of the TopoTune layer.
- Returns:
- torch.Tensor
The output of the GNN (updated features).
- forward(batch)#
Forward pass of the model.
- Parameters:
- batch : Complex or ComplexBatch(Complex)
The input data.
- Returns:
- dict
The output hidden states of the model per rank.
- generate_membership_vectors(batch: Data)#
Generate membership vectors based on batch.cell_statistics.
- Parameters:
- batch : torch_geometric.data.Data
Batch object containing the batched domain data.
- Returns:
- dict
The batch membership of the graphs per rank.
- class topobench.nn.backbones.customMLP(in_channels, hidden_channels, out_channels, num_layers, dropout=0.5, Normalization='bn', InputNorm=False)#
Bases: Module
Class implementing a multi-layer perceptron.
Adapted from CUAI/CorrectAndSmooth.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden features.
- out_channels : int
Number of output features.
- num_layers : int
Number of layers.
- dropout : float, optional
Dropout rate. Defaults to 0.5.
- Normalization : str, optional
Normalization method. Defaults to 'bn'.
- InputNorm : bool, optional
Whether to normalize input features. Defaults to False.
- flops(x)#
Calculate FLOPs.
- Parameters:
- x : Tensor
Input features.
- Returns:
- int
FLOPs.
- forward(x)#
Forward pass.
- Parameters:
- x : Tensor
Input features.
- Returns:
- Tensor
Output features.
- reset_parameters()#
Reset parameters.
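A closing sketch for customMLP, using only the documented arguments; shapes are assumptions:

```python
import torch

from topobench.nn.backbones import customMLP

model = customMLP(
    in_channels=16, hidden_channels=32, out_channels=4,
    num_layers=2, dropout=0.5, Normalization='bn', InputNorm=False,
)

out = model(torch.randn(10, 16))  # assumed shape: [10, 4]
model.reset_parameters()
```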