topobench.nn.backbones.cell.cccn module#
Convolutional Cell Complex Network (CCCN) model.
- class CCCN(in_channels, n_layers=2, dropout=0.0, last_act=False)#
Bases: Module
CCCN model.
- Parameters:
- in_channels : int
Number of input channels.
- n_layers : int, optional
Number of layers (default: 2).
- dropout : float, optional
Dropout rate (default: 0.0).
- last_act : bool, optional
If True, the last activation function is applied (default: False).
- __init__(in_channels, n_layers=2, dropout=0.0, last_act=False)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, Ld, Lu)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input tensor.
- Ld : torch.Tensor
Lower (down) Laplacian of the cell complex.
- Lu : torch.Tensor
Upper (up) Laplacian of the cell complex.
- Returns:
- torch.Tensor
Output tensor.
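A minimal usage sketch, assuming x carries the features of the cells and Ld, Lu are the corresponding sparse down/up Laplacians. The sizes and the identity stand-in Laplacians below are illustrative only; in practice Ld and Lu come from the incidence structure of the cell complex:

```python
import torch
from topobench.nn.backbones.cell.cccn import CCCN

n_cells, channels = 12, 8  # illustrative sizes
x = torch.randn(n_cells, channels)

# Identity matrices used only so the snippet is self-contained;
# real Laplacians are built from the cell complex incidences.
Ld = torch.eye(n_cells).to_sparse()
Lu = torch.eye(n_cells).to_sparse()

model = CCCN(in_channels=channels, n_layers=2, dropout=0.0)
out = model(x, Ld, Lu)  # output tensor over the same cells
```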
- class CW(F_in, F_out)#
Bases: Module
Layer of the CCCN model.
- Parameters:
- F_in : int
Number of input channels.
- F_out : int
Number of output channels.
- __init__(F_in, F_out)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(xe, Lu, Ld)#
Forward pass.
- Parameters:
- xe : torch.Tensor
Input tensor.
- Lu : torch.Tensor
Upper (up) Laplacian of the cell complex.
- Ld : torch.Tensor
Lower (down) Laplacian of the cell complex.
- Returns:
- torch.Tensor
Output tensor.
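For intuition, here is a hypothetical sketch of the up/down convolution pattern such a layer could follow: one GCNConv branch per neighborhood matrix, with the two branch outputs summed. This is an illustration of the pattern, not the package's exact implementation:

```python
import torch.nn as nn
from torch_geometric.nn import GCNConv

class CWSketch(nn.Module):
    """Illustrative CW-style layer: one GCN branch per Laplacian."""

    def __init__(self, F_in, F_out):
        super().__init__()
        # Hypothetical structure: separate convolutions over the
        # upper and lower neighborhoods of the cells.
        self.conv_up = GCNConv(F_in, F_out)
        self.conv_down = GCNConv(F_in, F_out)

    def forward(self, xe, Lu, Ld):
        # Lu and Ld are sparse neighborhood matrices over the cells;
        # GCNConv accepts a sparse matrix in place of edge_index.
        return self.conv_up(xe, Lu) + self.conv_down(xe, Ld)
```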
- class GCNConv(in_channels, out_channels, improved=False, cached=False, add_self_loops=None, normalize=True, bias=True, **kwargs)#
Bases: MessagePassing
The graph convolutional operator from the “Semi-supervised Classification with Graph Convolutional Networks” paper.
\[\mathbf{X}^{\prime} = \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \mathbf{X} \mathbf{\Theta},\]
where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with inserted self-loops and \(\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}\) its diagonal degree matrix. The adjacency matrix can include values other than 1, representing edge weights via the optional edge_weight tensor.
Its node-wise formulation is given by:
\[\mathbf{x}^{\prime}_i = \mathbf{\Theta}^{\top} \sum_{j \in \mathcal{N}(i) \cup \{ i \}} \frac{e_{j,i}}{\sqrt{\hat{d}_j \hat{d}_i}} \mathbf{x}_j\]
with \(\hat{d}_i = 1 + \sum_{j \in \mathcal{N}(i)} e_{j,i}\), where \(e_{j,i}\) denotes the edge weight from source node j to target node i (default: 1.0).
- Parameters:
in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
out_channels (int) – Size of each output sample.
improved (bool, optional) – If set to True, the layer computes \(\mathbf{\hat{A}}\) as \(\mathbf{A} + 2\mathbf{I}\). (default: False)
cached (bool, optional) – If set to True, the layer will cache the computation of \(\mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2}\) on first execution, and will use the cached version for further executions. This parameter should only be set to True in transductive learning scenarios. (default: False)
add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. By default, self-loops will be added in case normalize is set to True, and not added otherwise. (default: None)
normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on-the-fly. (default: True)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\) or sparse matrix \((|\mathcal{V}|, |\mathcal{V}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\)
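A minimal usage sketch of GCNConv with toy shapes (four nodes, two undirected edges; all values illustrative):

```python
import torch
from torch_geometric.nn import GCNConv

conv = GCNConv(in_channels=16, out_channels=32)

x = torch.randn(4, 16)  # 4 nodes, 16 features each
# Two undirected edges, 0–1 and 2–3, stored as directed pairs.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 0, 3, 2]])

out = conv(x, edge_index)  # node features of shape (4, 32)
```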
- __init__(in_channels, out_channels, improved=False, cached=False, add_self_loops=None, normalize=True, bias=True, **kwargs)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x, edge_index, edge_weight=None)#
Runs the forward pass of the module.
- message(x_j, edge_weight)#
Constructs messages from node \(j\) to node \(i\) in analogy to \(\phi_{\mathbf{\Theta}}\) for each edge in edge_index. This function can take any argument as input which was initially passed to propagate(). Furthermore, tensors passed to propagate() can be mapped to the respective nodes \(i\) and \(j\) by appending _i or _j to the variable name, e.g. x_i and x_j (see the sketch at the end of this entry).
- message_and_aggregate(adj_t, x)#
Fuses computations of message() and aggregate() into a single function. If applicable, this saves both time and memory since messages do not explicitly need to be materialized. This function will only get called if it is implemented and propagation takes place based on a torch_sparse.SparseTensor or a torch.sparse.Tensor.
- reset_parameters()#
Resets all learnable parameters of the module.
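To make the _i/_j suffix mapping in message() concrete, here is a minimal, hypothetical MessagePassing subclass; the class name and the mean aggregation are illustrative only and not part of this module:

```python
import torch
from torch_geometric.nn import MessagePassing

class MeanNeighbors(MessagePassing):
    """Toy operator: each node receives the mean of its neighbors."""

    def __init__(self):
        super().__init__(aggr='mean')

    def forward(self, x, edge_index):
        # The tensor passed as `x` becomes available inside message()
        # as x_i (target-node features) and x_j (source-node features).
        return self.propagate(edge_index, x=x)

    def message(self, x_j):
        # x_j has shape (num_edges, num_features): one row per edge,
        # holding the features of that edge's source node j.
        return x_j
```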