topobench.nn.backbones.graph.graph_mlp module#
Graph MLP backbone from yanghu819/Graph-MLP.
- class Dropout(p=0.5, inplace=False)#
Bases: _DropoutNd
During training, randomly zeroes some of the elements of the input tensor with probability p. The zeroed elements are chosen independently for each forward call and are sampled from a Bernoulli distribution.
Each channel will be zeroed out independently on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.
Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.
- Parameters:
p (float) – probability of an element to be zeroed. Default: 0.5
inplace (bool) – If set to True, will do this operation in-place. Default: False
- Shape:
Input: \((*)\). Input can be of any shape
Output: \((*)\). Output is of the same shape as input
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
- forward(input)#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class GraphMLP(in_channels, hidden_channels, order=1, dropout=0.0, **kwargs)#
Bases: Module
Graph MLP backbone.
- Parameters:
- in_channels : int
Number of input features.
- hidden_channels : int
Number of hidden units.
- order : int, optional
Order of the adjacency matrix power to compute (default: 1).
- dropout : float, optional
Dropout rate (default: 0.0).
- **kwargs
Additional arguments.
- __init__(in_channels, hidden_channels, order=1, dropout=0.0, **kwargs)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input tensor.
- Returns:
- torch.Tensor
Output tensor.
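Example (illustrative sketch based on the signatures above; the tensor sizes and the order value are arbitrary choices, and the output shape is not specified here, so it is printed rather than assumed):
>>> import torch
>>> from topobench.nn.backbones.graph.graph_mlp import GraphMLP
>>> model = GraphMLP(in_channels=16, hidden_channels=64, order=2, dropout=0.5)
>>> x = torch.randn(20, 16)  # 20 nodes with 16 features each
>>> out = model(x)           # forward(x) returns a torch.Tensor, per the docs above
>>> print(out.shape)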
- class LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True, bias=True, device=None, dtype=None)#
Bases: Module
Applies Layer Normalization over a mini-batch of inputs.
This layer implements the operation as described in the paper Layer Normalization
\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated over the last D dimensions, where D is the dimension of normalized_shape. For example, if normalized_shape is (3, 5) (a 2-dimensional shape), the mean and standard-deviation are computed over the last 2 dimensions of the input (i.e. input.mean((-2, -1))). \(\gamma\) and \(\beta\) are learnable affine transform parameters of normalized_shape if elementwise_affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Note
Unlike Batch Normalization and Instance Normalization, which apply scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine.
This layer uses statistics computed from input data in both training and evaluation modes.
- Parameters:
normalized_shape (int or list or torch.Size) –
input shape from an expected input of size
\[[* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]]\]
If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size.
eps (float) – a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine (bool) – a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
bias (bool) – If set to False, the layer will not learn an additive bias (only relevant if elementwise_affine is True). Default: True.
- weight#
the learnable weights of the module of shape \(\text{normalized\_shape}\) when elementwise_affine is set to True. The values are initialized to 1.
- bias#
the learnable bias of the module of shape \(\text{normalized\_shape}\) when elementwise_affine is set to True. The values are initialized to 0.
- Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input)
Examples:
>>> # NLP Example
>>> batch, sentence_length, embedding_dim = 20, 5, 10
>>> embedding = torch.randn(batch, sentence_length, embedding_dim)
>>> layer_norm = nn.LayerNorm(embedding_dim)
>>> # Activate module
>>> layer_norm(embedding)
>>>
>>> # Image Example
>>> N, C, H, W = 20, 5, 10, 10
>>> input = torch.randn(N, C, H, W)
>>> # Normalize over the last three dimensions (i.e. the channel and spatial dimensions)
>>> layer_norm = nn.LayerNorm([C, H, W])
>>> output = layer_norm(input)
- __init__(normalized_shape, eps=1e-05, elementwise_affine=True, bias=True, device=None, dtype=None)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- extra_repr()#
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input)#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- reset_parameters()#
- class Linear(in_features, out_features, bias=True, device=None, dtype=None)#
Bases: Module
Applies a linear transformation to the incoming data: \(y = xA^T + b\).
This module supports TensorFloat32.
On certain ROCm devices, when using float16 inputs this module will use different precision for backward.
- Parameters:
in_features (int) – size of each input sample
out_features (int) – size of each output sample
bias (bool) – If set to False, the layer will not learn an additive bias. Default: True
- Shape:
Input: \((*, H_{in})\) where \(*\) means any number of dimensions including none and \(H_{in} = \text{in\_features}\).
Output: \((*, H_{out})\) where all but the last dimension are the same shape as the input and \(H_{out} = \text{out\_features}\).
- weight#
the learnable weights of the module of shape \((\text{out\_features}, \text{in\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in\_features}}\)
- Type:
torch.Tensor
- bias#
the learnable bias of the module of shape \((\text{out\_features})\). If bias is True, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{in\_features}}\)
Examples:
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
- __init__(in_features, out_features, bias=True, device=None, dtype=None)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- extra_repr()#
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input)#
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- reset_parameters()#
- weight: Tensor#
- class Mlp(input_dim, hid_dim, dropout)#
Bases: Module
MLP module.
- Parameters:
- input_dim : int
Input dimension.
- hid_dim : int
Hidden dimension.
- dropout : float
Dropout rate.
- __init__(input_dim, hid_dim, dropout)#
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x)#
Forward pass.
- Parameters:
- x : torch.Tensor
Input tensor.
- Returns:
- torch.Tensor
Output tensor.
- get_feature_dis(x)#
Get feature distance matrix.
- Parameters:
- x : torch.Tensor
Input tensor.
- Returns:
- torch.Tensor
Feature distance matrix.
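Example (illustrative sketch; the tensor sizes are arbitrary. In the original yanghu819/Graph-MLP code this matrix is the cosine similarity between node embeddings with the self-similarity diagonal zeroed out, so a square (num_nodes, num_nodes) output is assumed here rather than documented):
>>> import torch
>>> from topobench.nn.backbones.graph.graph_mlp import Mlp
>>> m = Mlp(input_dim=16, hid_dim=64, dropout=0.5)
>>> x = torch.randn(20, 16)        # 20 nodes with 16 features each
>>> emb = m(x)                     # forward pass through the MLP
>>> dist = m.get_feature_dis(emb)  # pairwise feature distance matrix
>>> print(dist.shape)              # expected: torch.Size([20, 20])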