normalization

class symm_learning.nn.normalization.eAffine(in_type: FieldType, bias: bool = True)[source]

Applies a symmetry-preserving affine transformation to the input escnn.nn.GeometricTensor.

The affine transformation for a given input \(x \in \mathcal{X} \subseteq \mathbb{R}^{D_x}\) is defined as:

\[\mathbf{y} = \mathbf{x} \cdot \alpha + \beta\]

such that

\[\rho_{\mathcal{X}}(g) \mathbf{y} = (\rho_{\mathcal{X}}(g) \mathbf{x}) \cdot \alpha + \beta \quad \forall g \in G\]

Where \(\mathcal{X}\) is a symmetric vector space with group representation \(\rho_{\mathcal{X}}: G \to \mathbb{GL}(D_x)\), and \(\alpha \in \mathbb{R}^{D_x}\), \(\beta \in \mathbb{R}^{D_x}\) are symmetry constrained learnable vectors.
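The constraint above can be checked numerically. The following is a minimal sketch (plain PyTorch, not the `symm_learning` API) for the regular representation of the cyclic group \(C_3\) acting by index permutation: equivariance of the elementwise affine map forces \(\alpha\) to be constant on each orbit of the permutation and \(\beta\) to lie in the invariant subspace.

```python
import torch

def rho(x):
    """A generator of C_3 acting on R^3: cyclic shift by one position."""
    return torch.roll(x, shifts=1, dims=-1)

alpha = 0.7 * torch.ones(3)   # constant on the orbit -> equivariant scale
beta = -0.2 * torch.ones(3)   # invariant vector -> equivariant bias

x = torch.randn(5, 3)         # batch of 5 vectors in R^3
lhs = rho(x * alpha + beta)   # transform the output
rhs = rho(x) * alpha + beta   # transform the input first
assert torch.allclose(lhs, rhs)

# A non-constant alpha breaks equivariance:
bad_alpha = torch.tensor([1.0, 2.0, 3.0])
assert not torch.allclose(rho(x * bad_alpha), rho(x) * bad_alpha)
```

eAffine solves the analogous constraint for an arbitrary `in_type` representation, which is why the number of free parameters depends on the irrep decomposition of the input type rather than on \(D_x\) alone.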

Parameters:
  • in_type – the escnn.nn.FieldType of the input geometric tensor. The output type is the same as the input type.

  • bias – if True, this module has a learnable bias vector in the invariant subspace of the input type. Default: True

Shape:
  • Input: \((N, D_x)\) or \((N, D_x, L)\), where \(N\) is the batch size, \(D_x\) is the dimension of the input type, and \(L\) is the sequence length.

  • Output: \((N, D_x)\) or \((N, D_x, L)\) (same shape as input)

evaluate_output_shape(input_shape)[source]

Compute the shape of the output tensor that would be generated by this module when a tensor of shape input_shape is provided as input.

Parameters:

input_shape (tuple) – shape of the input tensor

Returns:

shape of the output tensor

extra_repr() str[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x: GeometricTensor)[source]

Applies the affine transformation to the input geometric tensor.

class symm_learning.nn.normalization.eBatchNorm1d(in_type: FieldType, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True)[source]

Applies Batch Normalization over a 2D or 3D symmetric input escnn.nn.GeometricTensor.

Method described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.

\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]

The mean and standard deviation are calculated using symmetry-aware estimates (see var_mean()) over the mini-batches, and \(\gamma\) and \(\beta\) are the scale and bias vectors of an eAffine, which ensures that the affine transformation is symmetry-preserving. By default, the elements of \(\gamma\) are initialized to 1 and the elements of \(\beta\) are set to 0.
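The normalization formula itself is the standard one; only the statistics and the affine parameters are symmetry-constrained. As a sanity check, the formula can be reproduced in plain PyTorch against `torch.nn.BatchNorm1d` in training mode (which normalizes with the biased batch variance):

```python
import torch

torch.manual_seed(0)
x = torch.randn(8, 4)                      # (N, C)
bn = torch.nn.BatchNorm1d(4, eps=1e-5)
bn.train()                                 # use batch statistics
y_ref = bn(x)

mean = x.mean(dim=0)
var = x.var(dim=0, unbiased=False)         # biased estimate, as in the formula
gamma, beta = bn.weight, bn.bias           # initialized to 1 and 0
y = (x - mean) / torch.sqrt(var + bn.eps) * gamma + beta
assert torch.allclose(y, y_ref, atol=1e-6)
```

eBatchNorm1d differs from this sketch only in how mean and var are estimated (symmetry-aware, via var_mean()) and in constraining \(\gamma, \beta\) as in eAffine.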

Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.

If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.

Note

If the input tensor is of shape \((N, C, L)\), the implementation of this module computes a single mean and variance for each feature or channel \(C\) and applies it to all the elements in the sequence length \(L\).
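In plain PyTorch terms, that per-channel reduction runs over the batch and sequence dimensions together and is then broadcast over \(L\) (the symmetry-aware estimator in var_mean() refines the statistics, but the reduction axes are the same):

```python
import torch

x = torch.randn(8, 4, 16)                   # (N, C, L)
mean = x.mean(dim=(0, 2), keepdim=True)     # one mean per channel, shape (1, C, 1)
var = x.var(dim=(0, 2), unbiased=False, keepdim=True)
y = (x - mean) / torch.sqrt(var + 1e-5)     # broadcast over N and L
assert y.shape == x.shape
```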

Parameters:
  • in_type – the escnn.nn.FieldType of the input geometric tensor. The output type is the same as the input type.

  • eps – a value added to the denominator for numerical stability. Default: 1e-5

  • momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1

  • affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True

  • track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes the statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics, in both training and eval modes. Default: True
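The momentum parameter follows the standard PyTorch convention, running ← (1 − momentum) · running + momentum · batch_statistic, which can be verified against `torch.nn.BatchNorm1d` (the running variance is updated with the unbiased batch variance, starting from running_mean = 0 and running_var = 1):

```python
import torch

torch.manual_seed(0)
bn = torch.nn.BatchNorm1d(4, momentum=0.1)
bn.train()
x = torch.randn(8, 4)
bn(x)                                        # one training step updates the buffers

expected_mean = 0.9 * torch.zeros(4) + 0.1 * x.mean(dim=0)
expected_var = 0.9 * torch.ones(4) + 0.1 * x.var(dim=0, unbiased=True)
assert torch.allclose(bn.running_mean, expected_mean, atol=1e-6)
assert torch.allclose(bn.running_var, expected_var, atol=1e-6)
```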

Shape:
  • Input: \((N, C)\) or \((N, C, L)\), where \(N\) is the batch size, \(C\) is the number of features or channels, and \(L\) is the sequence length

  • Output: \((N, C)\) or \((N, C, L)\) (same shape as input)

check_equivariance(atol=1e-05, rtol=1e-05)[source]

Check the equivariance of the layer.

evaluate_output_shape(input_shape)[source]

Compute the shape of the output tensor that would be generated by this module when a tensor of shape input_shape is provided as input.

Parameters:

input_shape (tuple) – shape of the input tensor

Returns:

shape of the output tensor

export() BatchNorm1d[source]

Export the layer to a standard PyTorch BatchNorm1d layer.

extra_repr() str[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(x: GeometricTensor)[source]

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.