conv

class symm_learning.nn.conv.GSpace1D(fibergroup: Group, name: str = 'GSpace1D')[source]

Hacky solution to use GeometricTensor with time as a homogeneous space.

Note that in ESCNN the group is assumed to act both on points of the base space and on the “fibers” (e.g. the channels). Here the fibergroup is assumed to be an arbitrary finite symmetry group, and we do not consider its action on the points of the gspace, since for a 1D space the only well-defined left orthogonal actions are the trivial and reflection actions.

Hence, in general, modules using this GSpace instance should be thought of as having two symmetry groups:

  1. The group acting on the fibers (e.g. channels) of the input.

  2. The group acting on the time dimension, which here is either the trivial action or the reflection action (not implemented yet).

Warning

This is a hacky solution and should be used with care. Do not rely on standard escnn functionality.

property basespace_action: Representation

Defines how the fiber group transforms the base space.

More precisely, this method defines how an element \(g \in G\) of the fiber group transforms a point \(x \in X \cong \mathbb{R}^d\) of the base space. This action is defined as a \(d\)-dimensional linear Representation of \(G\).
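A minimal sketch of inspecting this property. Since GSpace1D does not act on points of the base space (see the class note above), the returned representation is assumed to be the one-dimensional trivial one (the reflection action is not implemented yet):

>>> from escnn.group import DihedralGroup
>>> from symm_learning.nn import GSpace1D
>>> G = DihedralGroup(10)
>>> rho = GSpace1D(G).basespace_action
>>> assert rho.size == 1  # a 1-dimensional action on the time axis
>>> # Assumed trivial in the current implementation:
>>> assert all((rho(g) == 1).all() for g in G.elements)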

restrict(id)[source]

Build the GSpace associated with the subgroup of the current fiber group identified by the input id. This reduces the level of symmetries of the base space to be considered.

See the documentation of the restrict method in the non-abstract subclass being used for a description of the parameter id.

Parameters:

id – id of the subgroup

Returns:

a tuple containing

  • gspace: the restricted gspace

  • back_map: a function mapping an element of the subgroup to itself in the fiber group of the original space

  • subgroup_map: a function mapping an element of the fiber group of the original space to itself in the subgroup (returns None if the element is not in the subgroup)
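A minimal sketch of restricting the fiber group. The subgroup id follows escnn’s conventions for the parent group; here we assume the id (None, 5) selects the cyclic subgroup \(C_5\) of DihedralGroup(10), as in escnn’s DihedralGroup docs:

>>> from escnn.group import DihedralGroup
>>> from symm_learning.nn import GSpace1D
>>> G = DihedralGroup(10)
>>> gspace = GSpace1D(G)
>>> sub_gspace, back_map, subgroup_map = gspace.restrict((None, 5))
>>> sub_gspace.fibergroup  # the restricted fiber group (assumed C5)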

class symm_learning.nn.conv.eConv1D(in_type: FieldType, out_type: FieldType, kernel_size: int = 3, stride=1, padding=0, dilation=1, bias=True, padding_mode='zeros', basisexpansion: Literal['blocks'] = 'blocks', recompute: bool = False, initialize: bool = True, device=None, dtype=None)[source]

One-dimensional \(\mathbb{G}\)-equivariant convolution.

This layer applies a standard 1D convolution (see torch.nn.Conv1d) to geometric tensors, constraining the convolution kernel \(K\) of shape (out_type.size, in_type.size, kernel_size) to be constructed from intertwiners between the input and output representations, such that \(K[:, :, i] \in \mathrm{Hom}_{\mathbb{G}}(\mathcal{V}_{\text{in}}, \mathcal{V}_{\text{out}})\) for every index \(i\).

For the usual convolution hyper-parameters (stride, padding, dilation, etc.) this class follows exactly the semantics of torch.nn.Conv1d; please refer to the PyTorch docs for details.
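As a hedged sketch of what this constraint means in practice, one can expand the kernel (via expand_kernel(), documented below) and numerically verify that every kernel slice commutes with the input and output representations:

>>> import numpy as np
>>> from escnn.group import DihedralGroup
>>> from escnn.nn import FieldType
>>> from symm_learning.nn import GSpace1D, eConv1D
>>> G = DihedralGroup(10)
>>> gspace = GSpace1D(G)
>>> in_type = FieldType(gspace, [G.regular_representation])
>>> out_type = FieldType(gspace, [G.regular_representation] * 2)
>>> conv = eConv1D(in_type, out_type, kernel_size=3)
>>> K = conv.expand_kernel().detach().numpy()  # (out_type.size, in_type.size, 3)
>>> for g in G.elements:
...     rho_in, rho_out = in_type.representation(g), out_type.representation(g)
...     for i in range(K.shape[-1]):
...         assert np.allclose(rho_out @ K[:, :, i], K[:, :, i] @ rho_in, atol=1e-4)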

Parameters:
  • in_type (escnn.nn.FieldType) – Field type of the input tensor. Must have GSpace1D as its gspace. Input tensors should be of shape (batch_dim, in_type.size, H), where H is the 1D/time dimension.

  • out_type (escnn.nn.FieldType) – Field type of the output tensor. Must have the same gspace as in_type. Output tensors will be of shape (batch_dim, out_type.size, H_out).

  • kernel_size (int, default=3) – Temporal receptive field \(h\).

  • stride (int, default=1)

  • padding (int, default=0)

  • dilation (int, default=1)

  • bias (bool, default=True)

  • padding_mode (str, default="zeros") – Passed through to torch.nn.functional.conv1d().

  • basisexpansion (Literal["blocks"], default="blocks") – Basis-construction strategy. Currently only "blocks" (ESCNN’s block-matrix algorithm) is implemented.

  • recompute (bool, default=False) – Whether to rebuild the kernel basis at every forward pass (useful for debugging; slow).

  • initialize (bool, default=True) – If True, the free parameters are initialised with the generalised He scheme implemented in escnn.nn.init.

  • device (torch.device, optional)

  • dtype (torch.dtype, optional)

Example:

>>> import torch
>>> from escnn.group import DihedralGroup
>>> from escnn.nn import FieldType
>>> from symm_learning.nn import eConv1D, GSpace1D
>>> G = DihedralGroup(10)
>>> # Custom (hacky) 1D G-space needed to use `GeometricTensor`
>>> gspace = GSpace1D(G)  # Note G does not act on points in the 1D space.
>>> in_type = FieldType(gspace, [G.regular_representation])
>>> out_type = FieldType(gspace, [G.regular_representation] * 2)
>>> H, kernel_size, batch_size = 10, 3, 5
>>> # Inputs to Conv1D/eConv1D have shape (B, in_type.size, H), where B is the batch size and H is the time dimension.
>>> x = in_type(torch.randn(batch_size, in_type.size, H))
>>> # Instance of eConv1D
>>> conv_layer = eConv1D(in_type, out_type, kernel_size=kernel_size, stride=1, padding=0, bias=True)
>>> # Forward pass
>>> y = conv_layer(x)  # (B, out_type.size, H_out)
>>> # After training you can export this `EquivariantModule` to a `torch.nn.Module` by:
>>> conv1D = conv_layer.export()

Shape

  • Input: (B, in_type.size, H)

  • Output: (B, out_type.size, H_out), where H_out is computed as in torch.nn.Conv1d.
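For reference, \(H_{\text{out}}\) follows the standard torch.nn.Conv1d formula:

\(H_{\text{out}} = \left\lfloor \frac{H + 2 \cdot \text{padding} - \text{dilation} \cdot (\text{kernel\_size} - 1) - 1}{\text{stride}} \right\rfloor + 1\)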

check_equivariance(atol=1e-05, rtol=1e-05)[source]

Check the equivariance of the convolution layer.
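For instance, reusing conv_layer from the example above:

>>> conv_layer.check_equivariance(atol=1e-4, rtol=1e-4)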

dim_after_conv(input_dim: int) int[source]

Calculate the output dimension after the convolution.

evaluate_output_shape(input_shape) tuple[int, ...][source]

Calculate the output shape of the convolution layer.

expand_bias() Tensor[source]

Expand the free bias parameters to the full bias vector.

expand_kernel() Tensor[source]

Expand the free parameters into the full convolution kernel of shape (out_channels, in_channels, kernel_size).

export() Conv1d[source]

Export this layer to an equivalent torch.nn.Conv1d.

extra_repr()[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(input: GeometricTensor) GeometricTensor[source]

Forward pass of the 1D convolution layer.

class symm_learning.nn.conv.eConvTranspose1D(in_type: FieldType, out_type: FieldType, output_padding: int = 0, **conv1d_kwargs)[source]

One-dimensional \(\mathbb{G}\)-equivariant transposed convolution.
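A hedged usage sketch mirroring the eConv1D example above, assuming eConvTranspose1D is importable from symm_learning.nn like eConv1D and that kernel_size and stride are forwarded via **conv1d_kwargs:

>>> import torch
>>> from escnn.group import DihedralGroup
>>> from escnn.nn import FieldType
>>> from symm_learning.nn import GSpace1D, eConvTranspose1D
>>> G = DihedralGroup(10)
>>> gspace = GSpace1D(G)
>>> in_type = FieldType(gspace, [G.regular_representation] * 2)
>>> out_type = FieldType(gspace, [G.regular_representation])
>>> x = in_type(torch.randn(5, in_type.size, 10))
>>> tconv = eConvTranspose1D(in_type, out_type, kernel_size=3, stride=2)
>>> y = tconv(x)  # time dimension is upsampled as in torch.nn.ConvTranspose1d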

dim_after_conv(input_dim: tuple[int, ...]) tuple[int, ...][source]

Calculate the output dimension after the transposed convolution.

export() ConvTranspose1d[source]

Export this layer to an equivalent torch.nn.ConvTranspose1d.

extra_repr()[source]

Return the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

forward(input: GeometricTensor) GeometricTensor[source]

Forward pass of the transposed 1D convolution layer.