eMLP

class eMLP(in_rep, out_rep, hidden_units, activation=ReLU(), dropout=0.0, bias=True, hidden_rep=None, init_scheme='xavier_normal')

Bases: Module

Equivariant MLP composed of eLinear layers.

The network preserves the group action at every layer: hidden representations are built from copies of the group's regular representation (or a user-provided base representation), repeated as needed to reach the requested width.

The network defines:

\[\mathbf{f}_{\mathbf{\theta}}: \mathcal{X} \to \mathcal{Y}.\]

Functional equivariance constraint:

\[\mathbf{f}_{\mathbf{\theta}}(\rho_{\mathcal{X}}(g)\mathbf{x}) = \rho_{\mathcal{Y}}(g)\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}) \quad \forall g\in\mathbb{G}.\]
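The constraint can be checked numerically on a toy example: under the cyclic group \(C_4\) acting on \(\mathbb{R}^4\) by circular shifts (its regular representation), any circulant weight matrix defines an equivariant linear map. The group, sizes, and coefficients below are illustrative only, not part of this API:

```python
import numpy as np

# Toy check of the equivariance constraint (illustrative, not the library API):
# C_4 acts on R^4 by circular shifts -- its regular representation.
n = 4

def rho(g):
    """Permutation matrix implementing a circular shift by g positions."""
    return np.roll(np.eye(n), g, axis=0)

# Any linear combination of the rho(g) is circulant and therefore commutes
# with every shift, so x -> W @ x is an equivariant linear map.
W = sum(c * rho(g) for g, c in enumerate([0.5, -1.0, 2.0, 0.3]))

x = np.random.default_rng(0).normal(size=n)
for g in range(n):
    assert np.allclose(W @ (rho(g) @ x), rho(g) @ (W @ x))
```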

Create an equivariant MLP.

Parameters:
  • in_rep (Representation) – Input representation \(\rho_{\text{in}}\) defining the group action on the input.

  • out_rep (Representation) – Output representation \(\rho_{\text{out}}\); must belong to the same group as in_rep.

  • hidden_units (list[int]) – Width of each hidden layer (number of representation copies).

  • activation (Module) – Non-linearity inserted after every hidden layer.

  • dropout (float) – Dropout probability applied after activations; 0.0 disables it.

  • bias (bool) – Whether to include a bias term in equivariant linear layers.

  • hidden_rep (Representation, optional) – Base representation used to build hidden layers. Defaults to the regular representation when None.

  • init_scheme (str | None) – Parameter initialization scheme passed to eLinear.
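As a hypothetical illustration of the width convention (assuming a finite group and the default regular-representation hidden layers), the actual feature dimension of each hidden layer is the number of representation copies times the dimension of the base representation; the group order and layer sizes here are made up:

```python
# Hypothetical sizes: for a finite group of order |G|, the regular
# representation has dimension |G|, so a hidden layer requesting k copies
# has feature dimension k * |G|.
group_order = 6          # e.g. the dihedral group D_3 (assumption)
hidden_units = [4, 2]    # copies of the base representation per layer

hidden_dims = [k * group_order for k in hidden_units]
print(hidden_dims)  # [24, 12]
```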

forward(x)

Apply the equivariant MLP to x, preserving the group action.

Parameters:

x (Tensor) – Tensor with trailing dimension matching in_rep.size.

Return type:

Tensor

Returns:

Tensor with trailing dimension out_rep.size.
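The guarantee forward provides can be sketched end to end with a toy stand-in: two circulant layers with a pointwise ReLU between them commute with circular shifts, mirroring the equivariance of the real network. All names and sizes here are illustrative, not the library's API:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5  # C_5 acting on R^5 by circular shifts (regular representation)

def shift(v, g):
    """Apply the shift-by-g group element along the last axis."""
    return np.roll(v, g, axis=-1)

def circulant(c):
    """Circulant matrix with first row c; commutes with every shift."""
    return np.stack([np.roll(c, i) for i in range(len(c))])

# Pointwise ReLU commutes with any permutation action, so the whole
# two-layer map is equivariant end to end.
W1, W2 = circulant(rng.normal(size=n)), circulant(rng.normal(size=n))
f = lambda x: W2 @ np.maximum(W1 @ x, 0.0)

x = rng.normal(size=n)
for g in range(n):
    assert np.allclose(f(shift(x, g)), shift(f(x), g))
```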

reset_parameters(scheme='xavier_normal')

Reinitialize all eLinear layers with the provided scheme.

Parameters:

scheme (str) – Name of the initialization scheme passed to eLinear.

Return type:

None
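Assuming 'xavier_normal' refers to the standard Xavier/Glorot normal scheme, reinitialization draws each weight from a zero-mean normal with variance \(2/(\text{fan\_in} + \text{fan\_out})\). A minimal sketch with made-up fan sizes:

```python
import numpy as np

# Assumption: 'xavier_normal' means the standard Glorot normal scheme,
# i.e. weights ~ N(0, 2 / (fan_in + fan_out)). Fan sizes are made up.
fan_in, fan_out = 12, 24
std = np.sqrt(2.0 / (fan_in + fan_out))
W = np.random.default_rng(0).normal(0.0, std, size=(fan_out, fan_in))
```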