iMLP

class iMLP(in_rep, out_dim, hidden_units, activation=ReLU(), dropout=0.0, bias=True, hidden_rep=None, init_scheme='xavier_normal')

Bases: Module

Invariant MLP built from an equivariant backbone and invariant pooling.

The network defines:

\[\mathbf{f}_{\mathbf{\theta}}: \mathcal{X} \to \mathcal{Y}^{\text{inv}}.\]

Functional invariance constraint:

\[\mathbf{f}_{\mathbf{\theta}}(\rho_{\mathcal{X}}(g)\mathbf{x}) = \mathbf{f}_{\mathbf{\theta}}(\mathbf{x}) \quad \forall g\in\mathbb{G}.\]
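The constraint above can be checked numerically on a toy example. The sketch below (not library code) takes \(\mathbb{G}\) to be the cyclic group \(C_4\) acting on \(\mathbb{R}^4\) by cyclic shifts, a permutation representation standing in for \(\rho_{\mathcal{X}}\), and uses coordinate sorting as a hand-built invariant map `f`:

```python
import numpy as np

# Toy check of f(rho_X(g) x) == f(x) for all g in G.
# G = C_4 acts on R^4 by cyclic shifts; sorting is permutation-invariant.
rng = np.random.default_rng(0)
x = rng.normal(size=4)

def f(v):
    # Sorting discards coordinate order, so it is invariant
    # under any permutation of the input coordinates.
    return np.sort(v)

for g in range(4):                   # every element of C_4
    gx = np.roll(x, g)               # rho_X(g) x  (cyclic shift by g)
    assert np.allclose(f(gx), f(x))  # the invariance constraint
```

A trained iMLP must satisfy the same identity by construction, for every parameter setting, not just after fitting.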

Create a group-invariant MLP.

The model first applies an equivariant MLP to extract group-aware features, pools them into the trivial representation, and finishes with an unconstrained linear head to produce invariant outputs.
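The three-stage design can be sketched in plain numpy; this is an illustrative toy under assumed semantics, not the library's implementation. For \(\mathbb{G} = S_n\) acting by permutations, `a*x + b*sum(x)` is an equivariant linear map, an elementwise ReLU preserves equivariance, the coordinate mean pools into the trivial representation, and the head is then unconstrained:

```python
import numpy as np

# Toy invariant MLP: equivariant backbone -> invariant pooling -> free head.
rng = np.random.default_rng(1)
n, out_dim = 5, 3

a, b = 1.3, -0.2                        # backbone weights, shared across coordinates
head_W = rng.normal(size=(out_dim, 1))  # unconstrained linear head
head_b = rng.normal(size=out_dim)

def backbone(x):
    # a*x + b*sum(x) commutes with permutations; elementwise ReLU keeps that.
    return np.maximum(a * x + b * x.sum(), 0.0)

def imlp(x):
    h = backbone(x)                     # permutation-equivariant features
    pooled = np.array([h.mean()])       # pool into the trivial representation
    return head_W @ pooled + head_b     # free head: input is already invariant

x = rng.normal(size=n)
perm = rng.permutation(n)               # some g in S_n
assert np.allclose(imlp(x[perm]), imlp(x))
```

The division of labor matters: only the backbone and pooling need to respect the group, so once the pooled feature is invariant, the final head can be an ordinary linear layer.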

Parameters:
  • in_rep (Representation) – Input representation \(\rho_{\mathcal{X}}\) defining the group action on the input.

  • out_dim (int) – Dimension of the invariant output vector.

  • hidden_units (list[int]) – Width of each hidden layer in the equivariant backbone.

  • activation (Module) – Non-linearity inserted after every hidden layer and after the backbone.

  • dropout (float) – Dropout probability applied after backbone activations.

  • bias (bool) – Whether to include biases in the backbone and head.

  • hidden_rep (Representation | None) – Base representation used to build hidden layers. Defaults to the regular representation when None.

  • init_scheme (str | None) – Parameter initialization scheme passed to eLinear.

forward(x)

Compute invariant outputs from an input transforming under the input representation.

Parameters:

x (Tensor)

Return type:

Tensor

reset_parameters(scheme='xavier_normal')

Reinitialize all eLinear layers with the provided scheme.

Parameters:

scheme (str)

Return type:

None
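For reference, the sketch below shows the standard Glorot-normal rule that the name 'xavier_normal' conventionally denotes; whether eLinear applies it exactly this way (e.g. how it computes fan sizes for constrained weights) is an assumption:

```python
import numpy as np

# Assumed semantics of 'xavier_normal' (Glorot-normal initialization):
# weights drawn from N(0, 2 / (fan_in + fan_out)).
def xavier_normal(fan_out, fan_in, rng):
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

rng = np.random.default_rng(0)
W = xavier_normal(512, 512, rng)
# With 512 * 512 samples, the empirical std is close to the target.
assert abs(W.std() - np.sqrt(2.0 / 1024)) < 1e-3
```

This scale keeps activation variance roughly constant across layers at initialization, which is why it is a common default for MLP stacks.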