eMLP
- class eMLP(in_rep, out_rep, hidden_units, activation=ReLU(), dropout=0.0, bias=True, hidden_rep=None, init_scheme='xavier_normal')
Bases: `Module`

Equivariant MLP composed of `eLinear` layers. The network preserves the action of the underlying group at every layer by constructing hidden representations from the group's regular representation (or a user-provided base representation), repeated as many times as needed to reach the requested width.
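The regular-representation construction can be illustrated numerically. A minimal sketch in plain NumPy, using the cyclic group \(C_n\) (the helper names `regular_rep` and `repeat_rep` are illustrative, not part of this API):

```python
import numpy as np

def regular_rep(n, k):
    """rho(g_k) for the cyclic group C_n: the cyclic-shift-by-k permutation matrix."""
    return np.roll(np.eye(n), k, axis=0)

def repeat_rep(rho, copies):
    """Block-diagonal repeat of a base representation, widening a hidden layer."""
    return np.kron(np.eye(copies), rho)

# rho is a group homomorphism: rho(g_1) rho(g_2) == rho(g_3) == identity in C_3
assert np.allclose(regular_rep(3, 1) @ regular_rep(3, 2), np.eye(3))

# a hidden layer holding 2 copies of the base representation acts via a 6x6 block matrix
assert repeat_rep(regular_rep(3, 1), 2).shape == (6, 6)
```

Repeating the base representation block-diagonally is what makes `hidden_units` count representation copies rather than raw scalar features.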
The network defines a map

\[\mathbf{f}_{\mathbf{\theta}}: \mathcal{X} \to \mathcal{Y}\]

subject to the functional equivariance constraint

\[\mathbf{f}_{\mathbf{\theta}}(\rho_{\mathcal{X}}(g)\mathbf{x}) = \rho_{\mathcal{Y}}(g)\mathbf{f}_{\mathbf{\theta}}(\mathbf{x}) \quad \forall g\in\mathbb{G}.\]

Create an equivariant MLP.
- Parameters:
  - in_rep (`Representation`) – Input representation \(\rho_{\text{in}}\) defining the group action on the input.
  - out_rep (`Representation`) – Output representation \(\rho_{\text{out}}\); must belong to the same group as `in_rep`.
  - hidden_units (`list[int]`) – Width of each hidden layer (number of representation copies).
  - activation (`Module`) – Non-linearity inserted after every hidden layer.
  - dropout (`float`) – Dropout probability applied after activations; `0.0` disables it.
  - bias (`bool`) – Whether to include a bias term in the equivariant linear layers.
  - hidden_rep (`Representation`, optional) – Base representation used to build hidden layers. Defaults to the regular representation when `None`.
  - init_scheme (`str | None`) – Parameter initialization scheme passed to `eLinear`.
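The equivariance constraint can be verified numerically for a toy one-layer analogue of this network: under the regular representation of \(C_n\), any circulant weight matrix commutes with the group action, and a pointwise non-linearity like ReLU preserves it. A minimal self-contained sketch in plain NumPy (a stand-in for an `eLinear` + activation pair, not a call into this library):

```python
import numpy as np

def shift(n, k):
    """Regular representation of C_n: cyclic-shift-by-k permutation matrix."""
    return np.roll(np.eye(n), k, axis=0)

def circulant(c):
    """Circulant matrix from first row c; commutes with every shift(n, k)."""
    n = len(c)
    return np.stack([np.roll(c, i) for i in range(n)])

rng = np.random.default_rng(0)
n = 4
W = circulant(rng.normal(size=n))          # equivariant linear layer analogue
f = lambda x: np.maximum(W @ x, 0.0)       # linear map + pointwise ReLU

x = rng.normal(size=n)
g = shift(n, 1)

# the weight matrix commutes with the group action ...
assert np.allclose(W @ g, g @ W)
# ... so the whole layer satisfies f(rho(g) x) == rho(g) f(x)
assert np.allclose(f(g @ x), g @ f(x))
```

The same commutation requirement, imposed layer by layer, is what the `eLinear` layers enforce by construction, which is why the composed MLP satisfies the constraint as a whole.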