MLP
- class MLP(in_dim, out_dim, hidden_units, activation=ReLU(), batch_norm=False, bias=True)
Bases: Module

Standard baseline MLP with no symmetry constraints.
The network defines:
\[\mathbf{f}_{\mathbf{\theta}}: \mathbb{R}^{d_{\mathrm{in}}} \to \mathbb{R}^{d_{\mathrm{out}}}.\]
No equivariance or invariance constraints are imposed.
Constructor of a Multi-Layer Perceptron (MLP) model.
- Parameters:
  - in_dim (int) – Dimension of the input space.
  - out_dim (int) – Dimension of the output space.
  - hidden_units (list[int]) – Number of units in each hidden layer.
  - activation (Module | list[Module]) – Activation module, or a list of activation modules.
  - batch_norm (bool) – Whether to include batch normalization.
  - bias (bool) – Whether to include a bias term in the linear layers.
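A minimal sketch of how such a class might be implemented and used, assuming it wraps `torch.nn` layers (this is a hypothetical re-implementation for illustration; the actual class body may differ):

```python
import torch
from torch import nn


class MLP(nn.Module):
    """Hypothetical sketch of the unconstrained baseline MLP."""

    def __init__(self, in_dim, out_dim, hidden_units,
                 activation=None, batch_norm=False, bias=True):
        super().__init__()
        # Default activation, mirroring the documented signature.
        activation = activation if activation is not None else nn.ReLU()
        layers = []
        dims = [in_dim, *hidden_units]
        for i, (d_in, d_out) in enumerate(zip(dims[:-1], dims[1:])):
            layers.append(nn.Linear(d_in, d_out, bias=bias))
            if batch_norm:
                layers.append(nn.BatchNorm1d(d_out))
            # Assumption: a list of activations supplies one per hidden layer.
            act = activation[i] if isinstance(activation, list) else activation
            layers.append(act)
        # Final linear map to the output space, with no activation.
        layers.append(nn.Linear(dims[-1], out_dim, bias=bias))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


# Usage: map R^4 -> R^2 through two hidden layers of 16 units.
model = MLP(in_dim=4, out_dim=2, hidden_units=[16, 16])
x = torch.randn(8, 4)
y = model(x)  # batch of 8 outputs, shape (8, 2)
```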