models

A collection of equivariant neural network architectures and their unconstrained baselines.

eMLP(in_rep, out_rep, hidden_units[, ...])

Equivariant MLP composed of eLinear layers.
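To illustrate what "equivariant" means here, the following is a minimal sketch of an equivariance check on a toy permutation-equivariant linear map. It is an illustration of the property only, not the eMLP implementation; the function name and coefficients are made up.

```python
import numpy as np

def equivariant_linear(x, a=0.5, b=0.25):
    # Toy permutation-equivariant map: y_i = a * x_i + b * mean(x).
    # Hypothetical illustration only, not the library's layer.
    return a * x + b * x.mean(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
perm = rng.permutation(8)

# Equivariance: acting with the group element (here, a permutation)
# before or after the map gives the same result.
lhs = equivariant_linear(x[perm])   # act, then map
rhs = equivariant_linear(x)[perm]   # map, then act
assert np.allclose(lhs, rhs)
```

An eLinear layer satisfies the same identity for the group actions encoded by `in_rep` and `out_rep`, and composing such layers with equivariant nonlinearities keeps the property end to end.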

iMLP(in_rep, out_dim, hidden_units[, ...])

Invariant MLP built from an equivariant backbone and invariant pooling.
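The equivariant-backbone-plus-invariant-pooling recipe can be sketched as follows, using a toy permutation-equivariant feature map and mean pooling. The backbone here is a hypothetical stand-in, not iMLP's actual backbone.

```python
import numpy as np

def equivariant_backbone(x):
    # Toy permutation-equivariant feature map (stand-in for the backbone).
    return np.tanh(x) + 0.1 * x.mean(axis=-1, keepdims=True)

def invariant_model(x):
    # Mean pooling over the equivariant features: permuting the input
    # permutes the features, and the mean is unchanged by permutation,
    # so the composite map is invariant.
    return equivariant_backbone(x).mean(axis=-1)

rng = np.random.default_rng(1)
x = rng.normal(size=6)
perm = rng.permutation(6)
assert np.allclose(invariant_model(x), invariant_model(x[perm]))
```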

MLP(in_dim, out_dim, hidden_units[, ...])

Standard baseline MLP with no symmetry constraints.
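A minimal numpy sketch of a baseline MLP with the `(in_dim, out_dim, hidden_units)` layout from the signature above; the helper names and initialization scale are assumptions, not the library's implementation.

```python
import numpy as np

def init_mlp(in_dim, out_dim, hidden_units, seed=0):
    # Weight/bias pairs for each layer: in_dim -> hidden_units... -> out_dim.
    rng = np.random.default_rng(seed)
    dims = [in_dim, *hidden_units, out_dim]
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def mlp_forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:      # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x

params = init_mlp(in_dim=4, out_dim=2, hidden_units=[16, 16])
y = mlp_forward(params, np.ones((5, 4)))
assert y.shape == (5, 2)
```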

eTimeCNNEncoder(in_rep, out_rep, ...[, ...])

Equivariant 1D CNN encoder built from channel-equivariant blocks.

TimeCNNEncoder(in_dim, out_dim, ...[, ...])

1D CNN baseline encoder for inputs of shape (N, in_dim, H).
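To make the `(N, in_dim, H)` convention concrete, here is a minimal 1D convolution over the time axis that preserves `H` when the padding matches the kernel. It is a shape-handling sketch only, not the TimeCNNEncoder implementation (which presumably stacks such stages with nonlinearities).

```python
import numpy as np

def conv1d(x, kernels, padding):
    # x: (N, C_in, H); kernels: (C_out, C_in, K).
    # With padding = K // 2 (odd K) the output keeps length H.
    N, C_in, H = x.shape
    C_out, _, K = kernels.shape
    xp = np.pad(x, ((0, 0), (0, 0), (padding, padding)))
    out = np.zeros((N, C_out, H + 2 * padding - K + 1))
    for t in range(out.shape[-1]):
        # Contract each (C_in, K) window against every output filter.
        out[..., t] = np.einsum("nck,ock->no", xp[..., t:t + K], kernels)
    return out

x = np.random.default_rng(2).normal(size=(8, 3, 32))   # (N, in_dim, H)
k = np.random.default_rng(3).normal(size=(16, 3, 5))   # (out_dim, in_dim, K)
y = conv1d(x, k, padding=2)
assert y.shape == (8, 16, 32)
```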

eTransformerEncoderLayer(in_rep, nhead[, ...])

Equivariant Transformer encoder layer with the same API as torch.nn.TransformerEncoderLayer.

eTransformerDecoderLayer(in_rep, nhead[, ...])

Equivariant Transformer decoder layer mirroring torch.nn.TransformerDecoderLayer.

GenCondRegressor(in_dim, out_dim, cond_dim)

Generative conditional regressor baseline.

eCondTransformerRegressor(in_rep, cond_rep, ...)

Equivariant analogue of the conditional transformer regressor baseline.

CondTransformerRegressor(in_dim, out_dim, ...)

Transformer-based generative conditional regressor.