OmniSafe Model Utils

Model Building Utils

Documentation

omnisafe.utils.model.initialize_layer(init_function, layer)[source]

Initialize the layer with the given initialization function.

The init_function can be chosen from: kaiming_uniform, xavier_normal, glorot, xavier_uniform, orthogonal.

Parameters:
  • init_function (InitFunction) – The initialization function.

  • layer (nn.Linear) – The layer to be initialized.

Return type:

None
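
A minimal usage sketch (the nn.Linear layer below is arbitrary; 'orthogonal' is one of the supported modes listed above):

Example

>>> import torch.nn as nn
>>> from omnisafe.utils.model import initialize_layer
>>> layer = nn.Linear(64, 64)
>>> initialize_layer('orthogonal', layer)  # in-place; returns None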

omnisafe.utils.model.get_activation(activation)[source]

Get the activation function class by name.

The activation can be chosen from: identity, relu, sigmoid, softplus, tanh.

Parameters:

activation (Activation) – The activation function.

Return type:

Union[Type[Identity], Type[ReLU], Type[Sigmoid], Type[Softplus], Type[Tanh]]
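
Because the return value is an activation class rather than an instance (see the return type above), it must be instantiated before use. A brief sketch:

Example

>>> from omnisafe.utils.model import get_activation
>>> activation_cls = get_activation('tanh')
>>> activation_cls()  # instantiate the returned class
Tanh()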

omnisafe.utils.model.build_mlp_network(sizes, activation, output_activation='identity', weight_initialization_mode='kaiming_uniform')[source]

Build a multi-layer perceptron (MLP) network. Hidden layers are followed by the given activation; the final layer is followed by output_activation.

Example

>>> build_mlp_network([64, 64, 64, 64], 'relu', 'tanh')
Sequential(
    (0): Linear(in_features=64, out_features=64, bias=True)
    (1): ReLU()
    (2): Linear(in_features=64, out_features=64, bias=True)
    (3): ReLU()
    (4): Linear(in_features=64, out_features=64, bias=True)
    (5): Tanh()
)

Parameters:
  • sizes (List[int]) – The sizes of all layers, from the input dimension through the hidden widths to the output dimension.

  • activation (Activation) – The activation function.

  • output_activation (Activation) – The output activation function.

  • weight_initialization_mode (InitFunction) – The initialization function.

Return type:

Module
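
As a usage sketch (the sizes and batch shape here are illustrative), the returned module can be called like any torch.nn.Module:

Example

>>> import torch
>>> from omnisafe.utils.model import build_mlp_network
>>> net = build_mlp_network([8, 64, 2], 'relu', output_activation='tanh')
>>> obs = torch.randn(16, 8)  # hypothetical batch of 16 eight-dimensional inputs
>>> net(obs).shape
torch.Size([16, 2])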

omnisafe.utils.model.set_optimizer(opt, module, learning_rate)[source]

Return an initialized PyTorch optimizer.

Note

The optimizer can be chosen from the following list:

  • Adam

  • AdamW

  • Adadelta

  • Adagrad

  • Adamax

  • ASGD

  • LBFGS

  • RMSprop

  • Rprop

  • SGD

Parameters:
  • opt (str) – The name of the optimizer, matching one of the classes listed above.

  • module (Union[nn.Module, List[nn.Parameter]]) – The module or list of parameters to optimize.

  • learning_rate (float) – The learning rate.

Return type:

Optimizer
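
A minimal sketch (the module and learning rate are arbitrary; the optimizer name must match one of the entries in the list above):

Example

>>> import torch.nn as nn
>>> from omnisafe.utils.model import set_optimizer
>>> model = nn.Linear(64, 2)
>>> optimizer = set_optimizer('Adam', model, learning_rate=3e-4)  # a torch.optim.Adam instance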