Penalty Function Algorithms#
- P3O: The implementation of the P3O algorithm.
- IPO: The implementation of the IPO algorithm.
Penalized Proximal Policy Optimization#
Documentation
- class omnisafe.algorithms.on_policy.P3O(env_id, cfgs)[source]#
The implementation of the P3O algorithm.
References
Title: Penalized Proximal Policy Optimization for Safe Reinforcement Learning
Authors: Linrui Zhang, Li Shen, Long Yang, Shixiang Chen, Bo Yuan, Xueqian Wang, Dacheng Tao.
URL: P3O
- __init__(env_id, cfgs)#
- _init_log()[source]#
Log the P3O specific information.
Things to log:
- Loss/Loss_pi_cost: The loss of the cost performance.
- Return type:
None
- _loss_pi_cost(obs, act, logp, adv_c)[source]#
Compute the cost loss at the current update step.
Specifically, we compute the importance-weighted cost surrogate of the current policy with respect to the old policy:
(2)
\[L = \mathbb{E}_{\pi} \left[ \frac{\pi^{'}(a|s)}{\pi(a|s)} A^{C}_{\pi_\theta}(s, a) \right]\]
where \(A^{C}_{\pi_\theta}(s, a)\) is the cost advantage, \(\pi(a|s)\) is the old policy, and \(\pi^{'}(a|s)\) is the current policy.
- Parameters:
obs (torch.Tensor) – Observation.
act (torch.Tensor) – Action.
logp (torch.Tensor) – Log probability of action.
adv_c (torch.Tensor) – Cost advantage.
- Returns:
torch.Tensor – The importance-weighted cost loss of the current policy.
- Return type:
Tensor
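For illustration, the surrogate in (2) can be evaluated directly from the stored log-probabilities and the cost advantages. The following is a minimal stand-alone sketch, not the library's implementation; the function name and its arguments are assumptions made for this example.

import torch

def cost_surrogate_sketch(
    new_logp: torch.Tensor,  # log pi'(a|s) under the current policy
    old_logp: torch.Tensor,  # log pi(a|s) stored in the buffer
    adv_c: torch.Tensor,     # cost advantage A^C(s, a)
) -> torch.Tensor:
    # Importance-weighted cost surrogate: E[(pi'/pi) * A^C]
    ratio = torch.exp(new_logp - old_logp)  # pi'(a|s) / pi(a|s)
    return (ratio * adv_c).mean()

With new_logp evaluated by the current policy on the stored (obs, act) pairs, a cost surrogate of this kind enters the P3O objective through a penalty term; see the paper referenced above for the exact form.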
- _update_actor(obs, act, logp, adv_r, adv_c)[source]#
Update the policy network using a double for loop.
The pseudo code is shown below:
for _ in range(self.cfgs.actor_iters):
    for _ in range(self.cfgs.num_mini_batches):
        # Get mini-batch data
        # Compute loss
        # Update network
Warning
For some KL-divergence-based algorithms (e.g. TRPO, CPO, etc.), the KL divergence between the old policy and the new policy is computed and used to determine whether the update is successful. If the KL divergence is too large, the update is terminated early.
- Parameters:
obs (torch.Tensor) – Observation stored in the buffer.
act (torch.Tensor) – Action stored in the buffer.
logp (torch.Tensor) – Log probability of action stored in the buffer.
adv_r (torch.Tensor) – Reward advantage stored in the buffer.
adv_c (torch.Tensor) – Cost advantage stored in the buffer.
- Return type:
None
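As a rough sketch of the double loop above and the KL-based early stopping described in the warning, one might write something like the following. The actor, optimizer, buffer, and compute_loss arguments are hypothetical placeholders rather than the library's actual API, and the approximate-KL stopping rule is illustrative only.

def update_actor_sketch(actor, optimizer, buffer, cfgs, compute_loss, target_kl=0.02):
    # Double for loop: several passes over the data, each split into mini-batches
    for _ in range(cfgs.actor_iters):
        for obs, act, logp_old, adv_r, adv_c in buffer.mini_batches(cfgs.num_mini_batches):
            # Get mini-batch data and compute the loss
            # (for P3O: reward surrogate plus cost penalty)
            loss, logp_new = compute_loss(actor, obs, act, logp_old, adv_r, adv_c)
            # Update the policy network
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Approximate KL between old and new policy; terminate the update if it grows too large
            approx_kl = (logp_old - logp_new.detach()).mean().item()
            if approx_kl > 1.5 * target_kl:
                return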
Interior-point Policy Optimization#
Documentation
- class omnisafe.algorithms.on_policy.IPO(env_id, cfgs)[source]#
The implementation of the IPO algorithm.
References
Title: IPO: Interior-point Policy Optimization under Constraints
Authors: Yongshuai Liu, Jiaxin Ding, Xin Liu.
URL: IPO
- __init__(env_id, cfgs)#
- _compute_adv_surrogate(adv_r, adv_c)[source]#
Compute surrogate loss.
IPO uses the following surrogate loss:
(4)
\[L = \mathbb{E}_{s_t \sim \pi_\theta} \left[ \frac{\pi_\theta^{'}(a_t|s_t)}{\pi_\theta(a_t|s_t)} A(s_t, a_t) - \kappa \frac{J^{C}_{\pi_\theta}(s_t, a_t)}{C - J^{C}_{\pi_\theta}(s_t, a_t) + \epsilon} \right]\]
where \(\kappa\) is the penalty coefficient, \(C\) is the cost limit, and \(\epsilon\) is a small constant to avoid division by zero.
- Parameters:
adv_r (torch.Tensor) – Reward advantage.
adv_c (torch.Tensor) – Cost advantage.
- Return type:
Tensor
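As one plausible way to assemble such a penalty-adjusted advantage, the sketch below folds a barrier-style penalty coefficient into the cost advantage. The names kappa, cost_limit, ep_cost, and eps are illustrative assumptions, and the exact combination used by the library may differ from this sketch.

import torch

def adv_surrogate_sketch(
    adv_r: torch.Tensor,  # reward advantage A(s_t, a_t)
    adv_c: torch.Tensor,  # cost advantage
    ep_cost: float,       # current estimate of the episodic cost J^C
    cost_limit: float,    # cost limit C
    kappa: float = 0.01,  # penalty coefficient
    eps: float = 1e-8,    # small constant to avoid division by zero
) -> torch.Tensor:
    # The penalty coefficient grows as the episodic cost approaches the cost limit,
    # pushing the policy away from the constraint boundary (interior-point behavior).
    penalty = kappa / (cost_limit - ep_cost + eps)
    # Rescale so the combined advantage stays on a magnitude comparable to adv_r
    return (adv_r - penalty * adv_c) / (1.0 + penalty)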