nanoPPO
A flexible and efficient implementation of the Proximal Policy Optimization (PPO) algorithm for reinforcement learning.
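This page does not document nanoppo's own API, so purely as an illustration of the algorithm the package implements, here is a minimal pure-Python sketch of PPO's clipped surrogate loss. The function name and signature are hypothetical, not taken from nanoppo:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Negative clipped surrogate objective, averaged over samples.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and the data-collecting policy; advantages: advantage estimates.
    (Hypothetical helper for illustration only.)
    """
    losses = []
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        # Probability ratio r_t = pi_new(a|s) / pi_old(a|s)
        ratio = math.exp(lp_new - lp_old)
        # Clip the ratio to [1 - eps, 1 + eps]
        clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
        # Take the pessimistic (smaller) surrogate, then negate for a loss
        losses.append(-min(ratio * adv, clipped * adv))
    return sum(losses) / len(losses)
```

With identical policies the ratio is 1 and the loss reduces to the negative mean advantage; when the ratio drifts outside the clip range, the gradient through the clipped term vanishes, which is what keeps PPO updates conservative.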
Installation
Inside a virtual environment (create and activate one first if needed):
pip3 install nanoppo
Releases
Version | Released | Buster Python 3.7 | Bullseye Python 3.9 | Bookworm Python 3.11 | Files
---|---|---|---|---|---
0.15.post2 | 2023-11-28 | | | |
0.15.post1 | 2023-11-06 | | | |
0.15 | 2023-11-06 | | | |
0.14 | 2023-10-08 | | | |
0.13.post2 | 2023-09-19 | | | |
0.13.post1 | 2023-09-19 | | | |
0.13 | 2023-09-19 | | | |
0.1.post1 | 2023-08-21 | | | |
0.1 | 2023-08-21 | | | |
Issues with this package?
- Search issues for this package
- Package or version missing? Open a new issue
- Something else? Open a new issue