Migration of the mountain car problem proposed by OpenAI Gym to RL-Studio


MIGRATION

The previous implementation (not related to RL-Studio) is documented in this post, which is an evolution of this one, where you can find more details about the solution to this problem. This migration consisted of:

  • Creating the models and world for the mountain car problem.
  • Ensuring that actions cannot provoke an inconsistent state (the robot must always stay on the “mountain” platform and move only left and right).
  • Ensuring that the action efforts make the goal reachable but considerably difficult, so that the algorithm is genuinely needed to solve it.
  • Migrating the learning algorithm so that the robot behaves well in the mountain car problem.
  • Adapting the rewards to treat it as a non-stationary problem.
  • Performing several tests to conclude that the problem was successfully migrated and solved using Q-learning (a minimal sketch of the update rule follows this list).
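
To make the last point concrete, here is a minimal sketch of a tabular Q-learning loop of the kind used for this problem, assuming a discretized state space and two actions (left and right). All names and hyperparameter values are illustrative assumptions, not the ones from the repository; the real ones are in the uploaded agent code. The constant learning rate is what lets the agent keep adapting when rewards are treated as non-stationary.

```python
import numpy as np

# Illustrative hyperparameters; the real values are in the uploaded agent code.
ALPHA = 0.1     # constant learning rate, suited to non-stationary rewards
GAMMA = 0.95    # discount factor
EPSILON = 0.1   # exploration rate for the epsilon-greedy policy

n_states, n_actions = 40, 2   # e.g. discretized positions x {left, right}
q_table = np.zeros((n_states, n_actions))

def choose_action(state):
    """Epsilon-greedy selection over the Q-table."""
    if np.random.random() < EPSILON:
        return np.random.randint(n_actions)
    return int(np.argmax(q_table[state]))

def update(state, action, reward, next_state):
    """Standard Q-learning update with a constant step size."""
    target = reward + GAMMA * np.max(q_table[next_state])
    q_table[state, action] += ALPHA * (target - q_table[state, action])
```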

You can find all the tested iterations in the results uploaded to the repository.

There you will notice that there is no need to give the agent much information through the reward function:

  • If you give a reward of 1 when the goal is achieved and 0 otherwise, the robot eventually learns the optimal path.
  • If you stop the episode once the robot has performed 20 steps, you encourage it to reach the goal before that budget runs out (both ideas are sketched below).
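
Both ideas fit in a few lines. The sketch below is an assumption about how the sparse reward and the step budget could be expressed; the exact values used are in the agent code.

```python
MAX_STEPS = 20  # illustrative step budget; stopping early pushes the agent to be fast

def reward_and_done(goal_reached, steps):
    """Sparse reward: 1 only when the goal is reached, 0 otherwise.
    The episode also ends when the step budget runs out."""
    if goal_reached:
        return 1, True
    return 0, steps >= MAX_STEPS
```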

The optimal reward configuration and hyperparameters can be found in the uploaded agent code. Likewise, there you will find the worlds and models used.

RL-Studio related

Additionally, the following steps were completed to adapt the problem to RL-Studio:

  • Creating a .yml configuration file (a sketch of its contents follows this list)
  • Including the algorithm in the “algorithms” folder
  • Including the agent in the “agents” folder
  • Pushing the model and world referenced in the configuration file to the CustomRobots repository
  • Adding the environment in the /rl-studio/init file
  • Adding a folder in /rl-studio/envs (no major modifications with respect to the other implementations in rl-studio)
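
As an illustration of the first step, here is a minimal sketch of what such a configuration file could look like. The keys and values below are assumptions for illustration only; the actual file, with the real names and hyperparameters, is the one uploaded to the repository.

```yaml
# Illustrative sketch of an RL-Studio configuration file.
# All keys and values are assumptions; see the uploaded file for the real ones.
settings:
  algorithm: qlearning
  agent: mountain_car
  environment: mountain_car

algorithm:
  alpha: 0.1        # learning rate
  gamma: 0.95       # discount factor
  epsilon: 0.1      # exploration rate

environment:
  world: mountain_car_world    # model and world pushed to CustomRobots
  max_steps: 20                # episode step budget
```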

DEMO