CARLA follow lane: DDPG vs. PPO
Follow lane in CARLA: PPO vs. DDPG
Follow lane in CARLA: PPO vs. DDPG
Migration to CARLA and the follow lane problem
Comparison of all algorithms
Q-learning and DQN comparison for F1 follow line in Gazebo
Q-learning and DQN comparison for F1 follow line in Gazebo
Q-learning and DQN cartpole problems included
Q-learning and DQN cartpole problems included
Q-learning and DQN cartpole problems included
DQN cartpole problem included
Q-learning cartpole problem included
RL-Studio was born! We got to write an article
Final alignment within the team to progress in the same direction, and creation of the RL-Studio inferencer module
Migration of problems to the new version, documentation, and some refactoring
Project fixes, algorithm fine-tuning, and algorithms course
Implemented first prototype for mountain car in RL-Studio
Keep working on mountain car (adapted to the RL-Studio version) and formalize the PhD bureaucracy
Run Nacho's TFM in the new RL-Studio and adapt the mountain car problem to work on it
Run Nacho's TFM in RL-Studio and adapt the robot_mesh problem to work on it
Learning about Gazebo and RL-Studio to later develop in this environment
Reinforcing RL basics to be able to apply good practices when developing
Standardizing the project folders and the exercise code structure
Creating our own environment to freely configure our own problem
Solving a more complex problem using Q-learning
Project fixes, algorithm fine-tuning, and algorithms course
Start playing with reinforcement learning
Start playing with reinforcement learning
Landing on the basics of vision recognition