
Summary

Building upon the foundation laid in my initial post, I dedicated this week to developing Pilot 2.0, a more advanced pilot controlled by two PID controllers: one for linear speed and another for angular speed.

Progress This Week

This week I put ROS2 to work with our simple circuit simulation in Gazebo. The primary task was using a ROS2 HAL class to process images from the car’s camera and to transmit speed commands, effectively bringing Pilot 2.0 to life.
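In rough terms, that HAL boils down to a ROS2 node that subscribes to the camera topic and publishes Twist messages. Here is a minimal sketch of that loop; the topic names and the placeholder PID values are assumptions, not our actual code:

```python
# Minimal sketch of the image-in, speed-out loop behind the HAL class.
# Topic names ('/cam/image_raw', '/cmd_vel') are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge


class PilotNode(Node):
    def __init__(self):
        super().__init__('pilot_2_0')
        self.bridge = CvBridge()
        self.image_sub = self.create_subscription(
            Image, '/cam/image_raw', self.image_callback, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def image_callback(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        # ... compute the line error from the frame and feed both PIDs ...
        cmd = Twist()
        cmd.linear.x = 5.0   # placeholder: output of the linear PID
        cmd.angular.z = 0.0  # placeholder: output of the angular PID
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(PilotNode())
```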

Overcoming PID Controller Challenges

The first major hurdle was inverting the existing PID controller logic. In its original form, a larger error increased the speed, which was contrary to the desired functionality. After testing various approaches, including experimenting with a 1/PID formula, I opted for a more straightforward solution:

  • Inversion formula: max_speed + min_speed - PID

This method keeps the output within the predefined speed limits while inverting it, so a large PID output now maps to a low speed (see the sketch after the examples). For instance, with a max speed of 10 and a min speed of 5:

  • PID output of 5 resulted in 10 (10 + 5 - 5)
  • PID output of 10 led to 5 (10 + 5 - 10)
  • PID output of 7 yielded 8 (10 + 5 - 7)
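In code, the inversion is a one-liner. A minimal sketch using the example limits above, with the three cases as sanity checks:

```python
def invert_speed(pid_output, min_speed=5.0, max_speed=10.0):
    """Map a PID output back into [min_speed, max_speed], inverted:
    a large error (large PID output) now yields a low speed."""
    return max_speed + min_speed - pid_output


assert invert_speed(5) == 10   # small error -> full speed
assert invert_speed(10) == 5   # large error -> slow down
assert invert_speed(7) == 8
```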

Tuning the P, I, D Parameters

Fine-tuning the proportional (P), integral (I), and derivative (D) parameters was a complex task: six variables were in play, three each for the linear and angular PIDs. Despite these challenges, the adjustments paid off, enabling Pilot 2.0 to complete the simulated circuit in just over a minute.
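One thing that makes iterating on six gains less painful is exposing them as ROS2 parameters rather than hard-coding them. A sketch along those lines, continuing the node from the earlier snippet; the gain values here are made up for illustration:

```python
# Inside the node's __init__: declare the six gains as parameters.
# The default values below are purely illustrative.
for name, default in [('linear_kp', 1.0), ('linear_ki', 0.0), ('linear_kd', 0.1),
                      ('angular_kp', 2.0), ('angular_ki', 0.0), ('angular_kd', 0.3)]:
    self.declare_parameter(name, default)

linear_kp = self.get_parameter('linear_kp').value
```

With that in place, gains can be nudged at runtime with ros2 param set instead of restarting the simulation for every tweak.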

Data Collection

With the successful implementation of Pilot 2.0, I used ros2 bag to collect data from this experiment. This dataset will be crucial for the analysis that follows.

Analyzing the Data

Rosbag is great for recording datasets; however, we need a format that can be read by PyTorch. For this, there is a solution on Stack Overflow that works quite well.

Starting from there, we can extract the linear and angular speeds and the images associated with them, which is exactly what we need to train a neural network.
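A sketch of that extraction using rosbag2_py; the Stack Overflow snippet may differ in its details, and the bag path and topic names here are assumptions:

```python
# Pull (image, twist) samples out of a rosbag2 recording.
import rosbag2_py
from rclpy.serialization import deserialize_message
from rosidl_runtime_py.utilities import get_message

reader = rosbag2_py.SequentialReader()
reader.open(
    rosbag2_py.StorageOptions(uri='pilot_run', storage_id='sqlite3'),
    rosbag2_py.ConverterOptions(
        input_serialization_format='cdr',
        output_serialization_format='cdr'),
)
type_map = {t.name: t.type for t in reader.get_all_topics_and_types()}

images, speeds = [], []
while reader.has_next():
    topic, data, _stamp = reader.read_next()
    msg = deserialize_message(data, get_message(type_map[topic]))
    if topic == '/cam/image_raw':
        images.append(msg)
    elif topic == '/cmd_vel':
        speeds.append((msg.linear.x, msg.angular.z))
```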

Starting with PyTorch and PilotNet

Using the example code, we tried to train our first neural network. The first problem we found was that we had to alter a few things to be able to run the code without a CUDA-capable NVIDIA GPU.
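The main change is to stop assuming CUDA is available and pick the device at runtime. The usual PyTorch pattern, shown here with stand-in tensors rather than the real PilotNet model:

```python
import torch

# Fall back to the CPU when no CUDA-capable GPU is available.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(10, 2).to(device)   # stand-in for PilotNet
batch = torch.randn(4, 10).to(device)       # stand-in for camera frames
output = model(batch)
```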

Once the code was adapted, we converted the data as described above, balanced it (there were almost three times more straight-line samples than curve samples) and trained the model. But then we hit a strange problem: our bag, recorded with rosbag, somehow seemed to contain more (linear, angular) velocity pairs than images, which meant that even though we had the data, we had no idea what the result would be.
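As for the balancing step, a common way to do it, and what this sketch assumes (the curve threshold and the 3x weight are made up), is to oversample the rarer curve frames with a WeightedRandomSampler:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Stand-in data: camera frames at PilotNet's 66x200 input size and
# their (linear, angular) speed labels.
images = torch.randn(100, 3, 66, 200)
labels = torch.randn(100, 2)

# Call a frame a "curve" when |angular| exceeds a made-up threshold,
# then weight curves ~3x so sampling evens out the straight-line bias.
is_curve = labels[:, 1].abs() > 0.1
weights = torch.ones(len(labels))
weights[is_curve] = 3.0

sampler = WeightedRandomSampler(weights, num_samples=len(weights))
loader = DataLoader(TensorDataset(images, labels),
                    batch_size=16, sampler=sampler)
```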
