Week 2 - Getting Started
Summary
During my recent meeting with my advisor, we agreed that the next objective is to begin simulating a holonomic racing car and creating the initial dataset to train a PilotNet neural network.
Progress This Week
This week, I cloned the GitHub repository of a final thesis that is nearing completion. The goal was to repurpose its existing code to drive the car around pre-designed circuits while recording its speed and the camera frames.
I then spent some time identifying reusable code, set up a ROS2 workspace, and began simulating the car on various circuits using the existing implementation.
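To make the dataset-creation step concrete, below is a minimal sketch of the kind of recording node this would require: it pairs each camera frame with the most recent velocity command and appends a row to a CSV file. The topic names (`/cam_f1/image_raw`, `/cmd_vel`), the file layout, and the overall structure are illustrative assumptions, not the actual interfaces used by the thesis repository.

```python
# Sketch of a data-collection node. Topic names and the CSV layout are
# assumptions for illustration; adjust them to match the real simulation.
import csv
import time

import cv2
import rclpy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from rclpy.node import Node
from sensor_msgs.msg import Image


class DatasetRecorder(Node):
    def __init__(self):
        super().__init__('dataset_recorder')
        self.bridge = CvBridge()
        self.last_cmd = Twist()
        self.create_subscription(Image, '/cam_f1/image_raw', self.on_image, 10)
        self.create_subscription(Twist, '/cmd_vel', self.on_cmd, 10)
        self.labels = open('labels.csv', 'w', newline='')
        self.writer = csv.writer(self.labels)
        self.writer.writerow(['image', 'linear_v', 'angular_w'])

    def on_cmd(self, msg: Twist):
        # Keep the most recent command so it can be paired with the next frame.
        self.last_cmd = msg

    def on_image(self, msg: Image):
        # Save the frame and record the command that was active when it arrived.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        name = f'{time.time_ns()}.png'
        cv2.imwrite(name, frame)
        self.writer.writerow([name, self.last_cmd.linear.x, self.last_cmd.angular.z])


def main():
    rclpy.init()
    node = DatasetRecorder()
    try:
        rclpy.spin(node)
    finally:
        node.labels.close()
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```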
When I attempted to run my code, I encountered several issues. First, the images I received from the camera were 3x3 instead of the expected 640x420. Oddly enough, the original repository's code had the same issue at first but eventually started receiving correctly sized images, whereas my code never did.
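As a sanity check for this, a small probe node like the one below can log the size the simulator is actually publishing, both from the message header and from the converted frame. Again, the topic name `/cam_f1/image_raw` is an assumption.

```python
# Probe node that logs the size of every incoming image; the topic name is assumed.
import rclpy
from cv_bridge import CvBridge
from rclpy.node import Node
from sensor_msgs.msg import Image


class ImageSizeProbe(Node):
    def __init__(self):
        super().__init__('image_size_probe')
        self.bridge = CvBridge()
        self.create_subscription(Image, '/cam_f1/image_raw', self.on_image, 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        # If the header size and the converted frame size disagree, the problem
        # is in the conversion; if both report 3x3, the simulator itself is
        # publishing tiny images.
        self.get_logger().info(
            f'header: {msg.width}x{msg.height}, '
            f'frame: {frame.shape[1]}x{frame.shape[0]}'
        )


def main():
    rclpy.init()
    rclpy.spin(ImageSizeProbe())


if __name__ == '__main__':
    main()
```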
To troubleshoot, I integrated my car controller code into the existing code from the repository. Surprisingly, this resolved the image issue, even though both implementations used identical function calls and libraries.
However, another problem arose. My previously functional code (shown here) no longer performed as expected. I have two theories for this:
- The camera is positioned on the left side of the car, whereas the camera in the simulation I originally coded for might have been centered.
- The PID parameters and angular speed limits are no longer appropriate for the new simulation. I tested this theory on Unibotics Academy, the website where I originally wrote the code: after reducing the maximum angular speed from 6 to 0.5, the car was able to complete the circuit, albeit not very cleanly. A minimal sketch of this kind of clamping follows this list.
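For context, here is a minimal sketch of the kind of PID steering loop and angular-speed clamp this tuning affects. The error signal, gains, and structure are placeholders; only the reduced limit of 0.5 comes from the test above (units assumed to be rad/s).

```python
# Sketch of a PID steering loop with a clamped angular speed. Gains and the
# pixel-offset error signal are placeholders; only the 0.5 limit (assumed
# rad/s) reflects the value that worked in the Unibotics Academy test.
MAX_ANGULAR_SPEED = 0.5  # reduced from 6 for the new simulation


class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def steering_command(pid: PID, lane_offset_px: float, dt: float) -> float:
    """Turn a lane-centre offset (pixels) into a clamped angular velocity."""
    w = pid.step(lane_offset_px, dt)
    # Clamp so the command can never exceed the new angular-speed limit.
    return max(-MAX_ANGULAR_SPEED, min(MAX_ANGULAR_SPEED, w))


# Example: a modest offset produces a small, clamped steering command.
pid = PID(kp=0.005, ki=0.0, kd=0.001)  # placeholder gains
print(steering_command(pid, lane_offset_px=40.0, dt=0.05))
```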
My current focus is to adapt my code to these new conditions before proceeding with dataset creation.