Week 17-18 - Driving again
Progress This Week
Objective
The objective was to obtain a well-functioning neural pilot, and it was finally accomplished.
Trying External Pilot
After implementing normalization and affine transformations in the data preprocessing, getting great training and offline graphs, and still not getting a working pilot, I needed another way to find where the problem was.
In order to do this, I asked for a pilot and some datasets from another student ahead of me, Alejandro Moncalvillo. With his model, which I knew had to work, I was able to find some problems in the Neural Pilot, the biggest one related to how color was handled. After fixing this and training again, I got a model that worked, but only most of the time: roughly one out of every three laps (the first one included), it would crash in the first curve.
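As an illustration of the kind of color problem I mean, here is a minimal sketch assuming a channel-order mismatch (OpenCV loads images as BGR, while the network may receive RGB frames while driving); the function name and the exact cause are hypothetical:

```python
import cv2

def load_frame(path):
    """Load a dataset image in the same channel order the Neural Pilot
    receives at inference time (here assumed to be RGB)."""
    img = cv2.imread(path)                      # OpenCV returns BGR
    return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
```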
Knowing now what I had to work on, I proceeded to the next step of this external approach.
Training with External Data
I decided to start training with the data the external pilot had been trained on. This was initially challenging because we had different approaches to recording the datasets, so I needed to make some adjustments before I could actually start training. For this, every step was traced, especially the image handling: the images had to remain consistent with what the Neural Pilot was going to receive.
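As a rough sketch of what "keeping the images consistent" means, assuming a crop-and-resize chain (the crop region and input size below are made up for illustration):

```python
import cv2

INPUT_SIZE = (200, 66)  # (width, height); assumed network input size

def to_pilot_input(img):
    """Apply the same transformation chain to a dataset image that the
    Neural Pilot applies to live camera frames, so both stay consistent."""
    img = img[240:, :]                 # assumed crop of the road region
    return cv2.resize(img, INPUT_SIZE)
```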
After some failures, I tried training with data drawn from quite a few datasets of different circuits. This was actually easier than I expected thanks to some Python magic, and I ended up training with a dataset that contained 56,500 samples before any preprocessing, an amount that doubled with the flipping augmentation.
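The "Python magic" boils down to concatenating the annotation files of every dataset and generating a mirrored copy of each sample; the file layout, column names, and label convention below are assumptions:

```python
import glob
import cv2
import pandas as pd

def load_all_annotations(pattern="datasets/*/data.csv"):
    """Merge every circuit's annotation file into a single DataFrame."""
    parts = [pd.read_csv(path) for path in sorted(glob.glob(pattern))]
    return pd.concat(parts, ignore_index=True)

def flipped_copy(img, v, w):
    """Mirror the image horizontally and negate the angular velocity,
    which is what doubles the dataset size."""
    return cv2.flip(img, 1), v, -w
```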
The result can be seen here: https://youtu.be/dum_p3uuFuk
After that, I tried adding the affine transformation, which resulted in a pilot that failed quite often: https://youtu.be/UvbVKGQogVk
However, the affine transformation made training more complex and doubled the data; perhaps I could find a good model faster using only normalization, which, as seen in previous posts, had been very effective at accelerating training.
The result was this: https://youtu.be/a7uMsLUp8L0
Every training run used 100 epochs.
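For reference, this is roughly what the two preprocessing variants compared above look like; the normalization statistics and affine ranges are placeholders, not the values actually used:

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
affine = transforms.RandomAffine(degrees=5, translate=(0.05, 0.05))

def preprocess(img_tensor, use_affine=False):
    """img_tensor: float image in [0, 1] with shape (3, H, W).
    With use_affine=True an extra transformed copy is produced,
    which is why the affine variant doubles the data."""
    samples = [normalize(img_tensor)]
    if use_affine:
        samples.append(normalize(affine(img_tensor)))
    return samples
```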
For the Future
Given the differences between the datasets, I did not perform offline testing, and I did not include graphs because they were essentially the same as in previous posts.
However, next time I will be using my own datasets and documenting everything properly.
I also attempted to do the preprocessing during training instead of at data-loading time, but it proved surprisingly difficult, since most of the preprocessing does not modify the image in place but creates an additional image with different characteristics, which changes the size of the dataset.
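One possible workaround I may explore is doing the "copy-making" augmentations lazily inside the dataset class, so the extra images are never materialized at load time; here is a minimal sketch, assuming a PyTorch-style Dataset (class and field names are hypothetical):

```python
import cv2
from torch.utils.data import Dataset

class AugmentedDrivingDataset(Dataset):
    """Even indices return the original sample, odd indices return the
    flipped copy, so the dataset behaves as twice its real size without
    precomputing any extra images."""

    def __init__(self, images, labels):
        self.images = images   # list of HxWx3 numpy arrays
        self.labels = labels   # list of (v, w) command pairs

    def __len__(self):
        return 2 * len(self.images)

    def __getitem__(self, idx):
        img = self.images[idx // 2]
        v, w = self.labels[idx // 2]
        if idx % 2 == 1:               # augmented copy
            img = cv2.flip(img, 1)
            w = -w
        return img, (v, w)
```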