
This week we continue with the RGB cameras, but this time we are going to add a little more spice: we will add more agents besides our vehicle. The purpose of doing this is to teach the car to control its speed in cases where, for example, another car is in front of it.

So far, we have achieved a good model and training process for the follow-lane task, whether with an RGB camera image or a segmented image. Now, if we think about it, the only way to keep the same performance while also stopping when a car is in front of us is to add this information through the collected dataset, so we are going to think along those lines. To do this, we are going to simplify the task once again by trying to replicate the expected behaviour only on Town02; in doing so, we also reduce the dataset size and get results sooner.

Spawning NPCs

First things first, we are going to learn how to spawn other agents into the world, since we need to teach the car how to behave when another vehicle is in front of it. To do this, CARLA pretty much gives us example scripts for spawning other vehicles in the simulation.
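As a reference, here is a minimal sketch of what this looks like with the CARLA Python API (the port, the number of vehicles, and the blueprint filter are assumptions on my part; CARLA's own traffic-generation examples are more complete):

```python
import random

import carla

# Connect to the simulator (assumes CARLA is running on the default port).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Pick random vehicle blueprints and free spawn points.
blueprints = world.get_blueprint_library().filter('vehicle.*')
spawn_points = world.get_map().get_spawn_points()
random.shuffle(spawn_points)

npcs = []
for transform in spawn_points[:20]:  # number of NPCs is arbitrary here
    blueprint = random.choice(blueprints)
    # try_spawn_actor returns None if the spawn point is occupied.
    vehicle = world.try_spawn_actor(blueprint, transform)
    if vehicle is not None:
        vehicle.set_autopilot(True)  # let the traffic manager drive it
        npcs.append(vehicle)

print(f'Spawned {len(npcs)} NPC vehicles')
```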

Collecting data

To collect data, we spawn a set of vehicles and simply record as we did in the previous cases. Given that turning is unlikely to happen at the same time as having a car in front of us, we are going to reuse the data from the previous dataset without generating a new one for now. We will see whether the car performs well, or whether we need to collect data with cars in front of it at turning points.
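For context, a rough sketch of what one recording step could look like, assuming an ego vehicle with an attached RGB camera (the callback, file names, and CSV layout are my own illustration, not the exact recording code):

```python
import csv

import carla

def record_frame(image: carla.Image, vehicle: carla.Vehicle, writer):
    """Save the camera frame and the control values applied at that moment."""
    image.save_to_disk(f'dataset/{image.frame:06d}.png')
    control = vehicle.get_control()
    writer.writerow([image.frame, control.throttle, control.steer, control.brake])

# Hypothetical usage: `camera` is a sensor.camera.rgb actor attached to `ego`.
# log = open('dataset/labels.csv', 'w', newline='')
# writer = csv.writer(log)
# camera.listen(lambda image: record_frame(image, ego, writer))
```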


Trying to make sense of the video, we see that the car is clearly failing at its intended purpose. Even though we are injecting more data related to the car slowing down when it encounters another car in front of it, it is not learning how to stop behind it. On the bright side, we can see that it slows down before it even gets close. A simple way to fix this could be to add more data on this behaviour, but since that increases the dataset size, we expect to face challenges with this approach down the road.

Finally, to improve the visualization of the videos, I added the braking values (which right now are nonexistent) and the input image passed to the neural network. This will help us debug any problems we encounter in the future.
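As an illustration, overlaying the control values and the network input on each video frame could be done with OpenCV along these lines (the function, variable names, and layout are assumptions for the sketch):

```python
import cv2
import numpy as np

def annotate_frame(frame: np.ndarray, throttle: float, brake: float,
                   network_input: np.ndarray) -> np.ndarray:
    """Draw the control values and the network input image onto a video frame."""
    cv2.putText(frame, f'throttle: {throttle:.2f}', (10, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.putText(frame, f'brake: {brake:.2f}', (10, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    # Paste a small copy of the network input image in the top-right corner.
    thumb = cv2.resize(network_input, (160, 120))
    frame[10:130, -170:-10] = thumb
    return frame
```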