8th week: Follow-person updates and testing.
During the eighth week I made more updates and ran more tests on the follow-person project.
First of all, I finished (at least for the moment) the program's code and the wiring configuration. Comparing with the previous model, there is a new “Width” output from the camera block. Its purpose is to make the PID independent of the code: the image width is no longer a fixed parameter, so it updates automatically every time the camera changes.
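To illustrate the idea, here is a minimal Python sketch (names like angular_command and person_center_x are hypothetical, not the actual block code): the PID derives its setpoint from the incoming width, so no resolution parameter has to be configured by hand.

```python
# Hypothetical sketch: the PID's setpoint comes from the "Width" wire,
# so changing the camera resolution automatically recenters the target.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def angular_command(pid, person_center_x, image_width, dt=0.05):
    # The setpoint is always the middle of the current image, whatever its width.
    setpoint = image_width / 2.0
    return pid.step(setpoint, person_center_x, dt)
```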
The “Decision” output of the decider block also makes the code in the motorDriver block much simpler: instead of checking which input is currently sending data, it just reads from the one that is enabled.
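A rough sketch of that logic (the function and input names are made up for illustration, the real block code may differ):

```python
# Hypothetical motorDriver step: the decider tells it which source is active,
# so it reads a single selected command instead of polling every input.

def motor_driver_step(decision, inputs):
    """decision: name of the enabled source, e.g. "follow" or "search".
    inputs: dict mapping source name -> latest (linear, angular) command."""
    linear, angular = inputs[decision]
    return linear, angular


# Example usage with made-up values:
cmd = motor_driver_step("follow", {"follow": (0.3, 0.1), "search": (0.0, 0.4)})
```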
On the testing side, I ran the code with my computer's webcam instead of the simulated camera, and it works fluidly even with the whole pipeline running.
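For this kind of test, the camera source can simply be swapped to the local device; a minimal sketch, assuming OpenCV is used to grab frames (which may not match the actual camera block):

```python
# Sketch of reading from the laptop webcam instead of the simulated camera.
import cv2

cap = cv2.VideoCapture(0)        # local webcam; the simulated source would
ok, frame = cap.read()           # come from the Gazebo camera topic instead
if ok:
    height, width = frame.shape[:2]   # width would feed the "Width" wire
cap.release()
```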
I also tried running the simulation without all the props, so only the person and the turtlebot2 are loaded in Gazebo. This improved the FPS considerably, but the object detector still had trouble finding the simulated person.
To get around this, I tried to work with a real robot in the robotics lab, but I was told that the camera plugins for those robots are not yet implemented for ROS2, so I will stay in contact with them and try to help get those plugins working as soon as possible.