
This “sixth week” (actually it spanned the whole month of August, but I was on vacation for half of it ^.^) I had to update the ROS2 blocks, because one of my partners on the VisualCircuit project asked me to add images and descriptions to the blocks, following a simple tutorial they made some time ago. Now all the blocks have been added to the online version of VisualCircuit!

I also started working on a full project to test VisualCircuit: a follow-person application using vision and ROS2. To start, I needed a scenario in which to deploy the robot and the person. Here I used the hospital world that you can find in the Amazon Web Services repositories on GitHub, specifically the one-floor version, since I’m not going to program the TurtleBot to interact with objects like elevators; I’m just going to make it follow a person inside the hospital.

For the moving person, I found that my friend Carlos Caminero already had a plugin to spawn and manually control a person model, so I used his plugin here. To use it, you need to build his repository with colcon build (the ROS2 Foxy way to compile) and run the executable found at “(ros-foxy workspace)/install/person_teleop/bin”, which is also called “person_teleop”.

Here you can see how it moves, following a path defined by coordinate objectives.
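Conceptually, moving through coordinate objectives just means stepping toward each goal in turn until you are within some tolerance. Here is a minimal sketch of that idea (this is my own illustration, not the actual person_teleop code; the function names and parameters are assumptions):

```python
import math

def step_towards(pos, goal, speed=1.0, tol=0.1):
    """Return the next (x, y) position, one step of length `speed` closer
    to `goal`. Purely illustrative, not the real plugin logic."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= tol:
        return goal  # objective reached
    ratio = min(speed / dist, 1.0)
    return (pos[0] + dx * ratio, pos[1] + dy * ratio)

def follow_path(start, waypoints, speed=1.0, tol=0.1, max_steps=1000):
    """Visit each coordinate objective in order; return the visited trace."""
    pos, trace = start, [start]
    for goal in waypoints:
        for _ in range(max_steps):
            if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) <= tol:
                break  # close enough, move on to the next objective
            pos = step_towards(pos, goal, speed, tol)
            trace.append(pos)
    return trace
```

In the real plugin the motion is driven by teleop input and Gazebo, but the “objective by coordinates” behavior reduces to this kind of loop.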

Now, to program the robot, I started thinking about the internal logic it should follow. The objective is to follow a person, but first we need to find the person: if at the beginning there is no person in view, the robot needs to rotate until it finds one. So I designed a kind of decision tree with only two branches: follow the person, or rotate to find one.

But later I saw that this design was not optimal in resource use, because of the duplicated blocks serving the same purpose. So I started thinking about another way to get the same behavior with just one camera block and one object-detector block. This is what I finally got:

Here the behavior is the same, but instead of enabling a separate object detector on each side, we enable either the rotation mode or the follow mode.
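The single-detector design boils down to a tiny two-branch decision on one shared detection. Here is a minimal sketch of that logic, assuming the detector yields the person’s horizontal offset in the image (or nothing when no person is visible); the gain and search speed are made-up values, not the project’s:

```python
def decide(detection):
    """Pick the robot mode from a single object-detector output.

    detection: person's horizontal offset in the image, in [-1, 1],
    or None when no person is visible. Returns (mode, angular_velocity).
    Illustrative sketch of the block logic, not actual VisualCircuit code.
    """
    SEARCH_SPEED = 0.5  # rad/s while rotating to find someone (assumed)
    GAIN = 1.2          # proportional gain for centering the person (assumed)

    if detection is None:
        return ("rotate", SEARCH_SPEED)   # no person in view: keep turning
    return ("follow", -GAIN * detection)  # person seen: steer toward them
```

One camera feed, one detector, and the two branches of the decision tree become just a mode switch on the detector’s output.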

For the next weeks, I have to tune the PID so the robot follows the person correctly, and maybe add a laser block to maintain the distance to the human.
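The tuning step amounts to a standard PID loop on the person’s offset (and, once the laser is added, possibly a second loop on the distance error). A minimal, generic PID sketch, with placeholder gains to be tuned rather than the project’s real values:

```python
class PID:
    """Minimal PID controller sketch, e.g. for centering the person
    in the image. Gains here are placeholders, not tuned values."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Return the control output for the current error and timestep."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In use, `angular = pid.update(person_offset, dt)` would give the turning command each frame; a second `PID` fed with `measured_distance - desired_distance` from the laser could regulate the linear speed.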