
This week I've made good progress on follow_person and on the Docker setup.

I’ve been having trouble understanding the data that the depth camera publishes in PointCloud2 messages(*). As we can see in the next image, each field the sensor reports has a name, an offset and a datatype (count is always 1, since there is only one value per point). As I’ve found out, the data is stored in a flat 1-D byte array: each point takes 32 bytes (point_step), and since there are 640×480 points (307,200 points), the array holds 9,830,400 bytes in total, which matches the “row_step” value.
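
For reference, here is a minimal sketch of how that metadata can be inspected from a ROS 2 node (the topic name is an assumption, not necessarily the one my robot uses):

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class CloudInspector(Node):
    def __init__(self):
        super().__init__('cloud_inspector')
        # Hypothetical depth-cloud topic; adjust to the camera driver in use
        self.create_subscription(PointCloud2, '/camera/depth/points',
                                 self.callback, 10)

    def callback(self, msg):
        # Each field reports its name, byte offset and datatype (7 = FLOAT32)
        for f in msg.fields:
            self.get_logger().info(
                f'name={f.name} offset={f.offset} datatype={f.datatype}')
        # point_step: bytes per point; row_step: bytes per row of the array
        self.get_logger().info(
            f'width={msg.width} height={msg.height} '
            f'point_step={msg.point_step} row_step={msg.row_step}')


def main():
    rclpy.init()
    rclpy.spin(CloudInspector())


if __name__ == '__main__':
    main()
```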

To interpret the data as numbers we need to know that every value is a float: the datatype value is 7, and as the terminal on the right side of the image shows, datatype 7 means Float32. Each point is stored as 12 bytes for x, y, z (4 bytes each), 4 padding bytes, 4 bytes for the point’s color, and another 12 padding bytes, which adds up to the 32-byte point_step.
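
Assuming the cloud is little-endian (is_bigendian is false), that layout can be written as a struct format string, which is an easy way to double-check that it really adds up to 32 bytes:

```python
import struct

# Layout of one point as described above:
# 3 float32 (x, y, z) + 4 padding bytes + 1 float32 (packed color) + 12 padding bytes
POINT_FORMAT = '<fff4xf12x'
assert struct.calcsize(POINT_FORMAT) == 32  # matches point_step
```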

With this information, I managed to convert the bytes to float32 using Python’s “struct.unpack()” function. Also, to use the point coming from my 640×480 object detection, I had to convert its 2-D coordinates into a 1-D index so that I could find that point in the array. That’s why I used “(width × y + x) × 32”.
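
As an illustration, here is a minimal sketch of that lookup (assuming the 32-byte little-endian layout described above, not necessarily my exact code):

```python
import struct


def point_at(msg, x, y):
    """Return (x, y, z) for pixel (x, y) of an organized PointCloud2.

    Sketch only: assumes width*height points laid out row by row,
    32 bytes per point, little-endian float32 coordinates first.
    """
    # 2-D pixel coordinates -> byte offset into the flat data array
    offset = (msg.width * y + x) * msg.point_step
    # The first 12 bytes of each point are three float32: x, y, z
    return struct.unpack_from('<fff', msg.data, offset)
```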

Here we can see a video of the block tester working:

I’ve also kept working on Docker, and now I have a script that builds the images instead of typing the commands by hand. I have two Docker images: a simulated one and a real one. The simulated image installs ROS 2 Humble, Gazebo and several dependencies, clones my TFG GitHub repo, and sets up everything ROS 2 Humble needs so that colcon can build the workspace. The real image is still in progress, but it will also have ROS 2 Humble, it will not include Gazebo (since it will not need to work with a simulated robot), and it will include all the dependencies of the real robot, such as the camera driver or the Kobuki launchers.

*Source: PointCloud.org