4 minute read

Lane Detection

The main purpose of this project is to achieve autonomous driving of a vehicle in different situations. To achieve this, one thing needs to be very clear: is the vehicle on the road or off it? Detecting whether the vehicle is on the road or has left it, and how close it is to leaving, are key aspects when trying to create an autonomous driving algorithm.

TeleopWindow

Target of the algorithm

The main target of this algorithm is to identify the road borders. With those, we can derive, for example, the center of the road, or how close we are to either border. This allows us to identify when the robot is at the center of the road or when it is on the verge of leaving it, for example.

Environment: Sensors

To identify the center of the lane we will use the robot’s front camera. We need a wide view from the camera to ensure that both sides of the road are always detected while the robot is physically over the road. This required tweaking the horizontal fov of the camera in the robot model settings:

<horizontal_fov>2.75</horizontal_fov>

With an increased horizontal fov, the camera acts as if it had a default zoom-out, keeping the road in view at all times: robotView

Road identification: Color filtering

To start detecting the road, we first need to separate the edges of the road from the rest of the image. As the track we are using delimits the road with white borders, we can simply apply an HSV filter to the image to obtain a mask with all the pixels of the lane limits in white and the rest in black.

Analysis of the road mask

Once the filter has produced the white mask with the sides of the road, we need to identify the two groups of pixels that define the road borders. Considering there is one white line on the left and another on the right, always separated by the road width, we can perform this analysis:

  • Draw a straight horizontal line on the mask. Using that horizontal line, identify the first and last pixel across the horizontal:

robotView

Now, we can calculate the average of those two pixels and divide the image into two clusters:

robotView

Imagine the case where the robot only sees one line. The first and last pixel will be on the same road line mark:

robotView

If the distance between the first and last pixel is too small, we can consider that only one line is being detected and tell the robot that it does not have a full perception of the lane. This normally leads to the robot stopping, as it cannot make any assumption about where it is.

If the two lines are detected, we use the clustering to divide the image into two zones. We then take the white pixels along the horizontal line in each zone and average their X coordinates to obtain the X center of each of the two lane limits. Once the center of both border lines has been found, each individual center gives us the left and right limits of the road, and the average of the two gives us the center of the road.
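The whole per-row analysis described above can be sketched as follows; the separation threshold is an illustrative value, not the one used in the project:

```python
import numpy as np

MIN_LINE_SEPARATION = 20  # pixels; illustrative threshold for the one-line case

def centers_at_row(mask, row):
    """Return (left_center, right_center, road_center) for one row of the mask,
    or None when only one border line (or none) is visible."""
    xs = np.flatnonzero(mask[row])          # columns of white pixels on that row
    if xs.size == 0:
        return None
    first, last = xs[0], xs[-1]
    if last - first < MIN_LINE_SEPARATION:  # both extremes on the same line mark
        return None                         # incomplete perception of the lane
    midpoint = (first + last) // 2          # split the row into two clusters
    left = xs[xs < midpoint]
    right = xs[xs >= midpoint]
    left_center = float(left.mean())
    right_center = float(right.mean())
    return left_center, right_center, (left_center + right_center) / 2.0
```

Returning `None` maps to the "stop the robot" case from the text: the lane is not fully perceived, so no position assumption is made.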

Taking a single measurement can be quite problematic. Also, these measurements need to be taken at some height in the image, which poses the following dilemma:

  • A measurement taken higher in the image provides the robot with a center further into the future, meaning that an instant corrective action may come too soon, before it should actually take place.
  • A measurement taken lower corresponds to a line closer to the robot’s wheels, which gives the robot the greatest reactivity and immediacy in its movements, but is also easier to lose: the closer we are to the bottom of the image, the wider the line appears, so the easier it is to lose either of its borders.

To solve this problem, multiple measurements are taken, obtaining the center on each of those horizontal lines:

robotView

With each of those centers, many approaches can be taken depending on the needs of the user:

  • Use the top measurement to obtain a safer, further-into-the-future approach, which may be good for lower speeds or when obstacles appear.
  • Use the bottom measurement to get the most reactive approach possible, maximizing the speed possibilities of the robot.
  • Compute an average of the centers, obtaining the most reliable measure for general purposes.

For example, in the two methods implemented so far for autonomous movement, we compute the average of all the centers.
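The multi-row averaging can be sketched like this; the scan heights (as fractions of the image height) and the separation threshold are illustrative assumptions:

```python
import numpy as np

def road_center(mask, heights=(0.55, 0.65, 0.75, 0.85)):
    """Average the road-center estimate over several scan rows of a binary mask.

    `heights` are fractions of the image height; the values here are
    example choices, not the ones used in the project."""
    centers = []
    for h in heights:
        xs = np.flatnonzero(mask[int(h * mask.shape[0])])
        if xs.size == 0 or xs[-1] - xs[0] < 20:  # row skipped: lane not fully seen
            continue
        mid = (xs[0] + xs[-1]) // 2              # split row into left/right clusters
        centers.append((xs[xs < mid].mean() + xs[xs >= mid].mean()) / 2.0)
    return float(np.mean(centers)) if centers else None
```

Averaging over several rows trades a little reactivity for robustness: a single noisy or partially occluded row no longer dominates the estimate.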

Portability of the algorithm

To make the algorithm usable in many applications, we have wrapped it in a class. The class has a main method: getCenterPosition(image, HEIGHT). The image passed in is the image from which the center of the lane is obtained, and the HEIGHT parameter is the height at which the horizontal line is drawn across the road to obtain the center.
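A minimal sketch of what such a class could look like; only the getCenterPosition(image, HEIGHT) signature comes from the text, while the class name, the threshold, and the internals are assumptions:

```python
import numpy as np

class LaneDetector:
    """Hypothetical wrapper matching the interface described above;
    the real class in the project may differ in name and internals."""

    MIN_SEPARATION = 20  # illustrative threshold for the one-line case

    def getCenterPosition(self, image, HEIGHT):
        """Return the X center of the road at row HEIGHT of a binary mask,
        or None when the full lane is not perceived."""
        xs = np.flatnonzero(image[HEIGHT])
        if xs.size == 0 or xs[-1] - xs[0] < self.MIN_SEPARATION:
            return None
        mid = (xs[0] + xs[-1]) // 2
        return (xs[xs < mid].mean() + xs[xs >= mid].mean()) / 2.0
```

Keeping the detection behind one method means any controller (line following, obstacle avoidance, etc.) only needs a mask and a scan height to reuse it.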

Updated: