
Introduction

In this post, we will talk about YOLOP for drivable area and lane detection. YOLOP will be used for perception.

What is Yolop?

YOLOP is a panoptic vision perception system designed to aid autonomous driving in real time. It is one of the very first end-to-end panoptic perception models aimed at self-driving systems running in real time.

It performs traffic object detection, drivable area segmentation, and lane detection simultaneously.

Figure: YOLOP architecture

To use it, you must install all the dependencies: Requirements

In our case, we are interested in drivable area and lane detection, since cars cannot be detected at this moment.

Models

YOLOP follows an end-to-end multi-task CNN (convolutional neural network) architecture, MCnet. The model is built in PyTorch: ‘End-to-end.pth’

From the end-to-end model it is possible to export models in ONNX format. This format offers optimized operators and a computation graph that improve inference performance and reduce computational cost.
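
As a reference, a minimal sketch of such an export with torch.onnx.export could look like this (the torch.hub loading call, the 320x320 resolution and the output names are illustrative assumptions, not the exact export script used):

```python
import torch

# Load the end-to-end YOLOP model (illustrative; the released checkpoint
# is 'End-to-end.pth' from the YOLOP repository).
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)
model.eval()

# Dummy input at the desired export resolution (here 320x320).
dummy_input = torch.randn(1, 3, 320, 320)

# Export to ONNX; opset version and tensor names are illustrative choices.
torch.onnx.export(
    model,
    dummy_input,
    "yolop-320-320.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["det_out", "drive_area_seg", "lane_line_seg"],
)
```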

With all this information, we have tested 3 models:

  • End-to-end: PyTorch model. The dimensions of the input image must be multiples of 32, because the maximum stride of the network is 32 and it is a fully convolutional network (a resizing sketch is shown after this list).

  • Yolop-320-320: ONNX model. Input image of 320x320

  • Yolop-640-640: ONNX model. Input image of 640x640

Note: Yolop-320-320 and Yolop-640-640 are models exported from the End-to-end model.
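
As a small illustration of the multiples-of-32 constraint, a frame of arbitrary size could be resized so that both dimensions become valid inputs for the End-to-end model (rounding up and plain resizing are assumptions here; the original pipeline may pad instead):

```python
import cv2

def resize_to_multiple_of_32(image):
    """Resize so height and width are multiples of 32 (the network's maximum stride)."""
    h, w = image.shape[:2]
    new_h = ((h + 31) // 32) * 32  # round up to the nearest multiple of 32
    new_w = ((w + 31) // 32) * 32
    return cv2.resize(image, (new_w, new_h))
```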

For more information about ONNX and how to install it: Onnx, Install onnx
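
To give an idea of how the exported ONNX models are used, here is a minimal inference sketch with onnxruntime (the file name, frame path, preprocessing and the assumption of exactly three outputs are illustrative, not the exact code of the project):

```python
import cv2
import numpy as np
import onnxruntime as ort

# Load the 320x320 ONNX model; the CUDA provider is used when available.
session = ort.InferenceSession(
    "yolop-320-320.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Preprocess: resize to the fixed input size and normalize to [0, 1].
image = cv2.imread("frame.png")
blob = cv2.resize(image, (320, 320)).astype(np.float32) / 255.0
blob = blob.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW

# Run inference; we assume three outputs: detections, drivable area, lane lines.
input_name = session.get_inputs()[0].name
det_out, drive_area_seg, lane_line_seg = session.run(None, {input_name: blob})

# The segmentation heads return per-class score maps; argmax gives binary masks.
drivable_mask = np.argmax(drive_area_seg, axis=1)[0]
lane_mask = np.argmax(lane_line_seg, axis=1)[0]
```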

Results with YOLOP

To analyze each model, we calculated its inference time and then represented the results in a bar graph.

Figure: mean inference time per model

As you can see, yolop-320-320 has a better mean inference time than end-to-end and yolop-640-640; therefore it is a good candidate model to choose, since with it we can achieve a rate of approximately 100 fps.
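
For reference, the mean inference time and the corresponding frame rate of a model can be measured with a simple loop like the following sketch (warm-up count, number of iterations and the use of a random input are assumptions):

```python
import time

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "yolop-320-320.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 320, 320).astype(np.float32)

# Warm-up runs so CUDA initialization does not bias the measurement.
for _ in range(10):
    session.run(None, {input_name: dummy})

# Average the inference time over a number of iterations.
times = []
for _ in range(100):
    start = time.perf_counter()
    session.run(None, {input_name: dummy})
    times.append(time.perf_counter() - start)

mean_time = np.mean(times)
print(f"mean inference time: {mean_time * 1000:.2f} ms (~{1.0 / mean_time:.0f} fps)")
```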

Finally, in this video you can see each model and its results for drivable area segmentation and lane detection.

Note: The drone is remote-controlled with a joystick, and to compute the inference of each model we have used CUDA to speed up the computation.


[YOLOP](https://youtu.be/G0New6pOUbs?si=_XqWbcm6EAjRD-w9)