This week I solved a problem regarding the MOT dataset sequences I obtained. The duration and FPS indicated on the website (https://motchallenge.net/data/MOT17Det/) do not match the actual sequences, so I added an image source to feed the application directly with the raw frames. I also realized that, following the instructions in the official paper of the dataset (http://arxiv.org/abs/1603.00831), the provided ground truth should only be evaluated on the ‘person’ class. So maybe this needs to be discussed…
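As a reference, this is a minimal sketch of how the person-only filtering can be done when reading the ground truth, assuming the standard gt.txt column layout described in the MOT16/17 paper (the helper name and path are only illustrative, not the actual code of the application):

```python
import csv

# Sketch: load MOT17 ground truth and keep only pedestrian boxes.
# Assumed column layout (from the MOT16/17 paper):
# frame, id, bb_left, bb_top, bb_width, bb_height, flag, class, visibility
PEDESTRIAN_CLASS = 1

def load_pedestrian_gt(gt_path):
    boxes = {}  # frame number -> list of (id, x, y, w, h)
    with open(gt_path) as f:
        for row in csv.reader(f):
            frame, obj_id = int(row[0]), int(row[1])
            x, y, w, h = map(float, row[2:6])
            flag, cls = int(row[6]), int(row[7])
            if flag == 0 or cls != PEDESTRIAN_CLASS:
                continue  # skip ignored entries and non-person classes
            boxes.setdefault(frame, []).append((obj_id, x, y, w, h))
    return boxes

# Example (illustrative path): gt = load_pedestrian_gt('MOT17-02/gt/gt.txt')
```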
I also had problems with frames not being logged when they contained no tracked object; that has been solved. The logger was changed so that it also logs the skipped frames (the previous version did the opposite and left them out).
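The idea is simply to write one entry per frame even when the tracker returns nothing. A minimal sketch of that behaviour, with hypothetical names that do not correspond to the real logger:

```python
# Sketch only: one log entry per frame, even when no objects are tracked,
# so that empty frames are no longer skipped.
def log_frame(log_file, frame_idx, tracked_objects):
    if not tracked_objects:
        # Previously frames like this were dropped; now an empty entry is kept.
        log_file.write(f"{frame_idx},\n")
        return
    for obj_id, x, y, w, h in tracked_objects:
        log_file.write(f"{frame_idx},{obj_id},{x},{y},{w},{h}\n")
```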
Some frames are still not being logged (most of them are), so that needs to be fixed in order to get proper results. In the first tests on MOT (specifically MOT17-02) the AP obtained for the person class was 7.97%, with a precision of 81% and a recall of 9.7%. The results on other MOT sequences were similar. This means that the detections the application does produce are usually correct (few false positives), but it misses a lot of objects (many false negatives). This gives an idea of where to work on the system to improve these results.
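To make the interpretation explicit, these are the definitions behind those numbers, with illustrative counts (not the real MOT17-02 counts) that reproduce the same high-precision, low-recall regime:

```python
# Precision/recall from detection counts (illustrative numbers only).
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)   # how many reported detections are correct
    recall = tp / (tp + fn)      # how many ground-truth objects are found
    return precision, recall

# E.g. 81 correct detections, 19 false alarms and 754 missed objects give
# roughly the observed regime: precision ~0.81, recall ~0.097.
p, r = precision_recall(tp=81, fp=19, fn=754)
print(f"precision={p:.2f}, recall={r:.3f}")
```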
Work in progress: ID assignment and final results on MOT17Det. Future work: check the TrackingNet dataset, which provides an evaluation tool in Python –> https://github.com/SilvioGiancola/TrackingNet-devkit
Regarding the dissertation, the main chapters were defined, a first draft of the Introduction section (chapter 1) was written, and some cleanup was done in the State of the Art section (chapter 2). The format was also updated with a more appropriate template. The latest version can be seen at https://github.com/RoboticsURJC-students/2017-tfm-alexandre-rodriguez/tree/develop/latex