Previous weeks - Change in initial height


New data

Since we obtained a good result with the frames that include linear motion, we proceed to add a degree of freedom to this type of data by letting the point start at a random height. In the following link you can find a description of this new type of images.
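As a rough illustration, the sketch below shows how this kind of sequence could be generated: a single point moving linearly, now starting from a random height. The frame size, sequence length and speed range are assumptions for the example, not the exact values used in the project.

```python
import numpy as np

FRAME_SIZE = 80      # assumed square frame, in pixels
SEQ_LENGTH = 20      # assumed number of frames per sample

def generate_sample(rng):
    # Random initial height (row) and random horizontal speed
    y0 = rng.integers(5, FRAME_SIZE - 5)
    speed = rng.integers(1, 4)
    frames = np.zeros((SEQ_LENGTH, FRAME_SIZE, FRAME_SIZE), dtype=np.float32)
    for t in range(SEQ_LENGTH):
        x = (speed * t) % FRAME_SIZE   # linear motion along the x axis
        frames[t, y0, x] = 1.0         # draw the point
    return frames

rng = np.random.default_rng(0)
dataset = np.stack([generate_sample(rng) for _ in range(1000)])
print(dataset.shape)  # (1000, 20, 80, 80)
```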

Non Recurrent Neural Networks

I used the same structure as in the previous training:

2D Convolutional network structure
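For reference, this is a minimal sketch of a non-recurrent 2D convolutional predictor of this kind, assuming the known frames are stacked along the channel axis and the network outputs the next frame. The layer sizes and number of input frames are illustrative, not the exact structure shown above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_INPUT_FRAMES = 3   # assumed number of known frames fed to the network
FRAME_SIZE = 80      # assumed frame size

model = models.Sequential([
    layers.Input(shape=(FRAME_SIZE, FRAME_SIZE, N_INPUT_FRAMES)),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(1, (3, 3), padding="same", activation="sigmoid"),  # predicted next frame
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```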

As I mentioned earlier, this new degree of freedom increases the complexity of the problem and, as expected, the performance of the network is not good.

Loss history
Error histogram
Relative error histogram

In the next image you can see the target frames of the samples where the absolute and relative errors are maximum. As can be seen, the error made is very high.

Relative and absolute error
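The sketch below illustrates the kind of error metrics reported above, under the assumption that the error is measured on the position of the point: the absolute error as the pixel distance between the predicted and true positions, and the relative error as that distance divided by the displacement of the point between the last known frame and the target frame. The exact definitions used in the project may differ.

```python
import numpy as np

def point_position(frame):
    # Position of the brightest pixel, taken as the point's location
    return np.unravel_index(np.argmax(frame), frame.shape)

def errors(predicted_frame, target_frame, last_known_frame):
    p_pred = np.array(point_position(predicted_frame), dtype=float)
    p_true = np.array(point_position(target_frame), dtype=float)
    p_last = np.array(point_position(last_known_frame), dtype=float)
    absolute = np.linalg.norm(p_pred - p_true)
    displacement = np.linalg.norm(p_true - p_last)
    relative = absolute / displacement if displacement > 0 else np.inf
    return absolute, relative
```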

Recurrent Neural Networks

I used the same structure as in the previous training:

ConvLSTM network structure
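As a hedged sketch of this kind of recurrent predictor, the example below stacks ConvLSTM layers over the input sequence of frames and outputs the next frame. The number of layers, filters and input frames is illustrative; the real structure is the one shown above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_INPUT_FRAMES = 3   # assumed number of known frames fed to the network
FRAME_SIZE = 80      # assumed frame size

model = models.Sequential([
    layers.Input(shape=(N_INPUT_FRAMES, FRAME_SIZE, FRAME_SIZE, 1)),
    layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True),
    layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=False),
    layers.Conv2D(1, (3, 3), padding="same", activation="sigmoid"),  # predicted next frame
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```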

In this case we obtain a result that we could consider as expected: these networks are better able to capture the temporal relationship and achieve better performance than in the previous case.

Loss history
Error histogram
Relative error histogram

In the next image you can see the target frames of the samples where the absolute and relative errors are maximum. As I mentioned, this type of structure has improved the results and the error made, although it remains high, is reduced compared to the previous structure.

Relative and absolute error

In order to improve the obtained results, we will increase the number of samples in accordance with the complexity of the problem and modify the structure of the network in the same way.