Week 5 - Training nets with a big dataset

1 minute read

Meeting summary

  • The main objective of the last two weeks was to adapt the code so that networks can be trained with larger datasets, in order to analyze how a considerable increase in the number of samples affects performance. This objective has been achieved, which lets us move forward on the established roadmap.
  • Following on from the previous point, this week I will run the training and tests needed to see the effect of the increased number of samples.
  • The refactoring of the code for modeled samples has not been tackled yet, so it remains a pending task for the coming weeks.

To Do

The tasks proposed for this week are:

  • Train the 3 networks with more samples (100,000) on the linear-movement dataset with 3 degrees of freedom.
  • Refactor the code so that the networks can be trained and evaluated with modeled data.
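As a rough idea of what a 100,000-sample linear-movement dataset with 3 degrees of freedom might look like, here is a minimal NumPy sketch (the generator, its parameters, and the choice of initial position, direction, and speed as the 3 varied degrees of freedom are my assumptions, not the project's actual code): each sample is a short trajectory of (x, y) points moving in a straight line, and the target is the next point.

```python
import numpy as np

def make_linear_dataset(n_samples=100_000, seq_len=20, seed=0):
    """Hypothetical generator: each sample is a straight-line trajectory
    of (x, y) points; the target is the point that follows it.
    The 3 varied degrees of freedom are assumed to be the initial
    position, the direction, and the speed."""
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(0, 1, size=(n_samples, 2))              # initial position
    angle = rng.uniform(0, 2 * np.pi, size=(n_samples, 1))   # direction
    speed = rng.uniform(0.01, 0.05, size=(n_samples, 1))     # speed
    vel = speed * np.concatenate([np.cos(angle), np.sin(angle)], axis=1)
    t = np.arange(seq_len + 1).reshape(1, -1, 1)
    traj = x0[:, None, :] + t * vel[:, None, :]              # (n, seq_len+1, 2)
    return traj[:, :-1, :], traj[:, -1, :]                   # inputs, targets

X, y = make_linear_dataset(1000)
print(X.shape, y.shape)  # (1000, 20, 2) (1000, 2)
```

With `n_samples=100_000` this produces the larger training set discussed above; the small call at the end is only there to show the shapes.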

Non-Recurrent Network

I used the same structure as in the previous training:

Non recurrent network structure
Loss history

The next picture shows a summary of the prediction error:

Error stats
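An error summary like the one in the figure could be computed along these lines (a sketch only; the exact metrics shown in the plot are an assumption): take the Euclidean distance between each predicted and ground-truth position and summarize it.

```python
import numpy as np

def error_stats(y_true, y_pred):
    """Summarize the prediction error as the Euclidean distance
    between predicted and ground-truth positions (assumed metric)."""
    err = np.linalg.norm(y_pred - y_true, axis=-1)
    return {"mean": err.mean(), "std": err.std(),
            "median": np.median(err), "max": err.max()}

y_true = np.array([[0.0, 0.0], [1.0, 1.0]])
y_pred = np.array([[0.0, 0.1], [1.0, 1.0]])
print(error_stats(y_true, y_pred))
```

The same helper can be reused unchanged for the three networks, which makes their error summaries directly comparable.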

LSTM Network

I used the same structure as in the previous training:

LSTM network structure
Loss history

The next picture shows a summary of the prediction error:

Error stats
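Unlike the non-recurrent network, the LSTM consumes each trajectory as an ordered time sequence. A common way to prepare that input, sketched here with assumed window sizes (not necessarily the ones used in the project), is to slice each long trajectory into overlapping windows where the target is the point that follows each window:

```python
import numpy as np

def to_sequences(points, window=10):
    """Split one trajectory of (x, y) points into overlapping windows
    for a recurrent model: each input is `window` consecutive points,
    the target is the next point (assumed layout)."""
    X = np.stack([points[i:i + window] for i in range(len(points) - window)])
    y = points[window:]
    return X, y

traj = np.arange(60, dtype=float).reshape(30, 2)  # one toy trajectory
X, y = to_sequences(traj, window=10)
print(X.shape, y.shape)  # (20, 10, 2) (20, 2)
```

The resulting `(samples, timesteps, features)` shape is the standard input layout for LSTM layers.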

ConvLSTM Network

I used the same structure as in the previous training:

ConvLSTM network structure
Loss history

The next picture shows a summary of the prediction error:

Error stats
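A ConvLSTM layer typically expects a 5-D input of shape `(samples, time, rows, cols, channels)`, i.e. a sequence of image-like frames rather than raw coordinates. One way to bridge the gap, sketched here purely as an assumption about the data representation, is to render each (x, y) point as a one-pixel binary frame:

```python
import numpy as np

def points_to_frames(seqs, grid=32):
    """Render (x, y) point sequences in [0, 1) as one-channel binary
    frames, producing the (samples, time, rows, cols, channels) layout
    a ConvLSTM layer typically expects (assumed representation)."""
    n, t, _ = seqs.shape
    frames = np.zeros((n, t, grid, grid, 1), dtype=np.float32)
    idx = np.clip((seqs * grid).astype(int), 0, grid - 1)
    for s in range(n):
        for step in range(t):
            col, row = idx[s, step]
            frames[s, step, row, col, 0] = 1.0
    return frames

seqs = np.random.default_rng(0).uniform(0, 1, size=(4, 10, 2))
frames = points_to_frames(seqs)
print(frames.shape)  # (4, 10, 32, 32, 1)
```

Each frame contains exactly one lit pixel, so the network has to learn the motion pattern from the spatial position of that pixel over time.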