Week 5 - Training networks with a large dataset
Meeting summary
- The main objective of the previous two weeks was to adapt the code so that networks can be trained with larger datasets, in order to analyze the effect that the considerable increase in the number of samples has on efficiency. This objective has been achieved, which allows us to move forward on the established roadmap.
- Following on from the previous point, this week I will carry out the training runs and tests needed to assess the effect of the increase in the number of samples.
- The refactoring of the code for modeled samples has not yet been tackled, so it will remain a pending task for the coming weeks.
To Do
The tasks proposed for this week are:
- Train the 3 networks with more samples (100,000) on the linear-movement dataset with 3 degrees of freedom.
- Refactor the code to be able to train and evaluate the networks with modeled data.
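The report does not show how the linear-movement samples are produced. As a minimal sketch, assuming the samples are constant-velocity trajectories over the 3 degrees of freedom, a dataset of 100,000 sequences could be generated as follows; the sequence length, time step, and value ranges (`seq_len`, `dt`, and the uniform bounds) are hypothetical choices, not taken from the report:

```python
import numpy as np

def make_linear_dataset(n_samples=100_000, seq_len=10, dof=3, dt=0.1, seed=0):
    """Generate constant-velocity (linear-motion) trajectories.

    Each sample is a sequence of `seq_len` positions in `dof` dimensions;
    the target is the position at the next time step.
    """
    rng = np.random.default_rng(seed)
    p0 = rng.uniform(-1.0, 1.0, size=(n_samples, 1, dof))  # initial positions
    v = rng.uniform(-1.0, 1.0, size=(n_samples, 1, dof))   # constant velocities
    t = np.arange(seq_len + 1).reshape(1, -1, 1) * dt      # time grid
    traj = p0 + v * t                                      # straight-line motion
    return traj[:, :-1, :], traj[:, -1, :]                 # inputs, next-step target

X, y = make_linear_dataset()
print(X.shape, y.shape)  # (100000, 10, 3) (100000, 3)
```

With this setup all three networks can be trained on identical input/target pairs, so any difference in error comes from the architecture rather than the data.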
Non-Recurrent Network
I used the same structure as in the previous training:
The following figure summarizes the error obtained:
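The structure itself is not reproduced in this section. To illustrate what a non-recurrent (fully connected) network over these sequences amounts to, here is a minimal NumPy forward-pass sketch; the layer sizes and the 10-step, 3-DOF input shape are hypothetical assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, params):
    """Forward pass of a fully connected network: the whole input
    sequence is flattened, so no temporal recurrence is involved."""
    h = x.reshape(x.shape[0], -1)        # flatten (seq_len, dof) into features
    for W, b in params[:-1]:
        h = relu(h @ W + b)              # hidden layers
    W, b = params[-1]
    return h @ W + b                     # linear output: predicted 3-DOF position

rng = np.random.default_rng(0)
sizes = [30, 64, 64, 3]                  # hypothetical: 10 steps * 3 DOF in, 3 out
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]
batch = rng.standard_normal((8, 10, 3))  # 8 samples of a 10-step, 3-DOF sequence
out = mlp_forward(batch, params)
print(out.shape)  # (8, 3)
```

Because the sequence is flattened, this network must learn temporal structure implicitly from the input ordering, which is the baseline the recurrent models below are compared against.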
LSTM Network
I used the same structure as in the previous training:
The following figure summarizes the error obtained:
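For reference, the recurrence that distinguishes the LSTM from the non-recurrent network can be sketched as a single NumPy cell step unrolled over the sequence; the hidden size, batch size, and sequence length below are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: input, forget, and output gates plus a
    candidate cell state, computed from the input x and previous state h."""
    z = x @ W + h @ U + b                  # all four gate pre-activations at once
    n = h.shape[1]
    i = sigmoid(z[:, :n])                  # input gate
    f = sigmoid(z[:, n:2 * n])             # forget gate
    o = sigmoid(z[:, 2 * n:3 * n])         # output gate
    g = np.tanh(z[:, 3 * n:])              # candidate cell state
    c_new = f * c + i * g                  # update long-term memory
    h_new = o * np.tanh(c_new)             # expose gated hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid, batch = 3, 16, 8              # hypothetical sizes: 3-DOF input
W = rng.standard_normal((n_in, 4 * n_hid)) * 0.1
U = rng.standard_normal((n_hid, 4 * n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h = c = np.zeros((batch, n_hid))
seq = rng.standard_normal((batch, 10, n_in))   # 10-step, 3-DOF sequences
for t in range(seq.shape[1]):                  # unroll over the sequence
    h, c = lstm_step(seq[:, t, :], h, c, W, U, b)
print(h.shape)  # (8, 16)
```

Unlike the flattened feed-forward baseline, the LSTM consumes the sequence step by step and carries state across time, which is what the increase in sample count is expected to exercise.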
ConvLSTM Network
I used the same structure as in the previous training:
The following figure summarizes the error obtained:
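The defining feature of a ConvLSTM is that the dense matrix products in the LSTM gates are replaced by convolutions, so spatial layout is preserved across time. A minimal 1-D NumPy sketch of one such step follows; all sizes (batch, length, channels, kernel width) are hypothetical and chosen only for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d_same(x, k):
    """'Same'-padded 1-D convolution: x is (batch, length, c_in),
    k is (width, c_in, c_out)."""
    w = k.shape[0]
    pad = w // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], x.shape[1], k.shape[2]))
    for i in range(w):                         # sum shifted slices times taps
        out += xp[:, i:i + x.shape[1], :] @ k[i]
    return out

def convlstm_step(x, h, c, Wx, Wh, b):
    """One ConvLSTM step: the same gating as an LSTM, but the dense
    products are replaced by convolutions over the spatial axis."""
    z = conv1d_same(x, Wx) + conv1d_same(h, Wh) + b
    n = h.shape[2]
    i = sigmoid(z[..., :n])                    # input gate
    f = sigmoid(z[..., n:2 * n])               # forget gate
    o = sigmoid(z[..., 2 * n:3 * n])           # output gate
    g = np.tanh(z[..., 3 * n:])                # candidate cell state
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

rng = np.random.default_rng(0)
batch, length, c_in, c_hid, width = 4, 8, 3, 6, 3   # hypothetical sizes
Wx = rng.standard_normal((width, c_in, 4 * c_hid)) * 0.1
Wh = rng.standard_normal((width, c_hid, 4 * c_hid)) * 0.1
b = np.zeros(4 * c_hid)
h = c = np.zeros((batch, length, c_hid))
x = rng.standard_normal((batch, length, c_in))
h, c = convlstm_step(x, h, c, Wx, Wh, b)
print(h.shape)  # (4, 8, 6)
```

The states `h` and `c` keep their spatial dimension instead of being flat vectors, which is why the ConvLSTM tends to differ most from the other two networks when the input has local structure.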