Week 11 - Implementing the live video camera

1 minute read

Make things easier with a GUI

Hello all!

I have been working on an interface that guides the user through using the image predictor.

All the code can be found at this link.

It allows the user to choose between a live camera and a recorded video.

  • The first option uses the camera of our laptop/PC or an attached USB webcam to get predictions in real time of a ball moving from the left to the right of the view.

  • The Recorded Video option is about choosing a video file from the local disk and analyzing it, as shown in the sketch after this list.
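The following is a minimal sketch of how that choice can be wired up, not the exact code from the repository: it assumes OpenCV and Tkinter, and treats both sources as the same `cv2.VideoCapture` object so the rest of the pipeline does not care where frames come from.

```python
# Minimal sketch (not the repository code): let the user pick between the
# live camera and a recorded video file, and open it with OpenCV.
import cv2
import tkinter as tk
from tkinter import filedialog, messagebox

def open_source():
    """Return an OpenCV VideoCapture for the chosen source."""
    root = tk.Tk()
    root.withdraw()                      # we only need the dialogs, not a window
    if messagebox.askyesno("Source", "Use the live camera?"):
        cap = cv2.VideoCapture(0)        # 0 = default laptop/USB webcam
    else:
        path = filedialog.askopenfilename(title="Choose a video file")
        cap = cv2.VideoCapture(path)
    root.destroy()
    return cap

if __name__ == "__main__":
    cap = open_source()
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```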

Currently the scene needs to have a green background and an orange ball.
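With that constraint, the ball can be segmented with a simple HSV colour threshold and its centroid taken from the mask. The ranges below are assumptions that would need tuning for the real footage; this is only a sketch of the idea.

```python
# Sketch: segment the orange ball against the green background and return
# its centroid. HSV ranges are assumed values, not the project's settings.
import cv2
import numpy as np

ORANGE_LOW = np.array([5, 100, 100])     # assumed lower HSV bound for orange
ORANGE_HIGH = np.array([25, 255, 255])   # assumed upper HSV bound for orange

def ball_centroid(frame):
    """Return the (x, y) centroid of the orange ball, or None if not visible."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, ORANGE_LOW, ORANGE_HIGH)
    m = cv2.moments(mask)
    if m["m00"] == 0:          # no orange pixels: the ball is not in view
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```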

This is how it looks:

Recorded: Cap0, Cap1, Cap2, Cap3

Live_camera: Cap4

Also, the predictions are not working properly, so I need to improve the generated images used to train the other network. When I extract the X and Y values from the recorded video I have been using to test the current code, they are not exactly the same as the modeled/raw values used to train the LSTM model.
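A quick way to quantify that mismatch is to compare the extracted centroids against the modeled/raw trajectory point by point. The file names and column layout below are hypothetical; this is just a sketch of the check.

```python
# Sketch: compare extracted (x, y) values from the test video with the
# modeled/raw values used for training. File names are hypothetical.
import numpy as np

extracted = np.loadtxt("extracted_xy.csv", delimiter=",")  # columns: x, y
modeled = np.loadtxt("modeled_xy.csv", delimiter=",")      # columns: x, y

n = min(len(extracted), len(modeled))
diff = np.abs(extracted[:n] - modeled[:n])
print("mean |dx|, |dy|:", diff.mean(axis=0))
print("max  |dx|, |dy|:", diff.max(axis=0))
```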

Talking with Inma and Jose María, we decided it is better to include a new value in the model, which is the number of the frame itself. It starts with the first frame in which the ball enters the scene and we can get a centroid, and it ends when the ball disappears from the scene.
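A rough sketch of how that feature could be collected is shown below: the frame counter is relative to the first frame where a centroid is found, and the loop stops once the ball leaves the view. It reuses the hypothetical ball_centroid helper from the earlier sketch and is not the final implementation.

```python
# Sketch: build (frame_idx, x, y) samples from ball entry to ball exit.
# Relies on ball_centroid() from the segmentation sketch above.
def track_with_frame_index(cap):
    """Return a list of (frame_idx, x, y) tuples while the ball is in view."""
    samples = []
    entered = False
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        c = ball_centroid(frame)
        if c is not None:
            entered = True
            samples.append((frame_idx, c[0], c[1]))
            frame_idx += 1          # counts frames since the ball entered
        elif entered:
            break                   # the ball has left the scene
    return samples
```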

Changes reflecting these decisions will be added in the next steps.