This is the code for this video on YouTube by Siraj Raval as part of the #InMyFeelingsChallenge dance competition. The challenge is to create your own AI that dances to this song and submit it via Twitter, Facebook, YouTube, Instagram, or LinkedIn (or all of them) using the #InMyFeelingsChallenge hashtag.
The page load is about 13.5 MB, which includes the PoseNet TensorFlow model, so be patient and wait for the page to load fully.
This demo can process both pictures and videos. I used .mp4 files downloaded from Giphy.com.
After processing, you can download a WebM video of the pose estimation. This uses the captureStream method, which some browsers do not support.
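The WebM export can be sketched roughly as follows: grab a MediaStream from the output canvas and feed it to a MediaRecorder. This is an illustrative sketch, not the demo's actual code; the canvas element and file name are assumptions, and the feature check covers browsers without captureStream.

```javascript
// Sketch of recording a canvas as a WebM video, assuming PoseNet
// results are being drawn onto the given canvas. Names are illustrative.
function startRecording(canvas) {
  // captureStream is not available in every browser (e.g. older Safari).
  if (typeof canvas.captureStream !== 'function') {
    throw new Error('canvas.captureStream is not supported in this browser');
  }
  const stream = canvas.captureStream(30); // capture at 30 fps
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // Offer the recorded pose-estimation video as a download.
    const blob = new Blob(chunks, { type: 'video/webm' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'pose-estimation.webm'; // hypothetical file name
    a.click();
  };
  recorder.start();
  return recorder; // call recorder.stop() when processing finishes
}
```

Calling `recorder.stop()` fires the `onstop` handler, which assembles the recorded chunks into a downloadable Blob.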
Sometimes the video may appear to pause without actually pausing (a bug? lag?). If this happens, just rewind the video.
PoseNet runs with either a single-pose or a multi-pose detection algorithm. The single-person pose detector is faster and more accurate, but it requires that only one subject be present in the image.
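The two modes can be sketched with the TensorFlow.js PoseNet API of that era (the positional-argument 0.x-style API); the function and parameter values here are illustrative assumptions, not the demo's exact code.

```javascript
// Minimal sketch of single-pose vs. multi-pose estimation with the
// tfjs PoseNet package. `posenet` is assumed to be loaded globally
// via a <script> tag, as in 2018-era browser demos.
async function estimatePoses(imageElement, multiple) {
  const net = await posenet.load();
  const imageScaleFactor = 0.5; // example value
  const flipHorizontal = false;
  const outputStride = 16;      // example value
  if (multiple) {
    // Multi-pose: returns an array of poses, one per detected person.
    return net.estimateMultiplePoses(
      imageElement, imageScaleFactor, flipHorizontal, outputStride);
  }
  // Single-pose: faster, but assumes exactly one person in the frame.
  return net.estimateSinglePose(
    imageElement, imageScaleFactor, flipHorizontal, outputStride);
}
```

Each returned pose contains a confidence score and a list of keypoints (nose, eyes, shoulders, and so on) with their image coordinates.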
The output stride and image scale factor have the largest effects on accuracy/speed. A higher output stride results in lower accuracy but higher speed. A higher image scale factor results in higher accuracy but lower speed.
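The trade-off above translates into two knobs passed to the estimation call. The concrete values below are examples of the two extremes, not the settings this demo actually ships with; PoseNet accepts output strides of 8, 16, or 32 and image scale factors between 0.2 and 1.0.

```javascript
// Example presets for the speed/accuracy trade-off (values are
// illustrative, not the demo's configuration).
const fastSettings = {
  imageScaleFactor: 0.5, // smaller input image -> faster, less accurate
  outputStride: 32,      // coarser output -> faster, less accurate
};
const accurateSettings = {
  imageScaleFactor: 1.0, // full-size input -> slower, more accurate
  outputStride: 8,       // finer output -> slower, more accurate
};

async function estimateWithSettings(imageElement, settings) {
  const net = await posenet.load();
  return net.estimateSinglePose(
    imageElement,
    settings.imageScaleFactor,
    /* flipHorizontal */ false,
    settings.outputStride);
}
```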
This is a pure JavaScript implementation of PoseNet.
Thanks to TensorFlow.js for its flexible and intuitive APIs.
Created by Maxim Maximov in 2018 | Github Repo