Online Control Using Particle Belief Propagation
|Graduate Research Project||Karen Liu||Spring 2015 - Summer 2015 (Continued as Personal Project)||C++||No||Code N/A|
By using a particle-based belief propagation algorithm to derive an appropriate control prior, and multiple predictive walkers that explore each candidate set of controls forward to a planning horizon, human-like (well, zombie-like) bipedal motion can be generated without pre-built controllers or pre-scripted state machines, using only simple cost functions that, for instance, reward standing upright.
My implementation is based on the reference implementation by the primary author of the source paper, Online Control of Simulated Humanoids Using Particle Belief Propagation, although I built mine, as with all of my Character Animation projects, using Georgia Tech's DART (Dynamic Animation and Robotics Toolkit). In my implementation, each walker gets its own complete simulation world.
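To illustrate the per-walker-world idea without pulling in DART itself, here is a minimal sketch. The `World` struct below is a hypothetical stand-in for `dart::simulation::World`; the point is only that each predictive walker gets an independent copy of the master world, so rollouts to the planning horizon never disturb the master's state.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for a physics world (the real project uses
// dart::simulation::World). Copying it gives each walker an independent
// simulation that can be stepped freely.
struct World {
    double state = 0.0;          // toy scalar state
    void step(double u) { state += u; }
};

// Build one private world per predictive walker by copying the master,
// so forward rollouts never touch the master simulation.
std::vector<World> makeWalkerWorlds(const World& master, int nWalkers) {
    return std::vector<World>(nWalkers, master);
}
```

Stepping one walker's world leaves both the master and the other walkers untouched, which is exactly what lets the walkers explore different control trajectories in parallel.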
To summarize the gist of the algorithm: for every frame of controlled bipedal animation, controls are sampled from a control prior and applied to each of a group of walkers, and the resulting motion is evaluated. This is repeated, for each walker, for as many forward simulation steps as necessary to reach the planning horizon (how far into the future the motion is evaluated). The first control of the trajectory with the least overall cost is then applied to the master skeleton, producing a single frame of control.
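The loop above can be sketched in a few lines. This is a deliberately simplified, one-dimensional version: the `Walker` dynamics and cost below are toy stand-ins (a "lean angle" penalized for deviating from upright), and the prior is a single Gaussian rather than the full belief-propagation machinery, but the structure — sample a trajectory per walker, simulate to the horizon, keep the first control of the cheapest trajectory — matches the description.

```cpp
#include <cmath>
#include <limits>
#include <random>
#include <vector>

// Toy 1-D walker: state is a single "lean angle", control is a torque,
// and cost penalizes deviation from upright (angle = 0) plus effort.
struct Walker {
    double angle = 0.1;   // slight initial lean
    double cost  = 0.0;
    void step(double u) {
        angle += 0.1 * (angle + u);            // toy unstable dynamics
        cost  += angle * angle + 0.001 * u * u; // "stay upright" cost
    }
};

// One planning iteration: sample a control trajectory for each walker,
// simulate it to the horizon, and return the FIRST control of the
// lowest-cost trajectory (the control applied to the master skeleton).
double planFirstControl(int nWalkers, int horizon, double priorMean,
                        double priorStd, std::mt19937& rng) {
    std::normal_distribution<double> prior(priorMean, priorStd);
    double bestCost  = std::numeric_limits<double>::infinity();
    double bestFirst = 0.0;
    for (int w = 0; w < nWalkers; ++w) {
        Walker walker;
        double first = 0.0;
        for (int t = 0; t < horizon; ++t) {
            double u = prior(rng);   // sample control from the prior
            if (t == 0) first = u;
            walker.step(u);
        }
        if (walker.cost < bestCost) {
            bestCost  = walker.cost;
            bestFirst = first;
        }
    }
    return bestFirst;
}
```

Running this once per animation frame, with the prior recentered using the previous frame's best trajectory, gives the receding-horizon control scheme the paragraph describes.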
The distribution sampled to obtain each control is a product of distributions that minimize the control and its first and second derivatives, while also encoding the rest pose (which is not a cost-function component) and the previous frame's control trajectory. In the future, I plan to build a more informed prior from which to sample controls, which would hopefully yield more humanlike motion while requiring fewer samples.