By zeroing in on humans’ gait, body symmetry and foot placement, U-M researchers are teaching self-driving cars to recognize and predict pedestrian movements with greater precision than current technologies.
Using data collected by vehicles, the researchers capture video snippets of humans in motion and then recreate them in 3D computer simulation. From this, they have built a system that can predict the poses and future locations of one or several pedestrians up to about 50 yards from the vehicle.
Much of the machine learning used to bring autonomous technology to its current level has dealt with two-dimensional still images. But by using video clips that run for several seconds, the U-M system can study the first half of a snippet to make its predictions, and then verify their accuracy against the second half.
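The split-and-verify idea can be sketched in a few lines. This is a toy illustration only: the function names are invented here, and a constant-velocity extrapolation stands in for the U-M system's learned 3D pose models, which the article does not detail.

```python
# Toy sketch of "predict on the first half, verify on the second half".
# The constant-velocity model is an illustrative stand-in, not the
# researchers' actual learned predictor.

def split_clip(track):
    """Split a pedestrian track (a list of (x, y) positions) in half."""
    mid = len(track) // 2
    return track[:mid], track[mid:]

def predict_constant_velocity(history, n_future):
    """Extrapolate future positions assuming constant velocity,
    estimated from the last two observed positions."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * (k + 1), y1 + vy * (k + 1)) for k in range(n_future)]

def mean_error(pred, truth):
    """Average Euclidean distance between predicted and observed points."""
    dists = [((px - tx) ** 2 + (py - ty) ** 2) ** 0.5
             for (px, py), (tx, ty) in zip(pred, truth)]
    return sum(dists) / len(dists)

# A straight-line walk: the held-out second half verifies the prediction.
track = [(float(t), 0.5 * t) for t in range(10)]
history, future = split_clip(track)
pred = predict_constant_velocity(history, len(future))
print(mean_error(pred, future))  # prints 0.0 for this idealized track
```

Real pedestrian motion is far less regular, which is why the researchers model gait, body symmetry, and foot placement rather than simple trajectories; the error computed on the held-out half is what lets them quantify that gap.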