Fighting fake videos with improved computer vision

June 25, 2019

Contributing to a project that aims to detect “deepfake” videos, U-M engineers developed software that improves a computer’s ability to track an object through a video clip by 11% on average.

The software, called BubbleNets, chooses the best frame for a human to annotate. In addition to helping train algorithms for spotting doctored clips, it could improve computer vision in many emerging areas such as driverless cars, drones, surveillance and home robotics.
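The idea of choosing the best frame to annotate can be sketched as a ranking problem: compare frames pairwise and keep whichever one is predicted to give a downstream tracker the most useful annotation. The sketch below is a minimal illustration of that selection loop, not the actual BubbleNets network; the `sharpness_compare` scorer is a hypothetical stand-in (pixel variance as a crude informativeness proxy) for a trained relative-performance predictor.

```python
from statistics import pvariance

def select_annotation_frame(frames, compare):
    """Return the frame predicted to be the best annotation target.

    `compare(a, b)` returns a positive value when frame `a` is
    predicted to be a better choice than frame `b`.
    """
    best = frames[0]
    for frame in frames[1:]:
        if compare(frame, best) > 0:
            best = frame
    return best

# Hypothetical stand-in comparator: score each frame by its pixel
# variance (a rough sharpness proxy) instead of a learned predictor.
def sharpness_compare(a, b):
    return pvariance(a) - pvariance(b)

frames = [
    [0, 0, 0, 0],    # flat, uninformative frame
    [0, 5, 10, 15],  # high-variance frame
    [1, 1, 2, 2],    # low-variance frame
]
best = select_annotation_frame(frames, sharpness_compare)
print(best)  # selects the high-variance frame
```

In practice, the comparator would be a neural network that predicts relative tracking performance from the video itself; the selection logic around it stays this simple.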

“The U.S. government has a real concern about state-sponsored groups manipulating videos and releasing them on social media,” said Brent Griffin, U-M assistant research scientist in electrical and computer engineering. “There are way too many videos for analysts to assess, so we need autonomous systems that can detect whether or not a video is authentic.”