A Novel Optimization Method Boosts Video Motion Estimation
Cornell University researchers have created a new optimization tool for estimating motion across an input video. This tool, called OmniMotion, might be used in video editing and generative AI movie production.
The work is detailed in a paper titled "Tracking Everything, Everywhere, All at Once," presented at the International Conference on Computer Vision (ICCV), held October 2-6 in Paris.
"There are two dominant paradigms in motion estimation—optical flow, which is dense but short range, and feature tracking, which is sparse but long range," said Noah Snavely, associate professor of computer science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science.
"Our method enables us to have both dense and long-range tracking across time."
OmniMotion employs what the researchers refer to as "a quasi-3D representation"—a flexible form of 3D that keeps crucial aspects while avoiding the difficulties of dynamic 3D reconstruction.
The new method creates a comprehensive motion representation for the whole video using a small sample of frames and motion estimations.
Once the representation is optimized, it can be queried with any pixel in any frame to produce a smooth, accurate motion trajectory across the entire video.
This might be handy for introducing computer-generated imagery, or CGI, into video editing, according to Snavely.
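To make the idea of dense, long-range trajectory queries concrete, here is a minimal sketch of the simpler baseline the researchers contrast with: chaining frame-to-frame optical flow to track a single pixel across a video. This is not OmniMotion's quasi-3D method; the `flows` input, the `bilinear_sample` helper, and `track_pixel` are illustrative assumptions, standing in for flow fields a tool such as an optical-flow estimator would produce.

```python
import numpy as np

def bilinear_sample(flow, x, y):
    """Sample a (H, W, 2) flow field at a subpixel location (x, y)."""
    H, W, _ = flow.shape
    x = np.clip(x, 0, W - 1)
    y = np.clip(y, 0, H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    # Interpolate horizontally on the top and bottom rows, then vertically.
    top = (1 - wx) * flow[y0, x0] + wx * flow[y0, x1]
    bot = (1 - wx) * flow[y1, x0] + wx * flow[y1, x1]
    return (1 - wy) * top + wy * bot

def track_pixel(flows, x, y):
    """Chain frame-to-frame flows into a long-range trajectory for one pixel.

    flows: list of (H, W, 2) arrays; flows[t] maps frame t to frame t+1.
    Returns a list of (x, y) positions, one per frame.
    """
    traj = [(x, y)]
    for flow in flows:
        dx, dy = bilinear_sample(flow, x, y)
        x, y = x + dx, y + dy
        traj.append((x, y))
    return traj
```

Chaining like this drifts and breaks under occlusion, which is precisely the short-range limitation of optical flow that Snavely describes; OmniMotion's optimized representation is designed to avoid that accumulation of error.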