Abstract:
In the domain of computer vision, optical flow is a cornerstone for understanding dynamic visual scenes. However, accurately estimating optical flow under large displacement remains an open problem. The conventional image flow constraint is vulnerable to strong nonlinear components, rapid temporal variations, and spatial changes in the intensity function, and the approximation errors inherent in numerical differentiation techniques can further compound these difficulties. In response, this research proposes an optical flow algorithm that exploits the higher precision of a second-order Taylor series approximation within the differential estimation framework to improve the robustness and accuracy of optical flow. This mathematical underpinning captures more information about the local behaviour of the intensity function in scenes with large nonlinear components, rapid temporal changes, or spatially varying intensity gradients, and supports motion estimation in weakly textured regions. The experimental results demonstrate that the proposed algorithm outperforms existing optical flow algorithms and can estimate global motion accurately even in challenging scenarios. It achieves competitive performance on established optical flow benchmarks such as KITTI 2015 and Middlebury. The average endpoint error (AEE), a standard accuracy measure for optical flow algorithms that computes the Euclidean distance between the estimated flow field and the ground truth flow field, is markedly reduced, validating the effectiveness of the algorithm in handling complex motion patterns. Further experiments against the OpenCV optical flow implementations show a significant improvement over state-of-the-art algorithms, indicating the method's potential for practical application in real-world scenarios where accurate motion estimation between consecutive frames is imperative, such as autonomous navigation, video surveillance, flight stabilisation in drones, video stabilisation and motion-based recognition.
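For reference, the following is a minimal sketch, in generic notation, of the two quantities referred to above: the second-order Taylor expansion of the intensity function that underlies the extended flow constraint, and the average endpoint error used for evaluation. The symbols (I for image intensity, (u, v) for the flow components, N for the number of evaluated pixels) are assumptions of this sketch rather than the exact formulation developed in the body of the paper.

\[
I(x+u,\, y+v,\, t+1) \approx I + I_x u + I_y v + I_t
+ \tfrac{1}{2}\bigl( I_{xx} u^{2} + 2 I_{xy} u v + I_{yy} v^{2}
+ 2 I_{xt} u + 2 I_{yt} v + I_{tt} \bigr)
\]

Setting the left-hand side equal to I(x, y, t) under brightness constancy gives a flow constraint that retains the quadratic terms discarded by the conventional first-order linearisation.

\[
\mathrm{AEE} = \frac{1}{N} \sum_{i=1}^{N}
\sqrt{\bigl(u_i - u_i^{\mathrm{gt}}\bigr)^{2} + \bigl(v_i - v_i^{\mathrm{gt}}\bigr)^{2}}
\]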