TY - GEN
T1 - BrightFlow
T2 - 23rd IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023
AU - Marsal, Remi
AU - Chabot, Florian
AU - Loesch, Angelique
AU - Sahbi, Hichem
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Unsupervised optical flow estimation relies on the assumption that pixels characterizing the same observed object exhibit a stable appearance across video frames. Under this assumption, the long-standing principle behind flow estimation consists in optimizing a photometric loss that maximizes the similarity between paired pixels in successive frames. However, these frames can be subject to strong brightness changes due to the radiometric properties of scenes as well as their viewing conditions. In this paper, we present BrightFlow, a new method to train any optical flow estimation network in an unsupervised manner. It consists in jointly training two networks that estimate optical flow and brightness changes. These changes are then compensated in the photometric loss so that reconstruction errors due to shadows or reflections do not negatively affect training. As this compensation mechanism is only used at the training stage, our method does not impact the number of parameters or the complexity at inference. Extensive experiments conducted on standard datasets and optical flow architectures show a consistent gain from our method. Source code is available at https://github.com/CEA-LIST/BrightFlow.
AB - Unsupervised optical flow estimation relies on the assumption that pixels characterizing the same observed object exhibit a stable appearance across video frames. Under this assumption, the long-standing principle behind flow estimation consists in optimizing a photometric loss that maximizes the similarity between paired pixels in successive frames. However, these frames can be subject to strong brightness changes due to the radiometric properties of scenes as well as their viewing conditions. In this paper, we present BrightFlow, a new method to train any optical flow estimation network in an unsupervised manner. It consists in jointly training two networks that estimate optical flow and brightness changes. These changes are then compensated in the photometric loss so that reconstruction errors due to shadows or reflections do not negatively affect training. As this compensation mechanism is only used at the training stage, our method does not impact the number of parameters or the complexity at inference. Extensive experiments conducted on standard datasets and optical flow architectures show a consistent gain from our method. Source code is available at https://github.com/CEA-LIST/BrightFlow.
KW - Algorithms: Video recognition and understanding (tracking, action recognition, etc.)
KW - Machine learning architectures, formulations, and algorithms (including transfer, low-shot, semi-, self-, and un-supervised learning)
UR - https://www.scopus.com/pages/publications/85149008419
U2 - 10.1109/WACV56688.2023.00210
DO - 10.1109/WACV56688.2023.00210
M3 - Conference contribution
AN - SCOPUS:85149008419
T3 - Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023
SP - 2060
EP - 2069
BT - Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 3 January 2023 through 7 January 2023
ER -