HDR-YOLO: Adaptive Object Detection in Haze, Dark, and Rain Scenes Based on YOLO
Abstract
Images acquired by surveillance cameras in real-world environments frequently suffer from diminished contrast, degraded image quality, and color aberrations, rendering conventional object detection models ill-suited to the task. Drawing on the foundational principles of image restoration, this study extracts environment-agnostic features across various weather conditions in order to enhance object detection performance in multiple scenarios while maintaining accuracy under typical meteorological conditions. To this end, we introduce HDR-YOLO, a detection framework that jointly trains feature extraction and object detection. To address the visual degradation caused by adverse conditions, we further propose a Dynamic Extraction of Environment-Agnostic Features (DEAF) module. In addition, we combine mean squared error (MSE) loss with Log-Cosh loss as the optimization objective, carefully tailored to further improve detection performance, especially under adverse meteorological conditions. Extensive experiments on the AGVS dataset show that HDR-YOLO improves object detection in airport ground videos under real-world conditions while maintaining precision in typical meteorological conditions, underscoring its adaptability to complex and diverse environments.
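As a rough illustration of the combined objective mentioned above, a blend of MSE and Log-Cosh losses might be implemented as sketched below. The weighting factor `alpha` and the function name are assumptions for illustration; the abstract does not specify how the two terms are balanced in HDR-YOLO.

```python
import math
import torch

def combined_regression_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical blend of MSE and Log-Cosh losses.
    `alpha` (the mixing weight) is an assumption, not taken from the paper."""
    diff = pred - target
    # Mean squared error term.
    mse = torch.mean(diff ** 2)
    # Log-Cosh term, computed in a numerically stable form:
    # log(cosh(x)) = |x| + log(1 + exp(-2|x|)) - log(2)
    abs_diff = torch.abs(diff)
    log_cosh = torch.mean(abs_diff + torch.log1p(torch.exp(-2 * abs_diff)) - math.log(2.0))
    # Weighted sum of the two terms.
    return alpha * mse + (1 - alpha) * log_cosh

# Example usage with dummy regression outputs.
pred = torch.randn(8, 4)
target = torch.randn(8, 4)
loss = combined_regression_loss(pred, target)
```

Log-Cosh behaves like MSE for small residuals but grows roughly linearly for large ones, which is why pairing the two is a plausible way to temper the influence of outliers caused by severe weather degradation.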