Domain Adaptation Infrared Forest Fire Detection Method Based on YOLOv5 Framework
Abstract
Deep networks have achieved great success in forest fire detection by exploiting visible light images. However, visible light images are susceptible to interference from strong light, smoke, and occlusion. Infrared images are highly sensitive to target temperature changes and can therefore compensate for these deficiencies. Because of the significant distribution shift between visible light and infrared images, directly applying a network pre-trained on visible light images to infrared forest fire detection leads to a substantial drop in performance. To resolve this issue, this paper proposes an infrared forest fire detection system based on domain adaptive learning. We adopt two YOLOv5 frameworks to extract features from visible light images (source domain) and infrared images (target domain). To align the features of the two domains, we construct a novel adaptation learning mechanism based on a Kullback–Leibler (KL) loss and a feature maximum mean discrepancy (FMMD) loss. We conducted extensive comparative experiments on two publicly available datasets to verify the effectiveness of the proposed model. All experimental results indicate that the proposed domain adaptive learning mechanism effectively improves the performance of infrared forest fire detection.
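The abstract describes aligning source- and target-domain features with a KL term and a feature MMD term. The paper's exact formulation is not given here, so the sketch below is only an illustrative PyTorch implementation of such a combined alignment loss; the Gaussian kernel, global-average pooling, softmax normalisation, and weighting are all assumptions, not the authors' definitions.

```python
import torch
import torch.nn.functional as F


def gaussian_mmd(source_feats: torch.Tensor, target_feats: torch.Tensor,
                 bandwidth: float = 1.0) -> torch.Tensor:
    """MMD^2 estimate with a single Gaussian (RBF) kernel.

    source_feats, target_feats: (N, D) pooled feature vectors from the two domains.
    """
    def rbf(a, b):
        # Pairwise squared distances followed by the RBF kernel.
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2.0 * bandwidth ** 2))

    k_ss = rbf(source_feats, source_feats).mean()
    k_tt = rbf(target_feats, target_feats).mean()
    k_st = rbf(source_feats, target_feats).mean()
    return k_ss + k_tt - 2.0 * k_st


def domain_alignment_loss(source_maps: torch.Tensor, target_maps: torch.Tensor,
                          kl_weight: float = 1.0, mmd_weight: float = 1.0) -> torch.Tensor:
    """Combine a KL term and an MMD term over backbone feature maps.

    source_maps, target_maps: (N, C, H, W) feature maps from the two YOLOv5 backbones.
    Weights are hypothetical hyperparameters, not values from the paper.
    """
    # Global-average-pool the maps to (N, C) vectors before comparing distributions.
    src = source_maps.mean(dim=(2, 3))
    tgt = target_maps.mean(dim=(2, 3))

    # KL divergence between softmax-normalised mean feature distributions
    # (F.kl_div expects log-probabilities as the first argument).
    src_log_dist = F.log_softmax(src.mean(dim=0), dim=0)
    tgt_dist = F.softmax(tgt.mean(dim=0), dim=0)
    kl_term = F.kl_div(src_log_dist, tgt_dist, reduction="sum")

    mmd_term = gaussian_mmd(src, tgt)
    return kl_weight * kl_term + mmd_weight * mmd_term
```

In training, such a loss would typically be added to the standard YOLOv5 detection loss so that the backbone learns features that are both discriminative and domain-invariant.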