In this article, we explored new possibilities for aggregating information from the different channels of color images. This was done by assigning a different importance (threshold) to each channel during the scale phase of edge detection. Several methods were then applied to aggregate the edges extracted from each channel. The output of the algorithms was evaluated against the Berkeley segmentation dataset. The results of the experiments showed that using a different threshold for each channel and aggregating the resulting edges produces an edge map closer to the human-annotated one than the map obtained from the grayscale image. These results also showed that the 8-dimensional color space called Super8, developed in earlier work, yields more significant edges than those obtained from RGB. Moreover, the results point out significant differences in the edges depending on the color channel from which they were extracted.
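The idea of per-channel thresholds followed by aggregation can be illustrated with a minimal sketch. The snippet below uses OpenCV's Canny detector as a stand-in for the edge detector and a simple pixel-wise maximum as one possible aggregation; the function name `per_channel_edges` and the threshold values are illustrative, not the article's actual configuration.

```python
import cv2
import numpy as np

def per_channel_edges(img_bgr, thresholds):
    """Detect edges in each color channel with its own threshold pair,
    then aggregate the per-channel edge maps into a single map.

    thresholds: list of (low, high) hysteresis thresholds, one per channel.
    """
    channels = cv2.split(img_bgr)
    edge_maps = []
    for ch, (low, high) in zip(channels, thresholds):
        # Smooth before detection (the "scale phase"), then apply Canny.
        smoothed = cv2.GaussianBlur(ch, (5, 5), 1.0)
        edge_maps.append(cv2.Canny(smoothed, low, high))

    # Simple aggregation: a pixel is an edge if any channel marks it.
    fused = np.maximum.reduce(edge_maps)
    return edge_maps, fused

# Example: a more permissive threshold pair for the blue channel
# than for the green and red channels (illustrative values only).
# img = cv2.imread("example.jpg")
# _, edges = per_channel_edges(img, [(40, 120), (60, 180), (50, 150)])
```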
This chapter introduces a novel approach to tree detection by fusing LiDAR (Light Detection and Ranging) and RGB imagery, leveraging Ordered Weighted Averaging (OWA) aggregation operators to improve image fusion. It focuses on enhancing tree detection and classification by combining LiDAR's structural data with the spectral detail of RGB images. The fusion methodology aims to optimize information retrieval, employing image segmentation and advanced classification techniques. The effectiveness of this method is demonstrated on the PNOA dataset, highlighting its potential for supporting forest management.
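As a rough sketch of how an OWA operator can fuse co-registered raster layers per pixel, the snippet below sorts each pixel's source values and applies a fixed weight vector by rank. The function `owa_fuse` and the example layer names (`chm_norm`, `greenness`) are hypothetical and only illustrate the operator, not the chapter's actual LiDAR/RGB pipeline.

```python
import numpy as np

def owa_fuse(layers, weights):
    """Fuse co-registered raster layers per pixel with an OWA operator.

    layers:  array of shape (n_sources, H, W), values scaled to [0, 1]
    weights: length-n_sources vector, non-negative, summing to 1
    """
    layers = np.asarray(layers, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Sort each pixel's values in descending order, then weight by rank,
    # so weights attach to positions (largest, second largest, ...) rather
    # than to particular sources.
    ordered = np.sort(layers, axis=0)[::-1]
    return np.tensordot(weights, ordered, axes=(0, 0))

# Example: fuse a normalized LiDAR canopy-height layer with an RGB-derived
# greenness layer, favouring the larger value at each pixel ("or-like" OWA).
# fused = owa_fuse(np.stack([chm_norm, greenness]), weights=[0.7, 0.3])
```

Choosing the weight vector controls the behaviour of the fusion: putting more weight on the top-ranked values behaves like a disjunction (a strong response in either source is enough), while uniform weights reduce to a plain average.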