
  Bestsellers

  • Article (No Access)

    A Systematic Literature Review on Multimodal Image Fusion Models With Challenges and Future Research Trends

    Imaging technology has developed extensively since 1985, with practical implications for both civilian and military applications. Image fusion has recently emerged as a versatile tool in image processing, adept at handling diverse image types, including remote sensing and medical images, where information is enriched by fusing visible and infrared light based on an analysis of the underlying materials. At present, image fusion is applied mainly in the medical field: where single-modality images constrain the diagnosis of a disease, fusion can supply the missing information, which motivates the development of fusion models that combine different imaging modalities. The principal goal of fusion is to achieve higher contrast, enhancing image quality and the information conveyed. Fused images are validated against three criteria: (i) they should retain the significant information of the source images, (ii) they must be free of artifacts and (iii) noise and misregistration flaws must be avoided. Multimodal image fusion is a growing domain built on robust algorithms and standard transformation techniques. This work therefore analyzes the contributions of various multimodal image fusion models that use intelligent methods. It provides an extensive literature survey of image fusion techniques, compares them with existing methods, and presents the state of the art at its various levels together with its pros and cons. The review introduces current fusion methods, modes of multimodal fusion, the datasets used and performance metrics, and finally discusses the challenges of multimodal image fusion and future research trends.

  • Article (No Access)

    Deep Residual Network and Wavelet Transform-Based Non-Local Means Filter for Denoising Low-Dose Computed Tomography

    Image denoising strengthens image statistics and benefits downstream image processing. Owing to the inherent physical limitations of various recording technologies, images are prone to noise during acquisition, and poor illumination and atmospheric conditions degrade the performance of existing methods. To address these issues, this paper develops the Political Taylor-Anti Coronavirus Optimization (Political Taylor-ACVO) algorithm by integrating the Political Optimizer (PO) with the Taylor series and Anti Coronavirus Optimization (ACVO). The input medical image first undergoes a noisy-pixel identification step, in which a deep residual network (DRN) detects noise values; pixel restoration is then performed by the proposed Political Taylor-ACVO algorithm, followed by an image enhancement stage based on the vectorial total variation (VTV) norm. In parallel, the original image is passed through a discrete wavelet transform (DWT), the transformed result is fed to a non-local means (NLM) filter, and an inverse discrete wavelet transform (IDWT) is applied to the filtered output to generate the denoised image. Finally, the enhancement result is fused with the denoised image from the filtering branch to produce the fused output image. The proposed model achieved a peak signal-to-noise ratio (PSNR) of 29.167 dB, a Second Derivative-like Measure of Enhancement (SDME) of 41.02 dB, and a Structural Similarity Index (SSIM) of 0.880 under Gaussian noise.
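    The wavelet branch of the pipeline above (forward DWT, filtering of the subbands, inverse DWT) can be sketched in a few lines. The sketch below is a minimal single-level Haar transform and substitutes a simple soft-threshold for the paper's non-local means filter; the function names and threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: approximation plus three detail subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4.0
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4.0
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return a, h, v, d

def haar_idwt2(a, h, v, d):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def dwt_denoise(img, thresh=0.1):
    """Soft-threshold the detail subbands, keep the approximation untouched."""
    a, h, v, d = haar_dwt2(img)
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)
    return haar_idwt2(a, soft(h), soft(v), soft(d))
```

    With `thresh=0`, the round trip reconstructs the input exactly, which is a quick sanity check on any transform-domain filtering stage.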

  • Article (No Access)

    A CAPACITIVE IMAGE ANALYSIS SYSTEM TO CHARACTERIZE THE SKIN SURFACE

    The assessment of the skin surface is of great importance in the dermocosmetic field for evaluating individuals' responses to medical or cosmetic treatments. Noninvasive devices make in vivo quantitative measurement of changes in skin topographic structures a valuable tool; however, the high cost of the systems commonly employed limits, in practice, their widespread use for routine assessment. In this work we summarize the research activity carried out to develop a compact, low-cost system for skin surface assessment based on capacitive image analysis. The accuracy of the capacitive measurements was assessed by implementing an image fusion algorithm that enables comparison between capacitive images and those obtained with high-cost profilometry, the most accurate method in the field. Very encouraging results were achieved in measuring wrinkle width. On the other hand, when measuring wrinkle depth the experiments expose the native design limitations of the capacitive device, which was primarily conceived for fingerprints, and point toward a specific redesign of the device.

  • Article (No Access)

    Optimal fusion of multi-focus image: Integrating WNMF and focal point analysis

    Image fusion improves image utilization as well as spatial and spectral resolution, and has been widely applied in medicine, remote sensing, computer vision, weather forecasting and military target recognition. Its goal is to reduce the uncertainty and redundancy of the output and increase the reliability of the image while combining the maximum amount of relevant information. In this paper, a multi-focus image fusion algorithm based on WNMF and focal point position analysis is proposed to improve fusion methods based on nonnegative matrix factorization. In the imaging process, a Gaussian function is used to approximate the point spread function of the optical system. The difference between the original image and the approximated point spread function is then computed to obtain the weighting matrix U. Finally, the weighted nonnegative matrix factorization algorithm is applied to image fusion, yielding a new fused image in which the in-focus parts are clear. Experimental results show that the proposed multi-focus image fusion algorithm based on WNMF and focal point position analysis (MFWF) outperforms the compared methods.
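    The Gaussian approximation of the point spread function mentioned above is straightforward to construct; a minimal sketch follows, where the kernel size and sigma are illustrative choices and the normalization assumes a brightness-preserving blur.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Gaussian approximation of the optical point spread function."""
    ax = np.arange(size) - (size - 1) / 2.0       # coordinates centered on the kernel
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                             # normalize so blurring preserves brightness
```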

  • Article (No Access)

    Assessment of SPOT-6 optical remote sensing data against GF-1 using NNDiffuse image fusion algorithm

    A cross-comparison method was used to assess SPOT-6 optical satellite imagery against Chinese GF-1 imagery using three types of indicators: spectral and color quality, fusion effect and identification potential. Spectral response function (SRF) curves were used to compare the two types of imagery, showing that the SRF curve shape of SPOT-6 is closer to a rectangle than that of GF-1 in the blue, green, red and near-infrared bands. The NNDiffuse image fusion algorithm was used to evaluate the capability of information conservation in comparison with wavelet transform (WT) and principal component (PC) algorithms. The results show that the NNDiffuse-fused image has an entropy extremely similar to that of the original image (1.849 versus 1.852) and better color quality. In addition, the object-oriented classification toolset (ENVI EX) was used to identify greenlands, comparing the self-fused SPOT-6 image with the SPOT-6/GF-1 inter-fused image produced by the NNDiffuse algorithm. The overall accuracies are 97.27% and 76.88%, respectively, showing that the self-fused SPOT-6 image has better identification capability.
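    The entropy values quoted above (1.849 versus 1.852) are Shannon entropies of the image gray-level histogram; a minimal sketch of how such a value is computed (the bin count and value range are assumptions for 8-bit imagery):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -np.sum(p * np.log2(p))
```

    A flat image has zero entropy; an image split evenly between two gray levels has exactly one bit.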

  • Article (No Access)

    DETECTION OF MOVING SMALL TARGETS IN INFRARED IMAGE SEQUENCES CONTAINING CLOUD CLUTTER

    Detecting and tracking dim, moving small targets in infrared image sequences containing cloud clutter is an important area of research. This paper proposes a novel algorithm for detecting dim, moving small targets against a cloudy background. The algorithm consists of three stages. The first stage combines spatial filtering of each image with temporal filtering of the sequence, and can be realized as two parallel computations. The second stage performs fusion and segmentation. The last stage acquires and tracks the targets, which is achieved with a Kalman tracker. Our experimental results show that the algorithm is very effective.
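    The Kalman tracker in the final stage can be sketched for a single target coordinate. The constant-velocity model below with scalar position measurements is a generic textbook form, not the paper's exact tracker, and the noise parameters are illustrative assumptions.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=1.0):
    """Constant-velocity Kalman filter over 1D position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    track = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0, 0]))
    return track
```

    On a target moving at constant velocity the velocity state converges, so the filter follows a linear trajectory with a vanishing lag.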

  • Article (No Access)

    Image Fusion for Mars Data Using Mix of Robust PCA

    Multi-sensor image fusion is the process of combining relevant information from a high spatial resolution image and a high spectral resolution image. This paper proposes a pansharpening method for fusing Mars images obtained by the Mars Reconnaissance Orbiter satellite (PAN) and the Mars Odyssey satellite (THEMIS MS). The method combines Intensity, Hue, Saturation (IHS) and robust principal component analysis (RPCA) with the discrete wavelet transform (DWT). The results are compared with those of other fusion techniques using a relatively objective, comprehensive evaluation of the fused image. Experiments show that the proposed algorithm significantly suppresses artificial textures, introduces less spectral distortion, and has an acceptable running time.

  • Article (No Access)

    Fusion of Visible and Thermal Images Using a Directed Search Method for Face Recognition

    A new image fusion algorithm based on visible and thermal images for face recognition is presented in this paper. The new fusion algorithm draws on the strengths of both modalities: the proposed fusion is the weighted sum of thermal and visible face information with two weighting factors α and β, respectively, which are calculated automatically by a directed search algorithm. The proposed fusion framework is evaluated through extensive experiments on the UGC-JU face database. The experiments are threefold. First, images from the individual modalities are used separately for face recognition. Second, face images fused with the proposed method are used for recognition; the highest accuracy achieved by the proposed method is about 98.42%. Last, three existing fusion methods are applied to the same face database for comparison with the proposed method. All results demonstrate significant recognition improvements over the individual modalities and over some of the existing fusion approaches, suggesting that fusion is a viable approach that deserves further study and consideration.
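    The weighted-sum rule described above has the simple closed form F = αT + βV with β = 1 − α; a minimal sketch follows, where α is fixed by hand rather than found by the paper's directed search.

```python
import numpy as np

def weighted_fuse(thermal, visible, alpha):
    """Pixel-wise weighted sum of thermal and visible face images.

    alpha weights the thermal image; beta = 1 - alpha weights the visible one
    (in the paper both weights are found automatically; here alpha is given).
    """
    beta = 1.0 - alpha
    return alpha * thermal.astype(float) + beta * visible.astype(float)
```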

  • Article (No Access)

    Contrast Limited Adaptive Histogram Equalization-Based Fusion in YIQ and HSI Color Spaces for Underwater Image Enhancement

    To improve contrast and restore color in underwater images without losing detail or introducing color cast, this paper proposes a fusion algorithm over different color spaces based on contrast limited adaptive histogram equalization (CLAHE). The original color image is first converted from RGB to two different spaces, YIQ and HSI. The algorithm then applies CLAHE separately in the YIQ and HSI color spaces to obtain two enhanced images, which are converted back to RGB. When the red, green and blue components of the YIQ-RGB or HSI-RGB images are not coherent, the three components are harmonized with the CLAHE algorithm in RGB space. Finally, using a 4-direction Sobel edge detector within a bounded general logarithm ratio operation, a self-adaptive weight selection nonlinear enhancement fuses the YIQ-RGB and HSI-RGB images into the final image. Experimental results show that the proposed algorithm provides more detail enhancement and higher color restoration scores than other image enhancement algorithms; it effectively reduces noise interference and noticeably improves the quality of underwater images.
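    The RGB-to-YIQ conversion used as the first step above is a fixed linear transform; a sketch with approximate NTSC coefficients follows (the exact coefficients the authors used are not stated here, so these values are an assumption).

```python
import numpy as np

# Approximate NTSC RGB -> YIQ conversion matrix
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def rgb_to_yiq(img):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ."""
    return img @ RGB2YIQ.T

def yiq_to_rgb(img):
    """Inverse conversion back to RGB."""
    return img @ np.linalg.inv(RGB2YIQ).T
```

    The Y channel is the luminance, so CLAHE applied to Y alters contrast without directly disturbing the I/Q chrominance channels.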

  • Article (No Access)

    An Improved Biologically-Inspired Image Fusion Method

    A biologically inspired image fusion mechanism is analyzed in this paper, and a pseudo-color image fusion method is proposed that improves on a traditional method. The proposed model describes the fusion process with several abstract definitions that correspond to the detailed behaviors of neurons. First, the infrared and visible images are each ON-antagonism enhanced and OFF-antagonism enhanced. Second, the enhanced visible image produced by the ON-antagonism system is fed back to the active cells in the center-surround antagonism receptive field, and the fused +VIS+IR signal is obtained by feeding back the OFF-enhanced infrared image to the corresponding surround-depressing neurons. The enhanced visible signal from the OFF-antagonism system is then fed back to the depressing cells in the center-surround antagonism receptive field, with the ON-enhanced infrared image taken as the input to the corresponding active cells; this produces the infrared-enhanced-visible cell response, denoted +IR+VIS. The three kinds of signal are taken as the R, G and B components of the output composite image. Finally, experiments evaluate the proposed method using information entropy, average gradient and an objective image fusion measure, with several traditional digital-signal-processing-based fusion methods evaluated for comparison. The quantitative assessment indices show that the proposed fusion model is superior to the classical Waxman model, and on some measures it outperforms the other image fusion methods.

  • Article (No Access)

    Target Object Recognition Using Multiresolution SVD and Guided Filter with Convolutional Neural Network

    Designing an efficient fusion scheme that combines multiple images into a highly informative fused image remains a challenging task in computer vision. This paper introduces a fast and effective image fusion scheme based on multi-resolution singular value decomposition (MR-SVD) with a guided filter (GF). The proposed scheme uses MR-SVD to decompose an image at two scales into a lower approximate layer and a detail layer containing the lower and higher variations of pixel intensity, generating lower and detail layers for the left-focused (LF) and right-focused (RF) images in each series of multi-focus images. The GF creates a refined, smooth-textured fusion weight map via a weighted average over spatial features of the lower and detail layers of each image, and a fused image of LF and RF is obtained by the inverse MR-SVD. Finally, a deep convolutional autoencoder (CAE) segments the fused result using a trained-patches mechanism. Comparisons with state-of-the-art fusion and segmentation methods illustrate that the proposed scheme provides superior fusion and segmentation results, both qualitatively and quantitatively.

  • Article (No Access)

    US/MRI Guided Robotic System for the Interventional Treatment of Prostate

    Needle-based percutaneous prostate interventions include biopsy and brachytherapy: the former is the gold standard for diagnosing prostate cancer, and the latter is often used in its treatment. This paper introduces a novel robotic assistant system for prostate intervention and describes its architecture and workflow, which is significant for the design of similar systems. To offer higher precision and better real-time performance, an Ultrasound (US)/Magnetic Resonance Imaging (MRI) fusion method is proposed to guide the procedures. Image registration is a key step and an active topic in image fusion, especially multimodal image fusion; in this work, we adopt a novel registration method based on active demons and optic flow for prostate image fusion. To verify the availability of the system, we evaluated the US/MRI image fusion on data acquired from six patients, obtaining a root mean square error (RMSE) of 3.15 mm for anatomical landmarks. To verify the accuracy and validity of the system, an experimental platform was built and used for bionic-tissue puncture of the prostate under the guidance of MR and Transrectal Ultrasound (TRUS) fusion images. The experimental results show that the deviations of the final actual needle points from the three target points on the bionic tissue model, measured in the laboratory environment, are less than 2.5 mm.

  • Article (No Access)

    Depth Image Vibration Filtering and Shadow Detection Based on Fusion and Fractional Differential

    The depth image generated by the Kinect sensor always contains vibration and shadow noise, which limits its use. In this research, a method based on image fusion and fractional differentiation is proposed for vibration filtering and shadow detection. First, a pixel-level image fusion method filters the vibration noise by selecting the best value for every pixel across the depth image sequence. Second, an improved operator based on fractional differentiation extracts the shadow noise; it significantly enhances the boundaries of shadow regions, making the shadow detection effective. Finally, a comparison with traditional and state-of-the-art methods using the F-measure indicates that the proposed method filters out vibration and shadow noise effectively.

  • Article (No Access)

    An Efficient FPGA Architecture with High-Performance 2D DWT Processor for Medical Imaging

    Medical image fusion is the process of deriving vital information from multimodality medical images. Important applications of image fusion include medical imaging, remote sensing, computer vision and robotics. For medical diagnosis, computed tomography (CT) provides the best information about denser tissue with less distortion, while magnetic resonance imaging (MRI) provides better information on soft tissue with slightly higher distortion; the aim is therefore to combine CT and MRI images to obtain the most significant information. Implementations of image fusion based on the discrete wavelet transform (DWT) need to focus on low power consumption and small silicon area. To design a low-power, low-area DWT processor, a low-power multiplier and shifter are incorporated in the hardware. This low-power DWT improves the spatial resolution of the fused image and preserves its color appearance, and adopting the lifting scheme in the 2D DWT further reduces power. To implement this 2D DWT processor on a field-programmable gate array (FPGA) as a very large scale integration (VLSI) design, the process is simulated with Xilinx 14.1 tools and with MATLAB. Compared with other available methods, this high-performance processor improves standard deviation (SD), root mean square error (RMSE) and entropy by 24%, 54% and 53%, respectively. We thus obtain a low-power, low-area, high-performance FPGA architecture suited for VLSI that extracts the needed information from multimodality medical images through image fusion.
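    The lifting scheme mentioned above factors the wavelet filter bank into predict and update steps that run in place, which is what makes it attractive for low-power hardware; a one-dimensional Haar sketch follows (illustrative, not the processor's actual datapath).

```python
import numpy as np

def haar_lifting_fwd(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict: detail = odd minus its even-sample prediction
    a = even + d / 2.0      # update: approximation becomes the pairwise mean
    return a, d

def haar_lifting_inv(a, d):
    """Invert the lifting steps in reverse order, then interleave."""
    even = a - d / 2.0
    odd = d + even
    x = np.empty(a.size + d.size)
    x[0::2], x[1::2] = even, odd
    return x
```

    Because each step only adds a scaled copy of one channel to the other, the transform is invertible in fixed-point arithmetic without growth in word length, a common motivation for lifting in VLSI designs.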

  • Article (No Access)

    Multi-Modal Image Fusion via Convolutional Morphological Component Analysis and Guided Filter

    In feature-level image fusion, deep learning technology, and particularly convolutional sparse representation (SR) theory, has emerged as a new topic over the past three years. This paper proposes an effective image fusion method based on convolutional SR: convolutional sparsity-based morphological component analysis with a guided filter (CS-MCA-GF). The guided filter operator and the choose-max coefficient fusion scheme introduced in this method effectively eliminate the artifacts generated by the morphological components in linear fusion while maintaining the pixel saliency of the source images. Experiments show that the proposed method achieves excellent performance in multi-modal image fusion, including medical image fusion.
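    The choose-max coefficient fusion scheme referred to above keeps, at each position, the coefficient with the larger magnitude, i.e. the stronger feature response; a minimal sketch:

```python
import numpy as np

def choose_max_fuse(coeffs_a, coeffs_b):
    """Choose-max fusion rule: keep the coefficient of larger absolute value."""
    mask = np.abs(coeffs_a) >= np.abs(coeffs_b)
    return np.where(mask, coeffs_a, coeffs_b)
```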

  • Article (No Access)

    FUSION OF MULTISPECTRAL AND PANCHROMATIC IMAGES BASED ON THE NONSUBSAMPLED CONTOURLET TRANSFORM

    This paper proposes two different methods for fusing multispectral (MS) and panchromatic (PAN) satellite images using the nonsubsampled contourlet transform (NSCT). The NSCT decomposes the images into several directional subbands at flexible resolutions. The finer subbands of the MS image decomposition, which represent the high-frequency details, are modified with the subbands from the PAN image, and the inverse transform gives a high-resolution MS representation of the given low-resolution MS image. The paper also proposes the mean structural similarity index measure (MSSIM) and the edge stability mean square error (ESMSE) for measuring the quality of the fused image, in addition to standard parameters such as the correlation coefficient, PSNR, RASE and ERGAS. Experiments show that the proposed methods outperform standard fusion techniques in both visual quality and quantitative error measures; the fused images show good structural similarity and better edge stability because the contourlet transform extracts the oriented edge details from the high-resolution PAN image very well.

  • Article (No Access)

    A Novel Fusion Rule for Medical Image Fusion in Complex Wavelet Transform Domain

    Medical image fusion is widely used by clinical professionals for improved diagnosis and treatment of diseases. The main aim of the fusion process is to combine the complete information of all input images into a single fused image, and a novel fusion rule is proposed here for fusing medical images based on the Daubechies complex wavelet transform (DCxWT). Input images are first decomposed using the DCxWT, the resulting complex coefficients are fused using a normalized-correlation-based fusion rule, and the fused image is obtained by applying the inverse DCxWT to the combined complex coefficients. The performance of the proposed method has been evaluated and compared, both visually and objectively, with DCxWT-based fusion methods using state-of-the-art fusion rules as well as with existing fusion techniques. Experimental results and a comparative study demonstrate that the proposed technique generates better results than existing fusion rules and other fusion techniques.

  • Article (No Access)

    Fuzzy Transform-Based Fusion of Multiple Images

    The field of image fusion has developed extensively, and its many algorithms, which combine information from multiple source images into a single fused image, have attracted the attention of many researchers in the recent past. In this paper, fusion of multiple images using the fuzzy transform is proposed. The images to be fused are first decomposed into blocks of the same size; these blocks are then fuzzy-transformed and fused using a maximum-coefficient-value fusion rule, and the fused image is obtained by the inverse fuzzy transform. The performance of the proposed algorithm is evaluated in experiments on multifocus, medical and visible/infrared images and compared with state-of-the-art image fusion algorithms, both subjectively and objectively. Experimental results and the comparative study show that the proposed algorithm fuses multiple images effectively and produces better fusion results for medical and visible/infrared images.

  • Article (No Access)

    Infrared and Visible Image Fusion Based on Sparse Representation and Spatial Frequency in DTCWT Domain

    Infrared and visible image fusion is a key area of research in multi-sensor image fusion; its main purpose is to combine the thermal information of the infrared image with the texture information of the visible image. This paper presents an image fusion framework based on a parallel arrangement of sparse representation (SR) and spatial frequency (SF). In the proposed framework, an efficient edge-aware filter, the guided filter, is first applied to the visible image. The dual-tree complex wavelet transform (DTCWT), which is shift-invariant and highly directionally selective, is then used to obtain the low-pass and high-pass coefficients of the images. The low-pass coefficients are fused using SR- and SF-based fusion rules in parallel, which enhances the regional features of the images. Simulation results show that the proposed technique outperforms conventional techniques in both subjective and objective evaluations.
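    Spatial frequency, one of the two activity measures used above, is the root of the summed squared row and column frequencies, each the RMS of the first differences along one axis. The sketch below is a simplified variant that averages over the difference arrays rather than the full image size.

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF = sqrt(RF^2 + CF^2), a standard sharpness measure."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency (horizontal detail)
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency (vertical detail)
    return np.sqrt(rf ** 2 + cf ** 2)
```

    Flat regions score zero and a unit checkerboard scores sqrt(2), so blocks with higher SF are the sharper, more textured candidates during fusion.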

  • Article (No Access)

    A Metaheuristics Framework for Weighted Multi-band Image Fusion

    The main objective of hyper/multispectral image fusion is to produce a composite color image that allows appropriate visualization of the relevant spatial and spectral information. In this paper, we propose a general framework for spectral-weighting-based image fusion. The proposed methodology relies on weight updates conducted with nature-inspired algorithms and a goodness-of-fit criterion defined as the average root mean square error. Simulations on four public data sets and a recent Landsat 8 image of Brullus Lake, Egypt, as the study area demonstrate the efficiency of the proposed framework. The study presents a multi-band image fusion framework that produces a high-quality fused image for further computer processing, and the results show that the image produced by the presented framework has the highest quality compared with several state-of-the-art algorithms. To demonstrate the increase in image quality, we used general quality metrics such as the Universal Image Quality Index, Mutual Information, Variance and an Information Measure.
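    The goodness-of-fit criterion above, the average root mean square error between the fused composite and the individual spectral bands, can be sketched as follows (an illustrative form; the paper's exact normalization is not given here).

```python
import numpy as np

def avg_rmse(fused, bands):
    """Average RMSE between a fused image and each spectral band,
    usable as the fitness value a nature-inspired optimizer minimizes."""
    fused = fused.astype(float)
    return np.mean([np.sqrt(np.mean((fused - b.astype(float)) ** 2))
                    for b in bands])
```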