Image fusion is a vital technique that integrates data from infrared (IR) and visible-light images to improve visual data quality for applications such as surveillance and object detection. This research investigates efficient algorithms for deep learning (DL)-based image fusion, focusing on architectures that exploit the complementary characteristics of IR and visible-light images. It employs data comprising IR and visible-light images captured under varying environmental conditions, ensuring a diverse representation of scenarios. The images undergo pre-processing steps such as noise reduction and histogram equalization to optimize fusion quality, and features are then extracted with a multi-scale approach. This research presents a novel approach leveraging a synergistic fibroblast fine-tuned efficient convolutional neural network (SF-ECNN) for fusion. The SF-ECNN model is analyzed using various metrics and evaluations, yielding an EN value of 7.2368, VIF of 0.8124, SD of 95.3515, SF of 19.5314, and AG of 7.3571. These outcomes exhibit significant improvements over current advanced fusion techniques, underscoring the efficacy and efficiency of the methodology in producing fused images of superior quality.
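The SF-ECNN itself is not publicly specified, but the pre-processing stage described above is standard. A minimal sketch, assuming OpenCV and 8-bit grayscale inputs (the file names are placeholders):

```python
import cv2
import numpy as np

def preprocess(img_gray: np.ndarray) -> np.ndarray:
    """Denoise, then histogram-equalize, a single-channel 8-bit image."""
    denoised = cv2.fastNlMeansDenoising(img_gray, h=10)  # noise reduction
    return cv2.equalizeHist(denoised)                    # histogram equalization

# "ir.png" and "vis.png" are placeholder file names for the two modalities.
ir = preprocess(cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE))
vis = preprocess(cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE))
```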
Breast cancer (BrC) is one of the most common causes of death among women worldwide. Images of the breast (mammography or ultrasound) may show anomalies that represent early indicators of BrC. However, accurate breast image interpretation requires labor-intensive procedures and highly skilled medical professionals. As a second opinion for the physician, deep learning (DL) tools can be useful for the diagnosis and classification of malignant and benign lesions. However, due to the lack of interpretability of DL algorithms, it is difficult for experts to understand how a label is predicted. In this work, we propose multitask U-Net saliency estimation and a DL model for breast lesion segmentation and classification using ultrasound images. A new contrast enhancement technique is proposed to improve the quality of the original images. After that, a new technique called the UNET-saliency map is proposed for the segmentation of breast lesions. Simultaneously, a MobileNetV2 deep model is fine-tuned with additional residual blocks and trained from scratch on original and enhanced images; the purpose of the additional blocks is to reduce the number of parameters and improve the learning of ultrasound images. Both models are trained from scratch, and features are extracted from their deeper layers. In a later step, a new cross-entropy-controlled sine-cosine algorithm is developed to select the best features; the main purpose of this step is the removal of irrelevant features before the classification phase. The selected features are then fused by employing a serial-based Manhattan Distance (SbMD) approach, and the resultant vector is classified using machine learning classifiers. The results indicate that a wide neural network (W-NN) obtained the highest accuracy of 98.9% and a sensitivity rate of 98.70% on the selected breast ultrasound image dataset. A comparison with state-of-the-art (SoArt) techniques shows the improved performance of the proposed method.
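The abstract does not spell out the SbMD formulation, so the following is only a sketch of the serial (concatenation) part of the fusion together with a plain Manhattan-distance score, assuming NumPy; the feature dimensions are illustrative, not the paper's:

```python
import numpy as np

def manhattan(a: np.ndarray, b: np.ndarray) -> float:
    """L1 (Manhattan) distance between two feature vectors."""
    return float(np.sum(np.abs(a - b)))

def serial_fuse(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Serial fusion: concatenate the two selected feature vectors."""
    return np.concatenate([f1, f2])

# Toy vectors standing in for deep features from the two trained models.
f_unet, f_mobile = np.random.rand(64), np.random.rand(128)
fused = serial_fuse(f_unet, f_mobile)   # length 192, fed to the classifiers
d = manhattan(fused[:64], f_unet)       # 0.0: the first block is f_unet itself
```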
Combining information from Electroencephalography (EEG) and Functional Magnetic Resonance Imaging (fMRI) has been a topic of increased interest recently. The main advantage of EEG is its high temporal resolution, on the scale of milliseconds, while the main advantage of fMRI is the detection of functional activity with good spatial resolution. The advantages of each modality complement each other, providing better insight into the neuronal activity of the brain. The main goal of combining information from both modalities is to improve the spatial and temporal localization of the underlying neuronal activity captured by each modality. This paper presents a novel technique based on the combination of these two modalities (EEG, fMRI) that allows a better representation and understanding of brain activities over time. EEG is modeled as a sequence of topographies, based on the notion of microstates. Hidden Markov Models (HMMs) were used to model the temporal evolution of the topography of the average Event Related Potential (ERP). For each model, the Fisher score of the sequence is calculated by taking the gradient with respect to the trained model parameters; the Fisher score describes how the sequence deviates from the learned HMM. Canonical Partial Least Squares (CPLS) was used to decompose the two datasets and fuse the EEG and fMRI features. To test the effectiveness of this method, the results of this methodology were compared with the results of CPLS using the average ERP signal of a single channel. The presented methodology was able to derive components that co-vary between EEG and fMRI and present significant differences between the two tasks.
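As a rough illustration of the Fisher-score step, the sketch below differentiates the sequence log-likelihood of a trained Gaussian HMM numerically with respect to the state means only; hmmlearn and the toy data shapes are assumptions, not the authors' toolchain:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumption: hmmlearn as the HMM library

def fisher_score(model: GaussianHMM, seq: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Numerical gradient of the sequence log-likelihood w.r.t. the state means.

    The Fisher score describes how a sequence deviates from the trained HMM;
    only the mean parameters are differentiated here, for brevity.
    """
    base = model.score(seq)
    grad = np.zeros_like(model.means_)
    for i in range(model.means_.shape[0]):
        for j in range(model.means_.shape[1]):
            model.means_[i, j] += eps
            grad[i, j] = (model.score(seq) - base) / eps  # forward difference
            model.means_[i, j] -= eps
    return grad.ravel()

# Toy topography sequence: 100 time points, 8 channels.
X = np.random.randn(100, 8)
hmm = GaussianHMM(n_components=3, covariance_type="diag").fit(X)
phi = fisher_score(hmm, X)  # one Fisher-score feature vector per ERP sequence
```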
Muon reactivation coefficients are determined for muonic He for up to six (n = 1, 2, 3, …, 6) states of formation, at temperature Tp = 100 eV and for various relative ion densities. In the next decade it may be possible to explore new conditions for further energy gain in muon-catalyzed fusion (μCF) systems using nonuniform (temperature and density) plasma states. Here, we consider a model of inhomogeneous μCF for mixtures of D/T and H/D/T. Using coupled dynamical equations, it is shown that the neutron yield per injected muon, Yn (neutrons/muon), in the dt branch of an inhomogeneous H/D/T mixture is at least 2.24 times higher than in similar homogeneous systems, and that this ratio for a D/T mixture is 1.92. We have also compared the neutron yield in the dt branch of homogeneous D/T and H/D/T mixtures (temperature range T = 300–800 K and density ϕ = 1 LHD). It is shown that Yn(D/T)/Yn(H/D/T) = 1.32, which is in good agreement with recently measured experimental values. In other words, our calculations show that the addition of protium to a D/T mixture leads to a significant decrease in the cycling rate under the physical conditions described herein.
The experimental data on the capture and evaporation residue cross-sections obtained in the 48Ca+208Pb reaction were analyzed within a dynamical model based on the dinuclear system concept, together with an advanced statistical method, to clarify the reaction mechanism. The experimental excitation function of the capture reaction was decomposed into contributions from the fusion–fission, quasifission and fast-fission processes. The total evaporation residue cross-sections, and those after neutron emission only, were calculated and compared with the available experimental data.
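Schematically, the decomposition of the capture excitation function described above can be written as follows (the subscript labels are editorial shorthand, not the authors' notation):

```latex
\sigma_{\text{cap}}(E^{*}) \;=\;
\sigma_{\text{fus--fis}}(E^{*}) \;+\;
\sigma_{\text{qf}}(E^{*}) \;+\;
\sigma_{\text{fast-fis}}(E^{*})
```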
We discuss the effects of non-inertial motion on reactions occurring in the laboratory, in stars, and elsewhere. It is demonstrated that non-inertial effects due to large accelerations during nuclear collisions might have appreciable effects on nuclear and atomic transitions. We also explore the magnitude of the corrections induced by strong gravitational fields on nuclear reactions in massive compact stars and in the neighborhood of black holes.
We have studied the role of the spatial extension (halo structure) of 6He and of the breakup reaction channel in the fusion of the 6He+238U system. The breakup channel effects are taken into account within the framework of the dynamic polarisation potential approach. It is found that the fusion cross section is enhanced by the static effects of the spatial extension of the projectile over the entire energy region. The breakup effects, however, result in a suppression above the barrier and a small enhancement below the barrier in the fusion cross section. The agreement between the data and the predictions improves significantly when both static and breakup effects are included.
Micromegas-based detectors are used in a wide variety of neutron experiments. Their fast response meets the needs of time-of-flight facilities in terms of time resolution. The possibility of constructing low-mass Micromegas detectors makes them appropriate for beam imaging and monitoring without affecting the beam quality or inducing background in parallel measurements. Their good particle discrimination capability allows Micromegas to be used for neutron-induced fission and (n, α) cross-section measurements. Their high radiation resistance makes them suitable for working as flux monitors in the core of fission nuclear reactors as well as in the proximity of fusion chambers. New studies have underlined the possibility of performing neutron computed tomography (CT) with Micromegas as neutron detectors, and also of exploiting their performance in experiments of fundamental nuclear physics.
In a decade-and-a-half-old experiment, Raabe et al. [Nature 431, 823 (2004)] studied the fusion of an incoming beam of the halo nucleus 6He with the target nucleus 238U. We extract a new interpretation of the experiment, different from the one that has been inferred so far. We show that their experiment is actually able to discriminate between the structure of the target nucleus (behaving as a standard nucleus, with a density distribution described by the canonical RMS radius r = r0 A^(1/3) with r0 = 1.2 fm) and that of the "core" of the halo nucleus, which, surprisingly, does not follow the standard density distribution with the above RMS radius. In fact, the core has the structure of a tennis-ball (bubble)-like nucleus, with a "hole" at the center of the density distribution. This novel interpretation of the fusion experiment provides unambiguous support for an almost two-decades-old model [A. Abbas, Mod. Phys. Lett. A16, 755 (2001)] of the halo nuclei. This Quantum Chromodynamics-based model succeeds in identifying all known halo nuclei and makes clear-cut and unique predictions for new and heavier halo nuclei. The model supports the existence of a tennis-ball (bubble)-like core even in the giant-neutron halo nuclei. This should help experimentalists go forward more confidently in their study of exotic nuclei.
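For concreteness, the canonical radius formula quoted above gives, for the 238U target:

```latex
r \;=\; r_{0}\,A^{1/3} \;=\; 1.2\ \text{fm} \times 238^{1/3}
  \;\approx\; 1.2 \times 6.20\ \text{fm} \;\approx\; 7.4\ \text{fm}
```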
We analyse the transfer matrix spectra of vertex models associated with the Lie superalgebras gl(P|M) and sl(P|M) using the representation theory of the Hecke algebra Hn(q). We develop a terminology for discussing the Bethe ansatz computation of the spectrum from this perspective. Using representations coming from these vertex models, we develop some new methods for the analysis of Hecke algebras in any specialisation, including roots of unity. We also discuss the construction and spectrum of fusion models from the viewpoint of representation theory, beginning a classification of the spectrum and identifying some sectors with trivial spectrum. In particular, we show that the spectrum of the sl(P|M) λ-fusion model is trivial if λ_{P+1} > M.
We study the fusion of the function difference representation (FR) and the cyclic representation (CR) of the Zn Sklyanin algebra (SA). Using this fusion procedure with certain functions, we derive some finite-dimensional representations of the SA. We obtain a conjugate FR and a new FR of the SA. We also give automorphisms and an antiautomorphism of the Zn SA.
Using an extended mapping approach and a special Painlevé–Bäcklund transformation, respectively, we obtain two families of exact solutions to the (2+1)-dimensional Boiti–Leon–Manna–Pempinelli (BLMP) system. In terms of the derived exact solutions, we reveal some novel evolutional behaviors of localized excitations, i.e., fission, fusion, and annihilation phenomena in the (2+1)-dimensional BLMP system.
To improve image quality and compensate for the deficiencies of individual haze removal methods, we present a novel fusion method. By analyzing the darkness channel produced by each method, we construct an effective darkness channel model that takes the correlation information between the individual darkness channels into account. This model is used to estimate the transmission map of the input image, which is then refined with a modified guided filter to further improve image quality. Finally, the radiance image is restored by combining the estimate with the monochromatic atmospheric scattering model. Experimental results show that the proposed method not only effectively removes haze from the image but also outperforms other haze removal methods.
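For reference, the classical dark channel and the transmission estimate built from it can be sketched as follows, assuming OpenCV; the paper's correlation-based effective darkness channel model combines several such channels and is not reproduced here:

```python
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over the RGB channels, then a patch-wise
    minimum implemented as a morphological erosion."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def transmission(img: np.ndarray, A: float, omega: float = 0.95) -> np.ndarray:
    """Rough transmission map t = 1 - omega * dark_channel(I / A),
    where A is the estimated atmospheric light."""
    return 1.0 - omega * dark_channel(img.astype(np.float64) / A)
```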
The purpose of this research is to determine the surface morphology, microstructural and thermal properties of tungsten-based composites consisting of 93 wt.% tungsten (W), 6 wt.% vanadium carbide (VC) and 1 wt.% graphite (C) powders. The W-6 wt.% VC-1 wt.% C powders were mechanically alloyed (MA'd) for 6 h in a Spex™ mill with a tungsten carbide vial and balls, and sintered at 1750°C under N2 and H2 gas flow conditions. The phase composition and microstructure of the tungsten composites were characterized using X-ray diffraction (XRD), scanning electron microscopy (SEM) and Raman spectroscopy. SEM images showed the distribution of the tungsten (W), vanadium carbide (VC) and graphite (C) powders and the porosity in the tungsten matrix. The Raman spectra exhibited two major peaks, recorded at 1331 (vs) cm⁻¹ and 1583 (vs) cm⁻¹; these bands represent carbon phases, namely disordered graphite (D) and graphite (G). Thermogravimetric analysis (TGA) measurements were performed to obtain the weight loss and thermal stability of the samples in the temperature range 30–1100°C under an argon atmosphere. The TG curve revealed a total weight loss of 3.3% over this temperature range, which is attributed to oxidation and gas desorption of the materials.
In this paper, an integrated vision system for an autonomous land vehicle is described. The vision system includes 2D and 3D vision modules and an information fusion module. The task of the 2D vision module is to provide physical and geometric information about the road, and the task of the 3D vision module is to detect obstacles in the surroundings. The fusion module combines the 2D and 3D information to generate a feasible region for vehicle navigation.
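A toy sketch of how such a fusion could combine the two modules on a common grid, assuming NumPy; the paper's actual road and obstacle representations are not specified in the abstract:

```python
import numpy as np

def feasible_region(road_mask: np.ndarray, obstacle_mask: np.ndarray) -> np.ndarray:
    """Fuse 2D road segmentation with 3D obstacle detection: a cell is
    drivable if it lies on the road and contains no obstacle."""
    return np.logical_and(road_mask, np.logical_not(obstacle_mask))

# Toy occupancy grids (True = road / obstacle present).
road = np.ones((100, 100), dtype=bool)
obstacles = np.zeros((100, 100), dtype=bool)
obstacles[40:60, 45:55] = True          # a single block-shaped obstacle
drivable = feasible_region(road, obstacles)
```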
The profile view of a face provides complementary structure that is not seen in the frontal view, so a classification system combining both frontal and profile views can improve classification accuracy. Such a system is also harder to defeat, because profile-based identification is difficult to fool with a mask. This paper proposes a new face recognition approach, applicable to both frontal and profile faces, to build a robust combined multiple-view face identification system. The recognition employs a novel facial corner coding and matching method, and integrates the outline and interior facial parts in the profile matching. The proposed multiview modified Hausdorff distance fuses multiple views of faces to achieve improved system performance.
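The multiview weighting is the paper's contribution and is not reproduced here, but the underlying (Dubuisson–Jain-style) modified Hausdorff distance between two corner point sets can be sketched as follows, assuming NumPy:

```python
import numpy as np

def directed_mhd(A: np.ndarray, B: np.ndarray) -> float:
    """Directed modified Hausdorff distance: mean of nearest-neighbour
    distances from each point of A to the set B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return float(d.min(axis=1).mean())

def mhd(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric modified Hausdorff distance between two point sets."""
    return max(directed_mhd(A, B), directed_mhd(B, A))

# Toy (x, y) corner sets standing in for features from two face views.
probe = np.array([[10, 12], [30, 40], [55, 18]], float)
gallery = np.array([[11, 13], [29, 41], [54, 20]], float)
score = mhd(probe, gallery)  # smaller score = better match
```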
This paper presents a new technique for user identification and recognition based on the fusion of hand geometric features of both hands, without any pose restrictions. All features are extracted from normalized left- and right-hand images, and fusion is applied at the feature level as well as at the decision level. Two probability-based algorithms are proposed for classification: the first computes the maximum probability over the nearest three neighbors, and the second determines the maximum probability of the number of matched features with respect to a threshold on distances. Initial decisions are made from these two highest probabilities, and the final decision follows the highest probability as calculated by the Dempster–Shafer theory of evidence. Depending on the various combinations of the initial decisions, three schemes are evaluated with 201 subjects for identification and verification. The correct identification rate is found to be 99.5%, and a false acceptance rate (FAR) of 0.625% is obtained during verification.
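A generic implementation of Dempster's rule of combination is sketched below; the frame of discernment and the mapping from the two probability algorithms to mass functions are illustrative assumptions, not the paper's exact construction:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; conflicting mass is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}

# Two initial decisions over the hypotheses {genuine, impostor}.
G, I = frozenset({"genuine"}), frozenset({"impostor"})
m1 = {G: 0.8, I: 0.1, G | I: 0.1}  # e.g. from the 3-NN probability algorithm
m2 = {G: 0.7, I: 0.2, G | I: 0.1}  # e.g. from the matched-feature algorithm
fused = dempster_combine(m1, m2)   # final decision: highest combined mass
```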
Automatic music genre classification based on distance metric learning (DML) is proposed in this paper. Three types of timbral descriptors, namely mel-frequency cepstral coefficient (MFCC) features, modified group delay features (MODGDF) and low-level timbral feature sets, are combined at the feature level. We experimented with k-nearest-neighbor (kNN) and support vector machine (SVM) classifiers for standard and DML kernels (DMLK) using the GTZAN and Folk music datasets. With the best-performing RBF kernel, the standard-kernel kNN and SVM classifiers report classification accuracies of 79.03% and 90.16%, respectively, on the GTZAN dataset, and 86.60% and 92.26%, respectively, on the Folk music dataset. A further improvement was observed when DML kernels were used in place of standard kernels, with accuracies of 84.46% and 92.74% (GTZAN) and 90.00% and 96.23% (Folk music) for DMLK-kNN and DMLK-SVM, respectively. The results demonstrate the potential of DML kernels in the music genre classification task.
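The feature-level combination and the standard-kernel baselines can be sketched as follows with scikit-learn and toy features; learning a DML kernel would replace the Euclidean metric with a learned one and is omitted here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Toy stand-ins for MFCC, MODGDF and low-level timbral descriptors.
n = 200
mfcc = np.random.rand(n, 13)
modgdf = np.random.rand(n, 13)
lowlevel = np.random.rand(n, 8)
y = np.random.randint(0, 10, n)  # 10 GTZAN-style genre labels

# Feature-level fusion: concatenate, then standardize.
X = StandardScaler().fit_transform(np.hstack([mfcc, modgdf, lowlevel]))

svm = SVC(kernel="rbf").fit(X, y)        # standard RBF-kernel SVM baseline
knn = KNeighborsClassifier(5).fit(X, y)  # standard kNN baseline
```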
Achieving a better recognition rate for text in action video images is challenging due to the presence of multiple types of text with unpredictable actions in the background. In this paper, we propose a new method for the classification of caption text (text edited into the video) and scene text (text that is part of the scene) in video images. This work considers five action classes, namely Yoga, Concert, Teleshopping, Craft, and Recipes, where both types of text are expected to play a vital role in understanding the video content. The proposed method introduces a new fusion criterion based on Discrete Cosine Transform (DCT) and Fourier coefficients to obtain reconstructed images for caption and scene text: the variances of the coefficients of corresponding pixels of the DCT and Fourier images are computed and used as the respective weights, yielding Reconstructed image-1. Inspired by the ability of Chebyshev-Harmonic-Fourier-Moments (CHFM) to reconstruct a redundancy-free image, we explore CHFM for obtaining Reconstructed image-2. The reconstructed images, along with the input image, are passed to a Deep Convolutional Neural Network (DCNN) for caption/scene text classification. Experimental results on the five action classes and a comparative study with existing methods demonstrate that the proposed method is effective. In addition, recognition results obtained with different methods before and after classification show that recognition performance improves significantly after classification.
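One plausible reading of the variance-weighted fusion criterion behind Reconstructed image-1 is sketched below, assuming SciPy; the window size and the use of coefficient magnitudes are assumptions of this sketch, not details from the paper:

```python
import numpy as np
from scipy.fft import dctn, fft2
from scipy.ndimage import uniform_filter

def local_var(a: np.ndarray, win: int = 3) -> np.ndarray:
    """Local variance via box filters: E[x^2] - E[x]^2."""
    m = uniform_filter(a, win)
    return uniform_filter(a * a, win) - m * m

def variance_weighted_fusion(img: np.ndarray) -> np.ndarray:
    """Weight DCT and Fourier coefficient magnitudes at each pixel by
    their local variances, then blend them into one coefficient image."""
    C = np.abs(dctn(img.astype(float)))
    F = np.abs(fft2(img.astype(float)))
    vc, vf = local_var(C), local_var(F)
    w = vc / (vc + vf + 1e-12)        # variance-based weight per pixel
    return w * C + (1.0 - w) * F
```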
In multimodal image fusion, improving the visual effect of the fused image while preserving energy and extracting detail has attracted increasing attention in recent years. Building on research into visual saliency and activity-level measurement of the base layer, this paper proposes a multimodal image fusion method based on a guided filter. First, multi-scale decomposition with a guided filter splits the two source images into a small-scale layer, a large-scale layer and a base layer. The maximum-absolute-value fusion rule is applied to the small-scale layer, a weighted fusion rule based on visual saliency is applied to the large-scale layer, and a fusion rule based on activity-level measurement is applied to the base layer. Finally, the three fused layers are recombined into the final fused image. Experimental results show that the proposed method improves edge handling and the visual effect in multimodal image fusion.
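A minimal sketch of the decomposition and the small-scale-layer rule, assuming opencv-contrib-python for the guided filter; the radii and epsilon are illustrative, and the large-scale and base-layer rules described above are omitted:

```python
import cv2
import numpy as np

def decompose(img: np.ndarray, r1: int = 4, r2: int = 16, eps: float = 0.01):
    """Two-level guided-filter decomposition into small-scale,
    large-scale and base layers (requires opencv-contrib-python)."""
    img = img.astype(np.float32) / 255.0
    b1 = cv2.ximgproc.guidedFilter(img, img, r1, eps)  # first smoothing level
    b2 = cv2.ximgproc.guidedFilter(b1, b1, r2, eps)    # second smoothing level
    return img - b1, b1 - b2, b2  # small-scale, large-scale, base

def fuse_small_scale(s1: np.ndarray, s2: np.ndarray) -> np.ndarray:
    """Maximum-absolute-value rule for the two small-scale layers."""
    return np.where(np.abs(s1) >= np.abs(s2), s1, s2)
```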