Each local feature in the appearance image of a cigarette pack is a key element reflecting the corresponding brand information. If only a single convolutional neural network is used, the context information of the sequence data may be lost, resulting in an insufficient grasp of the overall information. To realize deep-level feature extraction from cigarette pack appearance images and to detect pack appearance with higher accuracy and speed, a new appearance detection method is proposed that combines convolutional neural networks and recurrent neural networks from deep learning. An image acquisition card collects the appearance images of cigarette packs, and contrast enhancement and rotation correction are applied to the collected images to effectively improve their quality and provide a good basis for subsequent feature extraction and detection. The preprocessed cigarette pack image is input into a convolutional neural network to extract deep-level features of the pack appearance. The appearance features output by the convolutional neural network are then fed into the long short-term memory (LSTM) unit and the gated recurrent unit (GRU) of the corresponding recurrent neural network, which process the sequence information on top of the efficiently extracted image details, retain the order and context of the input data, and ultimately achieve accurate detection of the cigarette pack appearance. Experimental analysis shows that this method can effectively identify appearance defects of cigarette packs, mark them immediately, and present them intuitively, so that staff can quickly locate the problem and take corresponding measures. The method detects appearance defects larger than 1.59 mm × 1.59 mm with high accuracy, and for the various types of appearance defects the detection rate exceeds 98%, providing strong support for quality control and product upgrading in the tobacco industry.
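Below is a minimal PyTorch sketch of the kind of CNN-to-recurrent pipeline this abstract describes; the layer sizes, the column-wise sequence construction, and the two-class output are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch: a small CNN extracts local appearance features, and a
# bidirectional LSTM reads them as a left-to-right sequence to keep context.
# All sizes here are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class CNNRecurrentDetector(nn.Module):
    def __init__(self, num_classes=2, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # convolutional feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden,
                           batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                               # x: (B, 3, H, W)
        f = self.cnn(x)                                 # (B, 64, H/4, W/4)
        seq = f.mean(dim=2).permute(0, 2, 1)            # pool height -> (B, W/4, 64)
        out, _ = self.rnn(seq)                          # sequence/context modelling
        return self.fc(out[:, -1])                      # defect / no-defect logits

logits = CNNRecurrentDetector()(torch.randn(1, 3, 128, 128))
```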
Low-light images are challenging for both human observation and computer vision algorithms due to low visibility. To address this issue, various image enhancement techniques such as dehazing, histogram equalization, and neural network-based methods have been proposed. However, most existing methods suffer from insufficient contrast and over-enhancement while increasing brightness, which not only affects the visual quality of images but also adversely impacts their subsequent analysis and processing. To tackle these problems, this paper proposes a low-light image enhancement method called LEFB. Specifically, the low-light image is first transformed into the LAB color space, and the L channel, which controls brightness, is enhanced using a local contrast enhancement algorithm. Then, an exposure fusion-based contrast enhancement algorithm is applied to the result, and finally a bilateral filtering function is applied to reduce edge blurriness. Experimental evaluations are conducted on real datasets against four comparison algorithms. The results demonstrate that the proposed method has superior performance in enhancing low-light images, effectively addressing insufficient contrast and over-enhancement while preserving fine details and texture information, resulting in more natural and realistic enhanced images.
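An illustrative OpenCV pipeline in the spirit of the three LEFB stages is sketched below; CLAHE and Mertens exposure fusion stand in for the paper's specific local-contrast and fusion algorithms, and the gamma-brightened second exposure is an assumption.

```python
import cv2
import numpy as np

def lefb_like(bgr):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # Stage 1: local contrast enhancement on the lightness (L) channel.
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    # Stage 2: exposure-fusion-based contrast enhancement (Mertens fusion of
    # the enhanced image with a gamma-brightened copy, assumed here).
    bright = np.clip((enhanced / 255.0) ** 0.6 * 255.0, 0, 255).astype(np.uint8)
    fused = cv2.createMergeMertens().process([enhanced, bright])
    fused = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
    # Stage 3: bilateral filtering to reduce edge blurriness and halos.
    return cv2.bilateralFilter(fused, d=9, sigmaColor=75, sigmaSpace=75)
```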
High-level noise and low contrast continue to present major bottlenecks in medical image segmentation despite the growing number of imaging modalities. This paper presents a semi-automatic algorithm that utilizes the noise to enhance the contrast of low-contrast input magnetic resonance images, followed by a new graph cut method to reconstruct the surface of the left ventricle. The main contribution of this work is a new formulation that prevents the conventional cellular automata method from leaking into surrounding regions of similar intensity. Instead of segmenting each slice of a subject sequence individually, we empirically select a few slices, segment them, and reconstruct the left ventricular surface. During surface reconstruction, level sets are used to segment the remaining slices automatically. We have thoroughly evaluated the method on both the York and MICCAI Grand Challenge workshop databases. The average Dice coefficient (in %) is 92.4 ± 1.3 (mean ± standard deviation), while the false positive ratio, false negative ratio, and specificity are 0.019, 7.62 × 10⁻³, and 0.75, respectively. The average Hausdorff distance between the segmented contour and the ground truth is 2.94 mm. The encouraging quantitative and qualitative results reflect the potential of the proposed method.
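For reference, the two headline metrics reported here can be computed as follows; this is a generic sketch for a binary mask and two contour point sets, not the paper's evaluation code.

```python
# Dice coefficient for binary masks and symmetric Hausdorff distance for
# contour point sets; generic implementations, not the paper's own code.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(seg, gt):
    seg, gt = seg.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(seg, gt).sum() / (seg.sum() + gt.sum())

def hausdorff_distance(seg_pts, gt_pts):
    # seg_pts, gt_pts: (N, 2) arrays of contour coordinates (in mm for mm results).
    return max(directed_hausdorff(seg_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, seg_pts)[0])
```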
Image enhancement is a very important operation in image preprocessing. Compared with enhancing the overall contrast of an image, enhancing the local contrast improves the contrast directly as well as the quality and effect of the enhancement. In this paper, the gray prediction model is applied to local contrast enhancement in order to measure how much the local contrast varies and to adaptively adjust the enhancement scale. Simulation results show that, in addition to enhancing the gray-level contrast at image edges, the proposed algorithm suppresses roughening in non-edge regions and improves the quality of local enhancement, which creates more favorable conditions for subsequent edge detection.
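A minimal sketch of the GM(1,1) gray prediction model that this kind of method builds on is given below, used to forecast how a local contrast measure evolves so the enhancement scale can be adapted; the sample sequence at the end is invented purely for illustration.

```python
# GM(1,1) gray prediction: fit a short positive sequence and predict its next value.
import numpy as np

def gm11_predict(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # background (mean) values
    B = np.column_stack((-z1, np.ones_like(z1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # developing and gray coefficients
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # time response function
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # inverse accumulation
    return x0_hat[len(x0):]                              # the predicted step(s)

# Example: predict the next value of a (made-up) local-contrast sequence.
print(gm11_predict([12.0, 13.1, 13.9, 15.2, 16.0], steps=1))
```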
A contrast enhancement method based on adaptive noise threshold estimation and a logarithmic function in the nonsubsampled contourlet transform (NSCT) domain is proposed, which improves the segmentation accuracy of defects on the elevator compensation chain. After the region of interest (ROI) is extracted according to the spatial location of the defects, it is transformed into the NSCT domain, where the high-frequency subband coefficients corresponding to noise are suppressed by the adaptive noise threshold estimation. Then, a logarithmic function transformation is used to enhance the image edges to a variable extent. Finally, the enhanced image is segmented by a watershed algorithm based on Laws texture analysis. Experimental results demonstrate that the performance of the proposed method is superior to existing methods in terms of both the quality of contrast enhancement and segmentation accuracy.
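The NSCT itself is not reproduced here; as a rough illustration of the enhancement step, the sketch below applies an adaptive noise threshold and a logarithmic remapping to wavelet detail coefficients, which stand in for the NSCT high-frequency subbands.

```python
# Suppress sub-threshold detail coefficients (noise), then remap the rest with a
# logarithmic function so weaker edges are boosted relative to strong ones.
# A wavelet decomposition stands in for the NSCT used in the paper.
import numpy as np
import pywt

def log_enhance_details(gray, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=level)
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        bands = []
        for d in detail:
            sigma = np.median(np.abs(d)) / 0.6745           # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(d.size))     # adaptive noise threshold
            d = np.where(np.abs(d) < thr, 0.0, d)
            m = np.abs(d).max()
            if m > 0:                                       # log remap, keeping the max scale
                d = np.sign(d) * (m / np.log1p(m)) * np.log1p(np.abs(d))
            bands.append(d)
        out.append(tuple(bands))
    rec = pywt.waverec2(out, wavelet)
    return np.clip(rec[:gray.shape[0], :gray.shape[1]], 0, 255).astype(np.uint8)
```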
Informative images suffer from poor contrast and noise during image acquisition. Significant information retrieval therefore requires contrast enhancement and noise removal as prerequisites before any further processing. Medical ultrasound images are a dominant application with low contrast and speckle noise. The objective of this work is to improve the effectiveness of the preprocessing stage for medical ultrasound images by enhancing the image while retaining its structural characteristics. For image enhancement, this work develops an automatic contrast enhancement technique using cumulative histogram equalization and image-dependent gamma correction. For noise removal, it proposes the Gamma Correction with Exponentially Adaptive Threshold (GCEAT) algorithm, which combines gamma correction for contrast enhancement with a new wavelet-based adaptive soft thresholding technique. The proposed GCEAT-based denoising is validated against other enhancement and noise removal techniques. Experimental results with low-contrast synthetic and actual ultrasound images show that the proposed system performs better than existing contrast enhancement techniques. Encouraging results were obtained with medical ultrasound images in terms of Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), Structural Similarity Index Measure (SSIM), and Average Intensity (AI).
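A hedged sketch of the two stages the abstract names, image-driven gamma correction followed by wavelet-based adaptive soft thresholding, is shown below; the gamma rule and the universal threshold used here are common heuristics, not necessarily the paper's exact formulas.

```python
import numpy as np
import pywt

def gamma_correct(img):
    # Pick gamma from the mean brightness so a dark image is brightened
    # toward mid-gray (a common heuristic, assumed here).
    x = img.astype(float) / 255.0
    gamma = np.log(0.5) / np.log(max(x.mean(), 1e-6))
    return np.clip((x ** gamma) * 255.0, 0, 255).astype(np.uint8)

def wavelet_soft_denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail[-1])) / 0.6745      # noise level from diagonal band
        thr = sigma * np.sqrt(2.0 * np.log(img.size))       # universal threshold
        out.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))
    rec = pywt.waverec2(out, wavelet)
    return np.clip(rec[:img.shape[0], :img.shape[1]], 0, 255).astype(np.uint8)
```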
This paper develops an approach to extract a volume of interest (VOI) from a CT dataset based on volume rendering. A rough VOI is first obtained from the volumetric data by simply adjusting the window level and window width; the contrast among the voxels is then enhanced with the Linear General Fuzzy Operator (LGFO), and the desired structure is extracted from the enhanced 3D data through a feature function in rapid sequence. The method adjusts the parameters iteratively until a satisfactory VOI is extracted. Experimental results show that this multi-step method can extract a VOI that clearly represents the three-dimensional anatomical structure of the object, such as tumors or normal organs, and can find potential applications in diagnosis and education.
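The first step, obtaining a rough VOI by adjusting window level and window width, amounts in its simplest form to the standard CT windowing below; the LGFO enhancement and the feature function are not reproduced, and the level/width values are only examples.

```python
# CT window level / window width mapping: values outside the window are clipped,
# values inside are rescaled to the display range.
import numpy as np

def apply_window(hu_volume, level=40.0, width=400.0):
    lo, hi = level - width / 2.0, level + width / 2.0
    out = (np.clip(hu_volume.astype(float), lo, hi) - lo) / (hi - lo)
    return (out * 255.0).astype(np.uint8)
```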
Among all image enhancement techniques, histogram equalization is the most widely used. However, preserving brightness is its main issue: it can destroy the original character of an image and produce an unnatural appearance. This paper proposes a new method that controls the brightness problem of histogram equalization to enhance the quality of microscopic images. The method splits the histogram of each color channel into two sub-histograms, using their mean as the threshold, and replaces their cumulative distributions with the Kumaraswamy distribution. The proposed method is tested on color microscopic images of cancer-affected lymph nodes gathered from the IICBU Biological Image Repository, and objective and subjective assessments confirm that the proposed approach performs more efficiently than other state-of-the-art methods.
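A sketch of the mean-split idea with the Kumaraswamy CDF, F(x) = 1 - (1 - x^a)^b, standing in for each sub-histogram's mapping is given below; the shape parameters a and b are illustrative, not the values used in the paper, and the function is meant to be applied to each color channel.

```python
import numpy as np

def kumaraswamy_cdf(x, a=2.0, b=2.0):
    return 1.0 - (1.0 - np.clip(x, 0.0, 1.0) ** a) ** b

def mean_split_enhance(channel):
    c = channel.astype(float)
    m = c.mean()                                   # mean intensity splits the histogram
    low, high = c <= m, c > m
    lo_min, hi_max = c[low].min(), c[high].max()
    out = np.empty_like(c)
    # Map each sub-range through the Kumaraswamy CDF back onto its own range,
    # which keeps the overall brightness close to the original.
    out[low] = kumaraswamy_cdf((c[low] - lo_min) / max(m - lo_min, 1e-6)) * (m - lo_min) + lo_min
    out[high] = kumaraswamy_cdf((c[high] - m) / max(hi_max - m, 1e-6)) * (hi_max - m) + m
    return np.clip(out, 0, 255).astype(np.uint8)
```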
Digital image processing (DIP) has become a common tool for analyzing engineering problems as a fast, frequent, and non-contact method of identification and measurement. The present investigation uses this method to automatically detect and measure the worn regions on a material surface. Brass was used for the experiments, as it is commonly used as a bearing material. A pin-on-disc dry sliding wear testing machine was used to conduct the experiments, applying loads from 10 N to 50 N while keeping the sliding distance and sliding speed constant. After testing, images were acquired using a 1/2 inch interline-transfer CCD image sensor with a spatial resolution of 795(H) × 896(V) and a unit cell of 8.6 μm (H) × 8.3 μm (V). Denoising was performed to remove any possible noise, followed by contrast stretching to enhance the image for wear region extraction. A segmentation tool was used to separate the worn and unworn regions by identifying white regions above a threshold value, with the objective of quantifying the worn surface of the tested specimen. Canny edge detection and granulometry techniques were used to quantify the wear region. The results reveal that the specific wear rate increases with increasing applied load at constant sliding speed and sliding distance. Similarly, the worn area identified by DIP increased from 42.7% to 69.97%. This is attributed to the formation of deeper grooves in the worn material.
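An illustrative OpenCV version of the described pipeline (denoising, contrast stretching, thresholding of bright regions, edge detection, area measurement) follows; the filter sizes and Canny thresholds are assumptions, not the study's settings.

```python
import cv2
import numpy as np

def worn_area_fraction(gray):
    den = cv2.medianBlur(gray, 5)                                   # denoising
    stretched = cv2.normalize(den, None, 0, 255, cv2.NORM_MINMAX)   # contrast stretching
    _, mask = cv2.threshold(stretched, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # bright (worn) regions
    edges = cv2.Canny(stretched, 50, 150)                           # wear-track boundaries
    percent_worn = 100.0 * np.count_nonzero(mask) / mask.size
    return percent_worn, mask, edges
```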
Image contrast enhancement (CE) is a frequent requirement in diverse applications. Histogram equalization (HE), in its conventional form and its many improved variants, is a popular technique for enhancing image contrast. However, the conventional as well as many later versions of HE algorithms often lose original image characteristics, particularly the brightness distribution of the original image, which results in an artificial appearance and feature loss in the enhanced image. Discrete Cosine Transform (DCT) coefficient mapping is a recent approach to minimize such problems while enhancing contrast. Tuning the DCT parameters plays a crucial role in avoiding saturation of pixel values. Optimization is a possible way to address this problem and generate a contrast-enhanced image that preserves the desired original image characteristics. Biologically inspired optimization techniques have shown remarkable improvement over conventional optimization techniques on various complex engineering problems, and gray wolf optimization (GWO) is a comparatively new algorithm in this domain with promising potential. The objective function has been formulated using different parameters to retain original image characteristics. Objective evaluation against CEF, PCQI, FSIM, BRISQUE, and NIQE on test images from three standard databases, namely SIPI, TID, and CSIQ, shows that the presented method achieves values up to 1.4, 1.4, 0.94, 19, and 4.18, respectively, for these metrics, which are competitive with the reported conventional and improved techniques. This paper can be considered a first application of GWO to DCT-based image CE.
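A minimal sketch of DCT-coefficient scaling for contrast enhancement is shown below; the single gain k is a stand-in for the parameter set the paper tunes with gray wolf optimization, and its value is only illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_contrast(gray, k=1.3):
    f = dctn(gray.astype(float), norm="ortho")
    dc = f[0, 0]                       # DC term carries the mean brightness; keep it fixed
    f *= k                             # amplify AC coefficients to stretch contrast
    f[0, 0] = dc
    return np.clip(idctn(f, norm="ortho"), 0, 255).astype(np.uint8)
```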
Dissolved particles and the scattering they cause are the underlying reasons for the low contrast and blur that produce poor-quality underwater images. Single-shot shallow coastal underwater images are particularly in need of preprocessing, namely image enhancement and restoration, and underwater image processing tasks such as classification, object detection, and computer vision require such preprocessing. This paper aims to restore the visibility of objects in turbid-water images with an effective edge-aware restoration and enhancement model. A restricted rolling guidance filter applied to the DCP-based restoration produces a better edge-aware restored and denoised output image. The model-based (MB) dark channel prior (DCP), together with edge emphasis and contrast enhancement, achieves the required dehazing and contrast improvement for heavily blurred underwater images. Subjective and objective evaluations provide evidence for this. The edge-preserving and denoising nature of the model is also demonstrated through comparisons with promising algorithms from the past decade.
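A simplified dark channel prior (DCP) dehazing sketch is included for context; the restricted rolling guidance filtering and edge emphasis that the paper adds on top are omitted, and the patch size and constants are the usual defaults rather than the paper's settings.

```python
import cv2
import numpy as np

def dcp_dehaze(bgr, patch=15, omega=0.95, t_min=0.1):
    img = bgr.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)                  # dark channel
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate, then scene radiance recovery.
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)[..., None]
    out = (img - A) / t + A
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```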
Tendons and ligaments play an important role in ensuring mobility and stability. To correctly understand the characteristics of these fibrous collagenous connective tissues, it is fundamental to highlight their 3D microstructure. In this study, a microtomography (microCT) system was used to acquire human hamstring tendons after specific preparations to enhance image contrast. Specifically, samples were treated either by chemical dehydration or with 2% phosphotungstic acid (PTA) in water (H2O) or in 70% ethanol (EtOH) solution. The acquired images were processed using dedicated techniques based on a 3D Hessian multiscale filter to highlight the fibrous structure and identify specific geometric features. For every sample preparation strategy, the proposed approach proved adequate for identifying fascicle features, yielding structures with diameters in the range of 100–600 μm and proper longitudinal alignment. In conclusion, a novel contrast enhancement microCT protocol was designed and preliminarily validated for the microstructural analysis of fibrous tissues.
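As an indication of what the Hessian-based multiscale filtering step does, the sketch below uses scikit-image's Frangi vesselness filter on a 2D slice as a stand-in for the dedicated 3D pipeline; the scale range is an assumption.

```python
import numpy as np
from skimage.filters import frangi

def highlight_fibres(slice_2d, sigmas=(2, 4, 6, 8)):
    # Bright tube-like structures (fascicles) respond strongly; background is suppressed.
    response = frangi(slice_2d.astype(float), sigmas=sigmas, black_ridges=False)
    return (response - response.min()) / (np.ptp(response) + 1e-12)
```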
Multifocus image fusion can obtain an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed for SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied in the low-pass fusion, while the high-pass bands are fused using the popular “max-absolute” rule as the activity level measurement. The fused image is finally obtained by performing inverse MST on the fused coefficients. The experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of definition.
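A toy version of this fusion scheme is sketched below: a wavelet transform stands in for the MST, the low-pass bands are simply averaged where the paper uses convolutional sparse representation, and the high-pass bands follow the max-absolute rule mentioned above.

```python
import numpy as np
import pywt

def fuse_multifocus(img_a, img_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                      # low-pass: plain average here
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)   # max-absolute rule
                           for a, b in zip(da, db)))
    out = pywt.waverec2(fused, wavelet)
    return np.clip(out[:img_a.shape[0], :img_a.shape[1]], 0, 255).astype(np.uint8)
```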
Successful image reconstruction requires the recognition of a scene and the generation of a clean image of that scene. We propose to use recurrent neural networks for both analysis and synthesis.
The networks have a hierarchical architecture that represents images in multiple scales with different degrees of abstraction. The mapping between these representations is mediated by a local connection structure. We supply the networks with degraded images and train them to reconstruct the originals iteratively. This iterative reconstruction makes it possible to use partial results as context information to resolve ambiguities.
We demonstrate the power of the approach using three examples: superresolution, fill-in of occluded parts, and noise removal/contrast enhancement. We also reconstruct images from sequences of degraded images.
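A minimal PyTorch sketch of iterative reconstruction with a recurrent update follows, where the network repeatedly refines its estimate using the degraded input plus its own previous output as context; this is a flat toy model, not the hierarchical multi-scale architecture described above.

```python
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    def __init__(self, channels=1, hidden=32, steps=5):
        super().__init__()
        self.steps = steps
        self.update = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, degraded):
        estimate = degraded.clone()
        for _ in range(self.steps):
            # The partial result is fed back as context to resolve ambiguities.
            estimate = estimate + self.update(torch.cat([degraded, estimate], dim=1))
        return estimate

refined = IterativeRefiner()(torch.randn(1, 1, 64, 64))
```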
Image enhancement is used to correct contrast deficiencies and improve the quality of an image. It is essential for feature extraction and image segmentation. This paper presents a novel contrast enhancement algorithm based on a newly defined texture histogram and fuzzy entropy, which preserves edges and details while avoiding noise amplification and over-enhancement. To demonstrate its performance, the proposed algorithm is tested on a variety of images and compared with other enhancement algorithms. Experimental results show that the proposed method performs better at enhancing images without over-enhancement or under-enhancement.
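For orientation, a simple global fuzzy entropy measure of the kind such methods build on is sketched below, using an intensity-based membership function; the paper's texture-histogram construction is not reproduced.

```python
import numpy as np

def fuzzy_entropy(gray):
    mu = np.clip(gray.astype(float) / 255.0, 1e-6, 1.0 - 1e-6)   # membership in "bright"
    s = -(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu))       # Shannon fuzzy term
    return float(s.mean() / np.log(2.0))                         # normalised to [0, 1]
```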
Mammogram registration is an important preprocessing technique that helps in finding asymmetrical regions in the left and right breasts. A correct nipple position is the crucial key point of mammogram registration, since the nipple is the only consistent and stable landmark on a mammogram. To locate the nipple coordinates accurately in mammogram images, this work improves on previous algorithms such as maximum height of the breast border (MHBB) and proposes a novel method consisting of local spatial-maximum mean intensity (LSMMI), local maximum zero-crossing (LMZC) based on the second-order derivative, and a combined approach based on LSMMI and LMZC. The proposed method is tested on 413 mammogram images from the MIAS and DDSM databases. The mean Euclidean distance (MED) between the ground truth identified by the radiologist and the detected nipple position is 0.64 cm, within the 1 cm gold-standard tolerance. The experimental results hence indicate that our proposed method detects nipple positions more accurately than previous methods. Furthermore, the proposed select visible-nipple mammograms (SVNM) algorithm, which generalizes well, achieves a 99% selection rate for automatic clustering of nipples in a mammography database, in addition to automatically detecting the breast border and nipple positions in mammograms.
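A toy illustration of the zero-crossing idea behind LMZC is given below: find where the second derivative of a 1D intensity profile, e.g. sampled along the breast border, changes sign; the profile extraction and candidate selection of the actual method are not shown.

```python
import numpy as np

def second_derivative_zero_crossings(profile):
    d2 = np.gradient(np.gradient(profile.astype(float)))    # discrete second derivative
    return np.where(np.diff(np.sign(d2)) != 0)[0]            # indices where curvature flips sign
```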
Early identification of COVID-19 necessitates precise interpretation of computed tomography (CT) chest images, and consistent, accurate images are critical for a correct diagnosis. In this paper, a novel pre-processing approach based on a gradient enhancement (GCE) method, which mainly improves contrast, is proposed for prominent visualization of the diagnostic features in COVID-19 CT images. This pre-processing stage helps preserve the diagnostic information in the disease-affected area. Edge information in the CT images helps both physicians and the classification model achieve better classification. The edge features are preserved by improving the contrast using a multi-scale-dependent dark pass filter. From the edge features and pixel intensities, a cumulative distribution function (CDF) is computed and then mapped to a uniform distribution, resulting in contrast-enhanced COVID-19 CT images. These pre-processed images are fed to customized deep convolutional neural networks (CNNs) such as AlexNet, VGG-19, ResNet-101, DenseNet-201, GoogleNet, MobileNet-v2, SqueezeNet, Inception-v3, Xception, and EfficientNet-b0 for classification. Introducing GCE as a pre-processing stage improves the COVID-19 classification accuracy by nearly 6%. The contrast enhancement achieved by the GCE technique is evaluated on the CT images using the contrast improvement index (CII), discrete entropy (DE), and Kullback–Leibler distance (KL-Distance) measures. The experimental findings reveal that the GCE method produces higher CII and DE values than other available enhancement methods.
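The CDF-to-uniform mapping at the core of the contrast stage reduces, in its simplest form, to the sketch below; here the CDF is built from plain pixel intensities, whereas the paper weights it with multi-scale edge information.

```python
import numpy as np

def cdf_uniform_map(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    cdf = np.cumsum(hist) / hist.sum()
    lut = np.round(cdf * 255.0).astype(np.uint8)   # map the CDF onto a uniform range
    return lut[gray]                               # contrast-enhanced image
```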
Many properties of the atmosphere affect the quality of images propagating through it, blurring them and reducing their contrast. The atmospheric path imposes several limitations, such as scattering and absorption of light and turbulence, which degrade the image.
Use of the standard Wiener filter for correction of atmospheric blur is often not effective because, although aerosol MTF (modulation transfer function) is rather deterministic, turbulence MTF is random. The atmospheric Wiener filter is one method for overcoming turbulence jitter.
The recently developed atmospheric Wiener filter, which corrects for turbulence blur, aerosol blur, and path radiance simultaneously, is implemented here in digital restoration of Landsat TM (thematic mapper) imagery over seven wavelength bands of the satellite instrumentation. Turbulence MTF is calculated from meteorological data or estimated if no meteorological data were measured. Aerosol MTF is consistent with optical depth. The product of the two yields atmospheric MTF, which is implemented in the atmospheric Wiener filter.
Restoration improves both the smallest resolvable detail size and the contrast. The restorations are quite apparent even under clear weather conditions.
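A generic frequency-domain Wiener restoration sketch given an overall atmospheric MTF (the product of turbulence and aerosol MTFs) is shown below; the MTF is assumed to be supplied centred on zero frequency, and the constant noise-to-signal ratio is an assumption rather than the paper's formulation.

```python
import numpy as np

def wiener_restore(image, mtf, nsr=0.01):
    G = np.fft.fft2(image.astype(float))
    H = np.fft.ifftshift(mtf)                       # centred MTF -> FFT layout
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)         # Wiener filter
    return np.real(np.fft.ifft2(G * W))
```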