Different image enhancement techniques are applied to improve the visual quality of an image on a display device. Contrast stretching, intensity level slicing with and without background, histogram equalization, logarithmic transformation and power law transformation are some such techniques. Most research focuses on adaptive gamma correction factors for better visualization of extremely low contrast images, giving less importance to the constant used for enhanced visualization. This research proposes an efficient and less complex enhanced power law transformation (EPLT) approach to improve the contrast of dimmed and extremely bright images. The approach offers a quick way to compute the value of C, i.e., the constant for enhanced visualization. For better picture quality, it is important to determine both C and the gamma correction factor automatically. This technique offers a novel perspective on image contrast manipulation. The proposed enhancement technique is evaluated on histopathology images of breast cancer, bright images and extremely dark images. The proposed method achieves the highest average peak signal-to-noise ratio (PSNR), 16.52487 on clinical data and 17.69335 on the BreakHis dataset, and the lowest average RMSE, 40.88251 and 44.2546 respectively. Performance comparison with other state-of-the-art enhancement algorithms shows that the proposed method yields the most satisfactory contrast enhancement and works efficiently on all types of images.
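For readers who want a concrete starting point, the sketch below applies the classic power-law transform s = C·r^γ and reports PSNR/RMSE. The automatic choice of γ shown (mapping the mean intensity to mid-gray) and the fixed C = 1 are placeholder heuristics, not the EPLT rule proposed in the paper.

```python
import numpy as np

def power_law(img, c=1.0, gamma=1.0):
    """Classic power-law transform s = c * r**gamma on an 8-bit image (r in [0, 1])."""
    r = img.astype(np.float64) / 255.0
    s = np.clip(c * np.power(r, gamma), 0.0, 1.0)
    return np.round(s * 255.0).astype(np.uint8)

def psnr_rmse(reference, test):
    """PSNR (dB) and RMSE between two 8-bit images."""
    err = reference.astype(np.float64) - test.astype(np.float64)
    mse = float(np.mean(err ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
    return psnr, float(np.sqrt(mse))

# Illustrative parameter choice only: gamma maps the mean intensity to mid-gray,
# and C is left at 1.  The paper's EPLT computes C (and gamma) by its own rule.
img = (np.random.rand(64, 64) * 80).astype(np.uint8)   # a synthetic dim image
mean = max(img.mean() / 255.0, 1e-6)
gamma = np.log(0.5) / np.log(mean)                     # gamma < 1 brightens a dark image
enhanced = power_law(img, c=1.0, gamma=gamma)
print(psnr_rmse(img, enhanced))
```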
Data processing across multiple domains is an important concept on any platform; it deals with both multimedia and textual information. While textual data processing handles structured or unstructured data that can be computed quickly without compression, multimedia data processing requires algorithms in which compression is essential. It involves processing video and its frames and compressing them into compact forms so that both storage and access can be performed quickly. There are different ways of performing compression, such as fractal compression, wavelet transform, compressive sensing and contractive transformation, among others. One way of performing such compression is to work with the high-frequency components of multimedia data. One of the most recent topics is fractal transformation, which exploits block-wise self-similarity and achieves a high compression ratio. Yet there are limitations, such as the speed and computational cost of proper encoding and decoding with fractal compression. Swarm optimization and related algorithms make fractal compression more practical when used alongside it. In this paper, we review multiple algorithms in the field of fractal-based video compression and swarm intelligence for optimization problems.
Since edge detection is used by various disciplines, it is of vital importance to compute it accurately. In addition, an edge detection algorithm may be involved in many image processing stages. Neutrosophy, a recent and contemporary approach, is based on neutrosophic logic, neutrosophic probability, neutrosophic sets and neutrosophic statistics, and yields better results than various other optimization methods. A Neutrosophic Set (NS) is based on the origin, nature and scope of neutralities. In NS, a problem is separated into true, false and indeterminacy subsets, which helps solve indeterminate situations effectively. NS has recently been used in image processing, as indeterminate situations are also encountered in this field. The Chan–Vese (CV) model is one of the successful region-based segmentation methods. The present study proposes a new NS-based edge detection method using the CV algorithm. The proposed method combines the philosophical view of NS with the successful segmentation characteristics of the CV model. The edge detection results are compared with those of different edge detection methods, and the performance of each method is analyzed using the Figure of Merit (FOM) and Peak Signal-to-Noise Ratio (PSNR). The results suggest that the proposed method outperforms the well-known methods used for comparison.
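A minimal sketch of one common mapping of a grayscale image into neutrosophic T, I and F subsets is shown below (following the formulation often used in NS image processing, not necessarily the exact mapping of this paper); the coupling with the Chan–Vese model is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def to_neutrosophic(img, win=5):
    """Map a grayscale image into neutrosophic subsets: T (truth) from the
    normalised local mean, I (indeterminacy) from the normalised deviation of
    each pixel from its local mean, F (falsity) as the complement of T."""
    g = img.astype(np.float64)
    local_mean = uniform_filter(g, size=win)
    T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
    delta = np.abs(g - local_mean)
    I = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
    F = 1.0 - T
    return T, I, F

img = (np.random.rand(32, 32) * 255).astype(np.uint8)
T, I, F = to_neutrosophic(img)
print(T.shape, round(float(I.mean()), 3), float(F.max()))
```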
Medical imaging technology is one of the most critical applications necessitating data protection, particularly when important patient information must be kept on record. The medical imaging system considered here employs encryption and decryption, and a security key is established using several cryptographic techniques to protect the data. Every network that sends and receives data needs to be secured in some way. In this paper, the antlion optimizer (ALO) is used together with the honey encryption algorithm to enhance the security of medical imaging technologies; the proposed study uses a variety of methods to protect important health information. In comparison with the existing approach, the proposed honey algorithm attains better results. Further, the antlion optimizer uses random keys throughout encryption and decryption. In the next step, the keys are remodeled using antlion optimization. After that, the updated key is optimized by analyzing every element and generating paths that trigger the traps and latching functions. With this hybrid strategy, the mean square error (MSE) is reduced to 1% and the peak signal-to-noise ratio (PSNR) is increased to 98%.
This paper proposes an innovative image compression scheme utilizing an Adaptive Discrete Wavelet Transform-based Lifting Scheme (ADWT-LS). The most important feature of the proposed DWT lifting method is splitting the low-pass and high-pass filters into upper and lower triangular matrices. It also converts the filter execution into banded matrix multiplications, with an innovative lifting factorization presented with fine-tuned parameters. Further, the optimal tuning, the most important contribution, is achieved via a new hybrid algorithm known as the Lioness-Integrated Whale Optimization Algorithm (LI-WOA), which hybridizes the concepts of the Lion Algorithm (LA) and the Whale Optimization Algorithm (WOA). In addition, an innovative cosine evaluation is introduced in this work based on the CORDIC algorithm. This paper also defines a single objective function that relates multiple constraints, namely the Peak Signal-to-Noise Ratio (PSNR) and the Compression Ratio (CR). Finally, the performance of the proposed work is compared with other conventional models on certain performance measures.
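As a concrete reference point, the sketch below applies the standard integer 5/3 lifting steps (predict, then update) to a 1-D signal, the kind of triangular/banded factorization the lifting scheme relies on; the ADWT-LS parameter tuning via LI-WOA is not modelled.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the integer CDF 5/3 lifting DWT on a 1-D signal of even length.

    Predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
    Update : s[n] = x[2n]   + floor((d[n-1] + d[n] + 2) / 4)
    Whole-sample symmetric extension is used at the borders.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])      # mirror on the right border
    d = odd - ((even + even_next) >> 1)            # predict step -> detail coefficients
    d_prev = np.insert(d[:-1], 0, d[0])            # mirror on the left border
    s = even + ((d_prev + d + 2) >> 2)             # update step -> approximation coefficients
    return s, d

s, d = lift_53_forward([10, 12, 11, 13, 40, 42, 41, 43])
print(s, d)
```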
In visual cryptography, many shares are generated from a secret image; individually the shares appear meaningless, yet each carries part of the hidden message, and when all shares are stacked together they reveal the secret image. Here, multiple shares are used to transfer the secret image through an encryption and decryption process based on elliptic curve cryptography (ECC). In the ECC method, the public key is randomly generated during encryption, while for decryption the private key (H) is generated by an optimization technique whose performance is evaluated using the peak signal-to-noise ratio (PSNR). The test results show a PSNR of 65.73057, a mean square error (MSE) of 0.017367 and a correlation coefficient (CC) of 1 for the decrypted image, i.e., without any distortion of the original image, and the optimal PSNR value is attained using the cuckoo search (CS) algorithm when compared with existing works.
A reversible data hiding (RDH) technique using histogram shifting and the modulus operator is proposed, in which secret data is embedded into blocks of the cover image. These blocks are modified with the modulus operator to increase the number of peak points in the histogram of the cover image, which in turn increases its embedding capacity. Secret data is embedded into the original cover blocks using the peak points of the predicted blocks generated by the modulus operator. Peak Signal-to-Noise Ratio (PSNR) and PSNR-Human Visual System (PSNR-HVS) are used to show the visual acceptability of the proposed technique. Experimental results show that the embedding capacity is higher than that of existing RDH techniques, while the distortion in marked images is also lower than that produced by these existing techniques.
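For reference, a minimal sketch of classic single-peak histogram-shifting embedding is shown below; the paper's modulus-based block prediction, which multiplies the number of peak points, is not reproduced.

```python
import numpy as np

def hs_embed(img, bits):
    """Classic histogram-shifting RDH (single peak/zero pair).

    Assumes an empty histogram bin exists to the right of the peak bin; pixels
    strictly between them are shifted right by 1, and each peak-valued pixel
    then carries one secret bit (0 -> unchanged, 1 -> +1)."""
    img = img.astype(np.int32).copy()
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zeros = np.where(hist == 0)[0]
    zero = int(zeros[zeros > peak][0])               # nearest empty bin to the right
    img[(img > peak) & (img < zero)] += 1            # make room next to the peak
    flat = img.ravel()
    carriers = np.where(flat == peak)[0][:len(bits)] # embeddable positions
    flat[carriers] += np.asarray(bits[:len(carriers)], dtype=np.int32)
    return flat.reshape(img.shape).astype(np.uint8), (peak, zero)

cover = (np.random.rand(64, 64) * 200).astype(np.uint8)
marked, key = hs_embed(cover, [1, 0, 1, 1, 0])
print(key, np.abs(marked.astype(int) - cover.astype(int)).max())   # max change is 1
```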
This paper presents a modified pulse width modulation (PWM) readout method for a digital pixel sensor (DPS) with a block-based self-adjusted reference voltage to extend the dynamic range (DR). In this scheme, the pixel array is divided into blocks of equal size, and the exposure process consists of two periods. During the first exposure, a block-based reference voltage is generated from the average photocurrent within each block. In the second exposure, the generated voltage is used as the reference voltage for PWM readout. The quantization results are then combined to represent the luminance information. DR and peak signal-to-noise ratio (PSNR) are evaluated for different images and integration times. Simulation results show that this scheme can achieve a DR of over 96 dB with 8-bit memory and an 8-bit ADC, and 145 dB with 12-bit memory and a 12-bit ADC. It also achieves 28.98 dB PSNR on average, better than the 20.11 dB of the fixed reference voltage method and the 17.25 dB of the ramp reference voltage method. Combining the features of the active pixel sensor (APS) and PWM readout, this scheme not only provides a wider DR but also yields less distortion in the reconstructed image.
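A rough behavioural sketch of the two-exposure idea (not a circuit or noise model) is given below: each block's first-exposure average sets an assumed per-block reference that scales the quantization range used in the second exposure. The "twice the block mean" reference rule is a placeholder, not the paper's voltage generation scheme.

```python
import numpy as np

def block_adaptive_quantize(scene, block=8, bits=8):
    """Behavioural sketch: a per-block reference derived from the block average
    (first exposure) scales the quantization range used in the second exposure."""
    h, w = scene.shape
    out = np.zeros_like(scene, dtype=np.float64)
    levels = 2 ** bits - 1
    for bi in range(0, h, block):
        for bj in range(0, w, block):
            patch = scene[bi:bi + block, bj:bj + block].astype(np.float64)
            ref = max(2.0 * patch.mean(), 1e-6)               # assumed reference rule
            q = np.round(np.clip(patch / ref, 0, 1) * levels)  # second-exposure readout
            out[bi:bi + block, bj:bj + block] = q / levels * ref
    return out

scene = np.random.rand(64, 64) * 1e4      # wide-dynamic-range photocurrents (arbitrary units)
recon = block_adaptive_quantize(scene)
mse = np.mean((scene - recon) ** 2)
print("PSNR (dB):", 10 * np.log10(scene.max() ** 2 / mse))
```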
The proposed reversible data hiding technique is an extension of Peng et al.'s technique [F. Peng, X. Li and B. Yang, Improved PVO-based reversible data hiding, Digit. Signal Process. 25 (2014) 255–265]. In this technique, a cover image is segmented into nonoverlapping blocks of equal size. Each block is sorted in ascending order, and differences are then calculated on the basis of the locations of its largest and second largest pixel values. Negative predicted differences are utilized to create empty spaces, which further enhances the embedding capacity of the proposed technique. Moreover, the already sorted blocks are used to enhance the visual quality of the marked images, as the pixels of these blocks are more correlated than unsorted pixels. Experimental results show the effectiveness of the proposed technique.
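A minimal sketch of the underlying pixel-value-ordering step, basic PVO embedding into the block maximum, is given below; the location-based (possibly negative) differences of Peng et al.'s method and of the proposed extension are not reproduced.

```python
import numpy as np

def pvo_embed_max(block, bit):
    """Embed one bit into the maximum of a block using basic PVO prediction.

    d = max - second_max:
      d == 1 -> embeddable (max += bit)
      d  > 1 -> shifted    (max += 1), carries no data
      d == 0 -> left unchanged in this simplified sketch
    """
    block = np.asarray(block, dtype=np.int32).copy()
    order = np.argsort(block, kind="stable")      # ascending pixel-value order
    i_max, i_2nd = order[-1], order[-2]
    d = block[i_max] - block[i_2nd]
    if d == 1:
        block[i_max] += bit
    elif d > 1:
        block[i_max] += 1
    return block

print(pvo_embed_max([52, 55, 54, 53], bit=1))     # max 55 predicted by 54 -> becomes 56
```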
Steganography has become one of the most significant techniques for concealing secret data in media files. This paper proposes a novel automated methodology that provides two levels of security for videos, combining encryption and steganography. The methodology enhances the security level of the secret data without affecting the accuracy or capacity of the videos. At the first level, the secret data is encrypted with the Advanced Encryption Standard (AES) algorithm, implemented in Java, which renders the data unreadable. At the second level, the encrypted data is concealed in the video frames (images) using an FPGA hardware implementation, which renders the data invisible. The steganographic technique used in this work is the least significant bit (LSB) method; a 1-1-0 LSB scheme is used to maintain significantly high frame imperceptibility. The video frames used as cover files are selected randomly by the randomization scheme developed in this work. This randomization scatters the data throughout the video frames, making retrieval of the data in its original order, without the proper key, a challenging task. The experimental results of concealing secret data in video frames are presented and compared with those of similar approaches. In terms of area, power dissipation, and peak signal-to-noise ratio (PSNR), the proposed method outperforms traditional approaches. Furthermore, it is demonstrated that the proposed method can automatically embed and extract the secret data at two levels of security on video frames, with a 57.1 dB average PSNR.
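A minimal software sketch of the 1-1-0 LSB idea is shown below (one secret bit in the LSB of the red channel, one in the green, none in the blue), assuming plain row-major pixel ordering rather than the paper's key-driven randomization and omitting the AES and FPGA stages.

```python
import numpy as np

def lsb_110_embed(frame, bits):
    """Embed bits pairwise into the LSBs of the R and G channels (1-1-0 scheme):
    2 secret bits per pixel, blue channel left untouched."""
    stego = frame.copy()
    pairs = np.asarray(bits, dtype=np.uint8).reshape(-1, 2)
    h, w, _ = stego.shape
    rows, cols = np.unravel_index(np.arange(len(pairs)), (h, w))
    stego[rows, cols, 0] = (stego[rows, cols, 0] & 0xFE) | pairs[:, 0]
    stego[rows, cols, 1] = (stego[rows, cols, 1] & 0xFE) | pairs[:, 1]
    return stego

def lsb_110_extract(stego, n_bits):
    h, w, _ = stego.shape
    rows, cols = np.unravel_index(np.arange(n_bits // 2), (h, w))
    pairs = np.column_stack((stego[rows, cols, 0] & 1, stego[rows, cols, 1] & 1))
    return pairs.ravel()[:n_bits]

frame = (np.random.rand(4, 4, 3) * 255).astype(np.uint8)
secret = [1, 0, 1, 1, 0, 0]
print(list(lsb_110_extract(lsb_110_embed(frame, secret), len(secret))))  # [1, 0, 1, 1, 0, 0]
```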
Dedicated hardware for the Discrete Wavelet Transform (DWT) is in high demand for real-time imaging in standalone electronic devices, as the DWT is used extensively in most transform-domain imaging applications. Various DWT algorithms exist in the literature facilitating software implementations, which are generally unsuitable for real-time imaging in standalone devices due to their power intensiveness and large computation time. In this paper, a convolution-based, pipelined and tunable VLSI architecture for the Daubechies 9/7 and 5/3 DWT filters is presented. The proposed architecture, which combines the advantages of convolutional and lifting DWT while discarding their notable disadvantages, is made area and memory efficient by exploiting Distributed Arithmetic (DA) in our own ingenious way. A reduction of almost 90% in memory size compared with other notable architectures is reported. In the proposed architecture, both the 9/7 and 5/3 DWT filters can be realized with a selection input, "mode". With the introduction of DA, pipelining and parallelism are easily incorporated into the proposed 1D/2D DWT architectures. The area requirement and critical path delay are reduced by almost 38.3% and 50%, respectively, compared with the latest notable designs. The proposed VLSI architecture also excels in real-time applications.
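The Distributed Arithmetic principle the architecture exploits can be illustrated in software: a fixed-coefficient inner product is computed bit-serially with a precomputed look-up table instead of multipliers. The coefficients below are placeholders rather than 9/7 or 5/3 filter taps, and the pipelined hardware datapath is of course not modelled.

```python
import numpy as np

def da_inner_product(coeffs, xs, nbits=8):
    """Distributed Arithmetic: y = sum(c_k * x_k) for unsigned nbits inputs,
    computed bit-serially with a 2**K-entry LUT of partial coefficient sums."""
    K = len(coeffs)
    # LUT[addr] = sum of coeffs[k] for every k whose bit is set in addr
    lut = [sum(c for k, c in enumerate(coeffs) if addr >> k & 1)
           for addr in range(1 << K)]
    y = 0
    for b in range(nbits):                                   # one LUT lookup per bit plane
        addr = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
        y += lut[addr] << b                                  # shift-accumulate
    return y

coeffs = [3, -1, 4, 2]                                       # placeholder filter taps
xs = [17, 200, 33, 90]                                       # unsigned 8-bit samples
print(da_inner_product(coeffs, xs), np.dot(coeffs, xs))      # both print the same value
```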
This paper proposes six novel approximate 1-bit full adders (AFAs) for inexact computing. The six novel AFAs, namely AFA1, AFA2, AFA3, AFA4, AFA5, and AFA6, are derived from state-of-the-art exact 1-bit full adder (EFA) architectures. The performance of these AFAs is compared with reported AFAs (RAAs) in terms of design metrics (DMs) and peak signal-to-noise ratio (PSNR). The DMs under consideration are power, delay, power-delay product (PDP), energy-delay product (EDP), and area. For a fair comparison, the EFAs and the proposed AFAs, along with the RAAs, are described in Verilog, simulated, and synthesized using Cadence's RC tool with a generic 180 nm standard cell library. The unconstrained synthesis results show that, among all the proposed AFAs, AFA1 and AFA2 are the most energy-efficient adders with high PSNR. AFA1 has a total power of 1.722 μW, a delay of 213 ps, a PDP of 0.3668 fJ, an EDP of 78.1285 × 10⁻²⁷ J·s, an area of 36.59 μm², and a PSNR of 26.4292 dB. AFA2 has a total power of 1.924 μW, a delay of 215 ps, a PDP of 0.4136 fJ, an EDP of 88.924 × 10⁻²⁷ J·s, an area of 33.264 μm², and a PSNR of 26.4292 dB.
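For readers unfamiliar with approximate adders, the following behavioural sketch contrasts an exact 1-bit full adder with a hypothetical approximation (not any of the paper's AFA1-AFA6, whose gate-level structures are not given here) and exhaustively counts its erroneous outputs.

```python
from itertools import product

def exact_fa(a, b, cin):
    """Exact 1-bit full adder."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def approx_fa(a, b, cin):
    """Hypothetical approximation: carry is taken directly from input a and the
    carry-in is ignored in the sum.  Cheaper, but wrong for some inputs."""
    return a ^ b, a

errors = sum(exact_fa(*v) != approx_fa(*v) for v in product((0, 1), repeat=3))
print(f"{errors}/8 input combinations produce a wrong (sum, carry) pair")
```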
Addition and multiplication are among the most broadly adopted arithmetic operations in a wide range of applications. This paper proposes new structures of approximate multipliers that optimize area, delay, and power without affecting the accuracy metrics. Multipliers and adders play a significant role in the functioning of any digital circuit or system, and the overall performance of a processor depends strongly on the speed and energy consumption of its adders. In this paper, two types of compact error-tolerant approximate adders are designed and used along with approximate 4:2 compressors to improve the efficiency of the approximate multipliers. The proposed approximate multipliers show good results compared with existing structures in terms of area, delay, power, and accuracy. The approximate multipliers are applied to image sharpening and image multiplication applications, and the error-tolerant adders' performance is evaluated in a practical setting through an image blending application. Peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM) are used to assess the modeled designs. The proposed approximate multipliers and adders exhibit better performance in terms of PSNR and SSIM and are found to be optimized designs that can be applied effectively in various error-tolerant image processing applications.
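As a behavioural illustration of the 4:2 compressor idea, the sketch below contrasts an exact compressor with a hypothetical approximate variant (not the paper's design, which is not specified here) and counts, over all 32 input patterns, how often the approximation yields a wrong compressed value.

```python
from itertools import product

def exact_42(x1, x2, x3, x4, cin):
    """Exact 4:2 compressor: x1 + x2 + x3 + x4 + cin == sum + 2*carry + 2*cout."""
    total = x1 + x2 + x3 + x4 + cin
    twos = total >> 1                       # number of weight-2 outputs needed (0..2)
    return total & 1, int(twos >= 1), int(twos == 2)

def approx_42(x1, x2, x3, x4, cin):
    """Hypothetical approximation: carry-in ignored and cout forced to 0."""
    return x1 ^ x2 ^ x3 ^ x4, (x1 & x2) | (x3 & x4), 0

weights = (1, 2, 2)                         # weights of (sum, carry, cout)
wrong = sum(
    sum(w * o for w, o in zip(weights, exact_42(*v)))
    != sum(w * o for w, o in zip(weights, approx_42(*v)))
    for v in product((0, 1), repeat=5)
)
print(f"{wrong}/32 input patterns are compressed to a wrong value")
```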
Breast cancer is one of the major causes of death among women. If a cancer is detected early, the options for treatment and the chances of total recovery increase. From a woman's point of view, the procedure practiced (compression of the breasts to record an image) to obtain a digital mammogram (DM) is exactly the same as that used to obtain a screen film mammogram (SFM). The quality of DM is undoubtedly better than that of SFM.
However, obtaining DM is costlier, and very few institutions can afford DM machines. According to the National Cancer Institute, 92% of breast imaging centers in India do not have digital mammography machines and depend on conventional SFM. Hence, in this context, one should ask: "Can SFM be enhanced up to the level of DM?" In this paper, we discuss our experimental analysis in this regard. We applied elementary image enhancement techniques to obtain enhanced SFM. We performed the quality analysis of DM and enhanced SFM using standard metrics such as PSNR and RMSE on more than 350 mammograms. We also used mean opinion score (MOS) analysis to evaluate the enhanced SFMs. The results showed that the clarity of the processed SFM is as good as that of DM.
Furthermore, we analyzed the extent of radiation exposure during SFM and DM and presented our literature findings and clinical observations.
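As an illustration of the kind of elementary enhancement and metric used in this study, the sketch below equalizes the histogram of a synthetic low-contrast image and reports the RMSE between the original and enhanced versions; the actual mammogram processing chain and the MOS protocol are not reproduced.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization, one elementary enhancement of the kind applied to SFM."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[np.nonzero(hist)[0][0]]
    scale = max(cdf[-1] - cdf_min, 1.0)
    lut = np.round((cdf - cdf_min) / scale * 255.0).clip(0, 255).astype(np.uint8)
    return lut[img]

def rmse(a, b):
    return float(np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)))

sfm = (np.random.rand(128, 128) * 120).astype(np.uint8)   # stand-in low-contrast SFM
enhanced = hist_equalize(sfm)
print("RMSE(original SFM, enhanced SFM) =", round(rmse(sfm, enhanced), 2))
```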
The progressive nature of the JPEG2000 coded bitstream allows images of different qualities to be reconstructed from a single coded bitstream. This feature is utilized in this work to estimate the mean-squared error (MSE) of reconstructed images without requiring the original image. The method is based on the fact that if the MSE between the original image and a lower-quality reconstruction is known, the MSE of higher-quality reconstructions can be estimated from the quality-scalable bitstream. The proposed method is highly accurate and very simple, as no complex statistical modeling is needed; it is therefore suitable for measuring the fidelity of JPEG2000 decoded images at any desired quality in a real-time scenario.
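A heavily simplified numerical illustration of the underlying idea, under an orthogonality approximation and not claimed to be the paper's exact estimator, is given below: if the MSE of a low-quality reconstruction is known, the MSE of a higher-quality reconstruction is approximated by subtracting the mean squared difference between the two reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((64, 64)) * 255

# Stand-ins for two reconstructions decoded from the same scalable bitstream:
# the low-quality one differs from the high-quality one by extra (roughly
# orthogonal) quantization error from the truncated quality layers.
high_q = original + rng.normal(0, 4, original.shape)
low_q = high_q + rng.normal(0, 11, original.shape)

mse = lambda a, b: float(np.mean((a - b) ** 2))
known_mse_low = mse(original, low_q)                      # assumed to be known/signalled
estimated_mse_high = known_mse_low - mse(low_q, high_q)   # orthogonality approximation
print(round(estimated_mse_high, 1), "vs actual", round(mse(original, high_q), 1))
```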
In this paper, we modify the pixel value differencing (PVD) methodology. The proposed method works on RGB color images, improves the embedding capacity of the existing PVD technique, overcomes the falling-off-boundary issue of the traditional PVD technique, and protects the secret message from histogram quantization attacks. Color images are composed of three color channels (red, green, and blue), so the traditional pixel value differencing algorithm cannot be applied to them directly. The proposed technique therefore divides the RGB image into its red, green, and blue channels, after which the modified pixel value differencing algorithm is applied to all successive pixels of each color channel. The total embedding capacity is obtained by adding the embedding capacities of the individual color components. After embedding the data, the color channels are concatenated to obtain the stego image. We tested our pixel value differencing approach on a series of color images and found that the stego image's visual quality and payload capacity were reasonable. The variation in histogram between the stego and cover images was minor, making the method resistant to histogram quantization attacks, and the proposed approach also solves the falling-off-boundary issue.
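A minimal Wu-Tsai-style PVD sketch for a single pixel pair in one color channel is shown below (classic range table and capacity rule); the channel splitting, boundary handling and histogram-attack countermeasures of the proposed scheme are not reproduced.

```python
import math

RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def pvd_embed_pair(p1, p2, secret_bits):
    """Wu-Tsai-style PVD on one pixel pair: the wider the range the difference
    falls in, the more bits it hides.  Returns the new pair and the bits consumed."""
    d = abs(p1 - p2)
    lo, hi = next((l, u) for l, u in RANGES if l <= d <= u)
    t = int(math.log2(hi - lo + 1))                   # bits this pair can carry
    v = int("".join(map(str, secret_bits[:t])), 2)    # next t secret bits as an integer
    d_new = lo + v
    m = d_new - d                                     # signed change of the difference
    if p1 >= p2:
        p1_new, p2_new = p1 + math.ceil(m / 2), p2 - math.floor(m / 2)
    else:
        p1_new, p2_new = p1 - math.floor(m / 2), p2 + math.ceil(m / 2)
    return (p1_new, p2_new), t

print(pvd_embed_pair(108, 112, [1, 0, 1, 1, 0]))      # -> ((108, 113), 3)
```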
India is rich in heritage and culture. It has many historical monuments and temples whose walls are made of inscribed stones and rocks. These stone inscriptions play a vital role in portraying ancient events, so their digitization is necessary and contributes much to the work of epigraphers. Recently, the digitization of these inscriptions began with the binarization of stone inscription images, a process that depends mainly on the thresholding technique. In this paper, the binarization of terrestrial and underwater stone inscription images is preceded by contrast enhancement and followed by edge-based filtering that minimizes noise and sharpens the edges. A new method called the modified bi-level thresholding (MBET) algorithm is proposed and compared with various existing thresholding algorithms, namely the Otsu, Niblack, Sauvola, Bernsen and Fuzzy C-means methods. The results are evaluated with performance metrics such as peak signal-to-noise ratio (PSNR) and standard deviation (SD), and the proposed method shows an average improvement of 49% and 39%, respectively, on the metrics considered.
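For context, a minimal implementation of the Otsu baseline used in the comparison is sketched below (the proposed MBET, the contrast enhancement and the edge-based filtering are not reproduced): it picks the threshold that maximizes the between-class variance.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's global threshold for an 8-bit grayscale image: maximise the
    between-class variance of the foreground/background split."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                         # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))           # class-0 mean numerator
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in inscription image
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8) * 255
print("Otsu threshold:", t)
```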
The demand for high data rate transmission increases every day. The multi-carrier code division multiple access (MC-CDMA) system is considered a forerunner and an advancement in mobile communication systems. In this paper, two types of JPEG2000 lossily compressed test images are transmitted through an MC-CDMA channel in a low-SNR environment (as low as 4 dB) and their quality is evaluated objectively using peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). The test images are compressed at ratios from 10:1 up to 70:1, and the system involves multi-user image transmission in near real time at low SNR (±5 dB). It is found that the JPEG2000 image compression technique, which applies the wavelet transform, performs quite well in the low-SNR multipath fading channel, down to 4 dB, which looks promising for future applications.
In the JPEG2000 standard, the number of bit planes of the wavelet coefficients used in encoding depends on the compression ratio as well as on the subbands. These significant wavelet bit planes can be utilized to embed bits of secret data, as they are retained in the final bit stream after Tier-2 encoding. The proposed techniques utilize this concept to embed secret data bits in the lowest significant bit planes of the quantized wavelet coefficients of a cover image. In the first technique, the secret data is converted into a series of symbols using a multiple bases notational system (MBNS). The bases are selected according to the degree of local variation of the coefficient values of the cover image, so that a coefficient in a complex region can carry more secret data bits than a coefficient in a smooth region. The symbols of the secret data are embedded into the bit planes of significant quantized wavelet coefficients using EMD approaches. In the second technique, the secret data bits are embedded into significant quantized wavelet coefficients using a modified EMD. Experimental results show that the proposed techniques provide larger embedding capacity and better visual quality of stego images than existing steganography techniques applicable to JPEG2000 compressed images. It is also shown that the modified EMD-based technique is better than the EMD with MBNS-based technique.
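The EMD step can be illustrated in isolation. Assuming EMD here refers to Zhang and Wang's exploiting-modification-direction scheme, the sketch below embeds one (2n+1)-ary digit into a group of n integer values (pixels or, as in the paper, quantized wavelet coefficients) by changing at most one value by ±1; the MBNS base selection and the JPEG2000 integration are not shown.

```python
def emd_embed(group, digit):
    """Embed one (2n+1)-ary digit into a group of n integer values by changing
    at most one value by +/-1 (basic EMD)."""
    n = len(group)
    base = 2 * n + 1
    f = sum((i + 1) * g for i, g in enumerate(group)) % base   # extraction function
    s = (digit - f) % base
    group = list(group)
    if s != 0:
        if s <= n:
            group[s - 1] += 1          # increase the value with weight s
        else:
            group[base - s - 1] -= 1   # decrease the value with weight (base - s)
    return group

def emd_extract(group):
    n = len(group)
    return sum((i + 1) * g for i, g in enumerate(group)) % (2 * n + 1)

g = emd_embed([34, 35, 36, 37], digit=7)   # n = 4, so one 9-ary digit per group
print(g, emd_extract(g))                   # extracted digit is 7
```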
Magnetic Resonance Imaging (MRI) is a fundamental and indispensable part of the medical image processing field. The images acquired from MRI machines are affected by noise, which degrades their quality, and noisy MRI acquisitions may give erroneous results. Hence, to enhance the image quality it is necessary to reduce or remove this noise. A plethora of filtering algorithms, along with morphological operations, are available for this purpose. In this paper, we implemented numerous filters, namely the adaptive median, median, mean, bilateral, NLM, Gaussian and Wiener filters, as well as morphological operations, to eliminate the noise in spinal cord MRI. The scenarios considered are: (1) application of filters, (2) application of filters followed by morphological operations, and (3) morphological operations followed by the application of filters. Statistical parameters, namely Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE), are computed for all three approaches and used to analyze the performance of these techniques. The NLM filter is found to give the best performance compared with the other filters. Morphological operations affect the performance of the filters: applying them before filtering degrades the filter performance, while applying them after filtering improves it. The dataset comprises 250 noisy spinal cord MRIs. The authors infer that the performance of the filters is improved by applying the filtering techniques after the morphological operation.
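A minimal sketch of one of the compared orderings (a morphological opening followed by median filtering, evaluated with PSNR/MSE against a synthetic clean reference) is given below; the NLM and other filters follow the same pattern, and real spinal-cord MRI data is not used here.

```python
import numpy as np
from scipy.ndimage import grey_opening, median_filter

def psnr_mse(clean, test, peak=255.0):
    mse = float(np.mean((clean.astype(np.float64) - test.astype(np.float64)) ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return psnr, mse

# Synthetic smooth phantom standing in for an MRI slice, plus Gaussian noise
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
clean = (127 + 100 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / (2 * 25.0 ** 2))).astype(np.uint8)
noisy = np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255).astype(np.uint8)

# One of the compared orderings: morphological operation first, then filtering
opened = grey_opening(noisy, size=(3, 3))
denoised = median_filter(opened, size=3)

print("noisy    PSNR/MSE:", psnr_mse(clean, noisy))
print("denoised PSNR/MSE:", psnr_mse(clean, denoised))
```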