In the field of medical imaging, there is a strong requirement to store an immense volume of digitized medical image data. Digital images must be heavily compressed before storage and transmission because bandwidth and storage capacity are limited. Compressing images at lower bit rates reduces image fidelity and degrades quality, so the challenge is to achieve high compression ratios for reduced storage and fast transmission while preventing diagnostic errors. To address this challenge, several efficient hybrid compression procedures designed specifically for digital medical images have been introduced in recent years. Image compression comprises transformation, quantization, and encoding. This paper presents a qualitative and comprehensive review of image compression techniques for two-dimensional (2D) still and three-dimensional (3D) medical images. The features and constraints associated with various methods for compressing grayscale images are reviewed and discussed. In-depth reviews of the practical concerns and difficulties in the medical scan compression arena are provided.
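As a concrete illustration of the transform-quantize-encode pipeline mentioned above, the following is a minimal sketch of the block-DCT stages common to many of the surveyed schemes; the block size and quantization step are illustrative assumptions, not taken from any particular paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q_step=16):
    """Transformation and quantization for one 8x8 image block."""
    coeffs = dctn(block, norm="ortho")      # transformation
    return np.round(coeffs / q_step)        # quantization (entropy coding follows)

def decompress_block(quantized, q_step=16):
    return idctn(quantized * q_step, norm="ortho")

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
q = compress_block(block)
print("nonzero coefficients to encode:", np.count_nonzero(q))
print("max reconstruction error:", np.abs(decompress_block(q) - block).max())
```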
To use crystallography for the determination of the three-dimensional structures of proteins, protein crystals need to be grown. Automated imaging systems are increasingly being used to monitor these crystallization experiments. These systems present problems of data accessibility, repeatability of any image analysis performed, and the amount of storage required. Various image formats and techniques can be combined to provide effective solutions to high-volume processing problems such as these; however, the lack of widespread support for the most effective algorithms, such as JPEG2000, which yielded a 64% improvement in file size over the bitmap, currently inhibits the immediate take-up of this approach.
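For reference, a file-size comparison of the kind reported above can be sketched with Pillow, which reads and writes JPEG2000 through the OpenJPEG codec; the file names and the target compression rate below are hypothetical.

```python
import os
from PIL import Image

img = Image.open("crystal_drop.bmp")        # hypothetical crystallization image
# Save as JPEG2000 at a 20:1 target rate (illustrative choice).
img.save("crystal_drop.jp2", "JPEG2000", quality_mode="rates", quality_layers=[20])

bmp_size = os.path.getsize("crystal_drop.bmp")
jp2_size = os.path.getsize("crystal_drop.jp2")
print(f"JPEG2000 file is {100 * (1 - jp2_size / bmp_size):.0f}% smaller than the bitmap")
```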
In this paper, fractal compression methods are reviewed. Three new methods are developed and their results are compared with the results obtained using four previously published fractal compression methods. Furthermore, we have compared the results of these methods with the standard JPEG method. For comparison, we have used an extensive set of image quality measures. According to these tests, fractal methods do not yield significantly better compression results when compared with conventional methods. This is especially the case when high coding accuracy (small compression ratio) is desired.
It has been shown that using bincodes (BCs) to represent binary images is storage-saving and easy to manipulate. Given a set of BCs, this paper presents improved codes, namely the modified BCs (MBCs), to represent binary images. We first transform the given BCs into a set of logical expressions; an improved encoding scheme is then employed to reduce the storage space required to represent these logical expressions, thus obtaining the MBCs. For twenty different types of real images, experimental results show that the proposed MBCs achieve a 25% to 28% storage improvement over BCs. By adopting the level-compact scheme on MBCs, storage space can be reduced further; in some cases the storage-saving improvement is more than 50%. Some image manipulations on the MBCs, such as computing geometrical properties and set operations, are also investigated.
In recent years, wavelets have attracted great attention in both still image compression and video coding, and several novel wavelet-based image compression algorithms have been developed, one of which is Shapiro's embedded zerotree wavelet (EZW) image compression algorithm. However, this algorithm still has some deficiencies. In this paper, after analyzing the deficiencies of EZW, a new algorithm based on quantized coefficient partitioning using morphological operations is proposed. Instead of encoding the coefficients in each subband line by line, regions in which most of the quantized coefficients are significant are extracted by morphological dilation and encoded first. Zerotrees are then used to encode the remaining space, which contains mostly zeros. Experimental results show that the proposed algorithm is not only superior to EZW but also compares favorably with the most efficient wavelet-based image compression algorithms reported so far.
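A minimal sketch of the region-extraction idea follows: significant coefficients are grown into connected regions by morphological dilation, and those regions would be coded first, leaving the rest to zerotree coding. The threshold and structuring element are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def significant_region(subband, threshold):
    """Grow the set of significant coefficients into connected regions."""
    significance = np.abs(subband) >= threshold
    # Dilation merges isolated significant coefficients into regions that
    # can be coded first; everything outside is left to zerotree coding.
    return binary_dilation(significance, structure=np.ones((3, 3)))

subband = np.random.default_rng(1).normal(0, 10, (16, 16))
region = significant_region(subband, threshold=15)
print("coefficients coded in the region pass:", region.sum())
```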
In this paper, a new edge detection scheme based on block truncation coding (BTC) is proposed. BTC is a simple and fast scheme for digital image compression. To detect edge boundaries with the BTC scheme, the bit-plane information of each BTC-compressed block is exploited, and a simple block-type classifier is introduced.
The experimental results show that the proposed scheme clearly detects the edge boundaries of digital images while requiring very little computation. Moreover, the edge detection process can be incorporated into any BTC variant. In other words, the newly proposed scheme provides a good approach to detecting edge boundaries using block truncation coding.
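The following is a minimal sketch of this kind of BTC-based edge detection: each block's bit plane (pixels above or below the block mean) is inspected, and a simple block-type test separates smooth blocks from edge blocks. The block size and level-gap threshold are illustrative assumptions, not the paper's exact classifier.

```python
import numpy as np

def btc_edge_map(image, block=4, level_gap=20):
    h, w = image.shape
    edges = np.zeros_like(image, dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = image[i:i + block, j:j + block]
            plane = blk >= blk.mean()             # BTC bit plane
            if plane.all() or (~plane).all():
                continue                          # uniform block: no edge
            high, low = blk[plane].mean(), blk[~plane].mean()
            if high - low >= level_gap:           # block-type classifier
                # Edge pixels lie on the 0/1 transitions of the bit plane.
                dx = plane[:, 1:] != plane[:, :-1]
                dy = plane[1:, :] != plane[:-1, :]
                edges[i:i + block, j:j + block - 1] |= dx
                edges[i:i + block - 1, j:j + block] |= dy
    return edges
```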
Image Quality Measure (IQM) is used to automatically measure the degree of image artifacts such as blocking, ringing, and blurring effects. Traditionally, it is calculated in the image spatial domain. In this paper, we present a new method of transforming an image into a low-dimensional domain based on random projection, so that a compatible IQM can be obtained efficiently. From the transformed domain, we calculate the Peak Signal-to-Noise Ratio (PSNR) and apply fuzzy logic to generate a Low-Dimensional Quality Index (LDQI). Experimental results show that the LDQI can approximate the IQM in the image spatial domain. We observe that the LDQI is well suited to measuring compression blur owing to its relatively low distortion; the relative error is about 0.15 as the compression blur increases.
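A minimal sketch of the projection-then-PSNR idea follows; the Gaussian projection and its dimension are assumptions for illustration, and the fuzzy-logic step that produces the LDQI is not reproduced.

```python
import numpy as np

def psnr_from_mse(mse, peak=255.0):
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64)).astype(float)
distorted = ref + rng.normal(0, 5, ref.shape)       # simulated degradation

# A Gaussian random projection with entries N(0, 1/k) approximately
# preserves squared distances (Johnson-Lindenstrauss), so the MSE
# measured in the k-dimensional domain tracks the spatial-domain MSE.
k = 256
P = rng.normal(0, 1 / np.sqrt(k), (k, ref.size))

mse_spatial = np.mean((ref - distorted) ** 2)
proj_diff = P @ (ref - distorted).ravel()
mse_projected = np.sum(proj_diff ** 2) / ref.size
print(psnr_from_mse(mse_spatial), psnr_from_mse(mse_projected))
```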
In this article, a new minimum spanning tree-based method for shape description and matching is proposed. Its properties are assessed on the problem of graphical symbol recognition. Recognition invariance to shift and to multi-oriented noisy objects was studied in the context of small, low-resolution binary images. The approach has many desirable properties, even though the construction of graphs incurs a high algorithmic cost. To reduce computation time, an alternative solution based on image compression concepts is provided: recognition is carried out in a compact space, namely the discrete cosine space. The use of the block discrete cosine transform is discussed and justified. Experimental results on the GREC2003 database show that the proposed method offers good discrimination power and real robustness to noise, with acceptable computation time. A comparison with a reference approach, Zernike moments, is also carried out to measure the relevance of the proposed technique.
Region-of-interest (ROI) image coding is one of the new features included in the JPEG2000 image coding standard. Two methods are defined in the standard: the Maxshift method and the generic scaling-based method. In this paper, a new region-of-interest coding method called Contour-based Multi-ROI Multi-quality Image Coding (CMM) is proposed. Unlike existing methods, the CMM method treats the contour and texture of the whole image as a special ROI, which causes the visually most important parts (in both the ROI and the background) to be coded first. Experimental results indicate that the proposed method significantly outperforms previous ROI coding schemes in overall ROI coding performance.
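For context, the standard Maxshift method mentioned above can be sketched as follows: ROI wavelet coefficients are scaled up by enough bit positions that every ROI coefficient exceeds every background coefficient, so the ROI is decoded first from an embedded bitstream. The mask and coefficient values here are illustrative.

```python
import numpy as np

def maxshift(coeffs, roi_mask):
    background_max = np.abs(coeffs[~roi_mask]).max()
    s = int(np.ceil(np.log2(background_max + 1)))   # shift value
    scaled = coeffs.astype(np.int64)
    scaled[roi_mask] <<= s                          # scale ROI up by 2**s
    return scaled, s

coeffs = np.random.default_rng(3).integers(-100, 100, (8, 8))
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                               # illustrative ROI
shifted, s = maxshift(coeffs, mask)
print("shift s =", s)
```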
Genetic Algorithms (GAs) have been successfully applied to codebook design for vector quantization, with candidate solutions normally tuned by the LBG algorithm. In this paper, to address the premature convergence of GAs and their tendency to fall into local optima, a new Genetic Simulated Annealing-based Kernel Vector Quantization (GSAKVQ) is proposed from a different point of view. The simulated annealing (SA) method proposed in this paper can approach the optimal solution faster than the other candidate approaches. Within the GA framework, a new crossover operator and a mutation operator are first designed for the partition-based code scheme; an SA operation is then introduced to enlarge the exploration of the proposed algorithm; finally, a kernel-function-based fitness is introduced into the GA in order to cluster datasets with complex distributions. The proposed method has been extensively compared with other algorithms on the clustering of 17 datasets and on four image compression problems. The experimental results show that the algorithm is superior in terms of clustering correct rate and peak signal-to-noise ratio (PSNR), and that it is also very robust. In addition, taking “Lena” as an example, we added Gaussian noise to the original image and then used the proposed algorithm to compress the noisy image. Compared with the noisy original, the reconstructed image is more distinct, and the PSNR decreases as the noise parameter increases.
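A minimal sketch of the simulated-annealing acceptance rule that such GA/SA hybrids rely on to escape local optima is given below; the cost function, temperatures, and cooling rate are illustrative assumptions rather than the paper's settings.

```python
import math
import random

def sa_accept(current_cost, candidate_cost, temperature):
    """Always accept improvements; accept worse codebooks with
    probability exp(-delta / T), which shrinks as T cools."""
    delta = candidate_cost - current_cost
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

# Geometric cooling schedule wrapped around the GA loop (placeholder body).
T = 1.0
while T > 1e-3:
    # ... produce a candidate codebook via crossover and mutation, then
    # keep it if sa_accept(cost_old, cost_new, T) returns True ...
    T *= 0.95
```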
This paper proposes an image compression scheme utilizing an Adaptive Discrete Wavelet Transform-based Lifting Scheme (ADWT-LS). The most important feature of the proposed DWT lifting method is the splitting of the low-pass and high-pass filters into upper and lower triangular matrices. It also converts filter execution into banded-matrix multiplications through a lifting factorization with fine-tuned parameters. The central contribution is the optimal tuning of these parameters, achieved via a new hybrid algorithm known as the Lioness-Integrated Whale Optimization Algorithm (LI-WOA), which hybridizes the Lion Algorithm (LA) and the Whale Optimization Algorithm (WOA). In addition, cosine evaluation is carried out using the CORDIC algorithm. The paper also defines a single objective function that combines multiple constraints, namely the Peak Signal-to-Noise Ratio (PSNR) and the Compression Ratio (CR). Finally, the performance of the proposed scheme is compared with conventional models on several performance measures.
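To make the lifting factorization concrete, the following is a sketch of one generic lifting step (the integer 5/3 predict/update pair used in JPEG2000), shown only to illustrate the triangular-factor structure; it is not the paper's tuned ADWT-LS filter, and periodic boundary handling is used for brevity.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the 5/3 lifting transform on an even-length signal."""
    even, odd = x[0::2].astype(int), x[1::2].astype(int)
    # Predict: high-pass = odd samples minus a prediction from neighbors
    # (this is the lower-triangular factor of the filter bank).
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: low-pass = even samples plus a correction from the details
    # (this is the upper-triangular factor).
    s = even + ((d + np.roll(d, 1) + 2) >> 2)
    return s, d

s, d = lifting_53_forward(np.arange(16))
print(s, d)
```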
Recently, the discrete-time cellular neural network (DT-CNN) has been applied to many image processing tasks, such as compression, reconstruction, and recognition. Conventional image processing techniques such as the discrete cosine transform (DCT) and wavelet transforms act as simple filters and do not exploit the interpolative dynamics of the feedback A template, which is one of the distinctive characteristics of a cellular neural network (CNN). If a CNN is used as a filter with only the feedforward B template, one must build a model consisting of digital filters implemented on high-speed signal processing hardware such as a high-speed digital signal processor. This paper describes the nonlinear interpolative effect of the feedback A template through an evaluation of image compression and reconstruction.
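A minimal sketch of generic DT-CNN dynamics may help here: the state is updated from a feedback template A applied to the cell outputs and a feedforward template B applied to the input. The 3x3 templates, bias, and iteration count below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.signal import convolve2d

def dtcnn_step(x, u, A, B, z):
    y = np.clip(x, -1.0, 1.0)                 # piecewise-linear output
    return (convolve2d(y, A, mode="same")     # feedback A: interpolative part
            + convolve2d(u, B, mode="same")   # feedforward B: filtering part
            + z)

A = np.array([[0, .1, 0], [.1, .4, .1], [0, .1, 0]])
B = np.array([[0, 0, 0], [0, 1., 0], [0, 0, 0]])
u = np.random.default_rng(4).uniform(-1, 1, (16, 16))
x = np.zeros_like(u)
for _ in range(20):                           # iterate toward steady state
    x = dtcnn_step(x, u, A, B, z=0.0)
```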
Two-dimensional wavelet transforms have proven to be highly effective tools for image analysis. In this paper, we present a VLSI implementation of four- and six-coefficient Daubechies wavelet transforms using an algebraic integer encoding representation for the coefficients. The Daubechies filters (DAUB4 and DAUB6) provide excellent spatial and spectral locality, properties that make them useful in image compression. In our algorithm, the algebraic integer representation of the wavelet coefficients provides error-free calculations until the final reconstruction step. This also makes the VLSI architecture simple, multiplication-free, and inherently parallel. Compared with other DWT algorithms found in the literature, such as embedded zerotree, recursive or semi-recursive, linear systolic arrays, and conventional fixed-point binary architectures, it has reduced hardware cost, lower power dissipation, and optimized data-bus utilization. The architecture is also cascadable for computing one- or multi-dimensional Daubechies discrete wavelet transforms.
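The algebraic-integer idea can be sketched as follows: the DAUB4 tap numerators are 1±√3 and 3±√3, so each value a + b√3 is kept as an exact integer pair (a, b), and the irrational scale factor 1/(4√2) is applied only at the final reconstruction step. This is an illustration of the encoding, not the paper's VLSI architecture.

```python
import numpy as np

def ai_mul(p, q):
    """(a + b*r3)(c + d*r3) = (ac + 3bd) + (ad + bc)*r3, exactly in ints."""
    a, b = p
    c, d = q
    return (a * c + 3 * b * d, a * d + b * c)

def ai_add(p, q):
    return (p[0] + q[0], p[1] + q[1])

# DAUB4 filter numerators in Z[sqrt(3)]: 1+r3, 3+r3, 3-r3, 1-r3.
H = [(1, 1), (3, 1), (3, -1), (1, -1)]

def ai_filter_tap(samples):
    """Error-free dot product of 4 integer samples with the DAUB4 taps."""
    acc = (0, 0)
    for s, h in zip(samples, H):
        acc = ai_add(acc, ai_mul((s, 0), h))
    return acc  # apply (a + b*sqrt(3)) / (4*sqrt(2)) only at the very end

a, b = ai_filter_tap([10, 20, 30, 40])
print((a + b * np.sqrt(3)) / (4 * np.sqrt(2)))   # final reconstruction step
```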
A Centroid Neural Network with Weighted Features (CNN-WF) is proposed and presented in this paper. The proposed CNN-WF is based on the Centroid Neural Network (CNN), an effective clustering tool that has been successfully applied to various problems. To evaluate the importance of each feature in a data set, a feature-weighting concept is introduced into the Centroid Neural Network in the proposed algorithm. The weight update equations for CNN-WF are derived by applying the Lagrange multiplier procedure to the objective function constructed for CNN-WF in this paper. The use of weighted features makes it possible to assess the importance of each feature and to reject features that can be considered noise. Experiments on a synthetic data set and a typical image compression problem show that the proposed CNN-WF can assess the importance of each feature and outperforms conventional algorithms, including the Self-Organizing Map (SOM) and CNN, in terms of clustering accuracy.
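For intuition, a representative feature-weighted clustering objective of the kind described (an assumed generic form for illustration; the paper's exact CNN-WF objective and constraint may differ) is:

```latex
% Weighted-feature clustering objective with normalized weights;
% the exponent \beta > 1 keeps the Lagrange-derived update non-degenerate.
J = \sum_{k}\sum_{\mathbf{x}\in S_k}\sum_{j} w_j^{\beta}\,(x_j - c_{k,j})^2
\quad \text{subject to} \quad \sum_{j} w_j = 1 .
```

Applying a Lagrange multiplier to the constraint yields closed-form weight updates of the form w_j ∝ D_j^{-1/(β-1)}, where D_j is the within-cluster distortion accumulated along feature j, so noisy high-distortion features receive small weights.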
Energy consumption is a critical problem affecting the lifetime of wireless image sensor networks (WISNs). In such systems, images are usually compressed using the JPEG standard to save energy during transmission. Since the DCT is the most computationally intensive part of the JPEG technique, several approximation techniques have been proposed to further decrease energy consumption. In this paper, we propose a low-complexity DCT approximation method based on combining the rounded DCT with a pruned approach. Experimental comparison with recently proposed schemes on the Atmel ATmega128L platform shows that our method requires fewer arithmetic operations, and hence less processing time and energy consumption, while providing better performance in terms of the PSNR metric.
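To illustrate the family of low-complexity DCT approximations referred to here, the sketch below replaces the exact DCT matrix with a multiplication-free integer matrix. For simplicity it uses the signed DCT (the entry-wise sign of the exact 8-point DCT matrix) rather than the authors' specific rounded and pruned transform.

```python
import numpy as np
from scipy.fft import dct

C = dct(np.eye(8), axis=0, norm="ortho")   # exact 8-point DCT matrix
T = np.sign(C).astype(int)                 # entries in {+1, -1}: adders only

x = np.arange(8, dtype=float)
print(T @ x)   # approximate spectrum computed with additions/subtractions
print(C @ x)   # exact DCT for comparison
```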
Prolonging the lifetime of wireless sensor networks (WSNs) is a major challenge. In wireless multimedia sensor networks (WMSNs), sensor nodes have limited energy resources, so increasing the lifetime of the network requires an effective fast algorithm that reduces the consumed power. This paper proposes an energy-efficient discrete cosine transform (DCT) approximation requiring only 12 additions. Combined with a JPEG compression chain, this DCT approximation ensures a very good rate-distortion compromise and, above all, very low computational complexity and strong compatibility with the exact DCT. Simulation results clearly show that the proposed fast transform algorithm achieves a better trade-off between image quality, computational complexity, and energy consumption than existing pruned DCT approximations. Furthermore, it is well suited to resource-constrained wireless visual sensor networks (WVSNs) requiring low bitrates.
In the task of power line inspection, Unmanned Aerial Vehicles (UAVs) are frequently used for capturing images. With the rapid advancement of sensor technology, the spatial, radiometric, and spectral resolutions of UAV images are constantly improving, increasing the storage requirement for individual images. Given that UAVs usually operate with limited computational resources, transmission capability, and storage space, image compression, storage, and transmission pose significant challenges, underscoring the importance of a high-performance image compression technique. To solve this problem, we present a learned image compression strategy that uses discrete Gaussian mixture-based probability distributions to increase compression efficiency and reconstruction fidelity. In addition, to speed up decoding, we employ a parallel context model, which facilitates decoding in a highly parallel manner. Experimental evidence indicates that our approach attains state-of-the-art performance while significantly expediting decoding (by more than 49.78% in our experiments), outperforming traditional coding standards and existing learned compression approaches by 5.75 dB and 1.23 dB in PSNR, respectively.
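A minimal sketch of the discrete Gaussian mixture entropy model underlying such approaches: the probability of an integer-quantized latent y is the mixture's CDF mass on [y - 0.5, y + 0.5]. In practice the weights, means, and scales are predicted by a network; the fixed values below are illustrative.

```python
import numpy as np
from scipy.stats import norm

def discrete_gmm_pmf(y, weights, means, scales):
    upper = norm.cdf((y + 0.5 - means) / scales)
    lower = norm.cdf((y - 0.5 - means) / scales)
    return float(np.sum(weights * (upper - lower)))

w = np.array([0.6, 0.3, 0.1])       # mixture weights (illustrative)
mu = np.array([0.0, 2.0, -3.0])     # component means
sigma = np.array([1.0, 0.5, 2.0])   # component scales

p = discrete_gmm_pmf(1, w, mu, sigma)
print(p, -np.log2(p), "bits")       # ideal code length for this symbol
```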
The concentration of cones and ganglion cells is much higher in the fovea than in the rest of the retina. This non-uniform sampling results in a retinal image that is sharp at the fixation point, where a person is looking, and blurred away from it. This difference between the sampling rates at different spatial locations raises the question of whether this biological characteristic can be employed to achieve better image compression, by compressing an image less at the fixation point and more away from it. It is, however, known that the visual system employs more than one fixation to look at a single scene, which presents the problem of combining images pertaining to the same scene but exhibiting different spatial contrasts. This article presents an algorithm to combine such a series of images using image fusion in the gradient domain. The advantage of the algorithm is that, unlike other algorithms that compress the image in the spatial domain, it produces no artifacts. The algorithm consists of two steps: first, we modify the gradients of an image based on a limited number of fixations; second, we integrate the modified gradient field. Results based on measured and predicted fixations verify our approach.
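The two steps can be sketched as follows: attenuate the image gradients with distance from a fixation point, then reintegrate by solving a Poisson equation. The Gaussian falloff and the DCT-based Poisson solver are implementation choices of this sketch, not necessarily the paper's.

```python
import numpy as np
from scipy.fft import dctn, idctn

def foveate(img, fix, sigma=40.0):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    falloff = np.exp(-((yy - fix[0]) ** 2 + (xx - fix[1]) ** 2) / (2 * sigma ** 2))
    gy, gx = np.gradient(img)
    gy, gx = gy * falloff, gx * falloff        # step 1: modify the gradients
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    # Step 2: integrate by solving laplacian(I) = div with a DCT solver
    # (eigenvalues of the Neumann-boundary discrete Laplacian).
    d = dctn(div, norm="ortho")
    ky = np.pi * np.arange(h) / h
    kx = np.pi * np.arange(w) / w
    denom = (2 * np.cos(ky)[:, None] - 2) + (2 * np.cos(kx)[None, :] - 2)
    denom[0, 0] = 1.0                          # the mean is a free term
    rec = idctn(d / denom, norm="ortho")
    return rec - rec.mean() + img.mean()       # restore the mean intensity
```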
This paper proposes a fractal image encoding algorithm based on a matching error threshold. The authors first set up two kick-out conditions to reduce the capacity of the codebook, and then set a matching threshold for the search for best-matching blocks, which greatly shortens the runtime. Meanwhile, the authors discard the isometric transformations mentioned in most of the literature, because using them only increases the computational complexity; the same or even better reconstructed images can be achieved by reducing the sliding step used to produce domain blocks. Experimental results indicate that the proposed algorithm both shortens the encoding time greatly and achieves the same or better reconstructed image quality compared with the basic fractal encoding algorithm with full search.
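The threshold-based early exit can be sketched as below: scanning stops as soon as a domain block matches the range block within the error threshold, rather than exhaustively minimizing over the whole codebook. The least-squares contrast fit and the threshold value are simplified illustrative assumptions.

```python
import numpy as np

def match_error(range_blk, domain_blk):
    """Least-squares error after fitting contrast s (offset via means)."""
    d = domain_blk - domain_blk.mean()
    r = range_blk - range_blk.mean()
    denom = np.sum(d * d)
    s = np.sum(d * r) / denom if denom > 0 else 0.0
    return np.sum((r - s * d) ** 2)

def search_with_threshold(range_blk, domain_blocks, threshold):
    best, best_err = None, np.inf
    for idx, dom in enumerate(domain_blocks):
        err = match_error(range_blk, dom)
        if err < best_err:
            best, best_err = idx, err
        if err <= threshold:        # early exit: good-enough match found
            break
    return best, best_err
```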
In this paper, a fast fractal coding method based on fractal dimension is proposed. Image texture is important content in image analysis and processing and can be used to describe the degree of surface irregularity. The fractal dimension of fractal theory can describe image texture in a way that is consistent with the human visual system: the higher the fractal dimension, the rougher the corresponding surface, and vice versa. During the encoding process, the fractal dimension of the image is used to classify all blocks of the given image into three classes; each range block then searches for its best match within the corresponding class. The fractal dimension is estimated by differential box counting, chosen specifically for texture analysis. Since the search space is reduced and the classification operation is simple and computationally efficient, the encoding speed is improved while the quality of the decoded image is preserved. Experiments show that, compared with the full search method, the proposed method greatly reduces the encoding time, obtains a rather good retrieved image, and achieves a stable speedup ratio.
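Differential box counting can be sketched as follows: for each grid scale, count the number of gray-level boxes spanned between the minimum and maximum intensity in each cell, then estimate the dimension as the slope of log N_r against log(1/r). The grid sizes and the floor-based box count are simplified illustrative choices.

```python
import numpy as np

def fractal_dimension_dbc(block, sizes=(2, 4, 8)):
    M = block.shape[0]            # assume a square M x M block, levels 0..255
    counts = []
    for s in sizes:
        h = 256.0 * s / M         # box height for an s x s grid cell
        n = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                cell = block[i:i + s, j:j + s]
                # Boxes spanned between the min and max gray level.
                n += int(cell.max() // h) - int(cell.min() // h) + 1
        counts.append(n)
    # D is the (negated) slope of log(N_r) against log(r), r = s / M.
    x = np.log(np.array(sizes, float) / M)
    y = np.log(counts)
    return -np.polyfit(x, y, 1)[0]

block = np.random.default_rng(5).integers(0, 256, (32, 32)).astype(float)
print(fractal_dimension_dbc(block))
```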