Chronic kidney disease (CKD) is a life-threatening condition that is hard to identify early, as its early stages often show no symptoms. It is possible to prevent or slow the progression of this chronic condition before it reaches the end stage, where dialysis or surgical intervention is the only way to save the patient’s life, and early detection and adequate therapy increase the chances of doing so. The majority of supervised techniques employ labeled datasets to build in-domain predictions of CKD. However, the results are typically poor when a classifier trained on labeled images from one domain is used to categorize a CKD image from a different domain. In contrast, the machine learning technique known as “cross-domain few-shot learning” trains a model to generalize knowledge from one domain to another using only a small number of samples from the target domain. Therefore, a novel hybrid classifier with few-shot learning is proposed to improve cross-domain CKD prediction performance. Kidney CT images were collected and preprocessed using ENSNet annotation, autoencoder denoising, a green fire blue filter and image restoration. The preprocessed data were passed to a supervised InfNet segmentation stage, which segments the exact CKD region. The segmented regions were fed as input to the classification stage, where a few-shot learning-based hybrid classifier was developed to predict CKD in the cross-domain setting. In the few-shot setting, more samples were used for the training phase and fewer samples for testing the classifier. Based on few-shot learning, the proposed CKD model was tested and diagnosed the input samples under appropriate conditions. The proposed model offers 97.9% accuracy, 96.4% precision, 96.4% recall and 98.6% negative predictive value. In addition, the observed values of the proposed model were contrasted with those of other approaches to validate the process. The proposed few-shot learning-based hybrid classifier is an effective choice for more precise cross-domain CKD prediction with low processing time.
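For illustration, a minimal sketch of the few-shot classification idea, assuming feature vectors have already been extracted from the segmented CT regions; the nearest-prototype rule shown here is a standard few-shot baseline, not the proposed hybrid classifier, and all names, labels and dimensions are illustrative.

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """Few-shot baseline: classify query features by nearest class prototype.

    support_feats: (n_support, d) features from the small target-domain sample
    support_labels: (n_support,) integer class labels
    query_feats: (n_query, d) features to be classified
    """
    classes = np.unique(support_labels)
    # Class prototypes = mean feature vector of each class in the support set.
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in classes])
    # Euclidean distance from every query to every prototype.
    dists = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Toy usage with random "features" (in practice these would come from the
# segmentation/feature-extraction stages of the pipeline).
rng = np.random.default_rng(0)
support = rng.normal(size=(10, 32))
labels = np.array([0] * 5 + [1] * 5)
queries = rng.normal(size=(4, 32))
print(prototype_classify(support, labels, queries))
```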
Sparse representation has recently been extensively studied in the field of image restoration. Many sparsity-based approaches enforce sparse coding on patches under certain constraints. However, extracting structural information remains a challenging task in image restoration. Motivated by the fact that the structured sparse representation (SSR) method can capture the inner characteristics of image structures, which helps in finding sparse representations of nonlinear features or patterns, we propose an SSR approach for image restoration. Specifically, a generalized model is developed with a structured constraint: the group l2,1-norm of the coefficient matrix is introduced into traditional sparse representation so as to minimize within-class differences and maximize between-class differences in the sparse representation, and its application to image restoration is also explored. The sparse coefficients of SSR are obtained through an iterative optimization approach. Experimental results show that the proposed SSR technique delivers reconstructed images of significantly higher quality, which demonstrates the effectiveness of our approach in both peak signal-to-noise ratio and visual perception.
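As a concrete example of how the group l2,1-norm enters such iterative optimization, the following sketch shows its proximal (group soft-thresholding) step on a coefficient matrix, assuming rows are the groups; the grouping and the threshold value are illustrative, not the paper's exact formulation.

```python
import numpy as np

def prox_group_l21(C, tau):
    """Proximal operator of tau * ||C||_{2,1} with the rows of C as groups.

    Each row c_i is shrunk toward zero by the group soft-thresholding rule
    c_i <- max(0, 1 - tau / ||c_i||_2) * c_i.
    """
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * C

# Example: rows with small energy are zeroed out entirely (structured sparsity).
C = np.array([[0.1, -0.05, 0.02],
              [2.0,  1.5, -0.7],
              [0.3,  0.1,  0.2]])
print(prox_group_l21(C, tau=0.5))
```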
The optical properties of water distort the quality of underwater images, which are characterized by poor contrast, color cast, noise and haze. These images need to be pre-processed so that useful information can be extracted. In this paper, a novel technique named Fusion of Underwater Image Enhancement and Restoration (FUIER) is proposed, which both enhances and restores underwater images, targeting all major issues in underwater imagery: color cast removal, contrast enhancement and dehazing. It generates two versions of the single input image, and these two versions are fused using Laplacian pyramid-based fusion to obtain the enhanced image. The proposed method works efficiently for all types of underwater images captured under different conditions (turbidity, depth, salinity, etc.). Results obtained using the proposed method are better than those of state-of-the-art methods.
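A minimal sketch of Laplacian pyramid-based fusion of two derived versions of an image, assuming the enhanced and restored versions have already been generated; the max-absolute-detail and averaged-base fusion rules and the number of levels are illustrative choices rather than FUIER's exact weights.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid (list of detail layers + final low-pass layer)."""
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)   # detail (Laplacian) layer
        cur = down
    pyr.append(cur)            # coarsest low-pass layer
    return pyr

def fuse_laplacian(img_a, img_b, levels=4):
    """Fuse two derived versions of an underwater image via Laplacian pyramids:
    average the coarse layers and keep the stronger detail at each pixel."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    # Collapse the pyramid back into a single image.
    out = fused[-1]
    for lap in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```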
Remote sensing image deblurring is a long-standing and challenging inverse problem in which finding the correct image prior is the key to recovering high-quality, clear images. Therefore, to recover high-quality clear images, this paper identifies a new and effective image prior, the dark pixel prior of remote sensing images, and proposes a blurred remote sensing image restoration method based on it. Because blurring averages each dark pixel with the brighter pixels around it, the values of dark pixels in a blurred remote sensing image increase, so the sparsity of dark pixels is reduced compared with the clear image. This loss of sparsity effectively distinguishes blurred remote sensing images from clear ones and is exploited to restore the blurred images. The experimental results show that the proposed method offers clear gains in both restoration quality and running time.
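A minimal sketch of the cue the dark pixel prior rests on, assuming a grayscale image scaled to [0, 1]: it counts how many local-minimum values stay dark, a fraction that drops after blurring. The patch size and threshold are illustrative, and the full restoration model of the paper is not reproduced.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_pixel_sparsity(img, patch=15, dark_thresh=0.1):
    """Measure how sparse the dark pixels of an image are.

    img: grayscale image in [0, 1].
    Returns the fraction of local-minimum ("dark channel") values below
    dark_thresh; blurring averages dark pixels with their bright neighbours,
    so a blurred image yields a noticeably smaller fraction than a clear one.
    """
    dark = minimum_filter(img, size=patch)
    return float(np.mean(dark < dark_thresh))

# A clear image is expected to score higher than its blurred counterpart,
# which is exactly the cue the dark-pixel prior exploits.
```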
In order to investigate the differences among dual strategies, we propose an alternating primal-dual algorithm (APDA) that can be considered a general scheme for minimizing multiple-summed separable convex problems that are not necessarily smooth. First, the original multiple-summed problem is transformed into two subproblems. Second, one subproblem is solved in the primal space and the other in the dual space. Finally, the alternating direction method is executed between the primal and the dual parts. Furthermore, the classical alternating direction method of multipliers (ADMM) is extended to solve the primal subproblem, which is also multiple-summed; the extended ADMM can therefore be seen as a parallel method for the original problem. Thanks to the flexibility of APDA, different dual strategies for image restoration are analyzed. Numerical experiments show that the proposed method performs better than some existing algorithms in terms of both speed and accuracy.
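For context, a minimal sketch of the classical scaled-form ADMM alternation that the paper extends, shown on a lasso toy problem rather than the multiple-summed image restoration problem itself; the penalty parameter, iteration count and data are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5||Ax - b||^2 + lam||z||_1  s.t.  x = z.

    Illustrates the primal/dual alternation: an x-step in the primal space,
    a z-step handled by a proximal (soft-thresholding) operator, and a dual
    ascent step on the scaled multiplier u.
    """
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=40)
print(np.round(admm_lasso(A, b), 2))
```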
The sheer complexity of the visual task and the need for robust behavior provide a unifying theme for computer vision: what can be accomplished is constrained by relatively limited computational resources, but the resulting performance must be robust. As a consequence, parallel computation over space and time becomes essential for machine vision systems. Parallel computation is all-encompassing and includes both image representations and processing. In this paper we give an overview of a taxonomy of parallel hardware architectures that includes pipelining, single-instruction multiple-data (SIMD), multiple-instruction multiple-data (MIMD), data-flow, and neurocomputing. Specific applications and the relevance of parallel hardware architectures to computer vision tasks are discussed as well.
A method to reduce the side effects of a dual-line timed address-event (TAE) vision system is proposed in this paper. The side effects include edge discontinuity and the natural insensitivity to object edges along the motion direction. The X-event, a kind of artificial event, is introduced to represent the light intensity difference perpendicular to the motion direction of the target object. New timestamps are attached to the raw TAE data to adjust the temporal resolution to the same order of magnitude as the vertical axis in the TAE representation. After removing noisy and redundant events, designed templates are used to generate X-events that repair broken lines and reproduce perpendicular edges. The process runs in real time and does not need to wait for all the raw TAE data to be collected. A behavioral model of a 2 × 256 TAE vision sensor is established in MATLAB, and the X-event generation block is realized on an FPGA. Experimental results show that the proposed method can patch the TAE representation effectively to obtain a one-pixel-wide, precise, closed and connected contour.
Images captured in degraded weather conditions often suffer from poor visibility. Among pre-existing haze removal methods, the ones that are effective are also computationally complex. In common de-hazing approaches, the atmospheric light is not estimated properly; as a consequence, haze is not removed significantly from the sky region. In this paper, an efficient method of haze removal from a single image is introduced. To restore haze-free images comprising both sky and non-sky regions, we developed a linear model to predict the atmospheric light and estimated the transmission map using the dark channel prior, followed by an application of a guided filter for quick refinement. Several experiments were conducted on a large variety of images, both reference and non-reference, where the proposed image de-hazing algorithm outperforms most prevalent algorithms in terms of perceptual visibility of the scene and computational efficiency. The proposed method has been evaluated quantitatively and qualitatively and retains structure and edges while improving color.
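A minimal sketch of dark-channel-prior dehazing under the standard haze imaging model I = J·t + A·(1 − t); the brightest-dark-channel estimate of atmospheric light and the mean-filter refinement are simple stand-ins for the paper's linear prediction model and guided filter, and the patch size, omega and t0 values are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Single-image dehazing sketch with the dark channel prior.

    img: RGB image in [0, 1]. Atmospheric light is taken from the brightest
    dark-channel pixels, the transmission follows the dark channel prior, and
    a mean filter stands in for guided-filter refinement.
    """
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light: average colour of the ~0.1% haziest pixels.
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalised image.
    norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
    t = uniform_filter(1.0 - omega * norm_dark, size=patch)  # crude refinement
    t = np.clip(t, t0, 1.0)
    # Invert the haze imaging model I = J*t + A*(1 - t).
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```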
Fractal geometry is applied extensively in many applications such as pattern recognition, texture analysis and segmentation, and its application requires estimation of fractal features. The fractal dimension and fractal length have been found effective for analyzing and measuring image features such as texture and resolution. This paper proposes a new wavelet–fractal technique for image resolution enhancement. The resolution of the wavelet sub-bands is improved using a scaling operator and then transformed into a texture vector. The proposed method then computes the fractal dimension and fractal length in the gradient domain, which are used for resolution enhancement. It is observed that by using the scaling operator in the gradient domain, the fractal dimension and fractal length become scale invariant. The major advantage of the proposed wavelet–fractal technique is that the feature vector retains both the fractal dimension and the fractal length; thus, the resolution-enhanced image restores the texture information well. The texture information has also been examined in terms of fractal dimension for varied sample sizes. We present qualitative and quantitative comparisons of the proposed method with existing state-of-the-art methods.
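A sketch of the standard box-counting estimate of fractal dimension applied to thresholded gradient magnitudes, i.e. in the gradient domain; this illustrates the kind of fractal feature discussed above, not the paper's exact estimator, and the threshold and scale set are arbitrary.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal (box-counting) dimension of a binary mask."""
    size = min(mask.shape)
    scales = [2 ** k for k in range(1, int(np.log2(size)))]
    counts = []
    for s in scales:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(max(1, np.count_nonzero(blocks.any(axis=(1, 3)))))
    # Slope of log(count) vs log(1/scale) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)
    return slope

def gradient_domain_dimension(img, thresh=0.1):
    """Apply box counting to thresholded gradient magnitudes, i.e. compute a
    fractal-dimension feature in the gradient domain."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    return box_counting_dimension(grad > thresh * grad.max())
```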
Salt-and-pepper noise consists of outlier pixel values that significantly impair image structure and quality. Multiparent fractal image coding (MFIC) methods substantially exploit image redundancy by utilizing multiple domain blocks to approximate each range block, partially compensating for the information loss caused by noise. Motivated by this, we propose two novel image restoration methods based on MFIC to remove salt-and-pepper noise. The first method integrates Huber M-estimation into MFIC, resulting in an improved fractal coding approach that is robust to salt-and-pepper noise. The second method incorporates MFIC into a total variation (TV) regularization model comprising a data fidelity term, an MFIC term and a TV regularization term. An alternating iterative method based on the proximity operator is developed to solve the proposed model effectively. Experimental results demonstrate that the two proposed approaches achieve significantly better performance than traditional fractal coding methods.
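A minimal sketch of the TV-regularized part of such a model with a masked data-fidelity term (fidelity kept only at pixels not flagged as salt or pepper), solved here by plain gradient descent; the MFIC term and the proximity-operator solver are omitted, and all constants are illustrative.

```python
import numpy as np

def tv_restore_salt_pepper(noisy, lam=0.2, step=0.1, iters=300):
    """Restore a salt-and-pepper corrupted image (values in [0, 1]) with a
    masked data-fidelity + total-variation model, solved by gradient descent
    on a smoothed TV functional."""
    # Detect extreme values as noise candidates; keep fidelity only elsewhere.
    mask = (noisy > 0.0) & (noisy < 1.0)
    u = np.where(mask, noisy, 0.5)
    eps = 1e-6
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        div = (ux / mag - np.roll(ux / mag, 1, axis=1)
               + uy / mag - np.roll(uy / mag, 1, axis=0))
        grad = mask * (u - noisy) - lam * div
        u = u - step * grad
    return np.clip(u, 0.0, 1.0)
```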
We present an edge-preserving regularization scheme for the restoration of degraded images compressed by a lossy compression method, based on generalized finite automata (GFA) edge encoding and the iterative constrained least-squares regularization technique. In this scheme, the degraded image reconstructed from the lossy compression is treated as the input to the image restoration process, and edge information extracted from the source image is utilized as a priori knowledge for the subsequent reconstruction. To contain the overall bit rate incurred by the additional edge information, a generalized finite automata encoding technique is adopted to encode the bit-planes of the edge image; the GFA method ensures an efficient, adaptive and image-independent encoding of the edge image. Experiments have shown that the proposed scheme significantly improves both the objective and subjective quality of the reconstructed image over that of set partitioning in hierarchical trees by recovering more image details and edge structures at the same bit rates.
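For orientation, a minimal frequency-domain sketch of constrained least-squares restoration in its direct (non-iterative) form, assuming the blur PSF and regularization weight are known; the GFA edge-encoding component and the iterative variant used in the paper are not reproduced.

```python
import numpy as np

def cls_restore(degraded, psf, gamma=0.01):
    """Constrained least-squares restoration in the frequency domain.

    Minimises ||laplacian(f)||^2 subject to a constraint on ||g - h*f||^2,
    which leads to F = conj(H) G / (|H|^2 + gamma |P|^2), with P the DFT of
    the Laplacian kernel. gamma trades smoothness against data fidelity.
    """
    shape = degraded.shape
    H = np.fft.fft2(psf, s=shape)
    lap = np.zeros(shape)
    lap[:3, :3] = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
    P = np.fft.fft2(np.roll(lap, (-1, -1), axis=(0, 1)))
    G = np.fft.fft2(degraded)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```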
In this paper, we present a novel noise suppression and detail preservation algorithm. The test image is first pre-processed through a multiresolution analysis employing the discrete wavelet transform. We then apply a fast and robust total variation technique that incorporates a statistical representation in the style of maximum likelihood estimation. Finally, we compare the proposed approach with current state-of-the-art denoising methods on synthetic and real images. The results demonstrate the encouraging performance of our algorithm.
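A minimal sketch of a wavelet-domain pre-processing step of the kind described above: soft-thresholding of the DWT detail coefficients with the universal threshold (using PyWavelets); the wavelet, decomposition level and threshold rule are illustrative assumptions, and the subsequent total variation stage is not shown.

```python
import numpy as np
import pywt

def wavelet_prethreshold(img, wavelet="db4", level=2):
    """Multiresolution pre-processing: soft-threshold the detail coefficients
    of a 2-D DWT using the universal threshold before further processing."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))
    new_coeffs = [coeffs[0]]
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)
```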
In this paper, a novel restoration algorithm based on iterative loops and data mining is proposed to restore the original object image from a few frames of aero-optics images degraded by the turbulence medium. Based on the iterative loops, iterative mathematical models are built to estimate the random turbulent optical point spread functions and the object image. A series of restoration experiments on both simulated and real aero-optics degraded images was performed to examine the proposed algorithm in a computerized simulation environment; the results show that the proposed algorithm outperforms previous algorithms and is therefore effective for practical applications.
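A hedged sketch of the joint image/PSF estimation loop, implemented here as the standard blind Richardson-Lucy alternation rather than the paper's exact models; multi-frame processing and the data-mining step are omitted, and the PSF size and iteration count are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_richardson_lucy(degraded, psf_size=15, iters=30):
    """Alternately estimate the object image and the unknown PSF from a single
    turbulence-degraded frame. Assumes an odd psf_size smaller than the image
    and a non-negative float image."""
    eps = 1e-8
    obj = np.full(degraded.shape, degraded.mean(), dtype=float)
    psf = np.full((psf_size, psf_size), 1.0 / psf_size ** 2)
    cy, cx, r = degraded.shape[0] // 2, degraded.shape[1] // 2, psf_size // 2
    for _ in range(iters):
        # Richardson-Lucy update of the object estimate.
        ratio = degraded / (fftconvolve(obj, psf, mode="same") + eps)
        obj *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        # Richardson-Lucy-style update of the PSF estimate (central crop).
        ratio = degraded / (fftconvolve(obj, psf, mode="same") + eps)
        corr = fftconvolve(ratio, obj[::-1, ::-1], mode="same")
        psf *= corr[cy - r:cy + r + 1, cx - r:cx + r + 1]
        psf /= psf.sum() + eps
    return obj, psf
```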
Anisotropic partial differential equation (PDE)-based image restoration schemes employ a local edge indicator function, typically based on gradients. In this paper, an alternative pixel-wise adaptive diffusion scheme is proposed. It uses a spatial function that supplies better edge information to the diffusion process, avoids the over-locality problem of gradient-based schemes and preserves discontinuities coherently. The scheme satisfies the scale-space axioms for a multiscale diffusion scheme and uses a well-posed regularized total variation (TV) scheme along with Perona-Malik type functions. A median-based weight function is used to handle the impulse noise case. Numerical results show the promise of such an adaptive approach on real noisy images.
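For comparison, a minimal sketch of classical Perona-Malik diffusion with a gradient-based edge-stopping function, i.e. the kind of local edge indicator the proposed pixel-wise adaptive scheme replaces; the conductance parameter and step size are illustrative.

```python
import numpy as np

def perona_malik(img, iters=50, kappa=0.1, step=0.2):
    """Classical Perona-Malik anisotropic diffusion with a gradient-based
    edge-stopping function g(|grad|) = exp(-(|grad|/kappa)^2)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(iters):
        # Forward differences in the four compass directions.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```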
Although many image restoration methods have been proposed recently, they do not produce a good reconstructed image when the missing region is large or contains lines and edges, because the direction of lines is not easy to predict. This paper proposes a line prediction method to predict the lines and edges in the missing regions. Furthermore, gray levels usually vary smoothly in images, so smooth gray-level detection is proposed to restore the missing regions from the surrounding existing regions according to the smoothness of pixels in the image. This paper therefore proposes a novel image restoration method based on smooth gray-level detection and line prediction. Experimental results reveal that the proposed method outperforms other methods.
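A minimal sketch of the smooth gray-level idea: missing pixels are filled by repeatedly averaging their neighbours so that the smooth variation of the surrounding known regions propagates into the hole; the line/edge prediction component is not modelled here, and the iteration count is illustrative.

```python
import numpy as np

def smooth_fill(img, missing_mask, iters=500):
    """Fill missing regions by repeatedly averaging the four neighbours
    (a discrete harmonic fill driven by the surrounding known pixels)."""
    u = img.astype(float).copy()
    u[missing_mask] = u[~missing_mask].mean()   # neutral initial guess
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[missing_mask] = avg[missing_mask]     # update only the missing pixels
    return u
```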
This paper proposes a method to remove JPEG noise artifacts from frame sequences. Through extensive experimental results we show how an online system with periodic noise estimation functionality can estimate the real frame noise even when the images are in JPEG format. We present the mathematical basis of the methodology and show on real content that reliable measurements can be obtained. We also present results obtained on a real network camera and show that our method provides a much better estimate of the noise standard deviation than common practice, with comparable inter-channel and spatial intra-channel correlation estimates. In addition, we provide guidelines for capturing the datasets needed to apply computer vision tasks. Our approach exploits the well-known stochastic linearization phenomenon, which we prove is present in our case.
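For orientation, a sketch of a standard robust noise estimator (median absolute deviation of the finest diagonal wavelet band); it illustrates what estimating the noise standard deviation of a frame involves but is not the paper's JPEG-aware methodology.

```python
import numpy as np
import pywt

def estimate_noise_sigma(frame):
    """Classical robust noise estimate: sigma = median(|HH|) / 0.6745, where
    HH is the finest diagonal detail band of a single-level 2-D DWT."""
    _, (_, _, cD) = pywt.dwt2(frame.astype(float), "db1")
    return float(np.median(np.abs(cD)) / 0.6745)
```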
In this work, we introduce a feature-adaptive second-order p-norm filter with local constraints for image restoration and texture preservation. The p-norm value of the filter is chosen adaptively between 1 and 2 in a local region based on the regional image characteristics. The filter behaves like mean curvature motion (MCM) [A. Marquina and S. Osher, SIAM Journal on Scientific Computing 22, 387–405 (2000)] in regions where the p-norm value is 1 and switches to a Laplacian filter in the remaining regions (where the p-norm value is 2). The proposed approach considerably reduces the staircase effect and effectively removes noise from images while deblurring them. The noise is assumed to be Gaussian distributed (with zero mean and variance σ²) and the blur to be linearly shift invariant (out-of-focus). The filter converges at a faster rate with a semi-implicit Crank–Nicolson scheme. The regularization parameter is initialized and updated based on local image features, so the filter preserves edges, structures, textures and fine details very well. The method is applied to different kinds of images with different characteristics; we show the response of the filter to various kinds of images and numerically quantify its performance in terms of standard statistical measures.
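A simplified explicit-scheme sketch of the pixel-wise p switch: a curvature-driven (p = 1) update where a local-variance indicator flags edges or texture, and a Laplacian (p = 2) update elsewhere; the semi-implicit Crank–Nicolson scheme and the local constraints of the paper are not reproduced, and the threshold is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_p_filter(img, iters=30, step=0.1, var_thresh=1e-3, eps=1e-6):
    """Pixel-wise switch between a curvature (p = 1) term near edges/textures
    and a Laplacian (p = 2) term in flat regions, chosen by local variance."""
    u = img.astype(float).copy()
    for _ in range(iters):
        local_var = uniform_filter(u ** 2, 5) - uniform_filter(u, 5) ** 2
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        curvature = (ux / mag - np.roll(ux / mag, 1, 1)
                     + uy / mag - np.roll(uy / mag, 1, 0))            # p = 1 term
        laplacian = (np.roll(u, -1, 0) + np.roll(u, 1, 0)
                     + np.roll(u, -1, 1) + np.roll(u, 1, 1) - 4 * u)  # p = 2 term
        u += step * np.where(local_var > var_thresh, curvature, laplacian)
    return u
```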
This paper proposes an efficient noise reduction method for gray and color images contaminated by salt-and-pepper noise. In the proposed method, the target (noisy) pixels are replaced with values that are more compatible with the adjacent pixels. The algorithm is applied to noisy Lena and Mansion images contaminated by salt-and-pepper noise with noise intensities of 0.1 and 0.2. Although the method is developed for images contaminated by salt-and-pepper noise, it can also reduce other types of noise, though it is most efficient against salt-and-pepper noise. Both numerical and visual comparisons are demonstrated in the experimental simulations, and the results show that the proposed algorithm successfully removes impulse noise from images contaminated by salt-and-pepper noise.
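A minimal sketch of this kind of replacement rule for an 8-bit grayscale image: extreme-valued pixels are treated as noisy and replaced with the median of the noise-free pixels in their neighbourhood; the 0/255 noise test and the window size are illustrative simplifications of the compatibility criterion.

```python
import numpy as np

def remove_salt_pepper(img, window=3):
    """Replace noisy (extreme-valued) pixels with the median of the noise-free
    pixels in their neighbourhood, leaving clean pixels untouched."""
    out = img.astype(float).copy()
    noisy = (img == 0) | (img == 255)
    r = window // 2
    ys, xs = np.nonzero(noisy)
    for y, x in zip(ys, xs):
        patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        good = patch[(patch != 0) & (patch != 255)]
        out[y, x] = np.median(good) if good.size else np.median(patch)
    return out.astype(img.dtype)
```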
Restoring a degraded image during image analysis typically requires many iterations; these iterations involve long waiting times and slow scanning, resulting in inefficient image restoration. A small number of measurements is enough to recover an image in good condition. Owing to tree sparsity, a 2D wavelet tree reduces the number of coefficients and iterations needed to restore the degraded image. All the wavelet coefficients are extracted, with overlaps, into low and high sub-band spaces and ordered so that they are decomposed along a tree-ordered structured path. Some articles have addressed the problems with tree sparsity and total variation (TV), but few authors have endorsed the benefits of tree sparsity. In this paper, a spatial variation regularization algorithm based on tree ordering is implemented to adapt the window size and variation estimators, reducing the loss of image information and addressing the image smoothing operation. The acceptance rate of the tree-structured path relies on local variation estimators to regularize the performance parameters and update them to restore the image. For this, a Localized Total Variation (LTV) method is proposed and implemented on a 2D wavelet tree-ordered structured path based on the proposed image smooth adjustment scheme. Finally, a reliable reordering algorithm is proposed to reorder the set of pixels and increase the reliability of the restored image. Simulation results clearly show that the proposed method improves performance compared with existing image restoration methods.
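A hedged sketch of the localized-TV idea: a windowed local-variation estimator produces a spatially varying regularization weight, which is then used in a single gradient step of a TV model; the weighting rule and constants are illustrative, and the tree-ordering and pixel-reordering stages are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def localized_tv_step(u, noisy, window=7, lam0=0.2, step=0.1, eps=1e-6):
    """One gradient step of a TV model whose regularization weight varies with
    a windowed local-variation estimator: strong smoothing in flat regions,
    weak smoothing where local variation is large (edges/texture)."""
    local_var = uniform_filter(u ** 2, window) - uniform_filter(u, window) ** 2
    lam = lam0 / (1.0 + local_var / (local_var.mean() + eps))
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    div = (ux / mag - np.roll(ux / mag, 1, 1)
           + uy / mag - np.roll(uy / mag, 1, 0))
    return u - step * ((u - noisy) - lam * div)
```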
Often in practice, during image acquisition the acquired image is degraded by factors such as noise, motion blur, camera mis-focus and atmospheric turbulence, rendering it unsuitable for further analysis or processing. To improve the quality of such degraded images, a double hybrid restoration filter is proposed that processes two copies of the input image and fuses the outputs, combining restoration with the concept of image fusion. The first copy is processed by applying deconvolution with the Wiener filter (DWF) twice and decomposing the output using the discrete wavelet transform (DWT); the second copy is processed in parallel by applying deconvolution with the Lucy–Richardson filter (DLR) twice, followed by the same decomposition. The proposed filter performs better than the DWF and DLR filters on both blurry and noisy images. It is compared with several standard deconvolution algorithms and state-of-the-art restoration filters using seven image quality assessment parameters. Simulation results confirm the success of the proposed algorithm, with impressive visual and quantitative results.
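A minimal sketch of the double hybrid pipeline using scikit-image's Wiener and Richardson-Lucy deconvolution and a single-level DWT fusion (average the approximation bands, keep the stronger detail coefficients); the balance parameter, iteration count, wavelet and fusion rule are illustrative choices, and the seven quality metrics are not computed.

```python
import numpy as np
import pywt
from skimage.restoration import wiener, richardson_lucy

def double_hybrid_restore(degraded, psf):
    """Restore one copy with two passes of Wiener deconvolution and the other
    with two passes of Lucy-Richardson, then fuse in the wavelet domain.

    degraded: grayscale float image in [0, 1]; psf: known blur kernel.
    """
    a = wiener(wiener(degraded, psf, balance=0.1), psf, balance=0.1)
    b = richardson_lucy(degraded, psf, 10)
    b = richardson_lucy(np.clip(b, 0.0, 1.0), psf, 10)
    (cAa, (cHa, cVa, cDa)) = pywt.dwt2(a, "db2")
    (cAb, (cHb, cVb, cDb)) = pywt.dwt2(b, "db2")
    fuse = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = (0.5 * (cAa + cAb), (fuse(cHa, cHb), fuse(cVa, cVb), fuse(cDa, cDb)))
    return np.clip(pywt.idwt2(fused, "db2"), 0.0, 1.0)
```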