This paper proposes an image effect creation system that can produce various types of effects for images, satisfying users' differing needs for specific design goals. The operations needed to create these effects, including a flow-based bilateral filter, a flow-based Gaussian filter, curve-shaped filters, line drawing, a pencil texture generator, and a modified shock filter, are first presented, and the various image effects that the proposed system can produce are illustrated. Experiments demonstrate a rich variety of image effects.
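As a rough illustration of how such an effect pipeline can be assembled, the sketch below combines repeated bilateral filtering with a difference-of-Gaussians line drawing to produce a cartoon-like result using OpenCV. The paper's flow-based filters follow an edge tangent flow field and are not reproduced here; the filter parameters, DoG sigmas, and file names are illustrative assumptions.

```python
import cv2
import numpy as np

def cartoon_effect(bgr):
    # Edge-preserving smoothing (a plain stand-in for the flow-based bilateral filter).
    smooth = bgr.copy()
    for _ in range(3):
        smooth = cv2.bilateralFilter(smooth, d=9, sigmaColor=30, sigmaSpace=7)

    # Difference-of-Gaussians line drawing on the luminance channel.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    fine = cv2.GaussianBlur(gray, (0, 0), sigmaX=1.0)
    coarse = cv2.GaussianBlur(gray, (0, 0), sigmaX=1.6)
    strokes = ((fine - 0.98 * coarse) > 0).astype(np.float32)  # 0 along edges

    # Composite: smoothed colors darkened where the line drawing places strokes.
    return (smooth.astype(np.float32) * strokes[..., None]).astype(np.uint8)

result = cartoon_effect(cv2.imread("input.jpg"))   # placeholder file name
cv2.imwrite("cartoon.jpg", result)
```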
Images captured in hazy weather are usually of poor quality, which degrades the performance of outdoor computer imaging systems; haze removal is therefore critical for outdoor imaging applications. In this paper, a fast single-image dehazing method based on a new, effective image prior, the luminance dark prior, is proposed. This prior arises from the observation that, in the luminance channel of a haze-free outdoor image in YUV color space, most local patches contain pixels of very low intensity, which is analogous to the dark channel prior introduced by He et al. for RGB images. Using this prior, a transmission map estimating the thickness of the haze is computed directly from the luminance component of the YUV color image. To obtain a transmission map with clear edge outlines and the depth layering of scene objects, a joint filter combining a bilateral filter and the Laplacian operator is employed. Experimental results demonstrate that the proposed method reveals details and recovers vivid colors even in heavily hazy regions, and provides better visual quality than many existing methods.
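A minimal sketch of this kind of luminance-based dehazing, assuming OpenCV and NumPy, is given below. The patch size, haze-retention factor omega, atmospheric-light estimate, and lower bound t0 are common defaults rather than the paper's values, and the joint bilateral/Laplacian refinement is simplified to a plain bilateral filter on the transmission map.

```python
import cv2
import numpy as np

def dehaze_luminance(bgr, patch=15, omega=0.95, t0=0.1):
    img = bgr.astype(np.float32) / 255.0
    y = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)[..., 0].astype(np.float32) / 255.0

    # Luminance "dark" map: minimum of Y over a local patch (gray-scale erosion).
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(y, kernel)

    # Atmospheric light estimated from the brightest pixels of the dark map.
    idx = np.unravel_index(np.argsort(dark, axis=None)[-100:], dark.shape)
    A = img[idx].max(axis=0)                       # per-channel estimate

    # Transmission estimated directly from the luminance dark map, then refined.
    t = 1.0 - omega * dark / max(A.max(), 1e-6)
    t = cv2.bilateralFilter(t.astype(np.float32), d=9, sigmaColor=0.1, sigmaSpace=15)
    t = np.clip(t, t0, 1.0)[..., None]

    # Scene radiance recovery J = (I - A) / t + A.
    J = (img - A) / t + A
    return (np.clip(J, 0, 1) * 255).astype(np.uint8)
```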
Three-dimensional reconstruction of teeth plays an important role in dental implant surgery. However, the tissue surrounding the teeth and the noise generated during image acquisition seriously degrade the reconstruction results and must be reduced or eliminated. Combining the advantages of the wavelet transform and bilateral filtering, this paper proposes an image denoising method that removes noise while preserving image edge details. Noise in the high-frequency subbands is suppressed by locally adaptive thresholding, and noise in the low-frequency subband is filtered by bilateral filtering. Peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and 3D reconstruction using iso-surface extraction are used to evaluate the denoising effect. The experimental results show that the proposed method outperforms wavelet denoising and bilateral filtering alone, and that the reconstruction results meet the requirements of clinical diagnosis.
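A minimal sketch of such a combined wavelet/bilateral denoiser, assuming PyWavelets (pywt), OpenCV, and NumPy, is shown below. The wavelet family, decomposition level, and the universal soft threshold are placeholder choices standing in for the paper's locally adaptive thresholding.

```python
import cv2
import numpy as np
import pywt

def denoise_wavelet_bilateral(gray, wavelet="db4", level=2):
    img = gray.astype(np.float32)
    coeffs = pywt.wavedec2(img, wavelet, level=level)

    # Noise estimate from the finest diagonal subband (robust median rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))

    # Soft-threshold the high-frequency detail subbands.
    denoised = [coeffs[0]]
    for detail in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thr, mode="soft") for d in detail))

    # Bilateral filtering of the low-frequency (approximation) subband.
    denoised[0] = cv2.bilateralFilter(coeffs[0].astype(np.float32),
                                      d=5, sigmaColor=25, sigmaSpace=5)

    out = pywt.waverec2(denoised, wavelet)
    return np.clip(out[:gray.shape[0], :gray.shape[1]], 0, 255).astype(np.uint8)
```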
A novel optical flow estimation method is proposed in this paper that addresses the credible estimation of optical flow and the prevention of over-smoothing across motion boundaries. Our main contribution is that the optical flow is estimated by a nonlinear filtering process instead of an energy minimization process; the latter usually requires the smoothing constraint to take some convex and differentiable form. In this way, the proposed approach avoids the restrictions imposed by regularization, so the nonlinear filter can be chosen from a more flexible family of forms, which helps to handle flow discontinuities more effectively. We modify and extend the scalar 2D bilateral filter to the optical flow field as the desired nonlinear filter. Qualitative and quantitative results show that the new method produces reliable results.
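The sketch below, assuming NumPy only, shows one simple way a 2D bilateral filter can be extended to a two-component flow field: spatial and intensity-similarity weights are computed per pixel and applied jointly to both flow components. The window size and sigmas are illustrative, and the paper's full scheme, in which the filter replaces the regularizer inside the estimation loop, is not reproduced.

```python
import numpy as np

def bilateral_filter_flow(u, v, gray, radius=3, sigma_s=2.0, sigma_r=10.0):
    """Smooth the flow field (u, v) with weights derived from the image `gray`."""
    h, w = gray.shape
    u_out, v_out = u.copy(), v.copy()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))

    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = gray[y - radius:y + radius + 1, x - radius:x + radius + 1]
            # Range weight from intensity similarity: motion boundaries that
            # coincide with image edges receive little cross-boundary smoothing.
            rng = np.exp(-((patch - gray[y, x])**2) / (2.0 * sigma_r**2))
            wgt = spatial * rng
            wgt /= wgt.sum()
            u_out[y, x] = (wgt * u[y - radius:y + radius + 1,
                                   x - radius:x + radius + 1]).sum()
            v_out[y, x] = (wgt * v[y - radius:y + radius + 1,
                                   x - radius:x + radius + 1]).sum()
    return u_out, v_out
```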
We propose that the Magno (M)-channel filter, belonging to the extended classical receptive field (ECRF) model, provides us with "vision at a glance" by performing smoothing with edge preservation. We compare the performance of the M-channel filter with that of the well-known bilateral filter in achieving such "vision at a glance", which is akin to image preprocessing in the computer vision domain. We find that at higher noise levels the M-channel filter reduces noise while preserving edge details better than the bilateral filter. The M-channel filter is also significantly simpler, and therefore faster, than the bilateral filter. Overall, the M-channel filter enables us to model, simulate, and better understand some of the initial mechanisms in the visual pathway, while simultaneously providing a fast, biologically inspired algorithm for digital image preprocessing.
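The harness below, assuming OpenCV and NumPy, sketches the kind of comparison described: Gaussian noise is added, an edge-preserving smoother is applied, and the result is scored with PSNR. Only the bilateral baseline is shown, since the M-channel (ECRF) filter is not a standard library routine; it would be plugged in as another smoother callable. The noise level, filter parameters, and input file name are illustrative assumptions.

```python
import cv2
import numpy as np

def psnr(ref, test):
    mse = np.mean((ref.astype(np.float32) - test.astype(np.float32)) ** 2)
    return 10.0 * np.log10(255.0**2 / max(mse, 1e-12))

def evaluate(clean_gray, smoother, noise_sigma=25.0, seed=0):
    # Corrupt the image with additive Gaussian noise, then smooth and score.
    rng = np.random.default_rng(seed)
    noisy = clean_gray.astype(np.float32) + rng.normal(0, noise_sigma, clean_gray.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)
    return psnr(clean_gray, smoother(noisy))

bilateral = lambda img: cv2.bilateralFilter(img, d=9, sigmaColor=50, sigmaSpace=5)
img = cv2.imread("test_image.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
print("bilateral PSNR:", evaluate(img, bilateral))
```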
In this paper we present a new, efficient iterative nonlinear scheme for recovering a piecewise constant image from an observed image containing additive noise. We apply an adaptive neighborhood filter, drawn from robust statistics, which completely rejects outliers whose deviation exceeds a certain constant. We prove that iterated application of the scheme leads to a piecewise constant image; this observation generalizes the known results on the convergence of nonlinear diffusion schemes to a constant steady state. Moreover, we show that the partition of the image that determines the piecewise constant steady state after an infinite iteration process can already be found after a finite number of iteration steps. This result can be used for a fast approximation of the piecewise constant image by a mean-value procedure. We examine the relations of our scheme to averaging and bilateral filtering, diffusion filtering, and wavelet shrinkage. Numerical experiments illustrate the performance of the algorithm.
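A minimal sketch of such an iterated adaptive neighborhood filter, assuming NumPy, is given below: each pixel is replaced by the mean of the neighbors whose difference from it stays below a constant, and larger differences are rejected outright. The window size, threshold, and stopping rule are illustrative choices, not the paper's.

```python
import numpy as np

def adaptive_neighborhood_step(img, lam=20.0, radius=1):
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            mask = np.abs(patch - img[y, x]) <= lam   # reject outliers completely
            out[y, x] = patch[mask].mean()            # mean over accepted neighbors
    return out

def iterate_to_piecewise_constant(img, lam=20.0, iters=50, tol=1e-3):
    cur = img.astype(np.float64)
    for _ in range(iters):
        nxt = adaptive_neighborhood_step(cur, lam)
        if np.max(np.abs(nxt - cur)) < tol:           # (near-)steady state reached
            break
        cur = nxt
    return cur
```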
This paper describes the design of a low-cost, high-performance wheeze recognition system. First, respiratory sounds are captured, amplified, and filtered by an analog circuit, then digitized through a PC sound card and recorded in accordance with the Computerized Respiratory Sound Analysis (CORSA) standards. Since the proposed wheeze detection algorithm is based on spectrogram processing of respiratory sounds, the spectrograms generated from the recorded sounds are passed through a 2D bilateral filter for edge-preserving smoothing. Finally, the smoothed spectrograms go through an edge detection procedure to recognize wheeze sounds.
Experimental results show a high sensitivity of 0.967 and a specificity of 0.909 in qualitative wheeze recognition. Owing to its high efficiency, strong performance, and ease of implementation, this wheeze recognition system could be of interest for the clinical monitoring of asthma patients and for the study of physiological mechanisms in the respiratory airways.
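The sketch below, assuming SciPy, OpenCV, and NumPy, illustrates the spectrogram, bilateral-smoothing, and edge-detection stages of the pipeline described above. The FFT size, overlap, filter parameters, and Canny thresholds are illustrative, and neither the CORSA-compliant recording chain nor the paper's specific wheeze-decision rules are reproduced.

```python
import cv2
import numpy as np
from scipy.signal import spectrogram

def wheeze_edge_map(audio, fs, nperseg=1024, noverlap=512):
    # Log-magnitude spectrogram of the respiratory sound.
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=nperseg, noverlap=noverlap)
    log_s = 10.0 * np.log10(Sxx + 1e-12)

    # Rescale to 8-bit so the OpenCV bilateral filter and Canny can be applied.
    norm = cv2.normalize(log_s, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Edge-preserving smoothing of the spectrogram.
    smooth = cv2.bilateralFilter(norm, d=7, sigmaColor=40, sigmaSpace=5)

    # Edge detection; sustained horizontal ridges are wheeze candidates.
    edges = cv2.Canny(smooth, 50, 150)
    return f, t, edges
```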