Previous dehazing techniques suffer from underexposure, insufficient detail enhancement and residual haze. These issues arise for different reasons and are very difficult to resolve with a single algorithm. Therefore, a three-stage dehazing model (TSDM) comprising pre-processing, dehazing and post-processing modules is proposed in this paper. The improved auto-color transfer (IACT) approach is presented as the pre-processing step to efficiently enhance the hazy image and overcome underexposure. Adaptive dehazing (AD) is also developed, which takes the global characteristics of the hazy image as a parameter to adaptively enhance details. Moreover, adaptive contrast enhancement (ACE) is proposed as a post-processing operation that adaptively fuses the dehazed image with its contrast-enhanced version to effectively improve contrast. The IACT operation is performed on the hazy image only when dark regions are detected, and ACE is applied only when the dehazed image exhibits residual haze. Based on these conditions, the proposed work can be implemented in four distinct ways: using only the AD technique; using IACT and AD; using AD and ACE; or using IACT, AD and ACE together. The proposed TSDM is experimentally evaluated on several databases and shows improved results compared with previous techniques.
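The abstract does not give the fusion rule, so the following is only a minimal sketch of what an ACE-style post-processing step could look like: the dehazed image and a contrast-enhanced version are blended with a weight derived from a global statistic of the dehazed image, so low-contrast (residual-haze) results receive a stronger correction. All function names, the percentile stretch and the 0.25 contrast reference are illustrative assumptions, not the paper's method.

```python
import numpy as np

def contrast_stretch(img, low_pct=1.0, high_pct=99.0):
    """Simple percentile-based contrast stretch to [0, 1] (illustrative)."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def adaptive_contrast_enhancement(dehazed, strength=1.0):
    """Fuse the dehazed image with its contrast-stretched version.

    dehazed: float array in [0, 1], HxW or HxWx3.
    The fusion weight grows as the global contrast (std. dev.) shrinks,
    i.e. residual haze leads to a stronger enhancement.
    """
    enhanced = contrast_stretch(dehazed)
    global_contrast = float(dehazed.std())          # global characteristic
    weight = np.clip(strength * (0.25 - global_contrast) / 0.25, 0.0, 1.0)
    return (1.0 - weight) * dehazed + weight * enhanced

# Usage: only run when residual haze is detected (low global contrast).
# out = adaptive_contrast_enhancement(dehazed_img)  # dehazed_img in [0, 1]
```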
Photoacoustic computed tomography (PACT) is an innovative biomedical imaging technique that has gained significant application in biomedicine due to its ability to visualize optical contrast with high resolution and deep tissue penetration. However, the inherent challenges associated with photoacoustic signal excitation, propagation and detection often result in suboptimal image quality. To overcome these limitations, researchers have developed various advanced algorithms that span the entire image reconstruction pipeline. This review paper presents a detailed analysis of the latest advancements in PACT algorithms and synthesizes them into a coherent framework. We provide a tripartite analysis, from signal processing through reconstruction to image processing, covering a spectrum of techniques. The principles and methodologies, as well as their applicability and limitations, are thoroughly discussed. The primary objective of this study is to provide a thorough review of advanced algorithms applicable to PACT, offering both theoretical foundations and practical guidance for enhancing the imaging performance of PACT.
The results of a strain gradient finite element model of polycrystalline plastic deformation in an HCP alloy were analysed in terms of orientation-related meso-scale grain groups. The predictions for meso-scale elastic strains were post-processed to construct energy dispersive diffraction peak patterns. Synchrotron X-ray polycrystalline diffraction was then employed to experimentally record multiple peaks from deformed samples of Ti-6Al-4V alloy. Model parameters were adjusted to provide the best simultaneous match to multiple peaks in terms of intensity, position and shape. The framework provides a rigorous means of validating polycrystal plasticity finite element models. The study represents an example of the parallel development of modelling and experimental tools that is useful for studying the effects of statistically stored dislocations (SSDs) and geometrically necessary dislocations (GNDs) on the deformation behaviour of (poly)crystals.
In this paper, a hybrid post-processing system for improving the performance of Handwritten Chinese Character Recognition (HCCR) is presented. To remove two kinds of frequently encountered errors in the recognition result, namely mis-recognized characters and unrecognized characters, our hybrid three-stage post-processing system exploits both the confusing-character characteristics of the recognizer and contextual linguistic information. In the first stage, given a candidate sequence, the confusing-character set and a statistical noisy-channel model are employed to identify the most promising candidate character and to append possibly unrecognized similar-shaped characters to the candidate character set. In the second stage, dictionary-based approximate word matching is conducted to further append linguistically plausible characters to the candidate character set and to bind the candidate characters into a word lattice. Finally, in the third stage, a Chinese word bigram Markov model is employed to identify the most promising sentence by selecting plausible words from the word lattice.
On average, our system achieves a 5.1% improvement in the recognition rate for the first candidate, when the original character recognition rate of an online HCCR engine is 90% for the first candidate and 95% for the top-10 candidates.
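To make the third stage concrete, the sketch below shows a generic dynamic-programming search for the best word sequence in a character-position lattice under a word bigram model. The lattice representation and the bigram scoring function are assumptions for illustration; they are not the paper's data structures.

```python
import math
from collections import defaultdict

def best_path(lattice, bigram_logprob, start="<s>"):
    """Select the most plausible word sequence from a word lattice.

    lattice: list over character positions; lattice[i] is a list of
        (word, span_length) candidates starting at position i.
    bigram_logprob(prev, word): returns log P(word | prev).
    Assumes the lattice covers the whole input with at least one path.
    """
    n = len(lattice)
    # best[i][prev] = (score, backpointer) over paths consuming i characters
    best = [defaultdict(lambda: (-math.inf, None)) for _ in range(n + 1)]
    best[0][start] = (0.0, None)
    for i in range(n):
        for prev, (score, _) in list(best[i].items()):
            if score == -math.inf:
                continue
            for word, length in lattice[i]:
                cand = score + bigram_logprob(prev, word)
                if cand > best[i + length][word][0]:
                    best[i + length][word] = (cand, (i, prev))
    # Trace back from the best word that ends at the final position.
    word, (score, back) = max(best[n].items(), key=lambda kv: kv[1][0])
    words = []
    while back is not None:
        words.append(word)
        i, word = back
        _, back = best[i][word]
    return list(reversed(words)), score
```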
This paper presents a post-processing system for improving the recognition rate of a Handwritten Chinese Character Recognition (HCCR) device. This three-stage hybrid post-processing system reduces the misclassification and rejection rates common in the single-character recognition phase. The proposed system is novel in two respects: first, it reduces the misclassification rate by applying a dictionary look-up strategy that binds the candidate characters into a word lattice and appends linguistically plausible characters to the candidate set; second, it identifies promising sentences by employing a distant Chinese word bigram model with a maximum distance of three to select plausible words from the word lattice. These sentences are then output as the upgraded result. Compared with one of our previous works on single Chinese character recognition, the proposed system improves the absolute recognition rate by 12%.
In this paper, we propose a new algorithm called LL-Diff, which differs from traditional enhancement methods in that it introduces Langevin-dynamics sampling. This sampling approach simulates the motion of particles in complex environments and can better handle noise and detail under low-light conditions. We also incorporate a causal attention mechanism to enforce causality and address confounding effects; this mechanism captures local information better while avoiding over-enhancement. Experiments on the LOL-V1 and LOL-V2 datasets show that LL-Diff significantly improves computational speed and several evaluation metrics, demonstrating the superiority and effectiveness of our method for low-light image enhancement tasks. The code will be released on GitHub upon acceptance of the paper.
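For readers unfamiliar with the sampler the abstract refers to, the snippet below is a generic Langevin-dynamics update loop, shown only to illustrate the kind of iteration LL-Diff builds on; the actual score network, noise schedule and conditioning used in the paper are not reproduced here, and the toy score function is an assumption.

```python
import numpy as np

def langevin_sample(score_fn, x0, step_size=1e-2, n_steps=100, rng=None):
    """Iterate x <- x + (step/2) * score(x) + sqrt(step) * noise."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step_size * score_fn(x) + np.sqrt(step_size) * noise
    return x

# Toy usage: sample from a standard Gaussian, whose score function is -x.
samples = langevin_sample(lambda x: -x, x0=np.zeros(8), n_steps=500)
```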
Random numbers are important for the security of cryptographic applications. In this study, a secure and efficient generator for random numbers is proposed. The first part of the generator is a true random number generator consisting of chaotic systems implemented on an FPGA. The second part is a post-processing algorithm used to overcome problems that arise from the generator or from environmental factors. As the post-processing algorithm, Keccak, the latest standardized hash algorithm, was rearranged and used. Random numbers produced with the proposed approach meet the security requirements of cryptographic applications. Furthermore, the NIST 800-22 test suite and an autocorrelation test are used to confirm that the generated numbers have no statistical weaknesses; the successful test results demonstrate their security. An important advantage of the proposed generator is that it causes no data loss and operates at 100% efficiency, whereas data loss can reach 70% in some post-processing algorithms.
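The paper's rearranged Keccak construction is not described in the abstract; the sketch below only illustrates the general idea of whitening raw TRNG output with a Keccak-family hash, using SHA3-256 (the standardized form of Keccak) from Python's hashlib. The block size, the placeholder input and the one-block-in, one-digest-out layout are assumptions for illustration.

```python
import hashlib

def postprocess(raw_bytes, block_size=32):
    """Hash consecutive raw blocks with SHA3-256.

    When block_size equals the digest size (32 bytes) and the input length
    is a multiple of block_size, the output length equals the input length.
    """
    out = bytearray()
    for i in range(0, len(raw_bytes) - block_size + 1, block_size):
        out += hashlib.sha3_256(raw_bytes[i:i + block_size]).digest()
    return bytes(out)

# Usage with placeholder "raw" data standing in for the FPGA chaotic source.
raw = bytes(range(256)) * 4          # hypothetical raw generator output
random_bytes = postprocess(raw)
```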
Enhancing the content and structure of a web site is an important task that helps retain existing visitors and attract new ones (or customers). Web mining supports improving a web site's organization and content using data mining algorithms. In particular, web mining can be performed with a Self-Organizing Feature Map (SOFM or SOM), but this always requires an analysis phase carried out by experts. To help analysts perform this phase after SOFM training, many post-processing techniques have been developed (component planes, labels, etc.); however, none of them is useful when working in web mining for off-line enhancement of a web site. In this paper an algorithm called Reverse Cluster Analysis (RCA) is proposed. It aims to identify important web pages based on a self-organizing feature map when performing web text mining (WTM) and web usage mining (WUM). We successfully applied this technique to a real web site to show its effectiveness, and we extend previous work by comparing it with another unsupervised technique, an administrators' survey and an extended survey.
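As background only, the sketch below is a minimal SOFM training loop over page feature vectors (e.g. term frequencies or usage counts); it illustrates the map that RCA would post-process. The RCA algorithm itself is not described in the abstract and is not reproduced here; the grid size, learning-rate decay and neighbourhood function are generic assumptions.

```python
import numpy as np

def train_sofm(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small SOFM; data is an (n_pages, n_features) array."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    n_iter, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)        # best-matching unit
            frac = t / n_iter
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            nbh = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood kernel
            weights += lr * nbh * (x - weights)
            t += 1
    return weights

# pages = np.random.rand(100, 50)   # hypothetical page feature vectors
# sofm_weights = train_sofm(pages)
```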
In this paper, text detection in moving business cards, for helping visually impaired persons using a wearable camera, is presented. Current methods that help visually impaired persons read natural scene text, menus and book covers have three problems. First, they assume that the blind person is standing still and that the captured scene is not moving. Second, the blind person does not know whether the menu or book cover has been captured by the camera. Third, these methods cannot "see" business cards. The proposed method includes motion detection, thumb detection, motion-blur detection and text detection. Experimental results show that the proposed method reduces the time complexity: the reduction rates for the training and testing sets are 83.18% and 77.08%, respectively. The text detection rates for the training and testing sets are 93.44% and 94.58%, respectively. The frame rate is 53 fps for 320 × 240 video frames, and the program size is 102 KB, so it can run on mobile devices.
This paper is concerned with the application of the discontinuous Galerkin (DG) method to the solution of unsteady linear hyperbolic conservation laws on Cartesian grids. We present several superconvergence results and construct a robust recovery-type a posteriori error estimator for the directional derivative approximation based on an enhanced recovery technique. We first identify a special numerical flux and a suitable initial discretization for which the L²-norm of the solution is of order p+1 when tensor-product polynomials of degree at most p are used. We then prove superconvergence towards a particular projection of the directional derivative; the order of superconvergence is proved to be p+1/2. Moreover, we establish an 𝒪(h^{2p+1}) global superconvergence rate for the solution flux at the outflow boundary of the domain. We also provide a simple derivative recovery formula that yields an 𝒪(h^{p+1}) superconvergent approximation to the directional derivative. We use these superconvergence results to construct an asymptotically exact a posteriori error estimate for the directional derivative approximation by solving a local steady problem on each element. Finally, we prove that the a posteriori DG error estimate at a fixed time converges to the true error in the L²-norm at an 𝒪(h^{p+1}) rate. Our results are valid without flow-condition restrictions. Numerical examples validating these theoretical results are presented.
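The rates stated in the abstract can be collected in one place (tensor-product polynomials of degree at most p; this restates the abstract, not additional results from the paper):

```latex
\begin{align*}
  \text{solution, } L^2\text{-norm:}                   &\quad \mathcal{O}(h^{p+1}),\\
  \text{directional derivative vs.\ its projection:}   &\quad \mathcal{O}(h^{p+1/2}),\\
  \text{solution flux at the outflow boundary:}        &\quad \mathcal{O}(h^{2p+1}),\\
  \text{recovered directional derivative:}             &\quad \mathcal{O}(h^{p+1}),\\
  \text{error estimate vs.\ true error at fixed time:} &\quad \mathcal{O}(h^{p+1}).
\end{align*}
```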
In this paper we propose a new post-processing technique for edge detection problems. We use a novel fuzzy clustering algorithm to perform a global evaluation over binary images obtained as the outputs of edge detection algorithms such as Sobel or Canny. In a first step, the edges identified by the detection algorithm are modeled as edge candidates. Segments are then built by connecting the candidates that are connected to one another. The next step applies fuzzy clustering to select the good segments, which are kept as the final edges. We show the effectiveness of this technique over classical edge detectors in many scenarios, even without applying smoothing.
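The following is a minimal sketch of that pipeline: connected candidate pixels are grouped into segments, a feature is computed per segment, and a small fuzzy c-means pass keeps the segments in the "good" cluster. The single size-based feature, the 0.5 membership threshold and the two-cluster setting are illustrative assumptions; the paper's feature set and clustering details are not given in the abstract.

```python
import numpy as np
from scipy import ndimage

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Tiny fuzzy c-means: returns memberships U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

def filter_edges(binary_edges):
    """Keep only the segments assigned to the 'good' cluster."""
    labels, n = ndimage.label(binary_edges, structure=np.ones((3, 3)))
    if n < 2:
        return binary_edges
    sizes = ndimage.sum(binary_edges, labels, index=np.arange(1, n + 1))
    feats = np.log1p(sizes)[:, None]          # one illustrative feature per segment
    U, centers = fuzzy_cmeans(feats)
    good = np.argmax(centers[:, 0])           # cluster with larger segments
    keep = [i + 1 for i in range(n) if U[i, good] >= 0.5]
    return np.isin(labels, keep)
```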
To highlight the key problems and current results in the state of the art of optical character recognition (OCR), this paper describes and compares preprocessing, feature extraction and postprocessing techniques for commercial reading machines.
Problems related to handwritten and printed character recognition are pointed out, and the functions and operations of the major components of an OCR system are described.
Historical background on the development of character recognition is briefly given and the working of an optical scanner is explained.
The specifications of several recognition systems that are commercially available are reported and compared.