A boundary value problem associated with filtering, governed by differential equations with time-dependent coefficients and exhibiting a weak nonlinearity, is solved. The problem is split into two independent boundary value problems: a Goursat problem for the concentration of the sorbate, and a corresponding problem for the amount absorbed by the sorbent. Each problem is then solved separately by applying the Riemann transformation.
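Schematically (this is the standard setting of the Riemann method, not the paper's exact system), a Goursat problem prescribes data on the two characteristics of a linear hyperbolic equation:
\[
u_{xy} + a(x,y)\,u_x + b(x,y)\,u_y + c(x,y)\,u = f(x,y), \qquad u(x,0)=\varphi(x), \quad u(0,y)=\psi(y), \quad \varphi(0)=\psi(0),
\]
and the Riemann method represents the solution as boundary terms in \(\varphi\) and \(\psi\) plus
\[
\int_0^x\!\!\int_0^y R(\xi,\eta;x,y)\, f(\xi,\eta)\, d\xi\, d\eta,
\]
where the Riemann function \(R\) solves the adjoint equation with unit data on the characteristics through \((x,y)\).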
This paper proposes an efficient approach to human face detection and accurate facial feature location in a head-and-shoulders image. The method searches for an eye-pair candidate to serve as a base line, exploiting the high intensity contrast between the iris and the sclera. To locate the other facial features, the algorithm uses geometric knowledge of the human face anchored on the obtained eye-pair candidate. The face is finally verified using these located facial features. Owing to the Prune-and-Search and simple filtering techniques it applies, the proposed method achieves very promising performance in both face detection and facial feature location.
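As an illustration of the base-line search, the following sketch (thresholds and geometric bounds are hypothetical, not the authors' values) pairs dark blobs that lie roughly on a horizontal line:

```python
import numpy as np
from scipy import ndimage

def eye_pair_candidates(gray, dark_thresh=60, max_tilt=0.15,
                        min_sep=10, max_sep=120):
    """Return centroid pairs of dark blobs that could form an eye pair."""
    dark = gray < dark_thresh                      # iris pixels are dark
    labels, n = ndimage.label(dark)                # connected components
    cents = ndimage.center_of_mass(dark, labels, range(1, n + 1))
    pairs = []
    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            (y1, x1), (y2, x2) = cents[i], cents[j]
            dx, dy = abs(x2 - x1), abs(y2 - y1)
            # eyes lie roughly on a horizontal base line
            if min_sep < dx < max_sep and dy < max_tilt * dx:
                pairs.append((cents[i], cents[j]))
    return pairs
```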
Traditional vision-based registration technologies require precisely designed markers or rich texture information in the captured video scenes; moreover, vision-based methods have high computational complexity, while hardware-based registration technologies lack accuracy. In this paper we therefore propose a novel registration method that takes advantage of an RGB-D camera to obtain depth information in real time: a binocular system combining a Time-of-Flight (ToF) camera with a commercial color camera is constructed to realize three-dimensional registration. First, we calibrate the binocular system to obtain the relative positions of the two cameras. The systematic errors are fitted and corrected with B-spline curves. To suppress anomalies and random noise, an outlier-elimination algorithm and an improved bilateral filtering algorithm are proposed to optimize the depth map; to meet the system's real-time requirement, these steps are further accelerated by parallel computing with CUDA. A Camshift-based tracking algorithm is then applied to capture the real object registered in the video stream, and the position and orientation of the object are tracked through the correspondence between the color image and the 3D data. Finally, experiments are carried out and compared on our binocular system; the results demonstrate the feasibility and effectiveness of the method.
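A simplified version of the depth-map cleanup step might look as follows; this uses OpenCV's standard bilateral filter in place of the paper's improved, CUDA-accelerated variant, and a median fill as a crude stand-in for the anomaly-elimination algorithm:

```python
import cv2
import numpy as np

def clean_depth(depth):
    """depth: float32 depth map; 0 marks invalid pixels."""
    dm = depth.astype(np.float32)
    invalid = dm == 0
    # crude anomaly elimination: replace invalid pixels by a local median
    med = cv2.medianBlur(dm, 5)
    dm[invalid] = med[invalid]
    # edge-preserving smoothing of the remaining random noise
    return cv2.bilateralFilter(dm, 9, 30.0, 7.0)
```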
Multi-object tracking is a basic computer vision task with a wide range of real-life applications, from medical video monitoring to surveillance. The goal is to locate multiple objects in a scene, maintain their identities over time, and construct their trajectories for analysis. This is a complex task owing to occlusions, complicated object dynamics, and variations in object appearance. In this research, a new technique named TPRO-based Deep LSTM is developed for multi-object tracking with occlusion handling. Videos are taken as input and frames are extracted from each video. Each frame is pre-processed by filtering to eliminate noise. Objects are then localized using sparse Fuzzy c-Means (FCM) and Local Optimal-Oriented Pattern (LOOP) features. Visual and spatial tracking are combined into a hybrid tracker: visual tracking uses a second-derivative model and a neighborhood-search model, after which occlusion handling is performed; concurrently, spatial tracking is carried out with a Deep Long Short-Term Memory (Deep LSTM) network whose weights and biases are assigned by the Taylor Poor Rich Optimization (TPRO) algorithm, obtained by unifying the Taylor series with the Poor and Rich Optimization algorithm. Combining the visual and spatial tracking yields the final tracked output. The devised method achieves the highest Multiple Object Tracking Precision (MOTP) of 88.9%, the smallest tracking distance (TD) of 4.185, an average MOTP of 0.889, an average TD of 4.201, and the highest tracking number (TN) of 14.
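Of this pipeline, only the generic frame-extraction and noise-filtering front end lends itself to a neutral sketch; the sparse-FCM localization, LOOP features, and TPRO-trained Deep LSTM are specific to the paper and are not reproduced here:

```python
import cv2

def preprocess_frames(video_path):
    """Extract frames from a video and median-filter them to suppress noise."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.medianBlur(frame, 3))   # impulse-noise removal
    cap.release()
    return frames
```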
The first-level data cache in modern processors has become a major consumer of energy due to its increasing size and high frequency of access. To reduce this high energy consumption, we propose in this paper a straightforward filtering technique based on a highly accurate forwarding predictor. Specifically, a simple structure predicts whether a load instruction will obtain its data via forwarding from the load-store structure — thus avoiding the data cache access — or whether it will be provided by the data cache. This mechanism reduces the data cache energy consumption by an average of 21.5% with a negligible performance penalty of less than 0.1%. Furthermore, we also target static cache energy consumption by disabling a portion of the sets of the associative L2 cache. Overall, when both proposals are combined, the total L1 and L2 energy consumption is reduced by an average of 29.2% with a performance penalty of just 0.25%.
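The flavor of such a predictor can be conveyed by a toy model; the table size, indexing, and 2-bit saturating counters below are illustrative assumptions, not the paper's design:

```python
class ForwardingPredictor:
    """PC-indexed table of 2-bit saturating counters."""

    def __init__(self, entries=1024):
        self.table = [0] * entries
        self.mask = entries - 1

    def predict(self, pc):
        """True -> expect store-to-load forwarding, so skip the L1 access."""
        return self.table[pc & self.mask] >= 2

    def update(self, pc, forwarded):
        """Train on the actual outcome of the load."""
        i = pc & self.mask
        if forwarded:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```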
Addressing the problem of feature decomposition and filtering of the random interference in signals measured by an optical fiber current transducer (OFCT), a signal filtering algorithm combining complete ensemble empirical mode decomposition (CEEMD) with the normalized autocorrelation function is proposed. A CEEMD feature-decomposition model of the OFCT signal is established and multiple eigenmode functions of the measured signal are extracted. Normalized autocorrelation function models are established for the different types of intrinsic mode functions (IMFs), and high-weight IMFs are selected from the characteristics of their autocorrelation functions. After mean filtering is applied to the remaining IMFs, the signal is reconstructed together with the effective modal components. Under the principles of statistical learning and structural risk minimization, a support vector regression model is established to fit the data linearly, yielding more reliable current information after filtering. Experimental results demonstrate that the proposed algorithm, by combining the advantages of CEEMD and the normalized autocorrelation function, decomposes the signal according to the time-scale characteristics of the OFCT data itself, without pre-setting any basis functions. The root mean square error of the optimized data is reduced by 39.3%, and the signal quality is greatly improved.
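The IMF-selection step can be sketched as follows, assuming the CEEMD decomposition is already available; the lag and threshold are illustrative, and the SVR post-processing is omitted. Signal-dominated IMFs have slowly decaying autocorrelation, while noise-dominated ones drop off sharply:

```python
import numpy as np

def norm_autocorr(x):
    """Normalized autocorrelation, r[0] == 1."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return r / r[0]

def select_imfs(imfs, lag=10, thresh=0.5):
    """Indices of 'high-weight' IMFs whose autocorrelation at `lag`
    stays above `thresh`."""
    return [i for i, imf in enumerate(imfs)
            if abs(norm_autocorr(imf)[lag]) > thresh]
```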
The Internet of Vehicles (IoV) has become an important research topic due to its direct effect on the development of intelligent transportation systems (ITS). The IoV environment poses many challenges, such as communication, big data, and best-route assignment. In this paper, an effective IoV architecture is proposed with four main objectives. The first is to utilize a powerful communication scheme in which three tiers of coverage tools — Internet, satellite, and high-altitude platform (HAP) — are employed, so that vehicles maintain a continuous connection to the IoV environment everywhere. The second is to apply filtering and prioritization mechanisms that reduce the detrimental effects of IoV big data. The third is to assign the best route to a vehicle after determining its real-time priority. The fourth is to analyze the IoV data. The performance of the proposed architecture is measured in a simulation environment created with the NS-3 package. The simulation results show that the proposed architecture has a positive impact on the IoV environment with respect to the performance metrics: energy, success rate of route assignment, filtering effect, data loss, delay, usage of coverage tools, and throughput.
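The filtering and prioritization idea can be sketched minimally; the message fields and the priority rule below are assumptions for illustration, not the paper's mechanism:

```python
import heapq

def prioritize(messages, seen):
    """Drop duplicate messages, then return the rest in priority order.

    messages: iterable of dicts with 'id' and 'priority' keys (illustrative).
    seen: set of message ids already processed (filtering state).
    """
    queue = []
    for m in messages:
        if m["id"] in seen:                 # filtering: discard duplicates
            continue
        seen.add(m["id"])
        heapq.heappush(queue, (-m["priority"], m["id"], m))
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]
```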
Prediction, smoothing, filtering, and synchronization or observer design given finitely many measurements and a given (possibly nonlinear) dynamical map are discussed from a computational complexity point of view. All of these problems are particular instances of finding a zero of an appropriately defined function; recognizing this enables one to approach them with the tools of computational complexity. For polynomial maps, the computational complexity of a global Newton algorithm adapted to identify the finite trajectory of the dynamical system's state over the desired window scales polynomially with the condition number (an invariant of the problem at hand) and the degree of the polynomials required to describe the models. The complexity analysis identifies the most efficient way to approach synchronization (prediction, smoothing, filtering) problems. Moreover, differences between adaptive and nonadaptive formulations are revealed through the condition number of the associated zero-finding problem. The advocated formulation, with its associated global Newton algorithm, has good robustness properties with respect to measurement errors and model errors for both adaptive and nonadaptive problems. These aspects are illustrated in a simulation study based on the Hénon map.
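A bare-bones illustration of the zero-finding viewpoint (plain Newton with a numerical Jacobian, not the paper's adaptive global Newton method) recovers the Hénon map's initial state from a short observation window:

```python
import numpy as np

A, B = 1.4, 0.3                                 # classical Henon parameters

def henon_orbit(s, n):
    """First coordinate of n Henon iterates started from state s = (x, y)."""
    x, y = s
    xs = []
    for _ in range(n):
        xs.append(x)
        x, y = 1.0 - A * x * x + y, B * x
    return np.array(xs)

def residual(s, meas):
    return henon_orbit(s, len(meas)) - meas

def newton(meas, s0, iters=20, eps=1e-7):
    s = np.asarray(s0, dtype=float)
    for _ in range(iters):
        f = residual(s, meas)
        J = np.empty((len(meas), 2))
        for j in range(2):                      # forward-difference Jacobian
            ds = np.zeros(2); ds[j] = eps
            J[:, j] = (residual(s + ds, meas) - f) / eps
        s = s - np.linalg.lstsq(J, f, rcond=None)[0]
    return s

truth = np.array([0.1, 0.2])
obs = henon_orbit(truth, 6)                     # 6-sample noiseless window
print(newton(obs, s0=[0.08, 0.15]))             # from a nearby guess: ~ [0.1, 0.2]
```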
A filtering problem in which the state-observation pair is a Markov process is analyzed. Extending a result of Kunita to this framework, strong uniqueness for the filtering equation is obtained. In the discrete case, a finite-state approximation of the filter is considered and an estimate of the approximation error is given.
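In the classical discrete-time, finite-state case (signal alone Markov with transition kernel \(p\), observations conditionally independent with likelihood \(g\); the paper's pair-Markov setting generalizes this), the filter \(\pi_n\) is propagated by the familiar recursion
\[
\pi_{n+1}(x) = \frac{g(y_{n+1}\mid x)\,\sum_{x'} p(x\mid x')\,\pi_n(x')}{\sum_{\tilde x}\, g(y_{n+1}\mid \tilde x)\,\sum_{x'} p(\tilde x\mid x')\,\pi_n(x')},
\]
and finite-state approximations of continuous filters replace the true kernels by discretized ones, which is where the approximation-error estimate enters.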
We propose and analyze a new single-pass filter designed to restore images highly corrupted by impulse noise. The filter involves three stages designed to detect particular patterns of noise: it recognizes "short line like" and "short curve like" noise patterns, which enables it to preserve fine details. Our simulations show that the proposed filter outperforms the conventional multilevel median filter as well as the two-state and multi-state median filters.
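A toy single-pass detector in the same spirit (median-deviation test only; the paper's three-stage short-line and short-curve pattern tests are not reproduced, and the threshold is illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_and_restore(img, thresh=40):
    """Flag pixels deviating strongly from the 3x3 median, replace only those."""
    med = median_filter(img.astype(np.int32), size=3)
    noisy = np.abs(img.astype(np.int32) - med) > thresh
    out = img.copy()
    out[noisy] = med[noisy].astype(img.dtype)
    return out
```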
The detection of ionizing particles based on phonon counting is a growing research area of great interest. Phononic crystal (PnC) detectors offer higher resolution than other detectors. In the present work, we describe the setup of a radiation detector based on a one-dimensional PnC. The detector can be used to detect and discriminate between protons and alpha particles with an incident energy of 1 MeV. We propose a model capable of filtering the energies of two different ionizing particles (a proton and an alpha particle) at specific lattice frequencies in steps. First, the highest probability of phonon production was found at a transmitted energy of 5 keV along the whole path of the protons and alpha particles through a vertical thin sheet made of Mylar and polymethyl methacrylate (PMMA), respectively. The outgoing elastic waves then propagate through the proposed (Teflon-Polyethylene)^2 PnC structure, which shows a different transmission percentage for each particle. In this way, detection and discrimination between the ionizing ions are achieved.
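The transmission calculation for such a layered structure is typically done with the transfer-matrix method; the sketch below uses rough illustrative material constants, thicknesses, and host impedance, not the paper's values:

```python
import numpy as np

def layer_matrix(rho, c, d, w):
    """Acoustic transfer matrix of one layer at angular frequency w."""
    k, Z = w / c, rho * c
    return np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                     [1j * np.sin(k * d) / Z, np.cos(k * d)]])

def transmission(w, layers, Z0):
    """Power transmission through the stack between identical half-spaces Z0."""
    M = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        M = M @ layer_matrix(rho, c, d, w)
    t = 2.0 / (M[0, 0] + M[0, 1] / Z0 + M[1, 0] * Z0 + M[1, 1])
    return abs(t) ** 2

teflon = (2200.0, 1400.0, 1e-3)     # density (kg/m^3), speed (m/s), thickness (m)
pe     = (950.0, 1950.0, 1e-3)
stack  = [teflon, pe] * 2           # the (Teflon-Polyethylene)^2 cell
freqs  = np.linspace(1e4, 2e6, 500)
T = [transmission(2 * np.pi * f, stack, Z0=1.48e6) for f in freqs]  # water-like host
```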
The manufacturing industry constantly needs to verify machined objects against their original CAD models. Inspection applied directly on scanned points is desirable. Typical scan data, however, is very large-scale, unorganized and noisy, and usually misses information about the sampled object. Therefore, direct processing of scanned points is problematic. This paper formulates the concept of diverse scan data which may significantly facilitate the direct treatment of scanned points. The paper focuses on the sharp features of a scanned object as a type of diverse scan data, and proposes a new method for sharp feature detection. The proposed Sharp Feature Detection (SFD) method is applied directly on the scanned points and is completely automatic, fast and straightforward to implement. Finally, the paper demonstrates how the proposed SFD method can be integrated into the general framework of utilizing diverse scan data for inspection.
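A common baseline for this task (not the proposed SFD method) flags a point as a sharp-feature candidate when the surface variation of its k-neighborhood, i.e. the smallest PCA eigenvalue over the eigenvalue sum, is large:

```python
import numpy as np
from scipy.spatial import cKDTree

def sharp_candidates(points, k=16, thresh=0.08):
    """points: (n, 3) array of scanned points; returns a boolean flag per point."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    flags = np.zeros(len(points), dtype=bool)
    for i, nb in enumerate(idx):
        P = points[nb] - points[nb].mean(axis=0)
        w = np.linalg.eigvalsh(P.T @ P / k)        # ascending eigenvalues
        flags[i] = w[0] / max(w.sum(), 1e-12) > thresh
    return flags
```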
We consider mean-variance hedging when the investor observes only the stock prices. We explain how the theory developed in Gouriéroux, Laurent and Pham (1998) and Rheinländer and Schweizer (1997) can be extended to this framework. We then focus on a diffusion model in which the drift of the stock prices is not observed directly but only through a measurement process. Using filtering techniques, we obtain explicit formulae for the optimal mean-variance hedging strategies and for the associated minimal risk. Closed-form expressions are provided in the case of a Bayesian investor and in the case where the stock drift is modelled as a linear Gaussian process.
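In the linear Gaussian case, the relevant filter is the classical Kalman–Bucy filter; schematically, if the drift follows \(d\mu_t = -\alpha(\mu_t - \bar\mu)\,dt + \sigma_\mu\, dV_t\) and is observed through \(dY_t = \mu_t\,dt + \sigma\, dW_t\), the conditional mean \(m_t\) and variance \(\gamma_t\) evolve as
\[
dm_t = -\alpha\,(m_t-\bar\mu)\,dt + \frac{\gamma_t}{\sigma^2}\,\big(dY_t - m_t\,dt\big), \qquad \dot\gamma_t = -2\alpha\,\gamma_t + \sigma_\mu^2 - \frac{\gamma_t^2}{\sigma^2}.
\]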
We consider an investor with initial wealth X_0 < 1, who wishes to maximize the probability of achieving a goal, X_T = 1, when the stock's drift — modeled as a linear mean-reverting diffusion — is not observed directly but only via the measurement process. Adopting a martingale approach, a generalized Cameron–Martin (1945) formula then enables explicit computation of the value of the problem as well as of the wealth process. The dynamic optimal allocation can then be determined using Clark's formula.
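The classical prototype of that formula (Cameron and Martin, 1945) evaluates the Laplace transform of a quadratic Brownian functional:
\[
\mathbb{E}\left[\exp\Big(-\frac{\lambda^2}{2}\int_0^T W_s^2\,ds\Big)\right] = \big(\cosh(\lambda T)\big)^{-1/2}, \qquad \lambda > 0,
\]
and the generalized version used in the paper extends such closed forms to quadratic functionals of Gaussian processes such as the mean-reverting drift.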
Previous work on multifactor term structure models has proposed that the short rate process is a function of some unobserved diffusion process. We consider a model in which the short rate process is a function of a Markov chain which represents the "state of the world". This enables us to obtain explicit expressions for the prices of zero-coupon bonds and other securities. Discretizing our model allows the use of signal processing techniques from Hidden Markov Models. This means we can estimate not only the unobserved Markov chain but also the parameters of the model, so the model is self-calibrating. The estimation procedure is tested on a selection of U.S. Treasury bills and bonds.
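For a concrete sense of why the chain model yields explicit bond prices, note the standard result in this literature (stated schematically; sign and transpose conventions for the generator vary): if the chain's states are identified with unit vectors \(e_1,\dots,e_N\), the short rate in state \(i\) is \(r_i\), and \(A\) is the generator of the chain, then
\[
P(t,T) = \left\langle \exp\big[(A - \operatorname{diag}(r_1,\dots,r_N))\,(T-t)\big]\, X_t,\ \mathbf{1} \right\rangle.
\]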
This paper develops Bayesian model selection based on Bayes factors for a rich class of partially observed micro-movement models of asset prices. We focus on a recursive algorithm for calculating the Bayes factors: we first derive the system of SDEs they satisfy and then apply the Markov chain approximation method to obtain a recursive algorithm, whose consistency (or robustness) we prove. To illustrate the construction, we consider a model selection problem for two micro-movement models, with and without stochastic volatility, and provide simulation and real-data examples demonstrating the effectiveness of the Bayes factor for model selection in this class of models.
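Schematically, the Bayes factor here is the ratio of marginal likelihoods of the observed prices under the two models, and each marginal likelihood is the total mass of the corresponding unnormalized filter \(\rho^{(k)}_t\), which is what a recursive algorithm can propagate:
\[
B_t(2,1) = \frac{\rho_t^{(2)}(\mathbf{1})}{\rho_t^{(1)}(\mathbf{1})}.
\]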
A general model for intraday stock price movements is studied. The asset price dynamics is described by a marked point process Y, whose local characteristics (in particular the jump intensity) depend on an unobservable hidden state variable X. The dynamics of Y and X may be strongly dependent; in particular, the two processes may have common jump times, which means that the actual trading activity may affect the law of X and could also be related to the possibility of catastrophic events. The agents in this model are restricted to observing past asset prices. This leads to a filtering problem with marked point process observations. The conditional law of X given the past asset prices (the filter) is characterized as the unique weak solution of the Kushner–Stratonovich equation. An explicit representation of the filter is obtained via the Feynman–Kac formula using a linearization method, and this representation yields a recursive algorithm for computing the filter.
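For orientation, in the simplest version of such models (no common jump times between X and Y, observation jump measure \(m\) with intensity kernel \(\lambda(X_{t^-},dz)\)), the Kushner–Stratonovich equation takes the form
\[
d\pi_t(f) = \pi_t(Lf)\,dt + \int_Z \left( \frac{\pi_{t^-}\big(f\,\lambda(\cdot,dz)\big)}{\pi_{t^-}\big(\lambda(\cdot,dz)\big)} - \pi_{t^-}(f) \right) \big( m(dt,dz) - \pi_{t^-}\big(\lambda(\cdot,dz)\big)\,dt \big),
\]
where \(L\) is the generator of X; the common-jump case treated in the paper adds correction terms to the jump part.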
A Hidden Markov Chain (HMC) is applied to study the forward premium puzzle. The weekly quotient of the interest rate differential divided by the log exchange rate change is modeled as a hidden Markov process. Compared with existing standard approaches, the hidden Markov approach allows a detailed analysis of the puzzle on a day-to-day basis while taking full account of the presence of noise in the observations. Two- and three-state models are investigated, and a three-state HMC model performs better than the two-state models. Application of the three-state model reveals that the above quotient is mostly zero, and hence leads to the rejection of the uncovered interest rate parity hypothesis.
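The model-fitting step can be sketched with the open-source hmmlearn package (which the paper does not necessarily use); `ratios` is assumed to be the weekly series of quotients:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_states(ratios, n_states=3):
    """Fit an n-state Gaussian HMM and decode the most likely state per week."""
    X = np.asarray(ratios, dtype=float).reshape(-1, 1)
    model = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=200).fit(X)
    return model, model.predict(X)
```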
The problem of the arbitrage-free pricing of a European contingent claim B is considered in a general model for intraday stock price movements under partial information. The dynamics of the risky asset price is described by a marked point process Y, whose local characteristics depend on an unobservable jump-diffusion process X. The processes Y and X may have common jump times, which means that the trading activity may affect the law of X and could also be related to the presence of catastrophic events. Risk-neutral measures are characterized and, in particular, the minimal entropy martingale measure is studied. The problem of pricing under restricted information is discussed, and the arbitrage-free price of the claim B w.r.t. the minimal entropy martingale measure is computed using filtering techniques.
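For reference, the minimal entropy martingale measure is the equivalent martingale measure \(Q^{*}\) that minimizes relative entropy with respect to the physical measure P:
\[
Q^{*} = \operatorname*{arg\,min}_{Q\in\mathcal{M}} H(Q\,|\,P), \qquad H(Q\,|\,P) = \mathbb{E}_P\!\left[\frac{dQ}{dP}\,\log\frac{dQ}{dP}\right],
\]
and the arbitrage-free price of B is then computed as its \(Q^{*}\)-expectation (discounting suppressed here).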
The contribution of this paper is twofold: we study power utility maximization problems (with and without intermediate consumption) in a partially observed financial market with jumps, and we solve the arising filtering problem by the innovation method. We consider a Markovian model where the risky asset dynamics S_t follows a pure jump process whose local characteristics are not observable by investors. More precisely, the stock price dynamics depends on an unobservable stochastic factor X_t described by a jump-diffusion process. We assume that agents' decisions are based on the knowledge of an information flow {G_t} containing the asset price history, F^S_t ⊆ G_t. Using projection on the filtration {G_t}, the partially observable investment-consumption problem is reduced to a fully observable stochastic control problem. The homogeneity of the power utility functions leads to a factorization of the associated value process into a part depending on the current wealth and the so-called opportunity process J_t. In the case where G_t = F^S_t, J_t and the optimal investment-consumption strategy are represented in terms of solutions to a backward stochastic differential equation (BSDE) driven by the F^S-compensated martingale random measure associated to S_t, which can be obtained by filtering techniques (Ceci, 2006; Ceci and Gerardi, 2006). Next, we extend the study to the case G_t = F^S_t ∨ F^η_t, where η_t gives observations of X_t in additional Gaussian noise; this setup can be viewed as an abstract form of "insider information". The opportunity process J_t is now characterized as a solution to a BSDE driven by the G-compensated martingale random measure and the so-called innovation process. Computation of these quantities leads to a filtering problem with mixed-type observations, whose solution is discussed via the innovation approach.
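The factorization mentioned above takes the familiar form from the power utility literature (schematic notation): for \(U(x) = x^p/p\) the value process splits as
\[
V_t = \frac{(X_t^{\pi^{*}})^p}{p}\; J_t,
\]
so that all model dependence is absorbed into the opportunity process J_t, which is exactly the object the BSDEs characterize.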