A novel method for recognizing occluded objects using a Markov model is proposed in this paper. In addition to the Markov model's high tolerance to noise, the spatial distribution of features can be incorporated into it in a natural and elegant way, so the proposed method achieves high recognition accuracy. More specifically, for each occluded object in the scene image, its translation, rotation and scale parameters can all be determined by our method, even when an object has transformation parameters different from the others or is duplicated in the scene image with differing transformation parameters. Moreover, the recognition process can be performed step by step, finding all of the objects in the scene image according to a confidence measure. Finally, the recognition process terminates automatically without knowing the number of objects in the scene image, since hypothesis verification and a termination test are performed in our method. The solution is useful for depth-search applications such as inspecting multi-layer printed circuit boards, underwater searches for objects, and underground drilling for mine exploration. The proposed method has been applied to two types of databases, puzzles and tools, and its effectiveness and practicability are demonstrated by various experimental results.
The aim of this study is to show that the optimal order of a Markov model of cursive words can be rigorously determined, so as to fit the structural properties of the observed data, using the Akaike information criterion. The method has been tested on French postal check amounts up to order 4. An original structural representation of cursive words based on graphemes is used. The conditional probability of a word model given an observed sequence of graphemes is computed independently of the length of the sequence. The recognition results obtained confirm the optimal order found using the Akaike criterion.
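The order-selection idea above can be sketched as follows: fit maximum-likelihood Markov models of increasing order to a symbol sequence and pick the order minimizing AIC = 2k - 2 log L, where k is the number of free transition parameters. This is a minimal generic illustration, not the authors' grapheme-based system:

```python
import math
from collections import Counter, defaultdict

def markov_aic(seq, order, alphabet):
    """Log-likelihood and AIC of a fixed-order Markov model fitted to seq."""
    ctx_counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        ctx_counts[tuple(seq[i - order:i])][seq[i]] += 1
    loglik = 0.0
    for counts in ctx_counts.values():
        total = sum(counts.values())
        for c in counts.values():
            loglik += c * math.log(c / total)       # ML transition estimates
    k = (len(alphabet) ** order) * (len(alphabet) - 1)  # free parameters
    return loglik, 2 * k - 2 * loglik

def best_order(seq, max_order=4):
    """Order whose fitted Markov model minimizes AIC."""
    alphabet = sorted(set(seq))
    aics = {m: markov_aic(seq, m, alphabet)[1] for m in range(max_order + 1)}
    return min(aics, key=aics.get)
```

On a perfectly first-order sequence such as "abab…", higher orders gain no likelihood and are penalized for extra parameters, so AIC selects order 1.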
In this paper, an improved structure for DC–DC quasi-Z-source (QZS) converters is proposed. First, the proposed two-stage structure is presented and analyzed. Then, the structure is extended to n stages and its governing relations are derived. Compared with conventional structures, the proposed structure has higher voltage gain and higher reliability, making the topology suitable for high-power applications. For a conventional QZS converter to perform correctly, all impedance-network elements must be intact; even a small failure in one element disrupts the operation of the whole system. The proposed structure is highly reliable because, when one stage fails, the fault management system isolates that stage from the others and the remaining stages continue to transmit power. In addition to analyzing the operation of the proposed converter in its different operating modes, this paper presents calculations of the voltage gain and the voltage stresses across the capacitors, together with a reliability analysis. Reliability is calculated using the well-known Markov model. Moreover, a comprehensive comparison in terms of voltage gain and reliability is made between the proposed converter and conventional structures, and the ratings of the inductors and capacitors are designed. Finally, simulation results obtained with the power system computer-aided design (PSCAD) software and experimental results are presented to verify the theory.
In Low Earth Orbit (LEO) satellite networks, allocating the limited resources to meet the needs of different calls is a challenge. In this paper, a dynamic channel reservation strategy based on multi-traffic and multi-user priorities in LEO satellite networks is proposed. The key to this strategy is the dynamic admission threshold reserved for different calls. First, a traffic prediction model based on LEO satellite mobility is established. Then the channel allocation model is built on a Markov process. Finally, the reserved admission thresholds are changed dynamically according to the predicted traffic, and the admission thresholds are computed with a genetic algorithm. The simulation results show that the proposed strategy not only meets the needs of calls of different traffic types and user levels, but also improves the overall quality of service in LEO satellite networks.
Point-of-Interest recommendation is an efficient way to explore interesting unknown locations in social network mining. To address data sparsity and the inaccuracy of single-user models, we propose a User-City-Sequence Probabilistic Generation Model (UCSPGM) that integrates a collective individual self-adaptive Markov model with a topic model. The collective individual self-adaptive Markov model consists of three parts: a collective Markov model, an individual self-adaptive Markov model, and a self-adaptive rank method. The first determines the topic sequence for all users in the system and mines users' behavioral patterns in the large environment. The second mines behavioral patterns for each user in a small environment. The third determines a self-adaptive rank for each user in its niche. We conduct extensive experiments to verify the effectiveness and efficiency of our method.
Software component technology has had a substantial impact on modern IT evolution. The benefits of this technology, such as reusability, complexity management, time and effort reduction, and increased productivity, have been key drivers of its adoption by industry. One of the main issues in building component-based systems is the reliability of the composed functionality of the assembled components. This paper proposes a reliability assessment model based on the architectural configuration of a component-based system and the reliability of the individual components, which is usage- and testing-independent. The goal of this research is to improve the reliability assessment process for large software component-based systems over time, and to compare alternative component-based design solutions prior to implementation. The novelty of the proposed model lies in evaluating component reliability from behavior specifications and system reliability from topology; the assessment is performed in the context of the implementation-independent ISO/IEC 19761:2003 International Standard on the COSMIC method, chosen to provide the components' behavior specifications. In essence, each component of the system is modeled as a discrete-time Markov chain derived from its behavior specifications, expressed as extended state machines. A probabilistic analysis by means of Markov chains is then performed to analyze any uncertainty in the component's behavior. Our hypothesis is that the less uncertainty there is in a component's behavior, the greater its reliability. The system reliability assessment is derived from a typical component-based architecture with composite reliability structures, which may include compositions of serial, parallel and p-out-of-n reliability structures.
The approach of assessing component-based system reliability in the COSMIC context is illustrated with the railroad crossing case study.
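The core of assessing a component's reliability from a discrete-time Markov chain behavior model can be sketched as the probability of absorption in a "correct termination" state. The sketch below is a generic illustration with a hypothetical transition matrix, not the paper's COSMIC-based procedure:

```python
def absorption_probability(P, start, success, tol=1e-12, max_iter=10**6):
    """Probability that a DTMC with row-stochastic matrix P, started in
    `start`, is eventually absorbed in the absorbing state `success`.
    Computed by iterating the state distribution until it settles."""
    n = len(P)
    dist = [0.0] * n
    dist[start] = 1.0
    for _ in range(max_iter):
        nxt = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(dist, nxt)) < tol:
            return nxt[success]
        dist = nxt
    return dist[success]
```

For example, a component with one working state that terminates correctly with probability 0.4, fails with 0.1, and loops with 0.5 per step is absorbed in the success state with probability 0.4 / (0.4 + 0.1) = 0.8.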
Class temporal specifications are an important kind of program specification, especially for object-oriented programs; they specify that the interface methods of a class should be called in a particular order. Currently, most existing approaches mine this kind of specification using finite state automata. However, finite state automata are deterministic models and cannot tolerate noise. In this paper, we propose to mine class temporal specifications using a probabilistic model that extends the Markov chain. To the best of our knowledge, this is the first work to learn specifications from object-oriented programs dynamically using probabilistic models. Unlike similar works, our technique does not require annotating programs. Additionally, it learns specifications in an online mode, which allows existing models to be refined continuously. We also discuss problems concerning the noise and connectivity of mined models, and propose a strategy for computing thresholds to resolve them. To investigate our technique's feasibility and effectiveness, we implemented it in a prototype tool, ISpecMiner, and used it to conduct several experiments. The results show that our technique deals with noise effectively and that useful specifications can be learned. Furthermore, our method of computing thresholds provides a strong assurance that mined models are connected.
Individual-based models (IBMs) enable modelers to avoid far-reaching abstractions and strong simplifications by allowing for a state-based representation of individuals. The fact that IBMs are not represented using a standardized mathematical framework such as differential equations makes it harder to reproduce IBMs and introduces difficulties in the analysis of IBMs. We propose a model architecture based on representing individuals via Markov models. Individuals are coupled to populations — for which individuals are not explicitly represented — that are modeled by differential equations. The resulting models consisting of continuous-time finite-state Markov models coupled to systems of differential equations are examples of piecewise-deterministic Markov processes (PDMPs). We will demonstrate that PDMPs, also known as hybrid stochastic systems, allow us to design detailed state-based representations of individuals which, at the same time, can be systematically analyzed by taking advantage of the theory of PDMPs. We will illustrate design and analysis of IBMs using PDMPs via the example of a predator that intermittently feeds on a logistically growing prey by stochastically switching between a resting and a feeding state. This simple model shows a surprisingly rich dynamics which, nevertheless, can be comprehensively analyzed using the theory of PDMPs.
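A minimal simulation sketch of the predator-prey PDMP described above: the prey grows logistically and is harvested only while the predator is in the feeding state, and the predator flips between resting and feeding at exponential switching times. All parameter values here are illustrative assumptions, not taken from the paper:

```python
import random

def simulate_pdmp(T=100.0, dt=0.01, r=1.0, K=1.0, a=1.5,
                  k_on=0.5, k_off=0.5, x0=0.5, seed=1):
    """Euler integration of logistic prey coupled to a two-state predator.

    Between switches the prey density x evolves deterministically:
      resting : dx/dt = r*x*(1 - x/K)
      feeding : dx/dt = r*x*(1 - x/K) - a*x
    The predator state jumps at exponential times (rates k_on, k_off).
    """
    rng = random.Random(seed)
    x, feeding, t = x0, False, 0.0
    next_switch = rng.expovariate(k_on)
    traj = []
    while t < T:
        if t >= next_switch:                       # stochastic jump of the mode
            feeding = not feeding
            next_switch = t + rng.expovariate(k_off if feeding else k_on)
        dx = r * x * (1 - x / K) - (a * x if feeding else 0.0)
        x = max(x + dt * dx, 0.0)                  # deterministic flow step
        t += dt
        traj.append((t, x, feeding))
    return traj
```

Between jumps the dynamics are an ordinary differential equation; the only randomness is the predator's mode switching, which is exactly the piecewise-deterministic structure the abstract exploits for analysis.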
With the increasing demand for high reliability in mission-critical systems such as the space shuttle, digital flight control and real-time control, to mention a few, reliability analysis of fault-tolerant systems continues to be a focus of research. The reliability analysis of triple modular redundancy (TMR) and hybrid redundancy (TMR with spares) systems is generally carried out under the assumption that the failure rate is precisely known. In practice, however, failure rates are imprecise owing to uncertainties in system operation. In this paper, the dependability analysis of hybrid redundancy systems (HRS) comprising N-modular redundancy (NMR) and standby redundancy is presented, treating failure rates and repair rates as fuzzy numbers. Each module of the NMR is assumed to have access to a number of cold spares and a repair facility. A Markov model for the HRS is developed. As the Markov model parameters may not be precisely known, the vertex method and the α-cut method are applied; these methods accommodate uncertain parameters represented as fuzzy numbers. Dependability measures such as availability and reliability are obtained, and a comparative study of the fuzzy results against conventional results based on probability concepts is presented.
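The vertex method can be illustrated on the simpler non-repairable TMR case: the 2-out-of-3 system reliability is monotone decreasing in the module failure rate, so an α-cut interval for the rate maps to a reliability interval obtained by evaluating the endpoints. A sketch under these simplifying assumptions (no spares, no repair):

```python
import math

def tmr_reliability(lam, t):
    """TMR (2-out-of-3) reliability at time t with per-module rate lam:
    R_sys = 3*R^2 - 2*R^3 with R = exp(-lam*t)."""
    r = math.exp(-lam * t)
    return 3 * r**2 - 2 * r**3

def tmr_reliability_interval(lam_lo, lam_hi, t):
    """Vertex method for a fuzzy failure rate given as an alpha-cut interval
    [lam_lo, lam_hi]: since R_sys is monotone decreasing in lam, the extreme
    reliabilities occur at the interval's endpoints."""
    return tmr_reliability(lam_hi, t), tmr_reliability(lam_lo, t)
```

The paper's HRS model is richer (NMR, cold spares, repair), but the same endpoint evaluation carries over whenever the dependability measure is monotone in each fuzzy parameter.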
A Redundant Array of Independent Disks (RAID) system with n disks can be modeled as a k-out-of-n repairable system for which at least k operational disks are required. This paper proposes a hierarchical Markov model to estimate the reliability of RAID systems. The method encompasses Markov models for evaluating the reliability of individual disks at the lower level and a redundancy model for evaluating the reliability of the entire RAID at the system level. Both hardware and media failures are considered, and media failures can potentially be recovered via the disk's self-restoration mechanism. The system mean time to data loss is also derived, and the results are compared to those estimated from system-based Markov models. The major contribution of this work is that RAID reliability is approached by combining the redundancy modeling technique with the single-disk Markov model, thus simplifying the computational effort. The proposed method is applied to RAID-5 and RAID-6 systems to demonstrate the applicability and performance of the new model.
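At the system level, the k-out-of-n redundancy model reduces, for identical and independent disks, to a binomial sum. A sketch of that upper-level step (in the paper's hierarchy, the single-disk survival probability p would itself come from the lower-level Markov model):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent disks
    (each surviving with probability p) are operational."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

For example, a 5-disk RAID-5 array tolerates one disk failure, so it is a 4-out-of-5 system; with p = 0.9 per disk its reliability is C(5,4)·0.9⁴·0.1 + 0.9⁵ ≈ 0.9185. RAID-6 tolerates two failures, i.e. (n-2)-out-of-n.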
Reliability and availability assessment of a complex system in a single model, considering binary states of individual components, is difficult with the Markov technique due to the state-space explosion problem, and including various types of dependencies in the model aggravates it further. To overcome this, Stochastic Petri net (SPN) modeling based on decomposition is proposed for the reliability and availability assessment of mechanical systems. The decomposition is based on three aspects of the system: hierarchical level, basic structure and dependency. The model is demonstrated at three hierarchical levels. The individual component (level '3') SPN model is developed assuming a Weibull failure distribution, while the individual subsystem (level '2') SPN model is developed considering the arrangement of components within the subsystem. The individual model at level '2' is reduced to an equivalent single-net model, and its equivalent transition rate is derived from its basic structure assuming the independence of components. This, along with the dependencies (e.g. repair, standby redundancy), is included in the system model (level '1'). The repair distribution in this model is assumed exponential. Reachable markings are generated for the system model to obtain the reduced state space of a semi-Markov model for the assessment of the reliability and availability of the system. The steps of the proposed methodology are outlined and illustrated for a pumping system with two pumps, one of which is in standby.
Safety instrumented systems (SISs) are installed to provide risk reduction and the performance of a SIS can be assessed by its ability to reduce risk. This article introduces a new quantitative measure for the risk reduction, denoted PFD*. Compared with the current reliability measures, the new measure takes into account the demand rate, and therefore can be used for SISs operating in both low-demand and high-demand mode. For a SIS operating in low-demand mode, the PFD* is approximately equal to the standard probability of failure on demand (PFD) used in IEC 61508 and related standards. PFD* can therefore be considered as an extension and improvement of the standard PFD. Successful handling of a demand verifies the functional status of a SIS in a way similar to a functional test, and the PFD* will therefore depend on the demand rate. The PFD* can be used to select the functional test interval according to the risk reduction allocated to the specific SIS. The properties of the new measure are analyzed through a case study of a 1-out-of-2 system of pressure transmitters.
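For reference, the standard low-demand PFD that the abstract extends can be sketched with the simplified IEC 61508-style approximations for 1oo1 and 1oo2 architectures. These are textbook formulas (proof-test interval tau, dangerous undetected failure rate lam_du, common-cause fraction beta), not the article's new PFD* measure:

```python
def pfd_avg_1oo1(lam_du, tau):
    """Average probability of failure on demand for a single channel:
    PFD ~= lam_du * tau / 2."""
    return lam_du * tau / 2

def pfd_avg_1oo2(lam_du, tau, beta=0.0):
    """Simplified approximation for a 1-out-of-2 architecture:
    independent part (lam_ind*tau)^2/3 plus a common-cause term."""
    lam_ind = (1 - beta) * lam_du
    return (lam_ind * tau) ** 2 / 3 + beta * lam_du * tau / 2
```

Both expressions grow with the test interval tau, which is why the abstract's point that a successful demand acts like a functional test lets the PFD*-based analysis credit the demand rate when selecting tau.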
With the increasing demand for high availability in safety-critical systems such as banking, military, nuclear and aircraft systems, to mention a few, reliability analysis of distributed software/hardware systems continues to be a focus of research. The reliability of a homogeneous distributed software/hardware system (HDSHS) with a k-out-of-n : G configuration and no load-sharing nodes has been analyzed previously. In practice, however, the system load is shared among the working nodes of a distributed system. In this paper, the dependability analysis of an HDSHS with load-sharing nodes is presented. This distributed system has a load-sharing k-out-of-(n + m) : G configuration. A Markov model for the HDSHS is developed. The failure time distribution of the hardware is represented by the accelerated failure time model. Software faults are detected during software testing and removed upon failure; the Jelinski–Moranda software reliability model is used. Maintenance personnel can repair the system upon both software and hardware failures. Dependability measures such as reliability, availability and mean time to failure are obtained. The effect of load-sharing hosts on the system hazard function and system reliability is presented. Furthermore, an availability comparison of our results with results in the literature is presented.
We analyze the statistics of skipping events in a deterministic and a noisy thermoreceptor within the model developed by Braun et al (Int. J. Bif. Chaos 8 881 (1998)). The statistics of skips can be captured by an intermittent map which, for sufficient noise, can be approximated by a one-step Markov process with transition amplitudes that depend on the noise intensity. The theoretical and model results are in reasonable agreement with experimental data on cat cold receptors.
This is a review of a new and essentially simple method for inferring phylogenetic relationships from complete genome data without using sequence alignment. The method is based on counting the appearance frequency of oligopeptides of a fixed length (up to K=6) in the collection of protein sequences of a species. It is a method without fine adjustment or choice of genes. Applied to prokaryotic genomes, it has led to results comparable with bacteriologists' systematics as reflected in the latest 2002 outline of Bergey's Manual of Systematic Bacteriology. The method has also been used to compare chloroplast genomes and to study the phylogeny of coronaviruses, including the human SARS-CoV. A key point in our approach is the subtraction of a random background from the original counts using a Markov model of order K-2, in order to highlight the shaping role of natural selection. The implications of the subtraction procedure are analyzed in detail, and further development of the new approach is indicated.
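The background-subtraction step can be sketched as follows: under an order-(K-2) Markov model, the expected frequency of a K-string is p(prefix of length K-1) · p(suffix of length K-1) / p(core of length K-2), and the component kept is the relative deviation of the observed frequency from that prediction. This is a simplified sketch of the published procedure:

```python
from collections import Counter

def kmer_freqs(seqs, k):
    """Observed frequency of each length-k string over a set of sequences."""
    counts, total = Counter(), 0
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
            total += 1
    return {w: c / total for w, c in counts.items()}

def composition_vector(seqs, k):
    """Subtract the order-(k-2) Markov background from observed k-mer
    frequencies: a(w) = (p - p0)/p0 with p0 = p(w[:-1])*p(w[1:])/p(w[1:-1])."""
    p_k = kmer_freqs(seqs, k)
    p_k1 = kmer_freqs(seqs, k - 1)
    p_k2 = kmer_freqs(seqs, k - 2)
    cv = {}
    for w, p in p_k.items():
        p0 = p_k1[w[:-1]] * p_k1[w[1:]] / p_k2[w[1:-1]]
        cv[w] = (p - p0) / p0 if p0 > 0 else 0.0
    return cv
```

On a perfectly Markovian sequence the subtraction yields components near zero, which is the point: what survives is the selection signal rather than the neutral compositional background.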
This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca2+ currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca2+ channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca2+ channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
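A toy version of the simulation engine's innermost computation, assuming an illustrative 3-state gating scheme (not MACACO's actual model or rate constants) and summing unitary currents over independent stochastic channels:

```python
import random

# Hypothetical 3-state gating scheme: 0 = closed, 1 = open, 2 = inactivated.
# Per-step transition probabilities (rows sum to 1); the values are illustrative.
P = [[0.90, 0.10, 0.00],
     [0.05, 0.85, 0.10],
     [0.02, 0.00, 0.98]]

def macroscopic_current(n_channels=1000, n_steps=500, i_unit=-0.3, seed=7):
    """Macroscopic current as the aggregate of unitary currents: each channel
    is an independent Markov chain and only the open state (1) conducts
    i_unit pA per time step."""
    rng = random.Random(seed)
    states = [0] * n_channels
    trace = []
    for _ in range(n_steps):
        for c in range(n_channels):
            r, acc = rng.random(), 0.0
            for nxt, p in enumerate(P[states[c]]):  # sample next state
                acc += p
                if r < acc:
                    states[c] = nxt
                    break
        trace.append(i_unit * sum(1 for s in states if s == 1))
    return trace
```

Voltage dependence would enter by making the entries of P functions of the command voltage; sweeping that parameter over many seeds is what motivates the grid-enabled task farm.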
This paper investigates the spatial and temporal properties of persistent meteorological and hydrological droughts in the UK at national to sub-regional scales. Using 1961–1990 as the reference period, it is shown that the longest observed run of below average rainfall since the 1870s persisted for four years in northern England and parts of Scotland during 1892–1896. The longest observed run of below average discharge since the 1950s/1960s was found for some groundwater fed rivers in the English lowlands and lasted up to 5.5 years during 1988–1993. Distributions of dry-spell lengths were represented by a Markov model fit to each rainfall and discharge record. This model provides a good fit to observed geometric distributions of spell lengths and provides credible runs of below average river flows lasting up to a decade in some vulnerable catchments in southern England. Droughts of this persistence may not yet have occurred within the instrumented record but could have profound water management implications for the region. Predicted 100-year drought durations for catchments in northern England may not be as long but could have serious ramifications for surface water supplies. These findings point to a risk of irreversible drought impacts on aquatic communities that are simultaneously stressed by unsustainable abstractions, poor water quality and/or habitat modifications.
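The Markov spell-length model above implies geometrically distributed runs of below-average values: if p is the probability of staying below average given the current step is below average, then P(spell length >= n) = p^(n-1). A minimal sketch (the threshold and records are the user's inputs, not the paper's data):

```python
def fit_persistence(series, threshold):
    """Estimate p = P(below average next step | below average now)
    from a rainfall or discharge record."""
    below = [x < threshold for x in series]
    stay = trans = 0
    for a, b in zip(below, below[1:]):
        if a:
            trans += 1
            stay += b
    return stay / trans if trans else 0.0

def spell_exceedance(p, n):
    """Markov/geometric model: P(a below-average spell lasts >= n steps)."""
    return p ** (n - 1)
```

With monthly data, a persistence of p = 0.9 gives a probability of about 0.9^119 ≈ 4e-6 per spell of lasting a decade, which is how the model extrapolates to droughts longer than any in the instrumented record.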
A popular means of modeling metabolic networks is to identify frequently observed pathways. However, the definition of what constitutes an observation of a pathway, and how to evaluate the importance of identified pathways, remain unclear. In this paper we investigate different methods for defining an observed pathway and evaluate their performance with pathway classification models. We use three definitions of an observed pathway: a path of gene over-expression, a path of probable gene over-expression, and a path of most accurate classification. The performance of each definition is evaluated with three classification models: a probabilistic pathway classifier (HME3M), logistic regression and an SVM. The results show that defining pathways using the probability of gene over-expression creates stable and accurate classifiers. Conversely, defining pathways by most accurate classification finds severely biased pathways that are unrepresentative of the underlying microarray data structure.
The aim of this paper is to analyse the total length of time spent by a group of patients in an Accident and Emergency (A&E) department using a multi-stage Markov model. A patient's pathway through A&E consists of a sequence of stages, such as triage, examination, and a decision of whether to admit to hospital or not. Using Coxian phase-type distributions, this paper models these stages and illustrates the difference in the distribution of the time spent in A&E for those patients who are admitted to hospital, and those patients who are not. A theoretical approach to modelling the costs accumulated by a group of patients in A&E is also presented. The data analysed refers to the time spent by 53,213 patients in the A&E department of a hospital in Northern Ireland over a one year period.
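A Coxian phase-type distribution models the A&E sojourn as passage through sequential phases (e.g. triage, examination, decision), exiting from any phase. Its mean and a Monte Carlo sampler can be sketched as follows; the rate values used below are illustrative, not fitted to the paper's data:

```python
import random

def coxian_mean(lam, mu):
    """Mean of an n-phase Coxian distribution: from phase i the patient
    proceeds at rate lam[i] or exits at rate mu[i] (lam[-1] must be 0).
    E[T] = sum_i P(reach phase i) / (lam[i] + mu[i])."""
    reach, mean = 1.0, 0.0
    for l, m in zip(lam, mu):
        mean += reach / (l + m)
        reach *= l / (l + m)
    return mean

def coxian_sample(lam, mu, rng):
    """Draw one sojourn time from the same Coxian distribution."""
    t = 0.0
    for l, m in zip(lam, mu):
        t += rng.expovariate(l + m)          # holding time in this phase
        if rng.random() < m / (l + m):       # exit instead of moving on
            break
    return t
```

With one phase this reduces to an exponential service time; adding phases lets the fitted distribution separate, say, admitted from non-admitted patients, and per-phase costs can be weighted by the expected time spent in each phase.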