Reliability evaluation is of vital importance at all stages of processing and controlling modern engineering systems. In most such systems the components contribute to the system to different degrees, and these contributions are defined as weights. Among weighted systems, a linear consecutive-weighted-k-out-of-n:F system consists of n components, where each component has its own weight and reliability. The system fails if and only if the total weight of consecutive failed components is at least k. The aim of this paper is to study the reliability of a linear consecutive-weighted-k-out-of-n:F system consisting of independent and nonidentical components, and of nonhomogeneous Markov-dependent components. Exact formulae are provided for computing the reliability in the above-mentioned cases, and approximation formulae for the reliability are also presented.
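For the independent-components case, the failure condition can be checked directly; a brute-force enumeration sketch (illustrative only, exponential in n, and not the paper's exact or approximation formulae) is:

```python
from itertools import product

def fails(state, weights, k):
    """Failure condition of the weighted system (0 = failed, 1 = working):
    some run of consecutive failed components has total weight >= k."""
    run_weight = 0
    for up, w in zip(state, weights):
        run_weight = 0 if up else run_weight + w
        if run_weight >= k:
            return True
    return False

def reliability(p, weights, k):
    """Exact reliability for independent components by enumerating all
    2**n states; p[i] is the reliability of component i."""
    r = 0.0
    for state in product([0, 1], repeat=len(p)):
        prob = 1.0
        for s, pi in zip(state, p):
            prob *= pi if s else 1.0 - pi
        if not fails(state, weights, k):
            r += prob
    return r
```

With unit weights this reduces to the ordinary consecutive-k-out-of-n:F system.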
This paper proposes a performance index to evaluate the capability of a maintainable computer network (MCN) that is required to send d units of data from the source to the sink through two paths within time T. The proposed system reliability performance index quantifies the probability that an MCN delivers a sufficient capacity with a maintenance budget no greater than B. Two procedures are integrated in the algorithm: an estimation procedure for the estimated system reliability, and an adjusting procedure utilizing the branch-and-bound approach for the accurate system reliability. Subsequently, the estimated system reliability with lower and upper bounds, and the accurate system reliability, can be derived by applying the recursive sum of disjoint products (RSDP) algorithm.
This paper addresses a scheduling problem with random processing times when the operating system is subject to stochastic failure. First, scheduling of a single job is reviewed and the expected cost rate is obtained. Single and multiple scheduling times for one job are discussed, and the expected cost functions are obtained to set up the optimal due date. A numerical example is given to demonstrate the scheme of multiple scheduling times. Next, the scheduling problems of N tandem jobs and N parallel jobs are formulated and the expected cost functions are derived. The optimal scheduling times which minimize the expected cost functions are discussed analytically.
We consider how to allocate simulation budget to estimate the risk measure of a system in a two-stage simulation optimization problem. In this problem, the first-stage simulation generates scenarios that serve as inputs to the second-stage simulation. For each sampled first-stage scenario, the second-stage procedure solves a simulation optimization problem by evaluating a number of decisions and selecting the optimal decision for that scenario. It also provides the estimated performance of the system over all sampled first-stage scenarios to estimate the system’s reliability or risk measure, which is defined as the probability of the system’s performance exceeding a given threshold under various scenarios. Such a two-stage procedure is usually very computationally expensive. To address this challenge, we propose a simulation budget allocation procedure to improve the computational efficiency of two-stage simulation optimization. After the first-stage scenarios are generated, a sequential allocation procedure selects the scenario to simulate, followed by an optimal computing budget allocation scheme that determines the decision to simulate in the second-stage simulation. Numerical experiments show that the proposed procedure significantly improves the efficiency of two-stage simulation optimization for estimating the system’s reliability.
A linear consecutively-connected system consists of N + 2 linearly ordered positions. The first position contains a source of a signal and the last one contains a receiver. M statistically independent multistate elements (retransmitters) with different characteristics are to be allocated at the N intermediate positions. The elements provide retransmission of the received signal to the next few positions. Each element can have different states, determined by the number of positions that are reached by the signal generated by this element. The probability of each state for any given element depends on the position where it is allocated. The signal retransmission process is associated with delays. The system fails if the signal generated by the source cannot reach the receiver within a specified time period.
A problem of finding an allocation of the multistate elements that provides the maximal system reliability is formulated. An algorithm based on the universal generating function method is suggested for determining the system reliability. This algorithm can handle cases where any number of multistate elements are allocated in the same position while some positions remain empty. It is shown that such an uneven allocation can provide greater system reliability than an even one. A genetic algorithm is used as an optimization tool to solve the optimal element allocation problem.
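The universal generating function (u-function) at the heart of such algorithms represents each multistate element as a mapping from performance levels to probabilities, combined pairwise through a structure function. A minimal sketch, where the binary state values and the `min` series operator are illustrative assumptions rather than the paper's retransmission model:

```python
from collections import defaultdict

def ugf_combine(u1, u2, op):
    """Combine two u-functions, each a dict mapping a performance
    level to its probability, through a structure function op."""
    out = defaultdict(float)
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            out[op(g1, g2)] += p1 * p2
    return dict(out)

# Hypothetical example: two binary-capacity elements in series
# (series capacity = min of the element capacities).
u1 = {1: 0.9, 0: 0.1}
u2 = {1: 0.8, 0: 0.2}
series = ugf_combine(u1, u2, min)  # {1: 0.72, 0: 0.28}
```

Parallel composition is obtained the same way by passing a different operator (e.g., `sum` for capacities), which is what makes the method "universal".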
A circular consecutive 2-out-of-(m, n):F system is considered. A recursive method for calculating the reliability of the system is presented. The method relies on a one-to-one correspondence between such systems and the class of 0–1 matrices having no two consecutive 1's in any row or column.
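For very small grids the definition can be checked by brute force, which makes a useful sanity check against the recursive method. The sketch below assumes circular adjacency in both rows and columns (the exact wrap-around convention is an assumption here):

```python
from itertools import product

def works(mat, m, n):
    """No two circularly adjacent failed components (1 = failed);
    checks each cell's right and down neighbors with wrap-around."""
    for i in range(m):
        for j in range(n):
            if mat[i][j] and (mat[i][(j + 1) % n] or mat[(i + 1) % m][j]):
                return False
    return True

def reliability(m, n, p):
    """Brute-force reliability for a grid of i.i.d. components with
    working probability p (exponential in m*n; illustrative only)."""
    q = 1.0 - p
    r = 0.0
    for bits in product([0, 1], repeat=m * n):  # 1 = failed
        mat = [bits[i * n:(i + 1) * n] for i in range(m)]
        if works(mat, m, n):
            f = sum(bits)
            r += (q ** f) * (p ** (m * n - f))
    return r
```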
This paper provides guidance for the planners of a test of any system that operates in sequential stages: only if the first stage functions properly (e.g., a vehicle's starter motor rotates adequately) can the second stage be activated (ignition system performs) and hence tested, followed by a third stage (engine starts and propels vehicle), with further stages such as wheels and steering, and finally brakes, eventually brought to test. Each sequential stage may fail to operate because its design, manufacture, or usage has faults or defects that may give rise to failure. Testing of all stages in the entire system in appropriate environments allows failures at the various stages to reveal defects, which are targets for removal. Fault activations in early stages thus postpone exposure of later stages to test. Only by allowing the entire system to be tested end-to-end, through all stages, and observing several total system successes can one be assured that the integrated system is relatively free of defects and is likely to perform well if fielded.
The methodology of the paper permits a test planner to hypothesize the numbers of (design) faults present in each stage, and the stagewise probability of a fault activation, leading to a system failure at that stage, given survival to that stage. If the test item fails at some stage, then rectification ("fix") of the design occurs, and the fault is (likely) removed. Failure at that stage is hence less likely on future tests, allowing later stages to be activated, tested, and fixed. So reliability grows.
Allowing many Test-and-Fix (TAF) cycles is obviously impractical. A stopping criterion proposed by E. A. Seglie, which suggests stopping the test as soon as an uninterrupted run of r (e.g., 5) consecutive system successes has been achieved, is studied quantitatively here. It is shown how to calculate the probability of eventual field success if the design is frozen and the system fielded after this sequential stopping criterion is met. The mean test length is also calculated. Many other calculations are possible, based on the formulas presented.
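In the simplified special case where every test has the same success probability p and no fixes occur (i.e., ignoring the paper's reliability growth between failures), the mean test length under this stopping rule has a classical closed form:

```python
def mean_test_length(p, r):
    """Expected number of trials until the first run of r consecutive
    successes, for i.i.d. trials with success probability 0 < p < 1.
    From the recursion E_i = 1 + p*E_{i+1} + (1-p)*E_0 with E_r = 0:
        E_0 = (1 - p**r) / ((1 - p) * p**r)
    """
    return (1 - p ** r) / ((1 - p) * p ** r)
```

With p = 0.9 and the r = 5 example, this gives roughly 6.9 expected tests; the paper's fault-removal model makes the true mean test length model-dependent.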
Since a multiple-failure event can defeat the safety provided by redundancy, accurate prediction of the multiple-failure probability is of great importance. However, statistical dependence between component failures can make the conventional system failure probability model unrealistic, since that model is valid only when component failures are independent. On the other hand, the scarcity of multiple-failure event data makes statistical estimation of the multiple-failure probability highly uncertain. General failure event data experienced by other systems may be the only data available for the system under study. To evaluate the target system in such a situation, an approach is required that can mine the right information from the operating experience of reference systems.
Based on the multiple-failure information contained in the load-strength interference relationship, this paper presents an approach to estimate the multiple-failure probability of a dependent k-out-of-n system from the available failure event data. The data may come from the operating experience of the system to be evaluated or from reference systems with similar load-strength interference relationships. As examples, the failure probabilities of emergency diesel generator groups are estimated from the multiple-failure event data of reference groups of different sizes. The estimates agree well with the operating records.
This paper proposes a new reliability model, the circular consecutive 2-out-of-(r,r)-from-(n,n):F model. In this model, the system consists of a square grid of side n (containing n² components) such that the system fails if and only if there is at least one square of side r that contains at least two failed components. For the i.i.d. case, an algorithm is given for computing the reliability of the circular system. The reliability function can be expressed via the number of 0–1 matrices in which no square of side r contains two or more 0's.
In this paper we calculate the reliability of the radar system in the Vessel Traffic Services Zatoka. The reliability and availability of the system are calculated on the basis of the reliability of the system components. The system is assumed to be a series-"m out of n" structure. We conclude that an appropriate evaluation of system reliability and availability is decisive in choosing the service support location.
This paper focuses on performance evaluation of a manufacturing system from the network analysis perspective. Due to failure, partial failure, or maintenance, the capacity of each machine is stochastic (i.e., multistate). Hence, a manufacturing system can be constructed as a stochastic-flow network, named a manufacturing network herein. Considering reworking actions and failure rates of machines, this paper assesses the probability that the manufacturing network can satisfy demand. Such a probability is defined as the system reliability. First, a graphical technique is proposed to decompose the manufacturing network into one general processing path and one reworking path. Subsequently, two algorithms are utilized for different network models to generate the minimal capacity vectors of machines that guarantee that the manufacturing network is able to produce sufficient products to fulfill the demand. The system reliability of the manufacturing network is then derived in terms of such capacity vectors.
Road vibrations cause fatigue failures in automotive exhaust systems. Evaluation of exhaust system reliability is investigated using a bivariate joint distribution model to account for dependence between exhaust components. Cumulative damages are derived and used as the random variables in the distribution model. In a case study with a light-duty truck exhaust system, the model parameters were estimated by the maximum likelihood method based on the marginal distributions. The dependence parameters between components were determined through bench tests of twelve (12) exhaust systems. A comparison of results demonstrated the influence of component dependence on the point estimate and statistical inference of the system reliability.
The authors of this paper present quantitative insight into a long-argued question in the hard disk drive (HDD) industry about the reliability effects of the number of head-disk interfaces (HDI). The competition between complexity and data transfer load is modeled from a system reliability perspective: competing components with load sharing. The product failure probability ratio and steady-state MTTF ratio between different data storage capacities are derived in terms of their head-disk interface number ratio and data transfer ratio. It is found that the reliability dominance of these two factors is conditional on the mathematical characteristics of their governing failure physics. A detailed discussion is conducted on the system reliability with head-disk interface failures governed by a Weibull life distribution and an Inverse Power Law stress-life relationship.
Technical Specifications define the limiting conditions of operation, maintenance, and surveillance test requirements for the various nuclear power plant systems in order to meet the safety requirements that fulfill regulatory criteria. These specifications also impact the economics of the plant. The regulatory approach addresses only the safety criteria, while plant operators would like to balance the cost criteria too. The attempt to optimize these conflicting requirements presents a case for multi-objective optimization. Evolutionary algorithms (EAs) mimic natural evolutionary principles to constitute search and optimization procedures. Genetic algorithms are a particular class of EAs that use techniques inspired by evolutionary biology, such as inheritance, mutation, natural selection, and recombination (crossover). In this paper we have combined the plant insights obtained through a detailed Probabilistic Safety Assessment with the genetic algorithm approach for multi-objective optimization of surveillance test intervals. The optimization of the Technical Specifications of three front-line systems is performed using the genetic algorithm approach. The selection of these systems is based on their importance to the mitigation of possible accident sequences which contribute significantly to potential core damage of the nuclear power plant.
The estimation of the reliability of a series system of k (≥ 2) independent components, where the lifetime of each component is exponentially distributed, is considered. First, sufficient conditions for obtaining improvements over scale-equivariant estimators in the one-parameter model are derived. As a consequence, we derive estimators that improve upon the maximum likelihood estimator (MLE), an analog of the uniformly minimum variance unbiased estimator (UMVUE), and the best scale-equivariant estimator (BSEE). Bayes and generalized Bayes estimators are also obtained and are shown to be admissible. We also consider the case where the lifetimes follow two-parameter exponential distributions and derive the UMVUE of the system reliability. Further, the MLE and the modified MLE (MMLE) are discussed for this case. Finally, the risks of these estimators are compared numerically for the case k = 2.
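For intuition, in the one-parameter model with complete (uncensored) samples the MLE of the system reliability follows from the invariance of the MLE and the series structure, since a series system survives the mission time t iff every component does, giving R(t) = exp(-t Σᵢ λᵢ). A small sketch (the sample data in the usage line are hypothetical):

```python
import math

def mle_series_reliability(samples, t):
    """MLE of series-system reliability at mission time t from complete
    exponential lifetime samples (one list per component). By MLE
    invariance, lambda_hat_i = n_i / sum(lifetimes_i), and
        R_hat(t) = exp(-t * sum_i lambda_hat_i).
    """
    total_rate = sum(len(x) / sum(x) for x in samples)
    return math.exp(-t * total_rate)

# Hypothetical data for k = 2 components:
r_hat = mle_series_reliability([[1.0, 1.0], [2.0, 2.0]], t=1.0)  # exp(-1.5)
```

The paper's point is precisely that this plug-in MLE can be improved upon; the sketch only fixes the baseline being improved.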
A three-dimensional consecutive (r1, r2, r3)-out-of-(m1, m2, m3):F system was introduced by Akiba et al. [J. Qual. Mainten. Eng. 11(3) (2005) 254–266], who computed upper and lower bounds on the reliability of this system. Habib et al. [Appl. Math. Model. 34 (2010) 531–538] introduced a conditional type of two-dimensional consecutive-(r, s)-out-of-(m, n):F system, in which the number of failed components in the system at the moment of system failure cannot exceed 2rs. We extend this concept to three dimensions and introduce a conditional three-dimensional consecutive (s, s, s)-out-of-(s, s, m):F system. It is an arrangement of ms² components in a cuboid, and it fails if it contains either a cube of failed components of size (s, s, s) or 2s³ failed components. We derive an expression for the signature of this system and also obtain its reliability using the system signature.
Effectively assessing and improving production systems in different scenarios is critical for the management of production processes. This paper proposes a new approach that takes process improvement into consideration when evaluating system reliability. The system reliability is defined as the probability that the volume of input is processed successfully through the individual workstations within a production system. This evaluation approach models the production system as a confidence-based multistate production network (CP-MPN). A discrete-time Markov chain is adopted to characterize the CP-MPN, and a target success rate (TSR) is set as a reference to drive the process improvement of each workstation. Our results indicate that, when the TSR is improved, the CP-MPN can meet both the given demand and the confidence level with less input. This curtailed input volume diminishes the loading of the workstations, so the system reliability of the CP-MPN is enhanced. Two examples (a tile production system and a printed circuit board (PCB) production system) are presented to demonstrate the confidence-based reliability evaluation approach and support our results. This paper provides not only a practicable method to evaluate system reliability from the process improvement perspective but also a workable indicator to guide production managers in improving the quality of workstations.
The relationship between the reliability probabilities of the components and of the system is hard to obtain. If this relationship were available, the reliability of the system could be calculated from the reliability structures of the components. The common way to express this relationship is the linear correlation index, which captures only the linear correlation between component failures rather than the relationship between highly nonlinear functions. In order to describe this relationship accurately and calculate the system reliability from the component reliability structures, a Uniform Design (UD)-Saddlepoint Approximation (SA)-based system reliability analysis method is proposed. The method is decomposed into three simple steps: (1) calculating the weight coefficient that represents the contribution rate of each component to the system reliability; (2) approximating the Cumulant Generating Function (CGF) of each component; (3) calculating the CGF of the system and approximating the system reliability with the SA method. The weight coefficient of each component is derived from the UD method, and a variable interval selection method is developed to decrease the required number of samples and increase the accuracy of the weight coefficients. The First-Order Saddlepoint Approximation (FOSA) method or the Mean-Value First-Order Saddlepoint Approximation (MVFOSA) method is used to obtain the CGF of a component performance function. The CGF of the system is then obtained by the weighted addition law, combining the CGFs of the component performance functions with the weight coefficients. Finally, the system reliability is approximated by the SA method. Four examples are employed to demonstrate that the new method outperforms other methods for system reliability analysis in terms of efficiency and accuracy.
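Step (3) hinges on turning a CGF into a tail probability. A generic sketch of the widely used Lugannani-Rice form of the saddlepoint approximation (the unit-rate exponential CGF in the usage note is an illustrative assumption, not one of the paper's examples):

```python
import math

def lugannani_rice_tail(K, K1, K2, x, s0=0.5):
    """Lugannani-Rice saddlepoint approximation of P(X > x), given the
    CGF K and its first two derivatives K1, K2. The saddlepoint solves
    K1(s) = x (Newton's method here); the formula degenerates when the
    saddlepoint is at 0 (x equal to the mean of X)."""
    s = s0
    for _ in range(50):                      # Newton iterations
        s -= (K1(s) - x) / K2(s)
    w = math.copysign(math.sqrt(2.0 * (s * x - K(s))), s)
    u = s * math.sqrt(K2(s))
    pdf = math.exp(-0.5 * w * w) / math.sqrt(2.0 * math.pi)   # phi(w)
    cdf = 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))          # Phi(w)
    return 1.0 - cdf + pdf * (1.0 / u - 1.0 / w)
```

For a unit-rate exponential, K(s) = -log(1 - s), K'(s) = 1/(1-s), K''(s) = 1/(1-s)², and the approximation of P(X > 3) differs from the exact e⁻³ ≈ 0.0498 by only a few parts in 10⁴.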
This paper develops a Monte Carlo Simulation (MCS) approach to estimate the performance of a multistate manufacturing network (MMN) with joint buffers. In the MMN, products may be produced by two production lines with the same function to satisfy demand. A performance index, the system reliability, is applied to estimate the probability that all workstations provide sufficient capacity to satisfy a specified demand and that the buffers possess adequate storage. Joint buffers with finite storage are considered in the MMN; that is, extra work-in-process output from different production lines can be stored in the same buffer. An MCS algorithm is proposed to generate the capacity states and to check the storage usage of the buffers in order to evaluate whether the demand can be satisfied. The system reliability of the MMN is estimated through this MCS algorithm. In addition, the performability for demand pairs assigned to the production lines can be obtained. A practical example of a touch panel manufacturing system is used to demonstrate the applicability of the MCS approach. Experimental results show that the system reliability is overestimated when buffer storage is assumed to be infinite. Moreover, a joint buffer makes an MMN more reliable than buffers installed separately in different production lines.
A combined m-consecutive-k-out-of-n and consecutive-kb-out-of-n:F system consists of n components ordered in a line such that the system fails iff there exist at least kb consecutive failed components, or at least m nonoverlapping runs of k consecutive failed components, where kb < mk. This system was introduced by Mohan et al. [P. Mohan, M. Agrawal and K. Sen, Combined m-consecutive-k-out-of-n: F and consecutive-kc-out-of-n: F systems, IEEE Trans. Reliab. 58 (2009) 328–337], who proposed an algorithm to evaluate the system reliability using the Graphical Evaluation and Review Technique (GERT) in the independent case. In this paper, we propose a new formula for the reliability of this system with nonhomogeneous Markov-dependent components. For such a system, we derive closed-form formulas for the marginal reliability importance measure of a single component and the joint reliability importance measure of two or more components, using probability generating function (pgf) and conditional pgf methods.
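For independent components (the baseline case of Mohan et al., rather than the Markov-dependent extension), the combined failure condition can be coded directly; a brute-force enumeration sketch:

```python
from itertools import product

def fails(state, m, k, kb):
    """Failure condition of the combined system (0 = failed, 1 = working):
    at least kb consecutive failures, or at least m non-overlapping runs
    of k consecutive failures."""
    runs_of_k = 0
    run = 0
    for up in tuple(state) + (1,):  # sentinel closes the last failure run
        if up:
            if run >= kb:
                return True
            runs_of_k += run // k   # non-overlapping runs of length k
            run = 0
        else:
            run += 1
    return runs_of_k >= m

def reliability(p, m, k, kb):
    """Exact reliability for independent components by enumerating all
    2**n states (exponential in n; illustrative only)."""
    r = 0.0
    for state in product([0, 1], repeat=len(p)):
        prob = 1.0
        for s, pi in zip(state, p):
            prob *= pi if s else 1.0 - pi
        if not fails(state, m, k, kb):
            r += prob
    return r
```

Setting m = 1 with a large kb recovers the ordinary consecutive-k-out-of-n:F system, which is a convenient sanity check.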