
  • Article (No Access)

    INCREMENTAL CHECKPOINT SCHEMES FOR WEIBULL FAILURE DISTRIBUTION

    The incremental checkpoint mechanism was introduced to reduce the high checkpoint overhead of regular (full) checkpointing, especially in high-performance computing systems. To gain an extra advantage from the incremental checkpoint technique, we propose an optimal checkpoint frequency function that globally minimizes the expected wasted time of the incremental checkpoint mechanism. The re-computing time coefficient used to approximate the re-computing time is also derived. Moreover, to reduce the complexity of recovery, full checkpoints are performed from time to time. In this paper we present an approach to evaluate the appropriate constant number of incremental checkpoints between two consecutive full checkpoints. Although the number of incremental checkpoints is constant, the checkpoint interval derived from the proposed model varies with the failure rate of the system. The checkpoint time is illustrated for the case of a Weibull distribution and can easily be simplified to the exponential case.
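
    A minimal sketch of the checkpoint-interval optimization in the memoryless (exponential) special case the abstract mentions, not the paper's Weibull model. The failure rate lam, checkpoint cost C and restart cost R are illustrative placeholders; the numeric optimum is cross-checked against the classic Young first-order approximation sqrt(2C/lam).

        import numpy as np
        from scipy.optimize import minimize_scalar

        lam, C, R = 1e-4, 30.0, 60.0   # failures/s, checkpoint cost (s), restart cost (s); illustrative

        def expected_time_per_unit_work(tau):
            # Renewal argument for exponential failures: expected wall-clock
            # time to complete one segment of tau seconds of work plus a
            # checkpoint of cost C, with restart overhead R after each
            # failure, divided by the useful work tau.
            return (1.0 / lam + R) * (np.exp(lam * (tau + C)) - 1.0) / tau

        res = minimize_scalar(expected_time_per_unit_work,
                              bounds=(1.0, 1e5), method="bounded")
        print(f"numeric optimum:     tau* ~ {res.x:,.0f} s")
        print(f"Young approximation: tau* ~ {np.sqrt(2 * C / lam):,.0f} s")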

  • Article (No Access)

    HOW LONG COULD/SHOULD BE THE RESTORATION TIME FOR HIGH AVAILABILITY?

    A simple, physically meaningful predictive model is developed for assessing how long the repair (restoration) time may acceptably be while keeping the system's availability sufficiently high. The random nature of the restoration time is taken into account. High availability can be achieved by making the ratio of the restoration intensity to the failure rate large once a malfunction is detected. It is shown how this approach can be effectively quantified. The general concept is illustrated by a numerical example.
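
    A back-of-envelope illustration of the ratio the abstract refers to (not the paper's model): for an alternating up/down process with failure rate lam and restoration rate mu, the steady-state availability is A = mu/(lam + mu), so the required restoration-to-failure rate ratio is A/(1 - A).

        # Required ratio k = mu/lam for a target steady-state availability A,
        # from A = mu / (lam + mu)  =>  k = A / (1 - A).
        for A in (0.99, 0.999, 0.99999):
            k = A / (1.0 - A)
            print(f"A = {A}: restorations must run ~{k:,.0f}x faster than failures")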

  • Article (No Access)

    Statistics-related and reliability-physics-related failure processes in electronics devices and products

    The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product, the bathtub curve, reflects the combined contribution of statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second, associated with material aging and degradation, leads to an increasing failure rate. An attempt is made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve. It is assumed that the products of interest underwent burn-in testing, so the obtained bathtub curve does not contain the infant-mortality portion. It is also assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability-physics evaluations when there is a need to better understand the respective roles of the statistics-related and reliability-physics-related irreversible random processes. Future work should investigate how the powerful and flexible methods of statistical mechanics can be employed, in addition to reliability-physics techniques, to model the operational reliability of electronic and photonic products.
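
    A schematic sketch of the deduction step described above, with a synthetic bathtub curve standing in for field data; the Rayleigh (statistical) hazard rate is h_stat(t) = t/sigma^2, and sigma and the bathtub ordinates below are invented for illustration only.

        import numpy as np

        t = np.linspace(0.1, 10.0, 50)       # time, arbitrary units
        sigma = 4.0                          # illustrative Rayleigh parameter
        h_bathtub = 0.05 + 0.02 * t**1.5     # hypothetical observed ordinates (no infant-mortality part)
        h_stat = t / sigma**2                # statistics-related (Rayleigh) failure rate
        h_physics = np.clip(h_bathtub - h_stat, 0.0, None)  # aging-related remainder
        print(np.round(h_physics[:5], 4))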

  • Article (No Access)

    Reliability Analysis of High Gain Integrated DC–DC Topologies for Xenon Lamp Applications

    Emerging applications that incorporate switched-mode power supplies demand reliable, compact and highly efficient dc–dc converters. The persistent use of dc–dc converters in various applications makes their reliability a significant concern. Hence, this paper deals with a family of non-isolated high-gain integrated dc–dc converter topologies derived from a quadratic converter. The reliability analysis is carried out using the electronic equipment reliability handbook MIL-HDBK-217F. For the first time, reliability prediction is performed based on the working environment of the power electronic equipment. We develop the reliability prediction for converters used in lighting applications such as automotive headlamps and aircraft landing lights. The mean time to failure for both environments is calculated. The reliability comparison is carried out for the proposed topologies and the most reliable converter is chosen. All the converter topologies are also simulated using the nL5 simulator to confirm the theoretical results. Finally, a 40 W laboratory prototype with an input voltage of 12 V is implemented for the most reliable topology to validate the steady-state analysis.
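
    A hedged sketch of the parts-count style of calculation MIL-HDBK-217F supports: the system failure rate is the sum of part failure rates, each scaled by an environment factor pi_E, and MTTF = 1/lambda_system. Every number below is a made-up placeholder, not a handbook value, and the part list is hypothetical.

        base_rates = {"MOSFET": 0.012, "diode": 0.006, "inductor": 0.003,
                      "capacitor": 0.008, "controller IC": 0.020}   # failures per 1e6 h (invented)
        pi_E = {"automotive headlamp (ground, mobile)": 4.0,        # hypothetical environment factor
                "aircraft landing light (airborne)": 8.0}           # hypothetical environment factor

        for env, factor in pi_E.items():
            lam = sum(base_rates.values()) * factor   # system failures per 1e6 hours
            print(f"{env}: MTTF ~ {1e6 / lam:,.0f} hours")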

  • Article (No Access)

    A SURVEY ON DISCRETE LIFETIME DISTRIBUTIONS

    This paper presents a comprehensive survey of discrete probability distributions used in reliability for modeling the discrete lifetimes of nonrepairable systems. The basic properties of each model are given. A classification into two families is proposed, highlighting the interest of using a Pólya urn scheme. The quality of the estimation of model parameters is numerically assessed. Criteria are given for selecting, among the presented distributions, those most useful for applications.
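
    One concrete instance of the machinery such a survey covers: for a distribution on k = 0, 1, 2, ... the hazard is h(k) = P(X = k)/P(X >= k), illustrated here with the type-I discrete Weibull of Nakagawa and Osaki, whose survival function is P(X >= k) = q^(k^beta); the parameter values are illustrative.

        import numpy as np

        q, beta = 0.9, 1.5                      # illustrative parameters
        k = np.arange(0, 15)
        S = q ** (k.astype(float) ** beta)      # P(X >= k)
        pmf = S - q ** ((k + 1.0) ** beta)      # P(X = k)
        hazard = pmf / S                        # discrete hazard h(k)
        print(np.round(hazard, 4))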

  • Article (No Access)

    MEAN RESIDUAL LIFE AND OTHER PROPERTIES OF WEIBULL RELATED BATHTUB SHAPE FAILURE RATE DISTRIBUTIONS

    The two-parameter Weibull distribution is widely used in reliability analysis, but because of its monotonic ageing behaviour its applicability is hampered in certain reliability situations. Several generalizations and extensions of the Weibull model have been proposed in the literature to overcome this limitation, but their properties have not yet been described in a unified manner. In this paper, graphical displays of the mean residual life curves of several families of Weibull-related life distributions are given together with their corresponding failure rate functions. The relationship between these two functions is visually demonstrated. We focus our attention on the Weibull-related families that have bathtub or modified bathtub shaped failure rates. Important reliability characteristics of these families, such as burn-in, change point and flatness of the bathtub, are examined. Model selection and parameter estimation are also discussed.
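
    The two curves the paper plots are tied together by h(t) = f(t)/S(t) and m(t) = (1/S(t)) * int_t^inf S(u) du. A minimal numeric sketch for the plain two-parameter Weibull (the surveyed families generalize it; shape and scale are illustrative):

        import numpy as np
        from scipy.integrate import quad

        shape, scale = 1.8, 10.0                      # illustrative parameters
        S = lambda t: np.exp(-(t / scale) ** shape)   # survival function
        h = lambda t: (shape / scale) * (t / scale) ** (shape - 1)  # failure rate

        for t in (0.0, 5.0, 10.0, 20.0):
            mrl = quad(S, t, np.inf)[0] / S(t)        # mean residual life
            print(f"t = {t:5.1f}:  h(t) = {h(t):.4f}  MRL = {mrl:.3f}")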

  • Article (No Access)

    SOFTWARE FAILURE INTENSITY, RELIABILITY AND OPTIMAL STOPPING TIME INCORPORATING REPAIR POLICIES

    Reliability of a software application, its failure intensity and the residual number of faults are three important metrics that provide a quantitative assessment of the failure characteristics of an application. Ultimately, it is also necessary, based on these metrics, to determine an optimal release time at which costs justify the stop-test decision. Typically, one of the many stochastic models known as software reliability growth models (SRGMs) is used to characterize the failure behavior of an application and provide estimates of the failure intensity, residual number of faults, reliability, and optimal release time and cost. To ensure analytical tractability, SRGMs assume instantaneous repair, so the estimates of these metrics obtained using SRGMs tend to be optimistic. In practice, repair activity consumes a non-trivial amount of time and resources, and repair may be conducted according to many policies that reflect the schedule and budget constraints of a project. The few efforts that incorporate repair into SRGMs are restrictive, since they consider only some SRGMs, model the repair process using a constant repair rate, and provide an estimate of only the residual number of faults; they do not address the issue of estimating the failure intensity, reliability, and optimal release time and cost in the presence of repair. In this paper we present a generic framework based on the rate-based simulation technique to incorporate repair policies into the finite-failure non-homogeneous Poisson process (NHPP) class of SRGMs. We describe a methodology to compute the failure intensity and reliability in the presence of repair, and apply it to four popular finite-failure NHPP models. We also present an economic cost model which considers explicit repair in providing estimates of optimal release time and cost. We illustrate the potential of the framework to quantify the impact of the parameters of the repair policies on the above metrics using examples. Through these examples we discuss how the framework could be used to guide the allocation of resources to achieve the desired reliability target in a cost-effective manner.
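
    A minimal sketch of the rate-based simulation idea for one finite-failure NHPP SRGM, here Goel-Okumoto with mean value function m(t) = a(1 - e^(-bt)), simulated by Lewis-Shedler thinning; a, b and the horizon are illustrative, and repair policies are omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        a, b, horizon = 100.0, 0.05, 200.0
        lam = lambda t: a * b * np.exp(-b * t)     # failure intensity
        lam_max = lam(0.0)                         # intensity is decreasing, so max is at t = 0

        t, failures = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)    # candidate event from dominating process
            if t > horizon:
                break
            if rng.random() < lam(t) / lam_max:    # thinning: accept with prob lam(t)/lam_max
                failures.append(t)

        m = lambda t: a * (1.0 - np.exp(-b * t))   # mean value function
        x = 10.0                                   # mission time
        print(f"simulated failures: {len(failures)} (expected {m(horizon):.1f})")
        print(f"R({x} | {horizon}) = {np.exp(-(m(horizon + x) - m(horizon))):.4f}")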

  • Article (No Access)

    THE NEW APPROACH FOR REGRESSION MODELS ANALYSIS

    A new approach to regression model analysis is presented. It is based on the Generalized Law of Reliability and on a method that transforms initial lifetime data to fit an accelerated-tests model. An accelerated-tests model and regression models of both lung cancer and leukemia trials are considered.
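
    The abstract does not spell the transform out; as a generic illustration of the accelerated-tests idea it alludes to, an accelerated-failure-time model takes T = T0 / exp(beta * x), so lifetimes rescaled by exp(beta * x) collapse onto one baseline law (beta, the covariate and the baseline below are invented):

        import numpy as np

        rng = np.random.default_rng(0)
        beta = 0.7
        x = rng.integers(0, 2, size=2000)           # binary covariate, e.g. treatment arm
        T0 = 10.0 * rng.weibull(1.5, size=2000)     # baseline lifetimes
        T = T0 / np.exp(beta * x)                   # observed, covariate-accelerated lifetimes
        T_back = T * np.exp(beta * x)               # transformed back to the baseline scale
        print(T[x == 1].mean(), T[x == 0].mean())           # groups differ before rescaling
        print(T_back[x == 1].mean(), T_back[x == 0].mean()) # and agree after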

  • Article (No Access)

    CONFIDENCE INTERVAL FOR THE CRITICAL TIME OF INVERSE GAUSSIAN FAILURE RATE

    The critical time is the point at which the failure rate attains its maximum, after which it decreases. For the inverse Gaussian distribution, the critical time always exists and can be used as a guide for conducting burn-in. In this paper, we use two different reparametrization schemes to establish a monotonicity property of the critical time. This property is then used to obtain exact confidence intervals for the critical time when either one of the parameters of the inverse Gaussian distribution is known. When both parameters are unknown, we construct an analytically exact confidence interval for the critical time that guarantees the desired coverage probability. An approximate confidence interval, motivated by the conservative nature of the above bound, is also proposed. A Monte Carlo simulation is conducted to investigate the performance of the two confidence intervals in terms of their coverage probability and average width. Finally, a numerical example on repair time data is provided to illustrate the procedure.
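
    A quick numeric companion to the definition above: the inverse Gaussian hazard h(t) = f(t)/S(t) rises to a maximum and then falls, and the critical time is the location of that maximum (the shape parameter below is illustrative):

        import numpy as np
        from scipy.stats import invgauss
        from scipy.optimize import minimize_scalar

        mu = 0.5                                   # scipy's invgauss shape parameter, illustrative
        h = lambda t: invgauss.pdf(t, mu) / invgauss.sf(t, mu)   # hazard rate

        res = minimize_scalar(lambda t: -h(t), bounds=(1e-6, 10.0), method="bounded")
        print(f"critical time ~ {res.x:.4f}, peak hazard ~ {h(res.x):.4f}")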

  • Article (No Access)

    A FINITE RANGE DISCRETE LIFE DISTRIBUTION

    Discrete life data arise in many practical situations, and even for continuous data we may find cases where the data are presented in grouped form, so that a discrete model can be used. In this paper, we propose a new two-parameter discrete lifetime distribution for modeling this type of data. The distribution under consideration has some interesting ageing properties; in particular, it is able to describe a bathtub-shaped failure rate as well as an upside-down bathtub-shaped mean residual life. We use this discrete distribution to model Halley's mortality data and find that it fits reasonably well. The proposed model, though quite simple in appearance, is flexible and potentially useful for describing various types of failure-time data. Some analytical results are also presented.
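
    The paper's two-parameter family is not reproduced here; as a generic sketch, both ageing quantities it discusses are computable for any pmf p[k] on a finite range: h(k) = p[k]/P(X >= k) and m(k) = E[X - k | X >= k]. The pmf below is invented to give a bathtub-shaped hazard.

        import numpy as np

        p = np.array([0.20, 0.05, 0.03, 0.02, 0.03, 0.07, 0.15, 0.45])  # illustrative pmf on {0,...,7}
        S = np.concatenate(([1.0], 1.0 - np.cumsum(p)))   # S[k] = P(X >= k)
        hazard = p / S[:-1]                               # h(k) = p[k] / P(X >= k)
        mrl = np.array([S[k + 1:].sum() / S[k] for k in range(len(p))])  # m(k)
        print(np.round(hazard, 3))   # bathtub-shaped
        print(np.round(mrl, 3))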

  • Chapter (No Access)

    RELIABILITY AND SURVIVAL IN FINANCIAL RISK

    The aim of this paper is to create a platform for developing an interface between the mathematical theory of reliability and the mathematics of finance. This is possible because there exists an isomorphic relationship between the survival function of reliability and the asset pricing formula of fixed-income investments. This connection suggests that the exponentiation formula of reliability theory and survival analysis be reinterpreted from a more encompassing perspective, namely, as the law of a diminishing resource. The isomorphism also helps us to characterize the asset pricing formula in non-parametric classes of functions and to obtain its crossing properties, which in turn provide bounds and inequalities on investment horizons. More generally, the isomorphism enables us to expand the scope of mathematical finance and of mathematical reliability by importing ideas and techniques from one discipline to the other. As an example of this interchange, we consider interest rate functions that are determined up to an unknown constant, so that the set-up results in a Bayesian formulation. We may also model interest rates as “shot-noise processes”, often used in reliability, and conversely, the failure rate function as a Lévy process, popular in mathematical finance. A consideration of the shot-noise process for modelling interest rates appears to be new.
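
    The isomorphism reduces to a single formula: the same exponentiation that maps a failure rate to a survival function maps a (deterministic) spot interest rate to a zero-coupon bond price, exp(-int_0^t rate(u) du). A toy sketch with invented rate curves:

        import numpy as np
        from scipy.integrate import quad

        # "Law of a diminishing resource": one function, two readings.
        diminish = lambda rate, t: np.exp(-quad(rate, 0.0, t)[0])

        hazard = lambda u: 0.02 + 0.01 * u               # increasing failure rate
        short_rate = lambda u: 0.03 + 0.005 * np.sin(u)  # toy interest-rate curve

        print("S(5) =", round(diminish(hazard, 5.0), 4))      # reliability: P(T > 5)
        print("P(5) =", round(diminish(short_rate, 5.0), 4))  # price of a 5-year zero-coupon bond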

  • Chapter (No Access)

    SIGNATURE-RELATED RESULTS ON SYSTEM LIFETIMES

    The performance (lifetime, failure rate, etc.) of a coherent system in iid components is completely determined by its “signature” and the common distribution of its components. A system's signature, defined as a vector whose ith element is the probability that the system fails upon the ith component failure, was introduced by Samaniego (1985) as a tool for indexing systems in iid components and studying properties of their lifetimes. In this paper, several new applications of the signature concept are developed for the broad class of mixed systems, that is, for stochastic mixtures of coherent systems in iid components. Kochar, Mukerjee and Samaniego (1999) established sufficient conditions on the signatures of two competing systems for the corresponding system lifetimes to be stochastically ordered, hazard-rate ordered or likelihood-ratio ordered, respectively. Partial results are obtained on the necessity of these conditions, but all are shown not to be necessary in general. Necessary and sufficient conditions (NASCs) on signature vectors for each of the three order relations above to hold are then discussed. Examples are given showing that the NASCs can also lead to information about the precise number and locations of crossings of the systems' survival functions or failure rates in (0, ∞) and about intervals over which the likelihood ratio is monotone. New results are established relating the asymptotic behavior of a system's failure rate, and the rate of convergence to zero of a system's survival function, to the signature of the system.
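
    The definition can be checked by brute force on a small example (not one from the paper): for component 1 in series with the parallel pair {2, 3}, all failure orders are equally likely under iid lifetimes, and s_i is the fraction of orders at whose i-th failure the system dies.

        from itertools import permutations
        from math import factorial

        def works(up):                  # structure function: 1 AND (2 OR 3)
            return 1 in up and (2 in up or 3 in up)

        n = 3
        counts = [0] * n
        for order in permutations(range(1, n + 1)):   # equally likely failure orders
            up = set(range(1, n + 1))
            for i, comp in enumerate(order):
                up.discard(comp)                      # component comp fails
                if not works(up):
                    counts[i] += 1                    # system died at the (i+1)-th failure
                    break
        print([c / factorial(n) for c in counts])     # signature: [1/3, 2/3, 0]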

  • Chapter (No Access)

    Estimation of Hazard Functions with Shape Restrictions Using Regression Splines

    In the estimation of distributions from time-to-event data, it is often natural to impose shape and smoothness constraints on the hazard function. Systems that fail because of wear-out might be assumed to have a monotone, or perhaps monotone-convex, hazard, while organ transplant failures are often assumed to have a convex or bathtub-shaped hazard function. In this paper we present estimates that maximize the likelihood over a set of shape-restricted regression splines. Right censoring is handled as a simple extension. The methods are applied to real and simulated data sets to illustrate their properties and to compare them with existing nonparametric estimators.
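
    A simplified cousin of such an estimator, with a piecewise-constant rather than spline hazard: maximum likelihood for a monotone increasing hazard under right censoring, with monotonicity enforced by construction through nonnegative jumps. The data are simulated, and the grid size and parameters are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        t = rng.weibull(2.0, 300)                    # true lifetimes have an increasing hazard
        c = rng.uniform(0.5, 2.0, 300)               # censoring times
        obs, delta = np.minimum(t, c), (t <= c).astype(float)

        grid = np.linspace(0.0, obs.max(), 11)       # 10 hazard bins
        width = np.diff(grid)
        k = np.clip(np.searchsorted(grid, obs, side="right") - 1, 0, 9)  # bin of each obs

        def negloglik(jumps):
            h = np.cumsum(jumps)                     # nondecreasing hazard per bin
            Hgrid = np.concatenate(([0.0], np.cumsum(h * width)))  # cumulative hazard at grid points
            H = Hgrid[k] + h[k] * (obs - grid[k])    # cumulative hazard at each observation
            return -(delta * np.log(h[k] + 1e-12) - H).sum()

        res = minimize(negloglik, x0=np.full(10, 0.1),
                       bounds=[(0.0, None)] * 10, method="L-BFGS-B")
        print(np.round(np.cumsum(res.x), 3))         # fitted increasing hazard per bin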