The 2006 Asian International Workshop on Advanced Reliability Modeling (AIWARM) is the second in a series of biennial workshops for the dissemination of state-of-the-art research and the presentation of practice in reliability and maintenance engineering in Asia. It brings together researchers and engineers not only from Asian countries but from all over the world to discuss the state of research and practice in dealing with both reliability issues at the system design phase and maintenance issues at the system operation phase. The theme of AIWARM 2006 is “reliability testing and improvement”. The contributions in this volume cover all the main topics in reliability and maintenance engineering, providing an in-depth presentation of theory and practice.
https://doi.org/10.1142/9789812773760_fmatter
PREFACE.
CONTENTS.
https://doi.org/10.1142/9789812773760_0001
Burn-in is a widely used method to improve the quality of products or systems after they have been produced. In this paper, optimal burn-in procedures are investigated for a system with two types of failures (i.e., minor and catastrophic failures). A new system surviving burn-in time b is put into field operation under a warranty policy by which the manufacturer agrees to provide a replacement for any system that fails to achieve a lifetime of at least w. Upper bounds for the optimal burn-in time minimizing the total expected warranty cost are obtained under a general assumption on the shape of the failure rate function that includes the bathtub-shaped failure rate as a special case.
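The following sketch is not taken from the paper; it only illustrates, under an assumed bathtub-type hazard and hypothetical cost values, how an expected warranty cost can be scanned over candidate burn-in times b.

```python
import math

# Illustrative sketch (not the paper's model): expected warranty cost as a
# function of burn-in time b for a system with a bathtub-type failure rate.
# The hazard r(t), the warranty length w and all cost values below are
# hypothetical assumptions made only for this example.

def hazard(t):
    # simple bathtub shape: decreasing "infant mortality" + constant + wear-out
    return 0.5 * math.exp(-2.0 * t) + 0.05 + 0.002 * t ** 2

def cum_hazard(a, b, steps=2000):
    # numerical integral of the hazard over [a, b] (trapezoidal rule)
    h = (b - a) / steps
    s = 0.5 * (hazard(a) + hazard(b))
    s += sum(hazard(a + i * h) for i in range(1, steps))
    return s * h

def expected_cost(b, w=1.0, c_burnin=1.0, c_warranty=50.0):
    # cost of burn-in of length b plus replacement cost times the probability
    # that a unit surviving burn-in fails within the warranty period w
    p_fail_in_warranty = 1.0 - math.exp(-cum_hazard(b, b + w))
    return c_burnin * b + c_warranty * p_fail_in_warranty

if __name__ == "__main__":
    grid = [i * 0.05 for i in range(0, 61)]
    best = min(grid, key=expected_cost)
    print(f"approx. optimal burn-in time: {best:.2f}, cost: {expected_cost(best):.3f}")
```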
https://doi.org/10.1142/9789812773760_0002
This paper proposes an efficient approach to enumerate the subsets of minimal cutsets (SCGs) of a communication network with heterogeneous link capacities in order to evaluate capacity-related reliability (CRR), i.e., the probability that the network provides at least a minimum carrying capacity Wmin between an (s, t) pair of nodes; in this paper we refer to it as CRNR. The efficient methodologies in vogue use a priori information on either the path sets or the cutsets of the network that can block or allow a required amount of flow through the network; this constitutes the first step of CRNR evaluation. In the second step, the disjoint sets of the qualified cutsets or path sets are obtained using an appropriate SDP approach. Efficient approaches exist for the second step of obtaining the mutually disjoint terms; the first step, however, is still an open area of research and has attracted much attention in the recent past. In the present paper, we devise a methodology to solve the first step of CRNR evaluation using minimal cutsets and propose an efficient technique to enumerate irredundant SCGs that avoids the redundancy-check overheads from which most existing algorithms suffer. The technique is applied to several complex networks and experimental results are obtained. A comparison with recent algorithms, in terms of the number of subsets generated and the number of external/internal redundant subsets removed in obtaining irredundant SCGs, shows that the proposed approach requires less computational effort and thus performs better than existing approaches.
https://doi.org/10.1142/9789812773760_0003
In practice, there are usually limited test data available for reliability analysis in the development phase of a manufacturing facility, and the obtained data are usually both failure and time truncated. This study concerns the development of a reliability prediction and test model for the early acquisition stage of equipment under development. Predicting system reliability normally requires a large amount of failure data from the test phases, and testing takes a long time. To address this problem, we develop a Weibull model for both failure- and time-truncated data for reliability analysis and prediction. The model focuses on analyzing the reliability of the system under development and predicting system reliability from the obtained test data. A computer program was developed for the computations, and results of a sample example are shown for both two-state and multi-state reliability.
https://doi.org/10.1142/9789812773760_0004
The objective of this study is to provide an integrated framework for the effective implementation of manufacturing facility design based on optimizing the system configuration, RAM design, and the system life cycle cost. The proposed framework consists of four steps. In Step 1, we set up the initial system configuration to meet the required production rate. In Step 2, we propose an integrated model of system reliability, availability and maintainability (RAM). In Step 3, we develop a cost model of the system life cycle based on the system configuration and system availability. In Step 4, we develop a simulation model for finding the optimal system that meets the production requirement, taking into account the system configuration, system RAM, and life cycle cost. We develop computer programs and apply them to generate results for sample regimes of manufacturing facility design. We expect the new framework to provide a workable basis for system performance evaluation in both the design and operational stages. Furthermore, we point out that the framework proposed in this paper can easily be extended to various problems with a variety of outputs.
https://doi.org/10.1142/9789812773760_0005
As the Internet has developed, the demand for improved reliability of the Internet has increased. A recent problem is illegal access that intentionally attacks a server. In particular, DoS (Denial of Service) attacks, which send a huge number of packets, have become a serious problem. In order to cope with this problem, IDS (Intrusion Detection System) technology has been widely used. This paper formulates a stochastic model for a server system subject to illegal access, where the server has an IDS function. The mean time and the expected number of monitoring actions until the server system becomes faulty are derived. Further, an optimal policy which minimizes the expected cost is discussed. Finally, a numerical example is given.
https://doi.org/10.1142/9789812773760_0006
This paper discusses reliability and sensitivity analysis of a repairable system with warm standbys and imperfect coverage. Breakdown times of primary and standby units, as well as repair times of broken-down units, are assumed to be exponentially distributed. The system fails when the number of primary units is less than K or when any one of the broken-down units is not covered. We study the impact of the coverage factor c and other system parameters on the reliability function and on the mean and variance of the time to system failure.
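As a rough companion to this kind of analysis, the sketch below computes the MTTF of a small assumed configuration (two required primary units, one warm standby, one repairman, coverage c) from the transient part of a continuous-time Markov chain; all parameter values are hypothetical.

```python
import numpy as np

# Minimal sketch (assumed parameter values, not the paper's full model): mean
# time to failure of a 2-primary / 1-warm-standby repairable system with
# imperfect coverage, computed from the transient part of a CTMC.
lam, alpha, mu, c = 0.02, 0.005, 0.5, 0.95   # unit failure, standby failure, repair, coverage

# Transient states indexed by the number of failed units: 0 and 1.
# The system is down when 2 or more units have failed, or immediately
# when a failure is not covered (probability 1 - c).
Q = np.array([
    [-(2 * lam + alpha), (2 * lam + alpha) * c],  # state 0 -> state 1 (covered failure)
    [mu,                 -(mu + 2 * lam)],        # state 1: repair back, or next failure -> down
])
# The MTTF vector t solves (-Q) t = 1 over the transient states.
t = np.linalg.solve(-Q, np.ones(2))
print(f"MTTF starting from the all-good state: {t[0]:.1f}")
```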
https://doi.org/10.1142/9789812773760_0007
This paper presents a combined framework of a Multi-Objective Genetic Algorithm (MOGA) and Monte Carlo Simulation (MCS) for improving the backbone topology by leveraging Virtual Links (VLs) in a hierarchical Link-State (LS) routing domain. Given that a sound backbone topology has a great impact on the overall routing performance in a hierarchical LS domain, the importance of this research is evident. The proposed decision model finds an optimal configuration of VLs that properly balances the two engineering goals in installing and maintaining VLs: operational cost and network reliability. The experimental results clearly indicate that considering not only engineering aspects but also the specific benefits of a systematic layout of VLs is essential to the effective operation of a hierarchical LS routing domain, thereby demonstrating the validity of the decision model and of MOGA with MCS.
https://doi.org/10.1142/9789812773760_0008
This study deals with the access network design problem in universal mobile telecommunication systems (UMTS) networks. We provide a mathematical formulation of the problem with constraints on RNC and node-B capacities, along with a lower bounding method. We also develop a heuristic algorithm with two different initial solution methods designed to strengthen the solution quality, and demonstrate the computational efficacy of these procedures with several test problems.
https://doi.org/10.1142/9789812773760_0009
Burn-in is a common procedure to improve the quality of products after they have been produced, but it is also costly. This paper presents a two-stage burn-in policy for the assembly of a complex electronic product. The product is assembled from two different items, each having a bathtub-shaped failure rate. Item I first undergoes burn-in for time b1. A surviving item I is then assembled with item II, and the assembled product undergoes a second-stage burn-in of duration τ − b1. Two types of failure are possible for the product during the second-stage burn-in: a type I failure (item I failure), which can be removed by a complete repair, and a type II failure (item II failure), which can be removed by a minimal repair. Based on this setting, we present a two-stage burn-in policy for the assembled product.
https://doi.org/10.1142/9789812773760_0010
We study the reliability of a repairable system with M operating units, W warm standby units, and R repairmen in which switching failures and reboot delay are considered. It is assumed that switching a standby unit into operation fails with a significant probability q. Failure times of an operating unit and of a standby unit are assumed to be exponentially distributed with parameters λ and α, respectively. Reboot delay times are also assumed to be exponentially distributed with parameter β. Explicit expressions for reliability characteristics such as the system reliability Rγ(t) and the mean time to system failure (MTTF) are derived. Several cases are analyzed graphically to study the effects of various system parameters on Rγ(t) and the MTTF.
https://doi.org/10.1142/9789812773760_0011
A method for calculating the exact top event probability of a fault tree with priority AND gates and repeated basic events is proposed for the case where the minimal cut sets are given. We assume that the basic events are s-independent, exponentially distributed, and non-repairable. First, we obtain the probability of occurrence of the output event of a single priority AND gate by Markov analysis. Then, the top event probability is obtained by the cut-set approach and the inclusion-exclusion formula. A procedure to obtain the probabilities corresponding to the logical products in the inclusion-exclusion formula is proposed.
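For a single two-input priority AND gate with s-independent exponential basic events, the output-event probability has a simple closed form; the sketch below (with assumed rates, not the paper's fault-tree procedure) checks it against numerical integration.

```python
import math

# Illustrative sketch (assumed rates): probability that the output event of a
# two-input priority AND gate has occurred by time t, where basic event A must
# occur before basic event B, and both events are exponential and s-independent.
lam_a, lam_b, t_end = 0.01, 0.02, 100.0

def closed_form(t):
    s = lam_a + lam_b
    return (1 - math.exp(-lam_b * t)) - (lam_b / s) * (1 - math.exp(-s * t))

def numerical(t, steps=100_000):
    # integrate P(A occurred by u) * f_B(u) over [0, t] with the midpoint rule
    h = t / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += (1 - math.exp(-lam_a * u)) * lam_b * math.exp(-lam_b * u) * h
    return total

print(closed_form(t_end), numerical(t_end))   # the two values should agree closely
```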
https://doi.org/10.1142/9789812773760_0012
In this paper we develop a model for a linear consecutive-k1- and k2-out-of-n: F (k1 < k2) system motivated by a practical situation. The system consists of n components ordered in a line, and it fails if and only if there exist at least non-overlapping runs of k1 consecutive failed components and k2 consecutive failed components in the whole system. A system reliability formula in the form of a product of matrices is presented using the finite Markov chain imbedding approach, and a numerical example is given to illustrate the results obtained in this paper. It is well known that the consecutive-k1 and k2-out-of-n: F system reduces to the 2-consecutive-k-out-of-n: F system when k1 = k2 = k, which has already been studied in the literature. Thus the reliability system built in this paper extends the known consecutive system in both theory and practice.
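The finite Markov chain imbedding idea is easiest to see on the ordinary consecutive-k-out-of-n: F system; the sketch below (assumed component reliabilities, not the paper's k1-and-k2 state space) shows the matrix-product style computation as a vector update.

```python
# Minimal sketch of the finite Markov chain imbedding computation for the
# ordinary linear consecutive-k-out-of-n:F system (the paper's k1-and-k2
# extension needs a larger state space; component reliabilities are assumed).

def consecutive_k_out_of_n_F(p, k):
    """Reliability of a linear consecutive-k-out-of-n:F system.
    p[i] is the reliability of component i; the system fails iff at least
    k consecutive components fail."""
    # state j (0 <= j < k): current run of j consecutive failed components
    state = [1.0] + [0.0] * (k - 1)
    for pi in p:
        new = [0.0] * k
        qi = 1.0 - pi
        for j, prob in enumerate(state):
            new[0] += prob * pi          # component works: the run resets
            if j + 1 < k:
                new[j + 1] += prob * qi  # component fails: the run grows
            # j + 1 == k would be absorption (system failure): mass is dropped
        state = new
    return sum(state)

print(consecutive_k_out_of_n_F([0.9] * 10, k=3))  # 10 components, k = 3
```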
https://doi.org/10.1142/9789812773760_0013
In this study, we provide a recursive algorithm for evaluating the system state distribution of a generalized multi-state k-out-of-n system. This recursive algorithm is applicable to any multi-state k-out-of-n system, including decreasing, increasing and other non-monotonic multi-state k-out-of-n systems. We derive the order of the computing time and the memory requirement of the proposed algorithm. The results of a numerical experiment show that, when n is large, the proposed algorithm is efficient for evaluating the system state distribution of a multi-state k-out-of-n system.
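As a simple point of reference, the sketch below evaluates the ordinary binary k-out-of-n: G system by the standard recursion; the multi-state algorithm of the paper generalizes this kind of recursion to system and component state distributions. Component reliabilities are assumed values.

```python
from functools import lru_cache

# Minimal sketch (binary special case, assumed component reliabilities): the
# standard recursion for a k-out-of-n:G system, illustrating the kind of
# recursive evaluation that the multi-state algorithm generalizes.

def k_out_of_n_G(p, k):
    """Probability that at least k of the n components (reliabilities p) work."""
    @lru_cache(maxsize=None)
    def r(n, j):
        if j <= 0:
            return 1.0           # nothing more is required
        if j > n:
            return 0.0           # not enough components left
        # condition on the state of component n (1-indexed)
        return p[n - 1] * r(n - 1, j - 1) + (1.0 - p[n - 1]) * r(n - 1, j)
    return r(len(p), k)

print(k_out_of_n_G((0.9, 0.8, 0.95, 0.85), k=3))
```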
https://doi.org/10.1142/9789812773760_0014
This paper describes a Decision Support System (DSS) to assess the influence of planned and unplanned shutdowns and periodic maintenance activities on the overall reliability of complex industrial plants. The proposed methodology allows one to evaluate the effects of plant shutdowns on item reliability. The analysis was carried out on fifteen process plants of an Italian oil refinery over the three-year period from 2001 to 2003. The outcomes of the analysis show that plant restart is an important criticality factor for plant operation from a reliability point of view, highlighting an increase in item failures and the consequent growth of corrective maintenance costs. The paper also describes the application of a methodology to adjust plant start-up, whose results, applied to the API oil refinery in Falconara Marittima (Ancona, Italy), show an improvement in plant reliability in 2003.
https://doi.org/10.1142/9789812773760_0015
N-version programming (NVP) is a programming approach for constructing fault-tolerant software systems. In this paper, we formulate the NVP design problem as a multi-objective optimization problem seeking Pareto solutions, and then propose a novel branch-and-bound method to find Pareto solutions for this problem within a practical computing time. To verify the efficiency of our branch-and-bound method, we compare its computing time with that of a complete enumeration method; the branch-and-bound method is observed to significantly dominate the enumeration method. Further, we analyze the relations between the performance of this branch-and-bound method and the structural characteristics of NVP designs, and clarify for which types of NVP design problems the branch-and-bound method is most applicable.
https://doi.org/10.1142/9789812773760_0016
This paper deals with a maintenance monitoring system subject to two types of contradictory failures, “false alarms” and “failure to alarm.” It is shown that k-out-of-n systems with multiple dependent monitors are preferable to other coherent systems under a milder condition than that of previous research. The condition is that the ratios of the symmetric components of the conditional probabilities of the observed matrix of the monitors, given the true state of the system, are the same, in addition to the probability matrix being weak-MLR (weak multivariate monotone likelihood ratio). If the optimal procedure of a monitoring system is given by a k-out-of-n system, the amount of calculation needed to identify an optimal decision can be greatly reduced, enabling an optimal procedure to be found quickly.
https://doi.org/10.1142/9789812773760_0017
Reliability has been considered an important design measure in many industrial systems, and system designers have made efforts to achieve more reliable system structures. Improving component reliability and allocating redundancy in each subsystem have mainly been used to improve system reliability. In today's highly developed industry, there are many kinds of available components with different features (e.g., reliability, cost, weight, volume). In this paper, we consider redundancy allocation problems with multiple component choices (RAP-CC) subject to resource constraints. A simulated annealing (SA) algorithm is presented for the problem, and several test problems are solved to demonstrate its efficiency and effectiveness. The SA algorithm is found to give better solutions than previous studies within a few seconds of CPU time.
https://doi.org/10.1142/9789812773760_0018
This paper presents a global optimization method for solving series-parallel redundancy allocation problems that determine the optimal number of redundant components to maximize system reliability subject to multiple resource restrictions. These problems are nonlinear integer programming problems, which are in general hard to deal with. We transform them into binary integer programming problems and find global optimal solutions using GAMS. Computational results show that, among heuristic methods, tabu search is a powerful one. We believe that our approach is a useful tool for evaluating the efficiency of heuristic methods on moderate-size problems such as that of Fyffe et al. [1].
https://doi.org/10.1142/9789812773760_0019
In this paper, a new approach based on an effort minimization algorithm and a disjoint products algorithm is derived. Different parameterized cost functions can be applied to work with the concept of “choice factors”; even so, the cost factor need not be taken into account as a function if such data are not available. The choice factor is defined as the ratio of the rate of increase of effort to the reliability importance of each component. The derived effort minimization algorithm can be applied to any system structure with identical or non-identical components. The results show that this algorithm can be a simple and efficient tool for aiding reliability engineers during the design phase of a product.
https://doi.org/10.1142/9789812773760_0020
In this paper, we first formulate a reliability optimization problem in which the unavailability of a communication network system is minimized, and then propose a multiobjective Genetic Algorithm (mo-GA) for solving it. In order to search efficiently for non-dominated solutions, we combine the Improved Saving Pareto solutions Strategy (ISPS) and an adaptive Local Search (LS) with the multiobjective Genetic Algorithm. Through numerical experiments on the reliability optimization problem of a communication network system, we show the effectiveness of the multiobjective Genetic Algorithm.
https://doi.org/10.1142/9789812773760_0021
A circular consecutive-k-out-of-n: F system consists of n components arranged along a circular path; the system fails if no fewer than k consecutive components fail. One of the most important problems for this system is to obtain the optimal component arrangement that maximizes the system reliability. Obtaining the exact solution requires calculating n! system reliabilities, and as n becomes large the amount of calculation becomes intolerably large. In this paper, we propose two kinds of genetic algorithm to obtain a quasi-optimal solution for this problem within a reasonable computing time. One employs Grefenstette's direct ordinal representation scheme; the other employs a special ordinal representation scheme we have developed. The latter scheme eliminates arrangements with the same system reliability produced by rotation and/or reversal of other arrangements. In addition, we have improved the scheme to produce only arrangements that allocate components with low failure probabilities at every k-th position, because the system reliabilities of such arrangements should be high. We compared the performance of the two algorithms and demonstrated the advantage of our scheme through numerical experiments.
https://doi.org/10.1142/9789812773760_0022
Redundancy allocation problems have usually considered redundancy at a single level. While this may be the best policy in some specific situations, it is not the best policy in general. With regard to reliability, it is most effective to duplicate the lowest-level objects, because parallel-series systems are more reliable than series-parallel systems, but the redundancy cost can be higher than in modular redundancy. In this paper, redundancy is considered at all levels in a series system, and a mixed integer programming model is summarized. A simulated annealing (SA) algorithm is used to solve the problem and some examples are studied.
https://doi.org/10.1142/9789812773760_0023
Many repair/replacement problems have been investigated during the past decades. A repairable system is usually subject to stochastic failure as it deteriorates with age. The overall objective of this research is to investigate the optimal maintenance policy by modeling the system deterioration process as a Markov decision process. An optimal repair/replacement policy is proposed that incorporates the costs of operation, general repair, failure replacement, and preventive replacement under the discounted-cost criterion. The possible maintenance actions for a repairable system are to replace the system, to perform a general repair, or to keep it operating. This paper extends the model developed by Chen and Feldman (1997), in which an optimal policy is investigated. The major modifications of the standard replacement model are the addition of a general repair with an age-reduction factor and the allowance of more than one general repair. Finally, the optimal parameters of the maintenance policy are obtained by solving the n-stage problem with a backward recursive scheme over a set of finite horizons to approximate the optimal policy for the infinite planning horizon.
https://doi.org/10.1142/9789812773760_0024
From the literature, it is known that preventive maintenance (PM) can restore deteriorated equipment to a younger level. Researchers usually focus on the reduction of age or failure rate achieved by PM actions and develop optimal PM policies accordingly. However, PM actions such as cleaning, adjustment, alignment, and lubrication may not always reduce the equipment age or failure rate; instead, they may only reduce the degradation rate of the equipment to a certain level. In addition, most existing optimal PM policies are developed with consideration of cost minimization only. Yet, as demonstrated in this paper, equipment under a cost-only optimal PM schedule can end up with very low reliability. Hence, this paper develops an optimal PM policy that minimizes the cost rate subject to a reliability limit for the case of degradation-rate reduction after PM. The improvement factor method is used to measure the reduction of the degradation rate. An algorithm for searching for the optimal solutions of this PM model is developed. Examples are presented with discussions of parameter sensitivity and special cases.
https://doi.org/10.1142/9789812773760_0025
We introduce the concept of partial replenishment of an inventory. The stock of the inventory is replenished either fully with probability p or partially with probability 1 − p by a deliveryman arriving at the inventory according to a Poisson process. The demands on the inventory form a compound Poisson process. The stationary distribution of the stock is derived and an optimization problem is studied.
https://doi.org/10.1142/9789812773760_0026
In this paper we consider optimal (T, S)-policies in discrete-time opportunity-based age replacement (DOAR), where S is a restricted duration of opportunities and T (≥ S + 1) is a preventive age replacement time. Using the expected cost per unit time in the steady state as the criterion of optimality, we formulate six DOAR models in total and derive numerically the optimal (T, S)-policies minimizing the respective expected costs per unit time. A numerical example with real failure time data investigates the dependence of the optimal DOAR policies and their associated minimum expected costs on the parameters of the discrete Weibull failure time distribution.
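To make the cost criterion concrete, the sketch below evaluates the expected cost per unit time of plain discrete-time age replacement under a discrete Weibull lifetime; it omits the opportunity structure of the DOAR models, and all parameter values are assumed.

```python
# Minimal sketch (not the six DOAR models of the paper): the expected cost per
# unit time of plain discrete-time age replacement under a discrete Weibull
# lifetime, shown only to illustrate the type of criterion being minimized.
# q, beta and the two costs are assumed values.

q, beta = 0.98, 1.8          # discrete Weibull survival: S(t) = q**(t**beta)
c_failure, c_preventive = 10.0, 1.0

def survival(t):
    return q ** (t ** beta)

def cost_rate(T):
    # expected cost of a cycle divided by expected cycle length E[min(X, T)]
    f_T = 1.0 - survival(T)                               # prob. of failure before T
    expected_cycle = sum(survival(j) for j in range(T))   # E[min(X, T)]
    return (c_failure * f_T + c_preventive * survival(T)) / expected_cycle

best_T = min(range(1, 200), key=cost_rate)
print(best_T, cost_rate(best_T))
```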
https://doi.org/10.1142/9789812773760_0027
This paper deals with a maintenance and inventory control problem in a linear connected-(p, r)-out-of-(m, n): F lattice system. It is assumed that all components of the system are identical and have two states: state 1 (operating) and state 0 (failed). The purpose of this paper is to develop an optimization scheme that minimizes the expected cost per unit time. We consider an age-based preventive maintenance policy and a modified (s, S) inventory policy. To find the optimal maintenance interval and inventory level, a genetic algorithm is proposed, and the expected cost per unit time is obtained by Monte Carlo simulation. Sensitivity analysis with respect to the different cost parameters is carried out through numerical examples.
https://doi.org/10.1142/9789812773760_0028
Aged fossil-fired power systems, which need maintenance for steady operation, are greatly increasing in Japan. Preventive maintenance and/or repair of such systems is indispensable to prevent serious trouble such as an emergency shutdown. Because the cumulative fatigue damage of system parts remains, the system cannot be returned to a brand-new condition by repair. Such repair degradation has to be considered when the maintenance plan is established. In this paper, a system is repaired on a prespecified schedule when the cumulative damage level is below a managerial level. When the cumulative damage level exceeds a certain critical level, the system fails, and this critical level is lowered at every repair. The expected cost per unit time between maintenances is obtained, and the optimal maintenance policy is derived.
https://doi.org/10.1142/9789812773760_0029
This paper addresses statistical estimation problems for the optimal repair-cost limits minimizing the long-run average costs in a discrete setting. Two repair-cost limit replacement models, with and without imperfect repair, are considered. We introduce the discrete total time on test (DTTT) concept and propose non-parametric estimators of the optimal repair-cost limits.
https://doi.org/10.1142/9789812773760_0030
In this paper, we develop a computation algorithm for an aperiodic checkpoint placement problem with preventive maintenance, which maximizes the steady-state system availability. The proposed algorithm is based on dynamic programming and provides an effective iterative scheme. In a numerical example, we investigate the dependence of the optimal checkpoint sequence on the model parameters, and carry out a sensitivity analysis to examine the effects of the failure parameter and the preventive maintenance time.
https://doi.org/10.1142/9789812773760_0031
In this paper, we propose two periodic preventive maintenance (PM) policies based on the ARI1 and ARI∞ repair models discussed in Doyen and Gaudoin (2004). In the ARI∞ model, a repair reduces the hazard rate by an amount proportional to the current hazard rate, while in the ARI1 model a repair affects only the relative wear accumulated since the last PM. In both PM policies, the system undergoes minimal repair at each failure between preventive maintenance actions. For each PM policy, we derive mathematical formulas to evaluate the expected cost rate per unit time. Assuming that the system is replaced by a new one at the Nth PM, the optimal value of N, which minimizes the expected cost rate, is obtained. To illustrate and compare the two proposed PM policies, a numerical example is given for a system with a Weibull lifetime distribution.
https://doi.org/10.1142/9789812773760_0032
This study suggests a preventive maintenance model for a system that wears continuously in time and has a random breakdown threshold, under periodic inspections. When each item has significant individual variation in its ability to withstand shocks, or component failures are not fully determined by a measurable physical wear variable, it is reasonable to treat the breakdown threshold not as constant but as having a certain distribution. In this paper, the wear accumulated continuously in time is represented by an infinitesimal renewal process. The item is preventively replaced if the wear at a periodic inspection exceeds a certain wear limit; on failure, it is replaced immediately. The optimal wear limit for preventive replacement that minimizes the long-run total expected cost per unit time is derived using renewal theory.
https://doi.org/10.1142/9789812773760_0033
In this paper, the manufacturing lead time in a production system with maintenance periods, non-renewal BMAP (Batch Markovian Arrival Process) input, and bilevel threshold control is analyzed. The factorization principle is used to derive the distribution of the manufacturing lead time and its mean value. A numerical example is provided to examine the effect of the non-renewal input on system performance.
https://doi.org/10.1142/9789812773760_0034
This paper considers an inspection model in which a main unit sends signals periodically to a checking unit for the detection of its failure; such a signal is called an alive message. When the checking unit does not receive the signal by a specified time, it is concluded that the main unit has failed, and the main unit is replaced. Next, we consider another case in which, when the checking unit does not receive the signal even though the main unit has not failed, the main unit is not replaced. We obtain the expected costs and derive analytically the optimal policies which minimize them. Numerical examples are given for the case where the failure time is exponential.
https://doi.org/10.1142/9789812773760_0035
This paper considers multiple modular redundant systems as recovery techniques of error detection and error masking for a finite process execution, and discusses analytically optimal checkpoint intervals. Introducing the overheads of comparison and decision by majority, an error occurrence rate, and the native execution time of the process, we obtain the mean times to completion of the process for multiple modular systems, and derive the optimal checkpoint intervals which minimize them. Further, we extend these checkpoint models to the case where the error occurrence rate is not constant and increases with the number of checkpoints. The sequential checkpoint intervals for a double modular system are computed numerically.
https://doi.org/10.1142/9789812773760_0036
An optimal maintenance policy for a system with periodic inspection and external environment maintenance (EEM) is investigated. EEM, a concept newly introduced in this study, is an action that removes external factors causing failure, for example, removing dust from the inside of electronic appliances. A policy is defined to be optimal if it minimizes the cost function comprising the EEM cost, inspection cost, repair cost, and system breakdown penalty cost. The interrelationship between the inspection period and the EEM period is reflected in the formulation of the cost function. A simulation analysis is performed using 14 months of actual failure and EEM-period data for PDPs in subway stations. As a result, system failure is observed to be reduced by incorporating EEM.
https://doi.org/10.1142/9789812773760_0037
This study provides a framework for building an optimal testing policy for single-unit and series systems from a decision-theoretic point of view. Simultaneous testing is recommended for series systems. Moreover, the optimal test intervals of the individual units in a series system constitute an almost optimal testing policy for the series system.
https://doi.org/10.1142/9789812773760_0038
This paper investigates the optimal threshold value on failure rate for leased products with a Weibull lifetime distribution. Within a lease period, any failure of the product is rectified by minimal repairs and a penalty may occur to the lessor when the time required to perform a minimal repair exceeds a reasonable time limit. To reduce product failures, additional preventive maintenance actions are carried out when the failure rate reaches a threshold value. Under this maintenance scheme, a mathematical model of the expected total cost is established and the optimal threshold value and the corresponding maintenance degrees are derived such that the expected total cost is minimized. The structural properties of the optimal policy are investigated in detail. Finally, numerical examples are provided to illustrate the features of the optimal policy.
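Under minimal repair, the expected number of failures over the lease period equals the cumulative hazard, and the time at which a Weibull failure rate reaches a threshold has a closed form; the sketch below illustrates both with assumed parameters and does not reproduce the paper's full cost model.

```python
# Minimal sketch (assumed Weibull parameters): under minimal repair, the
# expected number of failures over a lease period equals the cumulative hazard,
# and the time at which the failure rate reaches a threshold r* is explicit;
# the paper's full cost model with maintenance degrees is not shown here.
eta, beta, lease = 2.0, 2.5, 5.0        # Weibull scale, shape, lease length (years)

def failure_rate(t):
    return (beta / eta) * (t / eta) ** (beta - 1)

def expected_failures(t):
    return (t / eta) ** beta            # cumulative hazard of the Weibull

r_star = 1.0                            # assumed failure-rate threshold
t_star = eta * (r_star * eta / beta) ** (1.0 / (beta - 1))   # solves failure_rate(t*) = r*
print(expected_failures(lease), t_star)
```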
https://doi.org/10.1142/9789812773760_0039
The traditional economic manufacturing quantity (EMQ) model assumes that production is perfect. However, defective products may be produced in the manufacturing process, so it is necessary to consider the state of the production process in a modified EMQ model. Chen and Chung studied the quality selection problem for an imperfect production system to obtain the optimum production run length and target level. Ladany and Shore considered the problem of determining the optimal warranty period of a product in relation to the manufacturer's lower specification limit. In this paper, we integrate Chen and Chung's and Ladany and Shore's models to obtain the optimum production run length and warranty period for an imperfect production system.
https://doi.org/10.1142/9789812773760_0040
This paper investigates the effects of a Renewing Free-Replacement Warranty (RFRW) on the age replacement policy for a repairable product with a general failure model. In the general model, two types of failure may occur: a type I failure (minor failure), which can be removed by a minimal repair, and a type II failure (catastrophic failure), which can be removed only by replacement. After a minimal repair, the product is operational but its failure rate remains unchanged. For both warranted and non-warranted products, cost models are developed, and the corresponding optimal replacement ages are derived such that the long-run expected cost rate is minimized.
https://doi.org/10.1142/9789812773760_0041
This paper develops, from the customer's perspective, the optimal spare ordering policy for a non-repairable product with a limited-duration lifetime and under a rebate warranty. The spare unit for replacement is available only by order and the lead time for delivery follows a specified probability distribution. Through evaluation of gains due to the rebate and the costs due to ordering, shortage, and holding, we derive the expected cost per unit time and cost effectiveness in the long run and examine the optimal ordering time by minimizing or maximizing these cost expressions. We show that there exists a unique optimum solution under mild assumptions. Finally, we give some comments and conclusions.
https://doi.org/10.1142/9789812773760_0042
Repairable products which fail within the warranty period are either repaired or replaced by the contractor. In a repair cost limit policy, the repair cost is estimated at each failure and if the cost is greater than a predetermined cost limit, the failed product is replaced by a new one, otherwise it is minimally repaired. In this paper, various repair cost limit shapes with a free warranty period are considered and the best cost limit shape is proposed.
https://doi.org/10.1142/9789812773760_0043
For repairable products, the warrantor has options in choosing the type of repair performed on an item that fails within the warranty period. We focus on a particular warranty repair strategy, related to the degree of the warranty repair, under a non-renewing, two-dimensional warranty policy that is free of charge to the consumer. We consider a rectangular warranty region and divide it into four disjoint subregions, each with a preassigned degree of repair for a faulty item. Our main goal is to determine the subregions so that the associated expected warranty servicing cost per item sold is minimised.
https://doi.org/10.1142/9789812773760_0044
This paper considers a decision model for a production system in which the demand for the product is influenced by the warranty period offered to the customer. The production process is not perfect; it may shift from an in-control state to an out-of-control state at any random time, after which some non-conforming items may be produced. The proposed model is formulated under a rebate combination warranty policy, assuming that the process shift distribution is arbitrary and that product defects are detectable only through time testing over a significant period. The expected pre-sale and post-sale cost per unit item is taken as the criterion for optimality. Some characteristics of the model are studied analytically, and optimal decisions are derived in a numerical example.
https://doi.org/10.1142/9789812773760_0045
The reliability characteristics of automobile components depend on factors or covariates such as the operating environment (e.g., temperature, rainfall, humidity), usage conditions, manufacturing periods, and the types of automobiles in which the components are used. In recent years, many automotive manufacturers have used warranty databases as a very rich source of field reliability data, providing valuable information on such covariates for feedback to new product development on product performance in actual usage conditions. In a warranty database, the information on these covariates is known for components that fail within the warranty period and unknown for the censored components. This article considers covariates associated with some reliability-related factors and presents a Weibull regression model for the lifetime of a component as a function of such covariates. Because of the incomplete information on covariates, the EM algorithm is applied to obtain the ML estimates of the model parameters. An example based on real field data for an automobile component illustrates the use of the proposed method.
https://doi.org/10.1142/9789812773760_0046
For repairable items, the manufacturer is required to rectify item failures through minimal repair, replacement, or imperfect repair, should a failure occur within the period specified in the warranty. In this paper, we look at a new warranty servicing strategy under a two-dimensional warranty in which the item is imperfectly repaired when it fails for the first time in a specified region of the warranty and all other failures are repaired minimally. We derive the optimal values of the strategy parameters that minimize the total expected warranty servicing cost, and compare the results with other strategies reported in the literature.
https://doi.org/10.1142/9789812773760_0047
In this paper we consider a software reliability model (SRM) that depends on the number of test cases executed in software testing. The resulting SRM is based on a two-dimensional discrete non-homogeneous Poisson process (NHPP) and is a bivariate extension of the usual NHPP-based SRM that takes account of two time scales: calendar time and the number of test cases executed. We apply Marshall and Olkin's bivariate geometric distribution and develop a two-dimensional discrete geometric SRM. In a numerical example with software fault data observed in an actual development project, we investigate the goodness-of-fit of the proposed SRM and discuss its applicability to actual software reliability assessment.
https://doi.org/10.1142/9789812773760_0048
This paper deals with software reliability models based on a non-homogeneous Poisson process. We introduce a new family of mean value functions which can be either NHPP-I or NHPP-II according to the choice of the distribution function. The proposed mean value function is motivated by the facts that a strictly monotone increasing function can be modelled by a distribution function and that an unknown distribution function can also be approximated by a mixture of beta distributions. Many existing mean value functions can be regarded as special cases of the proposed family. The maximum likelihood approach is used to estimate the parameters of the proposed model.
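For concreteness, the sketch below fits one well-known member of such a family, the exponential (Goel-Okumoto) mean value function m(t) = a(1 − e^(−bt)), by maximum likelihood to assumed weekly fault counts; the data and the crude one-dimensional search are illustrative only.

```python
import math

# Minimal sketch (assumed fault-count data): maximum likelihood fit of the
# exponential (Goel-Okumoto) mean value function m(t) = a * (1 - exp(-b * t)),
# one well-known member of the family of NHPP mean value functions.
faults = [27, 16, 11, 10, 11, 7, 2, 5, 3, 1]          # faults found per week (assumed)
times = list(range(1, len(faults) + 1))               # end of each week
N, T = sum(faults), times[-1]

def log_likelihood(b):
    # for fixed b, the maximizing a has the closed form N / (1 - exp(-b * T))
    a = N / (1.0 - math.exp(-b * T))
    ll, prev = -a * (1.0 - math.exp(-b * T)), 0.0
    for n_i, t_i in zip(faults, times):
        m_i = a * (1.0 - math.exp(-b * t_i))
        ll += n_i * math.log(m_i - prev)              # grouped NHPP log-likelihood term
        prev = m_i
    return ll

# crude one-dimensional search over b
b_hat = max((i * 0.001 for i in range(1, 2000)), key=log_likelihood)
a_hat = N / (1.0 - math.exp(-b_hat * T))
print(f"a = {a_hat:.1f} (expected total faults), b = {b_hat:.3f}")
```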
https://doi.org/10.1142/9789812773760_0049
The inflection S-shaped software reliability growth model (SRGM) proposed by Ohba (1984) is one of the most commonly used SRGMs. One purpose of this paper is to estimate the parameters of Ohba's SRGM by applying Markov chain Monte Carlo techniques to carry out a Bayesian estimation procedure. The paper also considers the optimal software release problem with regard to the expected software cost under this model from the Bayesian viewpoint. The proposed methods are shown to be quite flexible in many situations, and statistical inference for the unknown parameters of interest is readily obtained.
https://doi.org/10.1142/9789812773760_0050
This study focuses on the generalization of several software reliability models and the derivation of confidence intervals of reliability assessment measures. First we propose an incomplete gamma function model, and discuss how to obtain the confidence intervals from a data set by using a bootstrap scheme. A two-parameter numerical differentiation method is applied to the data set to estimate the model parameters. We also show several numerical illustrations of software reliability assessment.
https://doi.org/10.1142/9789812773760_0051
This paper considers a novel modeling framework for software reliability models (SRMs). The resulting SRMs, based on mixed Poisson distributions (MPDs), completely contain, but are not always equivalent to, the non-homogeneous Poisson process (NHPP) based SRMs. More precisely, the proposed SRM is given by a mixture of NHPPs and follows a mixed Poisson process. We develop a parameter estimation method for the MPD-based SRMs based on the EM algorithm.
https://doi.org/10.1142/9789812773760_0052
Recently, software reliability growth models incorporating coverage growth behavior have been developed and applied in practice, because coverage growth behavior is useful for describing the fault detection phenomenon. The performance of such software reliability growth models depends on the coverage growth function selected. This paper first reviews the coverage growth functions considered for software reliability modeling, and then investigates their theoretical characteristics and empirical performance.
https://doi.org/10.1142/9789812773760_0053
In this paper, we consider the optimal software rejuvenation schedule which maximizes the steady-state system availability. We develop a statistical algorithm to improve the estimation accuracy in the situation where only a small number of failure time data are available. More precisely, we estimate the underlying failure time distribution from the sample data by kernel density estimation, and propose a framework based on this estimate for determining the optimal software rejuvenation schedule from small samples. In simulation experiments, we show the improvement in the speed of convergence to the true optimal solution in comparison with the conventional algorithm.
https://doi.org/10.1142/9789812773760_0054
The black-box approach based on stochastic software reliability models is a simple methodology that uses only software fault data to describe the temporal behavior of fault-detection processes, but it fails to incorporate some significant metrics data observed in the testing process. In this paper we develop proportional intensity-based software reliability models with time-dependent metrics, and propose a statistical framework to assess software reliability using the time-dependent covariates as well as the software fault data. The resulting model is similar to the usual discrete proportional hazard model, but possesses a somewhat different covariate structure. We compare three metrics-based software reliability models with some typical non-homogeneous Poisson process models, which are special cases of our models, and evaluate quantitatively the goodness-of-fit from the viewpoint of information criteria. As an important result, the accuracy of reliability assessment strongly depends on the kind of software metrics data used for analysis and can be improved by incorporating the time-dependent metrics data in modeling.
https://doi.org/10.1142/9789812773760_0055
Software development environments have been changing into new development paradigms, such as concurrent distributed development and so-called open source projects, enabled by network computing technologies. In particular, OSS (open source software) systems, which serve as key components of critical infrastructures in society, are still ever-expanding. When considering the effect of the debugging process on an entire system in developing a reliability assessment method for OSS, it is necessary to grasp deeply intertwined factors such as the programming path, the size of each component, the skill of the fault reporter, and so on. In order to consider the effect of each software component on the reliability of an entire system, we propose a new approach to user-oriented software reliability assessment that fuses a neural network with a software reliability growth model. In this paper, we show application examples of user-oriented software reliability assessment based on the neural network and software reliability growth model for OSS, and we analyze actual software fault count data to give numerical examples of software reliability assessment for OSS.
https://doi.org/10.1142/9789812773760_0056
We propose a performance evaluation method for a multi-task system with a software reliability growth process. The software fault-detection phenomenon in the dynamic environment is described by a Markovian software reliability model with imperfect debugging. We assume that the cumulative number of tasks arriving at the system follows a homogeneous Poisson process. We then formulate, using an infinite-server queueing model, the distribution of the number of tasks whose processing can be completed within a prespecified processing time limit. From the model, several quantities for software performance measurement considering the real-time property can be derived. Finally, we present several numerical examples of these quantities to analyze the relationship between the software reliability characteristics and the system performance.
https://doi.org/10.1142/9789812773760_0057
It has been a great challenge to reliability engineers to evaluate the reliability of their products within an affordable amount of time and effort. Accelerated tests (ATs) combined with censoring have been effectively used for this purpose. ATs are further classified into accelerated life tests (ALTs) and accelerated degradation tests (ADTs). In the former, failure times of test units are observed while, in the latter, their performance characteristics are measured over time. In this paper, the literature on planning ADTs is reviewed with respect to the test scenario, assumed degradation model, and analysis method employed. Finally, recommendations for future research directions are provided.
https://doi.org/10.1142/9789812773760_0058
This paper presents failure analyses and an accelerated degradation test of AC fan motors for refrigerators. Several analyses, such as destructive physical analysis, blade fracture analysis, FEM, and fan oscillation analysis, are performed to identify root causes of failures in the field samples. Next, an accelerated degradation test is planned to determine the significant factors and to predict the lifetime for the predominant failure mode, a locked rotor caused by the shaft sticking to the bearing. The amount of oil, temperature, voltage, length of shaft, and unbalance are considered as accelerating factors, and 16 test conditions are selected by design of experiments using an orthogonal array. Data analysis shows that RPM is affected by voltage only and does not degrade significantly over time. It is also shown that voltage and the amount of oil affect the change in oil amount, which decreases over time. The degradation characteristics of the AC fan motor will be characterized and its accelerated degradation model will be constructed after further testing.
https://doi.org/10.1142/9789812773760_0059
An analytical model is developed for accelerated performance degradation tests. The performance degradations of products at a specified exposure time are assumed to follow a normal population. We assume that the location parameter of the normal population is a linear function of the exposure time, that is, μ(t) = a + bt, that the slope coefficient b has an Arrhenius dependence on temperature, and that the scale parameter of the normal population is constant, independent of temperature and exposure time. The method of maximum likelihood is used to estimate the parameters involved. A closed-form expression of the likelihood function for the accelerated performance degradation data is derived, and the Fisher information matrix is also derived for calculating the asymptotic variance of the 100pth percentile of the lifetime distribution at the use temperature.
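The sketch below illustrates the structure of this model with assumed data, using ordinary least squares rather than the paper's maximum likelihood treatment: a degradation slope is estimated at each test temperature, its Arrhenius dependence is fitted, and the time for the mean degradation to reach a threshold at the use temperature is extrapolated.

```python
import math

# Minimal sketch (assumed data, ordinary least squares instead of the paper's
# maximum likelihood treatment): estimate the degradation slope b at each test
# temperature, fit its Arrhenius dependence ln b = g0 - g1 / T, and extrapolate
# the time for the mean degradation mu(t) = a + b * t to reach a failure
# threshold at the use temperature.

def slope(ts, ys):
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    return sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / \
           sum((t - tbar) ** 2 for t in ts)

# exposure times (h) and mean degradation readings at three test temperatures (K), assumed
data = {
    353.0: ([100, 200, 300, 400], [0.8, 1.7, 2.3, 3.2]),
    373.0: ([100, 200, 300, 400], [1.9, 3.6, 5.6, 7.4]),
    393.0: ([100, 200, 300, 400], [4.1, 8.0, 12.2, 16.3]),
}
inv_T = [1.0 / T for T in data]
log_b = [math.log(slope(ts, ys)) for ts, ys in data.values()]
g1 = -slope(inv_T, log_b)                          # Arrhenius activation term
g0 = sum(log_b) / len(log_b) + g1 * sum(inv_T) / len(inv_T)

T_use, a, threshold = 313.0, 0.0, 5.0              # use temperature, intercept, failure level
b_use = math.exp(g0 - g1 / T_use)
print(f"predicted time to reach threshold at use condition: {(threshold - a) / b_use:.0f} h")
```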
https://doi.org/10.1142/9789812773760_0060
This paper considers the design of accelerated life tests when an extrinsic failure mode exists in addition to the intrinsic one. A mixture of two distributions is introduced to describe these failure modes, and the lifetime distribution for each failure mode is assumed to be Weibull. Minimizing the generalized asymptotic variance of the maximum likelihood estimators of the model parameters is used as the optimality criterion. Optimum test plans are presented for selected values of the design parameters, and the effects of errors in pre-estimates of the design parameters are investigated.
https://doi.org/10.1142/9789812773760_0061
This paper proposes a method of estimating the lifetime distribution at the use condition for constant-stress accelerated life tests when an extrinsic failure mode exists in addition to the intrinsic one. A general limited failure population model is introduced to describe these failure modes. It is assumed that the log lifetime of each failure mode follows a location-scale distribution and that a linear relation exists between the location parameter and the stress. An estimation procedure using the expectation-maximization algorithm is proposed. Specific formulas for the Weibull distribution are obtained and illustrative examples are given.
https://doi.org/10.1142/9789812773760_0062
The degradation characteristics of polymeric humidity sensors under high temperature and humidity were investigated. The degradation tests were carried out at 60°C/90% relative humidity (RH), 85°C/85% RH, and 110°C/85% RH. The test results show that the characteristic values of sensors at 60°C/90% RH and 85°C/85% RH decreased over time, while those at 110°C/85% RH increased. From these results, we conclude that the degradation mechanism at 110°C/85% RH differs from that at the other conditions. According to the failure analysis, delamination at the interface between the polymer film and the Au electrode was the main cause of degradation at 60°C/90% RH and 85°C/85% RH.
https://doi.org/10.1142/9789812773760_0063
The reliability estimation of pipelines is performed with the help of a probabilistic method which includes the uncertainties in the load and resistance parameters of the limit state function. The FORM (first-order reliability method) and the SORM (second-order reliability method) are carried out to estimate the failure probability of the pipeline utilizing the FAD (failure assessment diagram), and MCS (Monte Carlo simulation) is used to verify the FORM and SORM results. It is noted that the failure probability increases with increasing dent depth, gouge depth, operating pressure and outside radius, and with decreasing wall thickness. The FORM is found to be a useful and efficient method for estimating the failure probability when evaluating the reliability of a pipeline using the FAD. Furthermore, the deterministic safety assessment technique for pipelines, which utilizes the FAD alone, is found to be more conservative than the assessment using probability theory together with the FAD.
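The Monte Carlo step can be illustrated on a generic resistance-minus-load limit state; the sketch below uses assumed normal distributions and is not the FAD-based pipeline model of the paper.

```python
import random

# Minimal sketch (assumed limit state and parameter values, not the FAD-based
# pipeline model): crude Monte Carlo estimation of a failure probability
# P[g(X) < 0] for a simple resistance-minus-load limit state function.
random.seed(1)

def limit_state(resistance, load):
    return resistance - load          # failure when g < 0

def failure_probability(n=200_000):
    failures = 0
    for _ in range(n):
        r = random.gauss(mu=300.0, sigma=30.0)     # resistance (e.g. strength)
        l = random.gauss(mu=200.0, sigma=25.0)     # load effect
        if limit_state(r, l) < 0:
            failures += 1
    return failures / n

print(f"estimated failure probability: {failure_probability():.4e}")
```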
https://doi.org/10.1142/9789812773760_0064
Reliability, that is, long-term quality, requires a different approach from the previous emphasis on short-term concerns. The purpose of this paper is to present a reliability evaluation of a high-precision oil cooler system. The oil cooler system in question is a cooling device that minimizes the thermal deformation of driving devices; it is used in machine tools, semiconductor equipment, and so forth. We carry out reliability prediction based on a failure rate database and conduct reliability tests on a test-bed to evaluate the life of the oil cooler. The results show the reliability, in terms of failure rate and MTBF, of the oil cooler system and its components, together with the distribution of failure modes. It is expected that the presented results will help increase the reliability of the oil cooler system and will be applicable to the reliability evaluation of other machinery products.
https://doi.org/10.1142/9789812773760_0065
The design of accelerated life test sampling plans (ALTSPs) is considered for products with a Weibull lifetime distribution. It is assumed that the Weibull scale and shape parameters are log-linear functions of (possibly transformed) stress. Two types of ALTSPs are considered: time-censored and failure-censored. Optimum ALTSPs are obtained which satisfy the producer's and consumer's risk requirements and minimize the asymptotic variance of the test statistic used to decide lot acceptability.
https://doi.org/10.1142/9789812773760_0066
This paper discusses the failure mechanism of, and the material tests carried out to predict the useful life of, NBR and EPDM rubbers used in the compression motor, a refrigerator component. The heat-ageing process leads not only to changes in mechanical properties but also to changes in chemical structure, so-called degradation. In order to investigate the effects of heat-ageing on the material properties, accelerated tests were carried out. Stress-strain curves were plotted from the results of tensile tests on virgin and heat-aged rubber specimens, which were heat-aged in an oven at temperatures ranging from 70°C to 100°C. The change in compression set is used as the threshold criterion for assessing the useful life, and the times to reach the threshold value are plotted against the reciprocal of absolute temperature to give the Arrhenius plot. Based on the compression set tests, several useful-life prediction equations for the rubber materials are proposed.
https://doi.org/10.1142/9789812773760_0067
We consider non-asymptotic and asymptotic properties of mixture failure rates in different settings. We show that the mixture failure rate is 'bent down' compared with the corresponding unconditional expectation of the baseline failure rate. We also consider the problem of mixture failure rate ordering for ordered mixing distributions, and discuss some results on the asymptotic behavior of mixture failure rates. The suggested lifetime model generalizes all three conventional survival models (proportional hazards, additive hazards and accelerated life) and creates the possibility of deriving explicit asymptotic formulas. Special emphasis is given to the accelerated life model. It is shown that the asymptotic behavior of the mixture failure rate depends only on the behavior of the mixing distribution in the neighborhood of zero, and not on the whole mixing distribution.
https://doi.org/10.1142/9789812773760_0068
This paper is intended to compare the hazard rate from the Bayesian approach with the hazard rate from the maximum likelihood estimate (MLE) method. The MLE of a parameter is appropriate as long as there are sufficient data. For various reasons, however, sufficient data may not be available, which may make the result of the MLE method unreliable. In order to resolve the problem, it is necessary to rely on judgment about unknown parameters. This is done by adopting the Bayesian approach. The hazard rate of a mixture model can be inferred from a method called Bayesian estimation. For eliciting a prior distribution which can be used in deriving a Bayesian estimate, a computerized-simulation method is introduced. Finally, a numerical example is given to illustrate the potential benefits of the Bayesian approach.
https://doi.org/10.1142/9789812773760_0069
A phase-type (PH) distribution is the probability distribution of the killing (absorption) time of a finite-state Markov chain. In this paper, we consider an algorithm for fitting the parameters of a PH distribution to sample data sets. In particular, we focus on a specific subclass of PH distributions which admits minimal representations called canonical forms. The developed estimation procedure is based on the EM (Expectation-Maximization) principle, and the EM algorithm specialized to the canonical forms needs less computational effort than EM algorithms for general PH distributions.
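A much-reduced illustration of the EM idea is the fit of a two-phase hyperexponential distribution, one of the simplest PH subclasses; the sketch below uses synthetic data and is not the canonical-form algorithm of the paper.

```python
import math, random

# Minimal sketch (not the paper's canonical-form algorithm): EM fitting of a
# two-phase hyperexponential distribution, one of the simplest PH subclasses,
# to a sample; the data and the initial values are assumed for illustration.
random.seed(2)
data = [random.expovariate(0.5) if random.random() < 0.3 else random.expovariate(3.0)
        for _ in range(5000)]

w, lam1, lam2 = 0.5, 0.2, 1.0                 # initial guesses
for _ in range(200):
    # E-step: posterior probability that each observation came from phase 1
    resp = [w * lam1 * math.exp(-lam1 * x) /
            (w * lam1 * math.exp(-lam1 * x) + (1 - w) * lam2 * math.exp(-lam2 * x))
            for x in data]
    # M-step: update the mixing weight and the two rates
    s = sum(resp)
    w = s / len(data)
    lam1 = s / sum(r * x for r, x in zip(resp, data))
    lam2 = (len(data) - s) / sum((1 - r) * x for r, x in zip(resp, data))

print(f"w = {w:.3f}, lambda1 = {lam1:.3f}, lambda2 = {lam2:.3f}")
```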
https://doi.org/10.1142/9789812773760_0070
System reliability, as a quality index, is the capability to complete the specified functions accurately, in a mutually harmonious manner, under the specified conditions and within the specified time period. Vagueness is intrinsic and inherent to system reliability and inevitably involves fuzzy mathematics. Fuzzy mathematics, initiated by Zadeh (1965, 1978), provides a widely used foundation, based on membership functions and the possibility measure, for dealing with vague phenomena in reliability modeling. However, the possibility measure, which was originally expected to play a role analogous to that of the probability measure in probability theory, does not possess the self-duality property that the probability measure enjoys. To resolve this dilemma, Liu (2004) proposed an axiomatic foundation for modeling fuzzy phenomena, named credibility theory. The credibility measure possesses the self-duality property and can play a role analogous to that of the probability measure. In this paper, we explore the concept of the credibility measure, its axiomatic foundation, and the concepts of a fuzzy variable and its credibility distribution within the credibility-theoretic foundation. Furthermore, we propose the concept of a credibility copula for characterizing the dependence between fuzzy variables. Finally, we explore credibility-based reliability evaluation founded on the fuzzy load-strength concept.
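As a concrete, hedged illustration of the credibility measure for a discrete fuzzy variable (the credibility copula construction is not attempted here), the sketch below uses Liu's definition Cr{A} = (Pos{A} + 1 - Pos{A^c})/2, where Pos{A} is the supremum of the membership function over A.

```python
# Discrete fuzzy variable given by a membership (possibility) function.
membership = {1: 0.6, 2: 1.0, 3: 0.8, 4: 0.3}   # hypothetical values

def possibility(event):
    """Pos{A} = sup of the membership function over A."""
    return max((membership[x] for x in event), default=0.0)

def credibility(event):
    """Liu's credibility measure Cr{A} = (Pos{A} + 1 - Pos{A^c}) / 2, which is self-dual."""
    complement = [x for x in membership if x not in event]
    return 0.5 * (possibility(event) + 1.0 - possibility(complement))

A = {1, 2}
print(credibility(A), credibility(set(membership) - A))   # the two values sum to 1
```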
https://doi.org/10.1142/9789812773760_0071
System reliability, as a quality index, is the capability to complete the specified functions accurately, in a mutually harmonious manner, under the specified conditions and within a specified time period. We notice that high costs are sometimes associated with events of tiny probability, and therefore a reliability index alone would not fully characterize the consequence of system breakdown. Todinov (2005) proposed the cost of failure as a measure of system reliability risk and explored related models. However, the sparse data available from the system may hamper the modeling exercise. In this paper, we merge the cost-of-failure idea with small-sample asymptotics to investigate the asymptotic distributions of the total cost of failures due to system failures and associated losses.
https://doi.org/10.1142/9789812773760_0072
The maintenance effect is a factor peculiar to repairable systems. Malik (1979) and Brown, Mahoney & Sivazlian (1983) proposed general approaches for maintenance effects, where each maintenance action reduces the age of the unit with respect to the rate of occurrence of failures. An important problem in failure data analysis is that the data have not always been collected under similar conditions. We consider an estimation problem for repairable systems under two different environments and Malik's proportional age reduction model. Failure intensities depending on environmental conditions and the maintenance effect are estimated by the method of maximum likelihood. Simulation results are presented to illustrate the accuracy and the properties of the proposed estimation method.
https://doi.org/10.1142/9789812773760_0073
Used as a mixing distribution for a random Poisson parameter, the gamma distribution leads to a negative binomial process. This appears to be a useful model for failure data, particularly for data from a number of repairable systems all of which follow a Poisson process but with different intensities. The hyper-parameters of the gamma distribution have different meanings according to the source of randomness in the Poisson failure parameter. Two such sources are failure time and failure rate. Random failure time and random failure rate are interpreted, in the resulting negative binomial average number of failures, in terms of the number of failures and the failure intensity, respectively.
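A quick hedged check of the gamma-Poisson mixing argument, with generic parameters rather than the paper's data: simulating a Poisson count whose rate is gamma distributed reproduces the negative binomial distribution.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)

# Gamma mixing distribution for the Poisson rate: shape r, scale theta.
r, theta = 3.0, 2.0
lam = rng.gamma(shape=r, scale=theta, size=200_000)
counts = rng.poisson(lam)

# Theory: the marginal count is negative binomial with parameters r and p = 1 / (1 + theta).
p = 1.0 / (1.0 + theta)
for k in range(5):
    empirical = np.mean(counts == k)
    print(f"k={k}: simulated {empirical:.4f}  negative binomial {nbinom.pmf(k, r, p):.4f}")
```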
https://doi.org/10.1142/9789812773760_0074
A nonparametric procedure is proposed to test exponentiality against the hypothesis that one life distribution has greater residual life than another. Such a hypothesis turns out to be equivalent to one failure rate being greater than the other, so the proposed test works as a competitor to the 'more IFR' tests of Kochar (1979, 1981) and Cheng (1985). Our test statistic utilizes U-statistics theory to establish its asymptotic normality and, consequently, a large-sample nonparametric test is proposed. The power of the proposed test is investigated by calculating the Pitman asymptotic relative efficiencies against several alternative hypotheses. A numerical example is presented to exemplify the proposed test.
https://doi.org/10.1142/9789812773760_0075
The gamma distribution, having location (threshold), scale and shape parameters, is used as a model for distributions of life spans, reaction times, and other types of non-symmetrical data. Inference for the three-parameter gamma distribution is known to be difficult because of non-regularity in maximum likelihood estimation, although numerous papers have appeared over the years. On the other hand, the methodology for inference for the two-parameter gamma distribution has been well established. In practice it is therefore usual to avoid fitting the three-parameter gamma distribution and to fit the two-parameter gamma distribution to the data instead. In this article, we propose a new method of estimation of the shape parameter of the gamma distribution based on a data transformation that is free of the location and scale parameters. The method is easily implemented with the aid of a table or graph. A simulation study shows that the proposed estimator performs better than the maximum likelihood estimator of the shape parameter of the two-parameter gamma distribution when a threshold exists, even when it is close to zero.
https://doi.org/10.1142/9789812773760_0076
In order to raise the efficiency of Bayesian decision analysis, added information structures with noise are compared by using the correlation coefficient of information structures. In information theory, entropy measures the indefiniteness of an information system. It is possible to evaluate information reliability by defining the decrease of the object's indefiniteness through conditional entropy, and further to measure the volume of information content in the added information structures. An expression for information correlation and distance matched to Bayesian decision analysis is derived by standardizing the information content. The analysis demonstrates that the correlation coefficient of information structures can be used to compare and appraise information structures with noise.
https://doi.org/10.1142/9789812773760_0077
Grey theory, initiated by Deng (1982), is a mathematical branch dealing with system dynamics under sparse data availability. Grey reliability analysis is thus advantageous because of its small sample size requirements; for example, the first-order one-variable grey differential equation needs as few as four data points. However, grey estimation of the state dynamics uses a least-squares approach, i.e., parameter estimation under the L2 norm. A problem associated with L2-norm grey estimation is the specification of model accuracy. The L2-norm grey modeling community often borrows model-fitting criteria from statistical linear model analysis, for example, using the mean sum of squared errors as the fitting criterion and even attaching probability bounds to it; these practices are controversial. In numerical analysis and approximation theory, relative error is a standard approximation criterion, and the L2-norm grey modeling community also uses relative error as a model accuracy measure. In this paper, we propose L1-norm based grey modeling and search for the grey parameters using the simplex technique of linear programming. We briefly discuss grey reliability analysis under L1-norm based grey state dynamics.
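To make the L1-norm idea concrete, here is a minimal sketch (my own formulation, not the authors' algorithm) that estimates the GM(1,1) parameters by minimizing the sum of absolute residuals with a linear program instead of least squares.

```python
import numpy as np
from scipy.optimize import linprog

def gm11_l1(x0):
    """GM(1,1) grey model with parameters (a, b) estimated under the L1 norm.
    Sketch only: x0(k) + a*z1(k) = b is fitted by minimizing sum |residual| via LP."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])        # background values, k = 2..n
    y = x0[1:]
    m = len(y)

    # Variables [a, b, t_1..t_m]; minimize sum(t) subject to |y_k + a*z_k - b| <= t_k.
    c = np.concatenate(([0.0, 0.0], np.ones(m)))
    A_ub, b_ub = [], []
    for k in range(m):
        t_row = np.zeros(m); t_row[k] = -1.0
        A_ub.append(np.concatenate(([ z1[k], -1.0], t_row))); b_ub.append(-y[k])
        A_ub.append(np.concatenate(([-z1[k],  1.0], t_row))); b_ub.append( y[k])
    bounds = [(None, None), (None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    a, b = res.x[0], res.x[1]

    # Grey prediction from the whitened equation x1_hat(k+1) = (x0(1) - b/a) e^{-a k} + b/a.
    k = np.arange(len(x0) + 1)                       # one step beyond the data = forecast
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate(([x0[0]], np.diff(x1_hat)))
    return a, b, x0_hat

# GM(1,1) famously needs only a handful of points.
a, b, fitted = gm11_l1([2.87, 3.28, 3.34, 3.57, 3.79])
print(round(a, 4), round(b, 4), np.round(fitted, 3))
```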
https://doi.org/10.1142/9789812773760_0078
Traffic intensity is an important measure for assessing the performance of a queueing system. In this paper, we propose a consistent and asymptotically normal estimator of the intensity for a queueing system with distribution-free interarrival and service times. Using this estimator and its associated estimated variance, a 100(1-α)% asymptotic confidence interval for the intensity is obtained. A numerical simulation study is conducted to demonstrate the performance of the proposed estimator and its estimated variance when applied to interval estimation of the intensity of a queueing system.
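The paper's estimator is distribution-free; one common construction of that type, sketched below purely as an illustration (not necessarily the authors' statistic), is the ratio of sample means with a delta-method variance.

```python
import numpy as np
from scipy.stats import norm

def intensity_ci(interarrival, service, alpha=0.05):
    """Traffic intensity rho = E[service] / E[interarrival], estimated by the ratio
    of sample means, with a delta-method asymptotic confidence interval."""
    a, s = np.asarray(interarrival, float), np.asarray(service, float)
    n = len(a)
    rho_hat = s.mean() / a.mean()
    # Delta-method variance of the ratio of two independent sample means.
    var_hat = (s.var(ddof=1) / a.mean()**2
               + s.mean()**2 * a.var(ddof=1) / a.mean()**4) / n
    half = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(var_hat)
    return rho_hat, (rho_hat - half, rho_hat + half)

rng = np.random.default_rng(7)
arrivals = rng.exponential(1.0, 500)    # mean interarrival time 1.0
services = rng.gamma(2.0, 0.35, 500)    # mean service time 0.7, so rho = 0.7
print(intensity_ci(arrivals, services))
```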
https://doi.org/10.1142/9789812773760_0079
This paper considers the problem of maximizing the expected liquidation profit of holdings when the sell-off of the holdings has a market impact on the stock price. A cumulative damage model is applied to the fluctuations of the stock price. We derive and analytically discuss the optimal sell-off interval of the holdings that maximizes the expected liquidation profit.
https://doi.org/10.1142/9789812773760_0080
This paper considers a system which is inspected at equally spaced points in time and whose deterioration follows a discrete-time Markov chain with an absorbing state. After each inspection, one of the following actions can be taken: operation, imperfect repair m (1 ≤ m ≤ M) or replacement. When imperfect repair m is taken for a system which has been repaired n times in state i, the system moves to state j with a specified transition probability. We study an optimal maintenance policy which minimizes the expected total discounted cost over an unbounded horizon. It is shown that a generalized control-limit policy is optimal under reasonable assumptions. We investigate structural properties of the optimal policy. Furthermore, numerical analysis is conducted to show that these properties could hold under weaker assumptions.
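As a rough, hypothetical illustration of this kind of discounted maintenance MDP (a tiny state space with made-up costs and transition matrices, and the repair-count dependence omitted), value iteration over the actions operate / imperfect repair / replace is sketched below.

```python
import numpy as np

# Hypothetical 4-state deterioration chain; state 3 is the absorbing failed state.
P_operate = np.array([[0.7, 0.2, 0.1, 0.0],
                      [0.0, 0.6, 0.3, 0.1],
                      [0.0, 0.0, 0.5, 0.5],
                      [0.0, 0.0, 0.0, 1.0]])
P_repair  = np.array([[1.0, 0.0, 0.0, 0.0],    # imperfect repair: moves the system
                      [0.8, 0.2, 0.0, 0.0],    # toward better states, not always to new
                      [0.3, 0.5, 0.2, 0.0],
                      [0.1, 0.4, 0.5, 0.0]])
cost_operate = np.array([0.0, 2.0, 5.0, 50.0])  # per-period operating/downtime cost
cost_repair, cost_replace = 8.0, 25.0
beta = 0.95                                     # discount factor

V = np.zeros(4)
for _ in range(2000):                           # value iteration
    q_operate = cost_operate + beta * P_operate @ V
    q_repair  = cost_repair  + beta * P_repair  @ V
    q_replace = cost_replace + beta * V[0]      # replacement renews the system to state 0
    V_new = np.minimum(np.minimum(q_operate, q_repair), q_replace)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = np.argmin(np.vstack([q_operate, q_repair, np.full(4, q_replace)]), axis=0)
print("optimal values:", np.round(V, 2))
print("optimal actions (0=operate, 1=repair, 2=replace):", policy)
```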
https://doi.org/10.1142/9789812773760_0081
In this paper, we analyze the transient behavior of Internet worms. We show that a stochastic SIS (Susceptible-Infected-Susceptible) model describing Internet-worm propagation can be approximated by a simple birth-and-death process when the number of hosts is very large. Deriving the probability generating function of the number of infected hosts, we obtain the probability mass function explicitly and define some dependability measures for evaluating the transient behavior of Internet worms.
https://doi.org/10.1142/9789812773760_0082
In this paper, we introduce and analyze an inspection schedule for a lot-sizing production system. The production process has a general deterioration distribution with increasing failure rate and non-self-announcing failures. Rather than developing a non-Markovian shock model, we focus on a quantile-based reliability model. This research also provides an inspection strategy based on the economic production quantity, and examples with Weibull shock models are given to illustrate the inspection schedule.
https://doi.org/10.1142/9789812773760_0083
Process capability indices Cp, Cpk, Cpm and Cpp, designed for nominal-the-best type quality characteristics, are an effective tool for assessing process capability since these indices adequately reflect process centering and process yield. The index Cpp, introduced by Greenwich and Jahr-Schaffrath (1995), provides additional and separate information concerning process accuracy and process precision. Although Cpp is useful for evaluating the process capability of a single product in common situations, it cannot be applied to evaluate multi-process capability. Referring to Vännman and Deleryd's (Cdr, Cdp)-plot, a fuzzy inference method is proposed in this study to evaluate multi-process capability based on the values of a confidence box calculated from sample data. The method takes advantage of fuzzy systems so that a graded rather than a sharp evaluation result is obtained. An illustrative example of ball-point pens demonstrates that the presented method is effective for the assessment of multi-process capability.
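For reference, a hedged sketch of how the four indices mentioned above are usually computed from sample data (textbook point estimates; the fuzzy confidence-box inference itself is not reproduced):

```python
import numpy as np

def capability_indices(x, lsl, usl, target):
    """Textbook point estimates of Cp, Cpk, Cpm and the incapability index Cpp."""
    x = np.asarray(x, float)
    mu, sigma = x.mean(), x.std(ddof=1)
    d = (usl - lsl) / 2.0                       # half specification width
    cp  = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    cpm = (usl - lsl) / (6.0 * np.sqrt(sigma**2 + (mu - target)**2))
    # Greenwich & Jahr-Schaffrath incapability index: inaccuracy + imprecision terms.
    cpp = ((mu - target) / (d / 3.0))**2 + (sigma / (d / 3.0))**2
    return cp, cpk, cpm, cpp

rng = np.random.default_rng(3)
sample = rng.normal(10.05, 0.12, 100)           # hypothetical ball-point pen dimension
print(capability_indices(sample, lsl=9.7, usl=10.3, target=10.0))
```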
https://doi.org/10.1142/9789812773760_0084
Many businesses have adopted process capability indices as a quality measurement tool. The index Cpm is now in widespread industrial use because it adequately reflects both process loss and process yield. In general, manufacturers select suppliers using only the index Cpm to evaluate the suppliers' process capability. However, in an era of supply-chain competition, manufacturers must recognize that selecting suppliers and subcontractors is a key task of supply management. Therefore, for existing suppliers, we combine the process capability index Cpm with a process improvement capability index CPIM to evaluate and measure the suppliers' process capability and to reduce the manufacturer's improvement cost. When a supplier's process capability is insufficient, the index CPIM can further be used to measure the supplier's process improvement capability, to decrease the company's internal improvement cost effectively, and to improve product quality and productivity so as to achieve long-term business goals.
https://doi.org/10.1142/9789812773760_0085
Most models reported in the literature treat the determination of the process mean and of tolerance limits as two separate research fields. In this paper, the problem of jointly determining the optimum process mean and tolerance limits for each market is considered in situations where there are several markets with different price/cost structures. A profit model is constructed which involves selling price, production cost, penalty cost, and inspection cost. Taguchi's quadratic loss function is utilized in developing the economic model for determining the optimum process mean and tolerance limits. A numerical example is given.
https://doi.org/10.1142/9789812773760_0086
In spite of the growing importance of multiple response optimization, there have been few research efforts in this area. This article proposes a versatile optimization model employing the concepts of the quality loss function and response surface modeling. Taguchi (1986) proposed the use of the quality loss function to measure, on a monetary scale, the societal cost incurred by the customer. Although mostly applied to single-quality-characteristic problems, the quality loss function may also be extended to multiple response systems by combining the performances of individual quality characteristics into a single objective function. The overall performance of a multiple response problem is then evaluated by obtaining mean and variance responses for each quality characteristic, and covariance responses among quality characteristics.
https://doi.org/10.1142/9789812773760_0087
Most manufacturing industries face the problem of simultaneously optimizing several quality characteristics that may form the basis for product selection by customers. Compared with the single-quality-characteristic case, however, the design optimization of multiple quality characteristics has received little attention. This paper applies the concepts of the dual response surface technique and the MSE criterion to the optimization of multiple quality characteristics. The proposed model aims at simultaneously minimizing the MSE of the individual quality characteristics. However, the optimal solution for one quality characteristic may result in poor performance of other quality characteristics. Thus, a trade-off among quality characteristics is required, and the design optimization of multiple quality characteristics may be viewed as a multiple objective programming problem. A global criterion approach (Lai and Hwang 1994) is employed to set up the optimization model for multiple quality characteristics.
https://doi.org/10.1142/9789812773760_0088
We consider the problem of determining the optimum target value of the process mean for a production process where multiple products are processed. Every outgoing item is inspected, and each item failing to meet the specification limits is scrapped. Assuming that the quality characteristics of the products are normally distributed with known variances and a common process mean, the common process mean is obtained by maximizing the expected profit, which includes selling prices, costs of production and inspection, and losses due to scrap. A method of finding the optimum common process mean is presented, and an illustrative example from an electronic device production process is given.
https://doi.org/10.1142/9789812773760_0089
Time-between-events (TBE) data are available in industries such as manufacturing and maintenance, and even in services. Recently, control charts have been shown to be useful for monitoring time between events to detect changes in the statistical distribution, especially a mean change. A common assumption in control chart design is that the time between occurrences of events is exponentially distributed. However, this is valid only when the event occurrence rate is constant. In this paper, a version of the exponentially weighted moving average (EWMA) chart is developed for monitoring Weibull-distributed TBE data. The average run length (ARL) and average time to signal (ATS) properties are examined, and an example is given for illustration.
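A hedged sketch of one way such a chart can be set up, monitoring the EWMA of Weibull-distributed times between events with asymptotic limits around the in-control mean; the design constants and the simulated shift are illustrative, and the paper's exact ARL/ATS evaluation is not reproduced.

```python
import numpy as np
from math import gamma, sqrt

# In-control Weibull TBE model (hypothetical parameters).
shape, scale = 1.5, 100.0
mu0 = scale * gamma(1.0 + 1.0 / shape)
sigma0 = scale * sqrt(gamma(1.0 + 2.0 / shape) - gamma(1.0 + 1.0 / shape) ** 2)

lam, L = 0.1, 2.7                      # EWMA smoothing constant and limit width
lcl = mu0 - L * sigma0 * sqrt(lam / (2.0 - lam))
ucl = mu0 + L * sigma0 * sqrt(lam / (2.0 - lam))

rng = np.random.default_rng(5)
# Simulate TBE data: in control for 50 events, then the mean TBE drops (rate increases).
tbe = np.concatenate([scale * rng.weibull(shape, 50),
                      0.5 * scale * rng.weibull(shape, 50)])

z = mu0
for i, x in enumerate(tbe, start=1):
    z = lam * x + (1.0 - lam) * z      # EWMA of the times between events
    if z < lcl or z > ucl:
        print(f"signal at event {i}: EWMA = {z:.1f}, limits = ({lcl:.1f}, {ucl:.1f})")
        break
else:
    print("no signal within the simulated events")
```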
https://doi.org/10.1142/9789812773760_0090
Six Sigma has become an efficient improvement technique adopted by a great number of enterprises, and the number of sigma has become a tool for measuring process capability in some of them. However, many enterprises still use process capability indices (PCIs) to measure process capability. This paper investigates the relationship between PCIs and the number of sigma. For bilateral specifications, the relationships between the indices Cp, Cpk, Cpm and Cpmk and the number of sigma are studied; for unilateral specifications, the relationships for Cpu and Cpl are studied. If a supplier and a buyer use different tools to measure process capability, these relationships can reduce the communication noise between them.
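For the bilateral, centered case the basic relationship is simple: a "k-sigma" process has USL - LSL = 2kσ, so Cp = k/3, and under the conventional 1.5σ mean shift Cpk = (k - 1.5)/3. The snippet below tabulates this common rule of thumb; the 1.5σ shift is one convention among several and is stated here as an assumption.

```python
# Common Six Sigma convention: k-sigma quality with an assumed 1.5-sigma mean shift.
for k in range(3, 7):
    cp = k / 3.0
    cpk = (k - 1.5) / 3.0
    print(f"{k}-sigma process: Cp = {cp:.2f}, Cpk (1.5-sigma shift) = {cpk:.2f}")
```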
https://doi.org/10.1142/9789812773760_0091
We consider a design for a balanced mixed model with covariates treated as fixed effects and one factor treated as a random effect. In this paper, some methods for constructing confidence intervals on measures of variability in repeatability and reproducibility are provided. The objective of this study is to compare the confidence intervals on repeatability and reproducibility for application to R&R studies. A numerical example is provided.
https://doi.org/10.1142/9789812773760_0092
A therapeutic strategy is planned by the doctor in charge of medical treatment. The strategy is based on a criterion which maximizes the expected utility of the patient. In general, the axioms of Savage's expected utility are used to measure the patient's utility. However, Savage's utility cannot be used when several probability measures are nominated as candidates for deriving the utility. For measuring a patient's subjective value, the health-related QOL scale is considered one of the most important issues. As a recent topic, Q-TWiST (Quality-adjusted Time Without Symptoms and Toxicity) is a powerful tool for measuring the variable subjective value of the patient. In order to derive Q-TWiST, the survival probability of the patient has to be estimated from censored or non-censored clinical data. Q-TWiST takes different values when there are two or more recommended estimation methods; which value should be adopted from the viewpoint of the patient? To address this problem, a Q-TWiST composed of several probability measures is proposed. A sensitivity analysis of the utility value, which depends on the state of the patient, is carried out.
https://doi.org/10.1142/9789812773760_0093
Traditionally, using a control chart to monitor a process assumes that process observations are normally and independently distributed. In fact, for many processes the products are connected or correlated over time and, consequently, the obtained observations are autocorrelated rather than independent. In this scenario, applying an independence assumption rather than accounting for the autocorrelation is unsuitable for process monitoring. This study examines a generally weighted moving average (GWMA) chart with time-varying control limits for monitoring the mean of a process based on autocorrelated observations from a first-order autoregressive (AR(1)) process with random error.
https://doi.org/10.1142/9789812773760_0094
The c chart is often used to monitor the number of nonconformities. However, classical c charts are relatively inefficient at detecting small shifts in the process mean number of nonconformities resulting from assignable causes. The Poisson exponentially weighted moving average (Poisson EWMA) control scheme is therefore a superior alternative to the c chart. In this paper, we extend the Poisson EWMA control chart to a generalized chart, called herein the Poisson generally weighted moving average (Poisson GWMA) control chart, for monitoring Poisson counts. Simulation is used to evaluate the average run length (ARL) properties of the c chart, the Poisson EWMA control chart and the Poisson GWMA control chart. The Poisson GWMA control chart is superior to the Poisson EWMA control chart as measured by ARL. An example is also given to illustrate this study.
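To indicate how ARL comparisons of this kind are typically obtained by simulation, here is a hedged sketch of a Poisson GWMA chart using the weight sequence q^{(j-1)^α} - q^{j^α} often seen in the GWMA literature (with α = 1 it reduces to the Poisson EWMA); the design constants are illustrative, not the paper's.

```python
import numpy as np

def gwma_run_length(mu0, mu1, q=0.9, alpha=0.8, L=2.8, rng=None, max_t=5000):
    """Run length of a Poisson GWMA chart: Z_t = sum_j w_j X_{t-j+1} + q^{t^alpha} * mu0,
    with weights w_j = q^{(j-1)^alpha} - q^{j^alpha} and time-varying limits
    mu0 +/- L * sqrt(mu0 * sum_j w_j^2)."""
    if rng is None:
        rng = np.random.default_rng()
    x = []
    for t in range(1, max_t + 1):
        x.append(rng.poisson(mu1))                 # observed count at the shifted mean mu1
        j = np.arange(1, t + 1)
        w = q ** ((j - 1) ** alpha) - q ** (j ** alpha)
        z = np.dot(w, x[::-1]) + q ** (t ** alpha) * mu0
        half = L * np.sqrt(mu0 * np.sum(w ** 2))
        if abs(z - mu0) > half:
            return t
    return max_t

rng = np.random.default_rng(11)
mu0 = 4.0
arl0 = np.mean([gwma_run_length(mu0, mu0, rng=rng) for _ in range(100)])
arl1 = np.mean([gwma_run_length(mu0, mu0 + 1.0, rng=rng) for _ in range(100)])
print(f"estimated in-control ARL ~ {arl0:.0f}, out-of-control ARL (shift +1) ~ {arl1:.0f}")
```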
https://doi.org/10.1142/9789812773760_0095
Originally, the personal digital assistant (PDA) provided only personal information management functions. Since the PDA is regarded as less essential than the mobile phone and less capable than the notebook PC, its product features tend to be overlooked; clarifying the factors that affect satisfaction and perceived importance during promotion would therefore help the spread of the product and the development of its market. In order to understand customers' satisfaction with, and the perceived importance of, the product features, brand name and service convenience of the PDA, and to clarify the factors affecting the desire to purchase a PDA so as to support future IA promotion, this article takes the PDA as an example and applies the DMAIC methodology of Six Sigma to build a measurement and improvement model for raising overall customer satisfaction. First, the problems and the measurement model are defined, and a questionnaire is used to measure customer satisfaction and importance; satisfaction performance metrics and an overall satisfaction performance control chart are then constructed and used as analysis tools to identify the key improvement and review items. Cause-and-effect diagrams are used to analyze these items and to identify corrective actions, which are then implemented for the critical review and improvement items. Finally, the questionnaire survey is repeated for the critical improvement items and the overall satisfaction performance control chart is rebuilt to control and sustain the results of the corrective actions. Through this comprehensive measurement and improvement model, an enterprise can quickly and effectively measure, analyze, improve and control its service quality and, at reasonable cost, raise overall customer satisfaction, create high value-added quality competitiveness, and enhance its profitability.
https://doi.org/10.1142/9789812773760_bmatter
AUTHOR INDEX.