Often, the duration of a reliability growth development test is specified in advance, and the decision to terminate or continue testing is made at discrete time intervals. These features are normally not captured by reliability growth models. This paper adapts a standard reliability growth model to determine the optimal planned test termination time. The underlying stochastic process is developed from an order-statistic argument, with Bayesian inference used to estimate the number of faults within the design and classical inference procedures used to assess the rate of fault detection. Inference procedures within this framework are explored, where it is shown that the maximum likelihood estimators possess a small bias and converge to the minimum variance unbiased estimator after a few tests for designs with a moderate number of faults. It is shown that the likelihood function can be bimodal when there is conflict between the observed rate of fault detection and the prior distribution describing the number of faults in the design. An illustrative example is provided.
In this paper we consider a software reliability model (SRM) that depends on the number of test cases executed during software testing. The resulting SRM is based on a two-dimensional discrete non-homogeneous Poisson process (NHPP) and can be considered a bivariate extension of the usual NHPP-based SRM, taking account of two time scales: calendar time and the number of test cases executed. We apply Marshall and Olkin's bivariate geometric distribution and develop a two-dimensional discrete geometric SRM. In a numerical example with software fault data observed in four real development projects, we investigate the goodness-of-fit of the proposed SRM and discuss its applicability to actual software reliability assessment.
It is commonplace to replicate critical components in order to increase system lifetimes and reduce failure rates. The case of a general N-plexed system, whose failures are modeled as N identical, independent nonhomogeneous Poisson process (NHPP) flows, each with rocof (rate of occurrence of failure) equal to λ(t), is considered here. Such situations may arise if either there is a time-dependent factor accelerating failures or if minimal repair maintenance is appropriate. We further assume that the system logic for the redundant block is 2-out-of-N:G. Reliability measures are obtained as functions of τ, which represents a fixed time by which maintenance teams must have replaced any failed component. Such measures are determined for small λ(t)τ, which is the parameter range of most interest. The triplex version, which often occurs in practice, is treated in some detail, where the system reliability is determined from the solution of a first-order differential-delay equation (DDE). This is solved exactly in the case of constant λ(t), but must be solved numerically in general. A general means of numerical solution for the triplex system is given, and an example case is solved for a rocof resembling a bathtub curve.
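The component failure flows in the abstract above are NHPPs with a time-varying rocof λ(t). As a minimal illustration of how such a flow can be sampled, for instance to cross-check the analytic reliability measures by Monte Carlo, the sketch below uses Lewis-Shedler thinning. The function names and the bathtub-like rate are illustrative assumptions, not code from the paper.

```python
import random

def simulate_nhpp(rate, rate_max, horizon, seed=42):
    """Sample one component's failure times on [0, horizon] for an NHPP
    with rocof rate(t), via Lewis-Shedler thinning. rate_max must bound
    rate(t) from above on the whole interval."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # candidate arrival from HPP(rate_max)
        if t > horizon:
            return events
        if rng.random() <= rate(t) / rate_max:  # accept with probability rate(t)/rate_max
            events.append(t)

# Illustrative bathtub-shaped rocof: high early, low in midlife, rising late.
bathtub = lambda t: 0.6 if t < 1.0 else (0.1 if t < 8.0 else 0.4)
```

A Monte Carlo estimate of, say, 2-out-of-3:G system reliability would run three such independent flows and check whether two components are ever down simultaneously before their τ-bounded replacements complete.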
Today's fast-paced, competitive environment in science and technology demands highly reliable hardware and software in order to achieve new breakthroughs in quality and productivity. In this scenario, the first release of a software product includes enough features and functionality to make it useful to customers. Later, software companies have to offer upgrades or add-ons to survive in the market through a series of releases, each succeeding upgrade offering some innovative performance or new functionality that distinguishes it from past releases. In one-dimensional software reliability growth models (SRGM), researchers use a single factor such as testing time, testing effort, or coverage, whereas in a two-dimensional SRGM the process depends on two reliability growth factors, such as testing time and testing effort. In addition, we also consider the combined effect of bugs encountered during testing of the present release and user-reported bugs from the operational phase. The model developed in the paper considers the testing and the operational phases, where the fault removal phenomenon follows logistic and Weibull models, respectively. The paper also formulates an optimal release problem based on Multi-Attribute Utility Theory (MAUT). Lastly, the model is validated on a real dataset of software already released in the market with successive generations.
This paper presents a software reliability growth model framework built on a non-homogeneous Poisson process (NHPP). Software testing remains a paramount measure for validating the quality of software, and test coverage measures appraise and estimate the proportion and gradation of that testing; presenting an accurate picture of test coverage is therefore a prime requisite for guaranteeing software reliability. As an enhancement over existing models, the proposed model integrates testing coverage (TC), error propagation, and fault withdrawal efficiency while keeping the number of parameters restrained, making the framework more reliable for parameter estimation. A comparative analysis of the proposed model against several existing models has been carried out on failure data from three real-world software applications using six comparison criteria. The weighted criteria rank method has then been used to rank the models and assess their performance. In addition, a sensitivity analysis demonstrates the effect of the proposed model's parameters on the mean value function.
This paper introduces an approach to enhancing software reliability assessment by integrating testing coverage within a nonhomogeneous Poisson process (NHPP) model. Software reliability is of paramount significance to both developers and users, and hinges on precise reliability estimation. Three real-world datasets are used to examine the goodness-of-fit of the proposed model, and its performance is compared with 11 existing NHPP models. The performance of all the models is evaluated using five goodness-of-fit criteria: mean square error (MSE), Akaike's information criterion (AIC), Bayesian information criterion (BIC), predictive risk ratio (PRR), and Pham's criterion (PC). The results reveal that, in both predictive power and goodness-of-fit, the proposed model surpasses the others.
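Several of the criteria named above can be computed directly from the residuals between observed cumulative fault counts and a fitted mean value function. The sketch below uses one common least-squares form of MSE, AIC, and BIC from the SRGM literature; exact definitions (and Pham's criterion in particular) vary by paper, so this is an assumed textbook form, not the paper's own code.

```python
import math

def goodness_of_fit(observed, predicted, k):
    """Residual-based fit criteria for a fitted SRGM.

    observed, predicted: cumulative fault counts at each observation time
    k: number of free parameters in the fitted model
    """
    n = len(observed)
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    mse = sse / (n - k)                           # common SRGM convention: divide by n - k
    aic = n * math.log(sse / n) + 2 * k           # least-squares form of AIC
    bic = n * math.log(sse / n) + k * math.log(n) # BIC penalizes parameters by log(n)
    return mse, aic, bic
```

Lower values indicate better fit for all three; AIC and BIC differ only in how heavily they penalize extra parameters.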
This paper presents a new stochastic model for determining the optimal release time for computer software in the testing phase, taking account of the debugging time lag. Most earlier software release models assumed that a detected error can be removed instantaneously; in other words, the effect of the software maintenance action on the optimal software release time was not discussed quantitatively. The main purpose of this work is to relate the optimal software release policy to the arrival-service process of the software operation phase by users. We use non-homogeneous Poisson process (NHPP) software reliability growth models to describe the software error detection phenomena and obtain the optimal software release policies minimizing the expected total software cost. As a result, the usage circumstances of the software in the operation phase have a monotone effect on software release planning.
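The idea of minimizing expected total software cost over a release time T can be illustrated with the classic cost structure: a cost per fault fixed in testing, a higher cost per fault fixed in operation, and a cost per unit of testing time, applied to a Goel-Okumoto mean value function. This is a standard textbook formulation, not the debugging-time-lag model of the abstract; all parameter names and values are illustrative.

```python
import math

def expected_cost(T, a, b, c1, c2, c3, life):
    """Expected total cost of releasing at time T (classic form):
    c1 per fault removed in testing, c2 (> c1) per fault removed in
    operation up to the software lifetime, c3 per unit testing time."""
    m = lambda t: a * (1.0 - math.exp(-b * t))  # Goel-Okumoto mean value function
    return c1 * m(T) + c2 * (m(life) - m(T)) + c3 * T

def optimal_release(a, b, c1, c2, c3, life, step=0.1):
    """Grid search for the cost-minimizing release time on [0, life]."""
    ts = [i * step for i in range(int(life / step) + 1)]
    return min(ts, key=lambda T: expected_cost(T, a, b, c1, c2, c3, life))
```

For this cost structure the optimum also has a closed form, T* = (1/b) ln(a b (c2 - c1) / c3), which the grid search approximates.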
In this paper, software reliability models based on a nonhomogeneous Poisson process (NHPP) are summarized, and a new NHPP-based model is presented. All models are applied to two widely used data sets. For the failure data used here, the new model fits and predicts much better than the existing models. A software program, written in Excel and Visual Basic, facilitates the task of obtaining the estimators of the model parameters.
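Obtaining parameter estimates for an NHPP mean value function, as the Excel and Visual Basic tool above does, can be sketched with a crude grid-search least-squares fit of the Goel-Okumoto model m(t) = a(1 - e^{-bt}). Real tools would use maximum likelihood or a proper numerical optimizer; the model choice and grids here are assumptions for illustration only.

```python
import math

def go_mean(t, a, b):
    # Goel-Okumoto mean value function: expected cumulative faults by time t
    return a * (1.0 - math.exp(-b * t))

def fit_go(times, counts, a_grid, b_grid):
    """Pick (a, b) from the candidate grids minimizing the sum of squared
    errors against observed cumulative fault counts."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((go_mean(t, a, b) - y) ** 2 for t, y in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]
```

Here a estimates the total fault content and b the per-fault detection rate, so the fitted pair directly supports the reliability predictions compared across data sets.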