In this paper, software reliability models based on a nonhomogeneous Poisson process (NHPP) are summarized, and a new NHPP-based model is presented. All models are applied to two widely used data sets. For the failure data considered, the new model fits and predicts substantially better than the existing models. A software tool, written in Excel and Visual Basic, is provided to facilitate estimation of the model parameters.
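As a rough illustration of the kind of parameter estimation described above, the sketch below fits the classic Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts by nonlinear least squares in Python rather than Excel and Visual Basic. The choice of Goel-Okumoto as a stand-in for the paper's models, the data values, and the starting guesses are all assumptions, not taken from the paper.

# Sketch: fit the Goel-Okumoto NHPP mean value function m(t) = a*(1 - exp(-b*t))
# to cumulative failure counts. The counts below are illustrative placeholders,
# not the data sets used in the paper.
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    # a: expected total number of faults, b: fault detection rate
    return a * (1.0 - np.exp(-b * t))

t = np.arange(1, 11, dtype=float)  # test weeks (illustrative)
cum_failures = np.array([5, 9, 13, 16, 18, 20, 21, 22, 23, 23], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, cum_failures, p0=[30.0, 0.1])
print(f"a ≈ {a_hat:.1f} faults, b ≈ {b_hat:.3f} per week")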
This paper presents a software reliability growth model framework based on a non-homogeneous Poisson process (NHPP). Software testing remains a principal means of assuring software quality, and test coverage measures estimate the extent and thoroughness of that testing; an accurate picture of test coverage is therefore a prerequisite for guaranteeing software reliability. As an enhancement over existing models, the proposed model integrates testing coverage (TC), error propagation, and fault removal efficiency while keeping the number of parameters small, making the framework more tractable for parameter estimation. A comparative analysis of the proposed model and several existing models has been carried out on failure data from three real-world software applications using six comparison criteria. The weighted criteria rank method has then been used to rank the models and assess their performance. In addition, a sensitivity analysis demonstrates the effect of the parameters of the proposed model on the mean value function.
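A minimal sketch of the weighted criteria rank method mentioned above is given below. The paper's six comparison criteria and its weights are not listed here, so mean squared error, bias, and variation with equal weights stand in for them purely as assumptions.

# Sketch of a weighted-criteria ranking of SRGMs. The criteria and weights are
# illustrative stand-ins, not the six criteria used in the paper.
import numpy as np

def fit_criteria(observed, predicted):
    # Lower is better for all three stand-in criteria.
    resid = observed - predicted
    mse = np.mean(resid ** 2)
    bias = np.mean(resid)
    variation = np.sqrt(np.sum((resid - bias) ** 2) / (len(observed) - 1))
    return np.array([mse, abs(bias), variation])

def weighted_criteria_rank(predictions, observed, weights=None):
    # predictions: dict mapping model name -> predicted cumulative failure counts
    names = list(predictions)
    scores = np.array([fit_criteria(observed, predictions[n]) for n in names])
    ranks = scores.argsort(axis=0).argsort(axis=0) + 1   # rank models per criterion
    weights = np.ones(scores.shape[1]) if weights is None else np.asarray(weights)
    totals = (ranks * weights).sum(axis=1)               # smaller total = better rank
    return sorted(zip(names, totals), key=lambda pair: pair[1])

observed = np.array([4.0, 7.0, 11.0, 14.0, 16.0])
print(weighted_criteria_rank(
    {"model A": np.array([3.5, 7.2, 10.8, 14.1, 16.3]),
     "model B": np.array([5.0, 8.5, 12.5, 15.5, 17.5])},
    observed))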
Today's fast-paced, competitive environment in science and technology demands highly reliable hardware and software in order to achieve new breakthroughs in quality and productivity. In this setting, the first release of a software product includes enough features and functionality to make it useful to customers; thereafter, software companies must offer upgrades or add-ons through a series of releases in order to survive in the market. Each succeeding release offers some new capability or functionality that distinguishes it from past releases. One-dimensional Software Reliability Growth Models (SRGM) use a single factor such as testing time, testing effort, or coverage, whereas a two-dimensional SRGM depends on two reliability growth factors, such as testing time and testing effort. In addition, we consider the combined effect of bugs encountered during testing of the present release and bugs reported by users during the operational phase. The model developed in the paper covers both the testing and the operational phase, with the fault removal phenomenon following a logistic and a Weibull model, respectively. The paper also formulates an optimal release problem based on Multi-Attribute Utility Theory (MAUT). Finally, the model is validated on a real data set for software released in the market over successive generations.
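A MAUT-style release decision of the kind described above can be sketched as weighing a cost utility against a reliability utility and choosing the testing duration that maximizes the combined score. The mean value function, cost figures, and attribute weights below are illustrative assumptions, not the paper's two-dimensional model or its calibrated values.

# Illustrative MAUT-style release-time selection: combine normalized cost and
# reliability utilities into one weighted objective and pick the best T.
import numpy as np

a, b = 120.0, 0.05                  # assumed fault content and detection rate
c_test, c_field = 1.0, 8.0          # assumed per-fault fix costs (testing vs field)
w_cost, w_rel = 0.4, 0.6            # assumed attribute weights (sum to 1)

T = np.linspace(1, 200, 400)
m_T = a * (1 - np.exp(-b * T))      # expected faults removed by time T
cost = c_test * m_T + c_field * (a - m_T) + 0.5 * T   # fixing + field + effort cost
rel = m_T / a                        # fraction of faults removed as a reliability proxy

u_cost = (cost.max() - cost) / (cost.max() - cost.min())  # normalize to [0, 1]
u_rel = (rel - rel.min()) / (rel.max() - rel.min())
utility = w_cost * u_cost + w_rel * u_rel

print(f"suggested release time ≈ {T[utility.argmax()]:.0f} time units")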
Often, the duration of a reliability growth development test is specified in advance and the decision to terminate or continue testing is made at discrete time intervals. These features are not normally captured by reliability growth models. This paper adapts a standard reliability growth model to determine the optimal time at which to plan to terminate testing. The underlying stochastic process is developed from an Order Statistic argument, with Bayesian inference used to estimate the number of faults within the design and classical inference procedures used to assess the rate of fault detection. Inference procedures within this framework are explored, and it is shown that the Maximum Likelihood Estimators possess a small bias and converge to the Minimum Variance Unbiased Estimator after a few tests for designs with a moderate number of faults. It is also shown that the Likelihood function can be bimodal when there is conflict between the observed rate of fault detection and the prior distribution describing the number of faults in the design. An illustrative example is provided.
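The paper's exact formulation is not reproduced here, but the standard building block it describes, an order-statistic likelihood for the observed failures combined with a prior on the number of faults, can be sketched as follows. The Jelinski-Moranda-style failure rates, the Poisson prior, and all numerical values are assumptions made for illustration only.

# Sketch: order-statistic (Jelinski-Moranda-style) likelihood for inter-failure
# times combined with a Poisson prior on the fault count N, giving an
# unnormalized posterior over N. Not the paper's exact formulation.
import numpy as np
from scipy.stats import poisson

def log_likelihood(N, phi, times):
    # times: inter-failure times t_1..t_k; the i-th failure occurs at rate phi*(N - i + 1)
    k = len(times)
    if N < k:
        return -np.inf
    rates = phi * (N - np.arange(k))
    return np.sum(np.log(rates) - rates * times)

times = np.array([0.5, 0.8, 1.1, 1.9, 3.2])  # illustrative inter-failure times
phi = 0.1                                    # assumed detection rate (estimated classically in the paper)
Ns = np.arange(1, 60)
log_post = np.array([log_likelihood(N, phi, times) for N in Ns]) + poisson.logpmf(Ns, mu=20)
post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"posterior mode for the number of faults N: {Ns[post.argmax()]}")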