This paper presents a software reliability growth model framework based on a non-homogeneous Poisson process (NHPP). Software testing remains a paramount measure for validating software quality, and test coverage measures appraise and estimate the extent and depth of testing. Presenting an accurate picture of test coverage is therefore a prime requisite for guaranteeing software reliability. As an enhancement over existing models, the proposed model integrates testing coverage (TC), error propagation, and fault removal efficiency while keeping the number of parameters restrained, making the framework more reliable for parameter estimation. A comparative analysis of the proposed model against several existing models has been carried out on failure data from three real-world software applications using six comparison criteria. Finally, the weighted criteria rank method has been used to rank the models and assess their performance. In addition, a sensitivity analysis demonstrates the effect of the proposed model's parameters on the mean value function.
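The abstract does not reproduce the proposed model's mean value function, so as an illustration of the NHPP setup it builds on, the sketch below uses the classic Goel-Okumoto form m(t) = a(1 − e^(−bt)) and fits its two parameters to synthetic failure data with a coarse grid search. The data and grids are hypothetical, and the estimator is a stand-in, not the paper's procedure.

```python
import math

def mvf(t, a, b):
    """Goel-Okumoto NHPP mean value function: expected cumulative faults by time t."""
    return a * (1.0 - math.exp(-b * t))

def fit_grid(times, faults, a_grid, b_grid):
    """Least-squares fit of (a, b) over a coarse grid (illustrative only)."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((mvf(t, a, b) - y) ** 2 for t, y in zip(times, faults))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best  # (sse, a, b)

# Synthetic failure data (hypothetical, not from the paper's datasets)
times  = [1, 2, 3, 4, 5, 6, 7, 8]
faults = [12, 22, 30, 36, 41, 44, 47, 49]

a_grid = [x for x in range(40, 81, 2)]        # candidate total fault counts
b_grid = [x / 100 for x in range(5, 61, 5)]   # candidate detection rates
sse, a_hat, b_hat = fit_grid(times, faults, a_grid, b_grid)
print(a_hat, b_hat, round(sse, 2))
```

With the fitted mean value function in hand, the comparison criteria mentioned in the abstract can be computed from the residuals between m(t_i) and the observed cumulative fault counts.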
This paper introduces an approach to enhancing software reliability modeling by integrating testing coverage within a non-homogeneous Poisson process (NHPP). Software reliability is of paramount significance to both developers and users and hinges on precise reliability estimation. Three real-world datasets are used to examine the goodness of fit of the proposed model, and its performance is compared with 11 existing NHPP models. All models are evaluated using five goodness-of-fit criteria: mean square error (MSE), Akaike's information criterion (AIC), Bayesian information criterion (BIC), predictive risk ratio (PRR), and Pham's criterion (PC). The results reveal that the proposed model surpasses the others in both predictive power and goodness of fit.
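The five criteria named above are all functions of the residuals between observed and predicted cumulative fault counts. Their exact forms vary slightly across papers; the sketch below uses the common SSE-based textbook versions (Gaussian-error approximations for AIC/BIC, and one frequently cited form of Pham's criterion), which may differ in detail from the paper's definitions.

```python
import math

def fit_criteria(observed, predicted, k):
    """SSE-based goodness-of-fit criteria for a model with k parameters.
    Common textbook forms; individual papers sometimes use variants."""
    n = len(observed)
    sse = sum((p - o) ** 2 for o, p in zip(observed, predicted))
    mse = sse / (n - k)                              # mean square error
    aic = n * math.log(sse / n) + 2 * k              # Gaussian-error AIC
    bic = n * math.log(sse / n) + k * math.log(n)    # Gaussian-error BIC
    prr = sum(((p - o) / p) ** 2                     # predictive risk ratio
              for o, p in zip(observed, predicted) if p != 0)
    pc = ((n - k) / 2) * math.log(sse / n) \
         + k * ((n - 1) / (n - k))                   # Pham's criterion (one common form)
    return {"MSE": mse, "AIC": aic, "BIC": bic, "PRR": prr, "PC": pc}

# Illustrative usage with hypothetical observed/fitted cumulative fault counts
obs  = [12, 22, 30, 36]
pred = [11.5, 22.8, 30.4, 35.6]
print(fit_criteria(obs, pred, k=2))
```

Lower values indicate a better fit for all five criteria, which is what makes a combined ranking across models possible.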
The efficiency and performance of a software application largely depend on the testing strategy adopted by the firm. Apart from the tools, techniques, and skills used for testing, the testing duration also has an important influence on software reliability and defines the operational performance of the software. The decision on testing duration depends on the failure behavior the software exhibits during testing and the cost incurred at the various phases of development. In this paper, we study a multi-release model whose fault removal process is affected by random irregular fluctuations (white noise) and error generation, and determine the optimal testing time for multi-release software. The fault removal process is governed by testing coverage, which is itself subject to random fluctuations. The model shows encouraging results as it handles the stochastic nature of the fault detection process. The optimal testing time is determined using a genetic algorithm, with the goal of minimizing the expected development cost while achieving the desired reliability level for each release. A real-life four-release fault dataset from Tandem Computers has been used to demonstrate the methodology numerically. Sensitivity analysis shows that the presence of white noise directly affects the cost and the optimal testing duration. Capturing irregular fluctuations in the fault detection rate improves sensitivity, flexibility, early detection, the discovery of unsuspected patterns, and fault diagnosis. By going beyond existing methodologies, this approach offers distinct advantages for detecting faults and can contribute to more dependable and efficient systems in a variety of domains.
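The abstract does not give the paper's multi-release cost model or its stochastic coverage function, so the sketch below illustrates only the optimization idea: a genetic algorithm searching for a release time that minimizes a hypothetical cost function (per-unit testing cost, in-test fix cost, and a higher field fix cost) under a reliability constraint, with a deterministic Goel-Okumoto mean value function standing in for the stochastic model. All constants are made up for illustration.

```python
import math
import random

random.seed(7)

# Hypothetical model constants (placeholders, not the paper's values)
A, B = 100.0, 0.25                           # expected total faults, detection rate
C_TEST, C_FIX, C_FIELD = 50.0, 10.0, 200.0   # testing/unit, in-test fix, field fix costs
R_TARGET = 0.95                              # desired reliability level at release
T_MAX = 40.0                                 # upper bound on testing time

def m(t):
    return A * (1.0 - math.exp(-B * t))      # expected faults removed by time t

def reliability(t):
    return m(t) / A                          # fraction of faults removed (a simple proxy)

def cost(t):
    c = C_TEST * t + C_FIX * m(t) + C_FIELD * (A - m(t))
    if reliability(t) < R_TARGET:            # heavy penalty for infeasible release times
        c += 1e6 * (R_TARGET - reliability(t))
    return c

def genetic_search(pop_size=30, gens=60):
    """Tiny real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    pop = [random.uniform(0.0, T_MAX) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(random.sample(pop, 3), key=cost)       # tournament winner 1
            p2 = min(random.sample(pop, 3), key=cost)       # tournament winner 2
            child = 0.5 * (p1 + p2) + random.gauss(0, 1.0)  # blend + mutation
            nxt.append(min(max(child, 0.0), T_MAX))
        pop = nxt
    return min(pop, key=cost)

t_opt = genetic_search()
print(round(t_opt, 2), round(cost(t_opt), 2))
```

A GA is a reasonable choice here because the paper's actual objective, with white noise in the coverage function, is non-smooth; the same search loop would apply unchanged to a cost function evaluated by simulation.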