The problem of nonpreemptively scheduling a set of n independent jobs with identical release times so as to minimize the number of late jobs is considered. It is known that the problem can be solved in O(n log n) time for a single processor, by an algorithm due to Moore, and that it becomes NP-hard for two or more identical processors, even if all jobs have identical due dates. In this paper we give a fast heuristic, based on Moore's algorithm, for the multiprocessor case. Like Moore's algorithm, our heuristic admits an O(n log n) implementation. It is shown that the performance ratio of the heuristic is 4/3 for two identical processors, where the performance ratio is defined to be the least upper bound of the ratio of the number of on-time jobs in an optimal schedule to that in the schedule generated by the heuristic.
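For reference, the single-processor algorithm the heuristic builds on can be sketched briefly. The following Python sketch implements the classical Moore-Hodgson procedure (process jobs in earliest-due-date order; whenever a job would finish late, discard the longest job scheduled so far); the multiprocessor heuristic itself is not reproduced here, and the job data are made up for illustration.

```python
import heapq

def moore_hodgson(jobs):
    """Minimize the number of late jobs on one machine.
    jobs: list of (processing_time, due_date) pairs.
    Returns the on-time jobs in earliest-due-date order."""
    scheduled = []   # max-heap of scheduled jobs via negated processing times
    completion = 0
    for p, d in sorted(jobs, key=lambda job: job[1]):  # EDD order
        heapq.heappush(scheduled, (-p, p, d))
        completion += p
        if completion > d:
            # Current set is infeasible: drop the longest scheduled job.
            _, p_max, _ = heapq.heappop(scheduled)
            completion -= p_max
    return sorted(((p, d) for _, p, d in scheduled), key=lambda job: job[1])

# Four jobs as (processing_time, due_date); three can finish on time.
print(moore_hodgson([(2, 3), (3, 5), (2, 6), (4, 8)]))
```

With a heap, each job is pushed and popped at most once, which is what gives the O(n log n) bound cited above.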
Software reliability plays an important role in assuring the quality of software. To ensure software reliability, the software is tested thoroughly during the testing phase. The time invested in the testing phase, or equivalently the optimal software release time, depends on the level of reliability to be achieved. There are two different concepts related to software reliability, viz., testing reliability and operational reliability. In this paper, we compare both types of software reliability to determine the optimal testing time of the software so as to minimize the total expected software maintenance cost. We consider software consisting of a number of clusters of modules, each having a different number of errors and a different failure rate. A hyperexponential model is employed for analyzing software reliability growth. Parameter estimation using the maximum likelihood estimation technique is also discussed. Numerical illustrations are provided to explore the effect of various parameters on reliability and maintenance cost. It is observed that the operational reliability concept should be adopted for the software testing time problem.
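As a rough illustration of this model class, the sketch below evaluates a hyperexponential growth curve for a system of fault clusters. The cluster weights and rates are invented for illustration, and the conditional reliability formula R(x|t) = exp(-(m(t+x) - m(t))) is the standard NHPP form, stated here as an assumption rather than the paper's exact formulation.

```python
import math

# Hypothetical cluster parameters: (weight p_i, failure rate lambda_i);
# values are illustrative, not taken from the paper.
clusters = [(0.5, 0.10), (0.3, 0.05), (0.2, 0.01)]

def expected_faults(t, total_faults=100):
    """Mean faults detected by time t under a hyperexponential
    growth model: m(t) = N * sum_i p_i * (1 - exp(-lambda_i * t))."""
    return total_faults * sum(p * (1 - math.exp(-lam * t))
                              for p, lam in clusters)

def reliability(t, x=1.0):
    """Probability of no failure in (t, t + x)."""
    return math.exp(-(expected_faults(t + x) - expected_faults(t)))

for t in (10, 50, 100):
    print(f"t={t:>3}: m(t)={expected_faults(t):6.2f}, R(1|t)={reliability(t):.4f}")
```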
Reliable software is the need of the hour, especially as software is an indispensable part of our technological world. Many software reliability growth models (SRGMs) incorporating the change-point approach have been proposed in the literature. A change point is defined as the point at which the fault detection rate changes; this can happen for a number of reasons, e.g., the proficiency of the testing team or the nature of the faults to be detected. In this paper we develop a generalized modeling framework that incorporates a change point to capture the changing nature of the fault detection rate, and we formulate a related release-time problem that minimizes the total cost of the software while maintaining a desirable level of reliability. A numerical example is given for the release problem, and the proposed models are validated on real software error data to show their goodness of fit and applicability.
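To make the change-point idea concrete, here is a minimal sketch assuming a Goel-Okumoto-type mean value function whose detection rate switches from b1 to b2 at a change point tau, together with a simple release-time cost scan. All parameters and the cost coefficients are illustrative assumptions, not the paper's generalized framework.

```python
import math

# Illustrative parameters: a = total fault content, b1/b2 = fault
# detection rates before/after the change point tau.
a, b1, b2, tau = 120.0, 0.05, 0.12, 20.0

def m(t):
    """Change-point mean value function; continuous at t = tau."""
    if t <= tau:
        return a * (1 - math.exp(-b1 * t))
    return a * (1 - math.exp(-b1 * tau - b2 * (t - tau)))

def cost(T, c1=1.0, c2=5.0, c3=0.4):
    """Total cost: fixing faults during testing (c1), fixing escaped
    faults in the field (c2), and per-unit testing cost (c3)."""
    return c1 * m(T) + c2 * (a - m(T)) + c3 * T

# Crude scan for the cost-minimizing release time.
T_star = min((T / 10 for T in range(1, 2001)), key=cost)
print(f"approximate optimal release time: {T_star:.1f}, cost: {cost(T_star):.1f}")
```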
The software industry has come a long way since its origin. Technical advancements are taking place faster than ever, which has further increased the pressure on software developers. They are trying hard to keep pace with these rapid developments by devising strategies that increase the pace of their work without significantly affecting software quality and reliability. One such strategy is distributed development of software, in which the software is composed of two different types of components, viz., newly developed and re-used components. Software developed in a Distributed Development Environment (DDE) is characterized by enhanced availability and reliability. At the same time, ever-growing competition in the market and the increasing needs of customers oblige developers to add enhanced functionality to the software from time to time, leading to multiple releases. The added functionality further increases the existing complexity of the software, and the testing team may not be able to remove faults perfectly (imperfect debugging), or may add more bugs due to a lack of knowledge about the software in the initial phase (error generation). In this paper, we incorporate this real-life phenomenon into a multi-upgrade model for software fault removal in a distributed environment. Within the present framework we describe a cost model to obtain the optimal release time under multiple upgrades of software in a distributed environment. To validate the analytical results of the proposed framework, a numerical illustration is provided.
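A minimal sketch of the imperfect-debugging ingredient, under stated assumptions: if the fault content grows as a + alpha*m(t) (error generation) while faults are detected at rate b, the mean value function solves to m(t) = a/(1-alpha) * (1 - exp(-b*(1-alpha)*t)), and a DDE system can be approximated as the sum of a newly developed and a re-used component with separate parameters. The parameter values below are hypothetical, not the paper's.

```python
import math

def m_component(t, a, b, alpha):
    """Imperfect-debugging mean value function for one component,
    assuming fault content grows as a + alpha * m(t):
    m(t) = a / (1 - alpha) * (1 - exp(-b * (1 - alpha) * t))."""
    return a / (1 - alpha) * (1 - math.exp(-b * (1 - alpha) * t))

def m_total(t):
    # Hypothetical parameters for the two component types in a
    # distributed environment; values are illustrative only.
    new_dev = m_component(t, a=80.0, b=0.04, alpha=0.05)   # newly developed
    reused  = m_component(t, a=30.0, b=0.10, alpha=0.02)   # re-used
    return new_dev + reused

for t in (10, 30, 60):
    print(f"t={t:>2}: expected faults removed = {m_total(t):.1f}")
```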
The testing life cycle poses the problem of achieving a high level of software reliability while also achieving an optimal release time for the software. To enhance the reliability of the software, retain its market potential, and reduce the testing cost, the enterprise needs to know when to release the software and when to stop testing. To achieve this, enterprises usually release their product early in the market and then release patches subsequently. A patch is a piece of software designed to update a computer program or its supporting data to fix or improve it. Software patching is a process through which enterprises debug, update, or enhance their software. When used as a debugging process, software patching ensures an optimal release for the product, increasing the reliability of the software while reducing the economic overhead of testing. Today, due to the diverse and distributed nature of software, its journey in the market is dynamic, making patching an inherent aspect of testing. Researchers have worked on minimizing the testing cost, but so far reliability has not been considered in models for optimal time scheduling using patching. In this paper, we address reliability, a major attribute of software quality. Thus, to address the issues of testing cost, the release time of the software, and a desirable reliability level, we propose a reliability growth model implementing software patching to make the software system reliable and cost effective. The numerical illustration is implemented using a real-life software failure data set.
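The joint cost-and-reliability trade-off can be sketched generically: choose the release time T that minimizes a cost splitting fault fixes between cheap in-house testing and expensive post-release patching, subject to a reliability target. The Goel-Okumoto parameters, cost coefficients, and target below are assumptions for illustration, not the model proposed in the paper.

```python
import math

a, b = 150.0, 0.06                       # illustrative fault content and rate
def m(t): return a * (1 - math.exp(-b * t))
def reliability(T, x=1.0):               # prob. of no failure in (T, T + x)
    return math.exp(-(m(T + x) - m(T)))

def cost(T, c_test=2.0, c_patch=12.0, c_time=0.5):
    """In-house fixes up to release T, patch-based fixes of the
    remaining faults after release, plus per-unit testing cost."""
    return c_test * m(T) + c_patch * (a - m(T)) + c_time * T

R_target = 0.95
candidates = [T / 10 for T in range(1, 2001) if reliability(T / 10) >= R_target]
T_star = min(candidates, key=cost)
print(f"cheapest feasible release: T*={T_star:.1f}, "
      f"R={reliability(T_star):.3f}, cost={cost(T_star):.1f}")
```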
A number of software reliability growth models have been reported in the literature for open source software (OSS) systems, but the effect of up-gradations on the reliability growth of multiple releases of such software systems has been discussed by only a few. In this paper, a discrete modeling framework is proposed to study the reliability growth process of OSS systems with multiple releases. The proposed model is based on the assumption that during an up-gradation some new faults are introduced in the code, in addition to the leftover fault content of the previous version. To validate our model, we have chosen two successful open source projects, Mozilla and Apache, for their multi-release failure datasets. Graphs representing the goodness of fit of the proposed model have been drawn. The parameter estimates and goodness-of-fit measures suggest that the proposed software reliability growth model for multi-release OSS fits the actual datasets very well. An optimal release policy has been formulated by taking into account the cost of fault removal during the testing and operational phases and the reliability targets pre-specified by the decision makers. In addition, a numerical example along with a sensitivity analysis is provided to illustrate the optimal release policy.
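A minimal sketch of the discrete multi-release idea, under stated assumptions: a discrete exponential SRGM m(n) = a * (1 - (1 - b)^n) over n testing periods, where each release's fault content combines freshly added faults, faults left over from the previous version, and a fraction introduced during the upgrade itself. The two-release numbers and the introduction rate are illustrative, not the paper's fitted values.

```python
def discrete_mvf(a, b, n):
    """Discrete exponential SRGM: faults detected after n periods,
    m(n) = a * (1 - (1 - b)**n)."""
    return a * (1 - (1 - b) ** n)

def release_fault_content(new_faults, leftover, intro_rate=0.1):
    """Fault content of a release: fresh faults, leftover faults from
    the previous version, and faults introduced during the upgrade
    (intro_rate is an illustrative assumption)."""
    return new_faults + leftover + intro_rate * new_faults

# Hypothetical two-release scenario (all numbers illustrative).
b, periods = 0.08, 30
a1 = release_fault_content(new_faults=100, leftover=0)
detected1 = discrete_mvf(a1, b, periods)
a2 = release_fault_content(new_faults=60, leftover=a1 - detected1)
detected2 = discrete_mvf(a2, b, periods)
print(f"release 1: {detected1:.1f}/{a1:.1f} faults found; "
      f"release 2 starts with {a2:.1f}")
```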