  • Article (No Access)

    Reliability as Key Software Quality Metric: A Multi-Criterion Intuitionistic Fuzzy-TOPSIS-Based Analysis

    Software quality is governed by many parameters, of which reliability has traditionally received the most attention from researchers and practitioners. However, today's ever-demanding environment challenges software creators to justify treating reliability as the most important attribute governing software quality when other important parameters, such as reusability, security, and resilience, are also available. Evaluating, ranking, and selecting the most appropriate attribute to govern software quality is a complex concern that requires a multi-criteria decision-making approach. In this paper, we propose an Intuitionistic Fuzzy Set-based TOPSIS approach to show why reliability is one of the most preferable parameters for governing software quality. To collate the individual opinions of decision makers, software developers from various firms were surveyed to rate the importance of the various criteria and alternatives.
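The core TOPSIS ranking step of such a multi-criteria analysis can be sketched in a few lines. The alternatives, decision matrix, and criteria weights below are invented for illustration and are not data from the paper; all criteria are treated as benefit criteria:

```python
# Illustrative (crisp) TOPSIS ranking of software-quality attributes.
# The ratings and weights are hypothetical, not taken from the paper.
import math

alternatives = ["Reliability", "Reusability", "Security", "Resilience"]
weights = [0.4, 0.3, 0.3]  # assumed criteria weights, summing to 1
# Rows: alternatives; columns: hypothetical expert ratings per criterion.
matrix = [
    [9, 8, 9],   # Reliability
    [6, 7, 5],   # Reusability
    [8, 6, 8],   # Security
    [7, 7, 6],   # Resilience
]

def topsis(matrix, weights):
    n_crit = len(matrix[0])
    # 1. Vector-normalize each column, then apply the criteria weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # 2. Positive/negative ideal solutions (all criteria are benefit criteria here).
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    # 3. Relative closeness to the positive ideal.
    scores = []
    for row in v:
        d_best, d_worst = math.dist(row, best), math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

scores = topsis(matrix, weights)
ranking = sorted(zip(alternatives, scores), key=lambda t: -t[1])
for name, s in ranking:
    print(f"{name}: {s:.3f}")
```

Because the hypothetical Reliability ratings dominate every criterion, its closeness coefficient is 1.0 and it ranks first. Intuitionistic fuzzy TOPSIS replaces these crisp ratings with membership/non-membership pairs, but the ranking machinery is analogous.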

  • Article (No Access)

    FAILURE MODES IN MEDICAL DEVICE SOFTWARE: AN ANALYSIS OF 15 YEARS OF RECALL DATA

    Most complex systems today contain software, and system failures activated by software faults can provide lessons for software development practices and software quality assurance. This paper presents an analysis of software-related failures of medical devices that caused no death or injury but led to recalls by the manufacturers. The analysis categorizes the failures by their symptoms and faults, and discusses methods of preventing and detecting faults in each category. The nature of the faults provides lessons about the value of generally accepted quality practices for prevention and detection applied prior to system release. It also provides some insight into the need for formal requirements specification and for improved testing of complex hardware-software systems.

  • Article (No Access)

    CLASSIFYING SOFTWARE MODULES INTO THREE RISK GROUPS

    Building on our earlier work in detecting high-risk software modules in object-oriented systems, we extend the two-group discriminant classification model to three risk groups. First, we give an overview of the discriminant modeling methodology. Using traditional and object-oriented software product measures collected from a commercial system, we develop two discriminant fault models: one incorporates only traditional measures, while the other includes both traditional and object-oriented measures. The independent variables of both models are principal components derived from the observed software measures. The models are used to classify the modules comprising the system into three groups: high, medium, and low risk. Quality of fit and classification performance of both models are reported.

    We show that for this case study, the addition of the object-oriented measures enhances the model by reducing the overall misclassification rate and significantly reducing the misclassifications in the medium group. Finally, we present a cost-based method to determine under what conditions a three-group model is superior to the simpler two-group model. Our results suggest that additional case studies are needed to develop a clearer picture of three-group discriminant models and of the utility of object-oriented software measures in general.
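As a rough illustration of three-group classification (the paper itself fits discriminant models on principal components of the software measures; the synthetic two-dimensional "measures" and group centroids below are invented), a nearest-centroid rule assigns each module to the risk group whose centroid is closest:

```python
# Minimal nearest-centroid sketch of three-group module classification.
# Centroids and module measures are hypothetical stand-ins for the
# paper's discriminant models fitted on principal components.
import math

# Hypothetical group centroids in a (size-related, coupling-related) space.
centroids = {
    "low":    (10.0, 2.0),
    "medium": (40.0, 6.0),
    "high":   (90.0, 12.0),
}

def classify(measures):
    """Assign the risk group whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda g: math.dist(measures, centroids[g]))

modules = [(12.0, 1.5), (45.0, 7.0), (85.0, 11.0)]
groups = [classify(m) for m in modules]
print(groups)  # prints ['low', 'medium', 'high']
```

A real discriminant model additionally accounts for group covariance structure and, as the abstract notes, misclassification costs can decide whether the three-group split is worth its added complexity over a two-group model.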

  • Article (No Access)

    PREDICTING SOFTWARE CHANGE IN AN OPEN SOURCE SOFTWARE USING MACHINE LEARNING ALGORITHMS

    Changes are incorporated into software for various reasons, such as ever-increasing customer demands, changes in the environment, or the detection of bugs; this results in multiple versions and the evolving nature of software. Identifying the parts of a software system that are more prone to change than others is an important activity: knowing which classes are change prone helps developers take focused and timely preventive actions on classes with similar characteristics in future releases. In this paper, we study the relationship between various object-oriented (OO) metrics and change proneness. We collected a set of OO metrics and the change data of each class that appeared in two versions of an open source software, 'Java TreeView', i.e., versions 1.1.6 and 1.0.3. We also built various models that can be used to identify change-prone classes, using machine learning and statistical techniques, and compared their performance. The results are analyzed using the Area Under the Curve (AUC) obtained from Receiver Operating Characteristic (ROC) analysis, and they show that the models built using both machine learning and statistical methods perform well at predicting change-prone classes. Based on these results, it is reasonable to claim that quality models have a significant relationship with OO metrics and hence can be used by researchers for the early prediction of change-prone classes.
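The AUC used to compare such models can be computed directly from predicted change-proneness scores and actual change labels via the Mann-Whitney formulation, without tracing the ROC curve explicitly. The labels and scores below are invented for illustration:

```python
# AUC as the probability that a changed class is ranked above an
# unchanged one (ties count half). Data here is hypothetical.

def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = class changed between versions, 0 = unchanged (hypothetical data).
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.3, 0.8, 0.4, 0.6, 0.2, 0.7, 0.5]
print(round(auc(labels, scores), 3))  # prints 0.875
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is what makes AUC a convenient threshold-free yardstick for comparing the machine learning and statistical models.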

  • Article (No Access)

    REFACTORIZATION'S IMPACT ON SOFTWARE RELIABILITY

    Software refactorization is the process of changing a program's source code structure without changing its functionality. The purpose of refactorization is to make the program's source code easier to understand and maintain, which in the long term should yield code with fewer errors (i.e., more reliable code). In recent years many works have described refactorization, but until now no research has assessed the long-term influence of refactoring on reliability. In this work we present our fundamental study of software system reliability improvement in the context of refactoring, seeking to answer the question: what are the benefits of refactorization as far as reliability is concerned?

  • Article (No Access)

    DYNAMIC MODELS FOR TESTING BASED ON TIME SERIES ANALYSIS

    In this paper, we investigate a dynamic software quality model that incorporates software process and software product measures as covariates. Furthermore, the model is not based on execution time between failures. Instead, the method relies on data commonly available from simple problem tracking and source code control systems. Fault counts, testing effort, and code churn measures are collected from each build during the system test phase of a large telecommunications software system. We use this data to predict the number of faults to expect from one build to the next. The technique we use is called time series analysis and forecasting. The methodology assumes that future predictions are based on the history of past failures and related covariates. We show that the quality model incorporating testing effort as a covariate is better than the quality model derived from fault counts alone.
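As a toy sketch of the forecasting idea (not the paper's actual model, which also incorporates testing-effort and code-churn covariates), a simple autoregression predicts the next build's fault count from the previous build's count. The fault counts below are invented:

```python
# One-step-ahead fault-count forecast via an AR(1) model fitted with
# ordinary least squares: y[t] = a + b * y[t-1]. Data is hypothetical.

def fit_ar1(series):
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical fault counts observed in successive system-test builds.
faults = [42, 38, 33, 30, 26, 24, 21, 19]
a, b = fit_ar1(faults)
forecast = a + b * faults[-1]
print(f"next-build forecast: {forecast:.1f} faults")
```

Extending the regression with per-build covariates such as testing effort or code churn follows the same least-squares pattern with more predictors, which is the sense in which the paper's covariate-augmented quality model outperforms one derived from fault counts alone.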