Predicting the quality of system modules prior to software testing and operations can benefit the software development team. Such a timely reliability estimation can be used to direct cost-effective quality improvement efforts to the high-risk modules. Tree-based software quality classification models based on software metrics are used to predict whether a software module is fault-prone or not fault-prone. They are white-box quality estimation models that achieve good accuracy and are simple to interpret.
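As a rough illustration of the kind of white-box model described above, the sketch below fits a small CART-style classification tree that labels modules as fault-prone or not fault-prone from module-level metrics. The metric names, synthetic data, and scikit-learn classifier are assumptions for illustration only; they are not the dataset or model used in the study.

```python
# Illustrative sketch: a CART-style classification tree that labels modules
# as fault-prone (1) or not fault-prone (0) from software metrics.
# Metric names, data, and thresholds are hypothetical stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical module-level metrics: lines of code, cyclomatic complexity,
# and number of unique operators (Halstead-style).
n = 500
X = np.column_stack([
    rng.integers(20, 2000, n),   # LOC
    rng.integers(1, 60, n),      # cyclomatic complexity
    rng.integers(5, 120, n),     # unique operators
])
# Synthetic label: larger, more complex modules are more likely fault-prone.
y = ((X[:, 0] > 800) & (X[:, 1] > 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
# The learned rules are easy to read, which is the white-box appeal noted above.
print(export_text(tree, feature_names=["loc", "cyclomatic", "operators"]))
```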
An in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm is presented. Many classification algorithms have memory limitations, including the requirement that datasets be memory resident. SPRINT removes these limitations and provides fast, scalable analysis. It extends a commonly used decision tree algorithm, CART, and provides a unique tree-pruning technique based on the Minimum Description Length (MDL) principle. By combining MDL pruning with the modified classification algorithm, SPRINT yields classification trees with useful accuracy. The case study consists of software metrics collected from a very large telecommunications system. It is observed that classification trees built by SPRINT are more balanced and demonstrate better stability than those built by CART.
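The MDL idea behind such pruning can be illustrated with a deliberately simplified sketch: a subtree is collapsed to a leaf when the description length of "leaf plus its misclassifications" is no larger than that of keeping the split and its children. The cost model below (one bit per node type, log2-coded error counts, a fixed split cost) is an assumption for illustration and is not SPRINT's exact encoding.

```python
# Simplified MDL-style pruning: replace a subtree with a leaf when encoding
# the leaf and its errors is no more expensive than encoding the split and
# both children. The cost model is intentionally simplified.
from dataclasses import dataclass
from math import log2
from typing import Optional


@dataclass
class Node:
    errors_as_leaf: int            # training errors if this node became a leaf
    split_cost: float = 0.0        # bits to encode the split test (internal nodes)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    @property
    def is_leaf(self) -> bool:
        return self.left is None and self.right is None


def mdl_cost(node: Node) -> float:
    """Description length of the subtree rooted at `node`, pruning as we go."""
    leaf_cost = 1 + log2(1 + node.errors_as_leaf)
    if node.is_leaf:
        return leaf_cost
    subtree_cost = 1 + node.split_cost + mdl_cost(node.left) + mdl_cost(node.right)
    if leaf_cost <= subtree_cost:
        # Pruning pays off: collapse this subtree into a leaf.
        node.left = node.right = None
        return leaf_cost
    return subtree_cost


# Toy tree: the children barely reduce errors, so MDL prefers a single leaf.
root = Node(errors_as_leaf=6, split_cost=4.0,
            left=Node(errors_as_leaf=3), right=Node(errors_as_leaf=2))
print("cost after pruning:", mdl_cost(root))
print("root pruned to a leaf?", root.is_leaf)
```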
Biggerstaff and Richter suggest that there are four fundamental subtasks associated with operationalizing the reuse process [1]: finding reusable components, understanding these components, modifying these components, and composing components. Each of these subtasks can be re-expressed as a knowledge acquisition sub-problem: producing a new representation of the components that makes them more suitable for future reuse.
In this paper, we express the first two subtasks of the software reuse activity, as described by Biggerstaff and Richter, as a problem in Machine Learning. From this perspective, the goal of software reuse is to learn to recognize reusable software in terms of code structure, run-time behavior, and functional specification. The Partial Metrics (PM) System supports the acquisition of reusable software at three different levels of granularity: the system level, the procedural level, and the code segment level. Here, we describe how the system extracts procedural knowledge from an example Pascal software system that satisfies a set of structural, behavioral, and functional constraints. These constraints are extracted from a set of positive and negative examples using inductive learning techniques, and are expressed quantitatively in terms of various quality models and metrics. The general characteristics of learned constraints that were extracted from a variety of application libraries are discussed. A simplified sketch of this kind of constraint induction follows.
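The sketch below gives a minimal example of inducing quantitative constraints from labeled examples: for each metric, the interval spanned by the positive (reusable) procedures becomes a candidate constraint, and we check how well it rejects the negative examples. The metric names and values are hypothetical, and the PM system's actual induction procedure is considerably richer.

```python
# Minimal constraint induction from positive and negative examples.
# Metric names and example values are made up for illustration.
positives = [  # procedures judged reusable
    {"loc": 40, "cyclomatic": 4, "fan_out": 2},
    {"loc": 65, "cyclomatic": 6, "fan_out": 3},
    {"loc": 30, "cyclomatic": 3, "fan_out": 1},
]
negatives = [  # procedures judged not reusable
    {"loc": 400, "cyclomatic": 25, "fan_out": 9},
    {"loc": 55, "cyclomatic": 18, "fan_out": 7},
]

# Candidate constraint per metric: the range observed over positive examples.
constraints = {}
for metric in positives[0]:
    values = [p[metric] for p in positives]
    constraints[metric] = (min(values), max(values))

def satisfies(procedure, constraints):
    return all(lo <= procedure[m] <= hi for m, (lo, hi) in constraints.items())

rejected = sum(not satisfies(n, constraints) for n in negatives)
print("learned constraints:", constraints)
print(f"negative examples rejected: {rejected}/{len(negatives)}")
```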
This paper presents a discussion of significant issues in selecting a standardized set of the "best" software metrics to support a software reuse program. This discussion illustrates the difficulty of selecting a standardized set of reuse metrics, because the "best" reuse metrics are determined by the unique characteristics of each reuse application. An example of the selection of a single set of reuse metrics for a specific management situation is also presented.
Broadly, software metrics play a vital role in attribute assessment, which in turn drives software projects forward. Metric measurements capture many crucial facets of a system and help improve the quality of the software developed. Maintenance is the corrective work carried out on a software system after it is initially built. The defining characteristic of any software is change, so additional care must be taken during development to ensure the software can be modified with little effort, i.e., that it is maintainable. Predicting software maintainability remains challenging, and accurate prediction models with low error rates are required; with many modern programming languages emerging, new techniques are needed to measure maintainability accurately. This paper proposes a maintainability index (MI) that combines various software metrics so that prediction error is minimized. It also adopts a well-known optimization algorithm, Firefly (FF), to obtain the optimum result. The proposed Base Model-FF (BM-FF) is compared with traditional models such as BM-Differential Evolution (BM-DE), BM-Artificial Bee Colony (BM-ABC), BM-Particle Swarm Optimization (BM-PSO), and BM-Genetic Algorithm (BM-GA) in terms of performance measures such as the differential ratio, correlation coefficient, and root mean square error (RMSE).
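To make the evaluation measures concrete, the sketch below computes RMSE and the correlation coefficient between observed and predicted maintainability scores. Since the abstract does not give the paper's proposed MI or its FF-optimized weights, the classical three-metric maintainability index is used here purely as a stand-in, and all data values are invented.

```python
# Evaluation-measure sketch: RMSE and Pearson correlation between observed
# maintainability scores and predictions from the classical maintainability
# index (used only as a placeholder for the paper's proposed MI).
import math
import statistics


def classical_mi(halstead_volume: float, cyclomatic: float, loc: float) -> float:
    # Classical formula: 171 - 5.2*ln(V) - 0.23*CC - 16.2*ln(LOC)
    return 171 - 5.2 * math.log(halstead_volume) - 0.23 * cyclomatic - 16.2 * math.log(loc)


def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))


def correlation(actual, predicted):
    return statistics.correlation(actual, predicted)  # Pearson's r (Python 3.10+)


# Hypothetical modules: (Halstead volume, cyclomatic complexity, LOC) and an
# invented "observed" maintainability score for each.
modules = [(1200, 8, 150), (300, 3, 40), (5000, 20, 600), (800, 5, 90)]
observed = [85.0, 120.0, 40.0, 100.0]

predicted = [classical_mi(v, cc, loc) for v, cc, loc in modules]
print("RMSE:", round(rmse(observed, predicted), 2))
print("correlation coefficient:", round(correlation(observed, predicted), 3))
```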