Predicting the quality of system modules prior to software testing and operations can benefit the software development team. Such a timely reliability estimation can be used to direct cost-effective quality improvement efforts to the high-risk modules. Tree-based software quality classification models based on software metrics are used to predict whether a software module is fault-prone or not fault-prone. They are white-box quality estimation models that combine good accuracy with simplicity and ease of interpretation.
An in-depth study of calibrating classification trees for software quality estimation using the SPRINT decision tree algorithm is presented. Many classification algorithms have memory limitations, including the requirement that datasets be memory-resident. SPRINT removes these limitations and provides a fast and scalable analysis. It is an extension of a commonly used decision tree algorithm, CART, and provides a unique tree-pruning technique based on the Minimum Description Length (MDL) principle. By combining the MDL pruning technique with the modified classification algorithm, SPRINT yields classification trees with useful accuracy. The case study consists of software metrics collected from a very large telecommunications system. It is observed that classification trees built by SPRINT are more balanced and demonstrate better stability than those built by CART.
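As a minimal sketch of the tree-based fault-proneness classification described above: SPRINT itself is not available in common libraries, so this uses scikit-learn's DecisionTreeClassifier (a CART-style learner) as a stand-in, with cost-complexity pruning playing the role that MDL-based pruning plays in SPRINT. The metric names and data are illustrative assumptions, not the case study's.

# Tree-based fault-proneness classification sketch (CART stand-in for SPRINT).
from sklearn.tree import DecisionTreeClassifier

# Each row: [lines_of_code, cyclomatic_complexity, unique_operators]
# (hypothetical module-level software metrics)
X_train = [
    [120,  4, 18],
    [950, 31, 64],
    [200,  7, 22],
    [1400, 45, 80],
]
# Labels: 1 = fault-prone, 0 = not fault-prone
y_train = [0, 1, 0, 1]

# ccp_alpha trades tree size against training accuracy, analogous in
# spirit to SPRINT's MDL-based pruning.
clf = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[800, 28, 55]]))  # e.g. [1] -> flagged as fault-prone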
Biggerstaff and Richter suggest that there are four fundamental subtasks associated with operationalizing the reuse process [1]: finding reusable components, understanding these components, modifying these components, and composing components. Each of these subtasks can be re-expressed as a knowledge acquisition problem: producing a new representation of the components that makes them more suitable for future reuse.
In this paper, we express the first two subtasks of the software reuse activity, as described by Biggerstaff and Richter, as a problem in Machine Learning. From this perspective, the goal of software reuse is to learn to recognize reusable software in terms of code structure, run-time behavior, and functional specification. The Partial Metrics (PM) System supports the acquisition of reusable software at three different levels of granularity: the system level, the procedural level, and the code segment level. Here, we describe how the system extracts procedural knowledge from an example Pascal software system that satisfies a set of structural, behavioral, and functional constraints. These constraints are extracted from a set of positive and negative examples using inductive learning techniques, and are expressed quantitatively in terms of various quality models and metrics. The general characteristics of the constraints learned from a variety of application libraries are discussed.
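A minimal sketch of the inductive step, in the spirit of learning quantitative constraints from positive and negative examples: for each metric, keep the tightest interval covering all positive (reusable) examples, i.e., a most-specific conjunctive hypothesis. The metric names and the interval-learning rule are assumptions for illustration, not the PM system's actual algorithm.

# Learn per-metric interval constraints from labeled examples.
def learn_constraints(positives):
    """Tightest interval per metric covering all positive examples."""
    metrics = positives[0].keys()
    return {m: (min(p[m] for p in positives),
                max(p[m] for p in positives)) for m in metrics}

def satisfies(candidate, constraints):
    return all(lo <= candidate[m] <= hi for m, (lo, hi) in constraints.items())

# Hypothetical procedure-level metrics: cyclomatic complexity (cc),
# fan-out, and lines of code (loc).
positives = [{"cc": 3, "fan_out": 2, "loc": 40},
             {"cc": 5, "fan_out": 4, "loc": 75}]
negatives = [{"cc": 18, "fan_out": 9, "loc": 420}]

constraints = learn_constraints(positives)
# Consistency check: no negative example should satisfy the constraints.
assert all(not satisfies(n, constraints) for n in negatives)
print(satisfies({"cc": 4, "fan_out": 3, "loc": 60}, constraints))  # True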
This paper discusses significant issues in the selection of a standardized set of the “best” software metrics to support a software reuse program. The discussion illustrates why selecting such a standardized set is difficult: the “best” reuse metrics are determined by the unique characteristics of each reuse application. An example of the selection of a single set of reuse metrics for a specific management situation is also presented.
Broadly, software metrics play a vital role in attribute assessment, which in turn drives software projects. Metric measurements capture many crucial facets of a system and enhance the quality of the software developed. Maintenance is the corrective process carried out on a software system after its initial release. The noteworthy characteristic of any software is ‘change,’ so additional care ought to be taken in developing software: it should be modifiable effortlessly (i.e., maintainable). Predicting software maintainability is still challenging, and accurate prediction models with low error rates are required; with so many modern programming languages on the horizon, new techniques for accurately measuring maintainability have to be introduced. This paper proposes a maintainability index (MI) that considers various software metrics so that the prediction error is minimized. It also adopts a renowned optimization algorithm, Firefly (FF), to obtain the optimum result. The proposed Base Model-FF is compared with other traditional models, namely BM-Differential Evolution (BM-DE), BM-Artificial Bee Colony (BM-ABC), BM-Particle Swarm Optimization (BM-PSO), and BM-Genetic Algorithm (BM-GA), in terms of performance metrics such as differential ratio, correlation coefficient, and Root Mean Square Error (RMSE).
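For concreteness, here is a minimal sketch of a maintainability index computed from software metrics. It uses the classical Oman–Hagemeister formulation with its standard coefficients, not the paper's proposed FF-optimized model; the input values are hypothetical.

# Classical maintainability index (higher = more maintainable).
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    """MI = 171 - 5.2*ln(V) - 0.23*G - 16.2*ln(LOC)."""
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(loc))

# Hypothetical module: Halstead volume 1200, complexity 12, 300 LOC.
print(round(maintainability_index(1200, 12, 300), 1))

An optimization algorithm such as FF would, in this setting, tune the coefficients of a formula like this one against observed maintenance effort so as to minimize the prediction error.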
Because conceptual data models play a central role in the design of databases, it is crucial to assure their quality from the early phases of the database life cycle. Assessing (and, where necessary, improving) conceptual data model quality requires quantitative, objective measures in order to avoid bias in the quality evaluation process. It is in this context that software measurement can help IS designers make better decisions during design activities. The main aim of this article is to provide a state of the art of measures for conceptual data models.
This article provides an overview of the basic concepts and state of the art of software measurement. Software measurement is an emerging field of software engineering, since it may provide support for planning, controlling, and improving the software development process, as needed in any industrial development process. Due to the human-intensive nature of software development and its relative novelty, some aspects of software measurement are probably closer to measurement for the social sciences than measurement for the hard sciences. Therefore, software measurement faces a number of challenges whose solution requires both innovative techniques and borrowings from other disciplines. Over the years, a number of techniques and measures have been proposed and assessed via theoretical and empirical analyses. This shows the theoretical and practical interest of the software measurement field, which is constantly evolving to provide new, better techniques to support existing and more recent software engineering development methods.
Improving the field performance of telecommunication systems is a key objective of both telecom suppliers and operators, as an increasing number of business-critical systems worldwide rely on dependable telecommunication. Early defect detection improves field performance in terms of reduced field failure rates and reduced intrinsic downtime. Cost-effective software project management therefore focuses resources on intensive validation of the areas with the highest criticality. This article outlines techniques for identifying such critical areas in software systems. It concentrates on the practical application of criticality-based predictions in industrial development projects, namely the selection of a classification technique and the use of the results in directing management decisions. The first part comprehensively compares and evaluates five common classification techniques (Pareto classification, classification trees, factor-based discriminant analysis, fuzzy classification, neural networks) for identifying critical components; results from a large-scale industrial switching project are included to show the practical benefits. Once the appropriate technique is known, the second area gains even more attention: what are the impacts for practical project management within given resource and time constraints? Several selection criteria based on the results of a combined criticality and history analysis are provided, together with concrete implementation decisions.
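A minimal sketch of Pareto classification, the simplest of the five techniques named above: rank modules by a criticality indicator and flag the top 20%, reflecting the rule of thumb that a small fraction of modules accounts for most field failures. The module names and the change-count metric are illustrative assumptions.

# Pareto classification: flag the top fraction of modules by criticality.
def pareto_classify(modules, key, fraction=0.2):
    """Return the top `fraction` of modules ranked by `key` (descending)."""
    ranked = sorted(modules, key=lambda m: m[key], reverse=True)
    cutoff = max(1, round(fraction * len(ranked)))
    return ranked[:cutoff]

modules = [
    {"name": "call_setup",  "changes_last_release": 41},
    {"name": "billing",     "changes_last_release": 7},
    {"name": "routing",     "changes_last_release": 29},
    {"name": "diagnostics", "changes_last_release": 3},
    {"name": "signalling",  "changes_last_release": 12},
]
for m in pareto_classify(modules, "changes_last_release"):
    print(m["name"])  # call_setup -> the single flagged high-risk module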
The use of empirical data to understand and improve software products and software engineering processes is gaining ever increasing attention. Empirical data from products and processes is needed to help an organization understand and improve its way of doing business in the software domain. Additional motivation for collecting and using data is provided by the need to conform to guidelines and standards which mandate measurement, specifically the SEI's Capability Maturity Model and ISO 9000-3. Some software engineering environments (SEEs) offer automated support for collecting and, in a few cases, using empirical data. Measurement will clearly play a significant role in future SEEs. The paper surveys the trend towards supporting measurement in SEEs and gives details about several existing research and commercial software systems.
Because highly reliable software is becoming an essential ingredient in many systems, software developers apply various techniques to discover faults early in development, such as more rigorous reviews, more extensive testing, and strategic assignment of key personnel. Our goal is to target reliability enhancement activities at those modules that are most likely to have problems. This paper presents a methodology that incorporates genetic programming for predicting the order of software modules based on the expected number of faults; to our knowledge, this is the first application of genetic programming to software engineering. We found that genetic programming can be used to generate software quality models whose inputs are software metrics collected earlier in development, and whose output is a prediction of the number of faults that will be discovered later in development or during operations. We established ordinal evaluation criteria for models and conducted an industrial case study of software from a military communications system. The case study results were sufficiently good to be useful to a project in choosing modules for extra reliability enhancement treatment.
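A minimal sketch of the ordinal evaluation idea: such a model's value lies in how well it orders modules by expected faults, not in exact counts. Spearman rank correlation between predicted and actual orderings is one such criterion, used here as an assumed stand-in for the paper's specific criteria; the data are hypothetical.

# Ordinal evaluation of a fault-prediction model via rank correlation.
def ranks(values):
    """Rank positions (1 = largest); ties broken by original order."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(predicted, actual):
    n = len(predicted)
    d2 = sum((p - a) ** 2 for p, a in zip(ranks(predicted), ranks(actual)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical model predictions vs. faults actually found per module.
predicted = [12.4, 0.8, 5.1, 9.7, 2.2]
actual    = [10,   1,   6,   7,   2]
print(round(spearman(predicted, actual), 2))  # 1.0 -> orderings agree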