In today's software industry, the design of test cases is mostly based on human expertise, while test automation tools are limited to the execution of pre-planned tests. Evaluating test outcomes also requires considerable effort from human testers, who often have imperfect knowledge of the requirements specification. Not surprisingly, this manual approach to software testing results in heavy losses to the world's economy. In this paper, we demonstrate the potential use of data mining algorithms for automated modeling of tested systems. The data mining models can be utilized for recovering system requirements, designing a minimal set of regression tests, and evaluating the correctness of software outputs. To study the feasibility of the proposed approach, we applied a state-of-the-art data mining algorithm called the Info-Fuzzy Network (IFN) to execution data of a complex mathematical package. The IFN method showed a clear capability to identify faults in the tested program.
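As a rough illustration of the oracle idea described above, the sketch below trains a model on execution data from a trusted version and flags outputs of a new version that disagree with the model's predictions. A simple nearest-neighbour model stands in for the actual Info-Fuzzy Network, and all data are invented:

```python
# Minimal sketch of a data-mining test oracle: a model trained on trusted
# execution data flags outputs of a new version that deviate from prediction.
# A 1-nearest-neighbour model stands in for the Info-Fuzzy Network (IFN).

def train(trusted_runs):
    """trusted_runs: list of (input_vector, output) pairs from a passing version."""
    return list(trusted_runs)

def predict(model, x):
    # Nearest neighbour by squared Euclidean distance (stand-in for IFN inference).
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda pair: dist(pair[0], x))[1]

def evaluate(model, new_runs, tol=1e-6):
    """Return the runs whose new output disagrees with the model's prediction."""
    return [(x, y) for x, y in new_runs if abs(predict(model, x) - y) > tol]

model = train([((0.0,), 0.0), ((1.0,), 1.0), ((2.0,), 4.0)])
print(evaluate(model, [((1.0,), 1.0), ((2.0,), 5.0)]))  # flags the second run
```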
One regression test selection technique proposed for object-oriented programs is the Class firewall regression test selection technique. It selects, for regression testing, the test cases that test changed classes and classes that depend on changed classes. However, in empirical studies of the technique, we observed that another technique found the same defects, selected fewer tests, and required a simpler, less costly analysis. This technique, which we refer to as the Change-based regression test selection technique, is essentially the Class firewall technique with the class firewall removed. In this paper, we formulate the hypothesis that these empirical observations are not incidental but an inherent property of the Class firewall technique. We prove that the hypothesis holds for Java in a stable testing environment, and conclude that the effectiveness of the Class firewall regression testing technique can be improved, without sacrificing its defect detection capability, by removing the class firewall.
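To make the contrast concrete, here is a minimal sketch of the two selection rules over a hypothetical class-dependency graph: the firewall rule selects tests covering changed classes plus all transitive dependents, while the change-based rule selects only tests covering the changed classes themselves. The graph and test data are illustrative:

```python
# Sketch contrasting the two selection rules on a class-dependency graph.
# depends_on maps a class to the classes it depends on; the firewall adds
# all transitive dependents of the changed classes. Data are illustrative.

def dependents(depends_on, changed):
    """Transitive closure of classes depending (directly or not) on `changed`."""
    result, frontier = set(changed), set(changed)
    while frontier:
        frontier = {c for c, deps in depends_on.items()
                    if deps & frontier and c not in result}
        result |= frontier
    return result

def select(tests, classes):
    """Select tests that exercise at least one class in `classes`."""
    return {t for t, covered in tests.items() if covered & classes}

depends_on = {"B": {"A"}, "C": {"B"}}          # C -> B -> A
tests = {"t1": {"A"}, "t2": {"B"}, "t3": {"C"}}
changed = {"A"}
print(select(tests, dependents(depends_on, changed)))  # firewall: t1, t2, t3
print(select(tests, changed))                           # change-based: t1
```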
Test case prioritization schedules test cases in an order that increases their effectiveness in achieving some performance goal. One of the most important goals is the rate of fault detection: test cases should run in an order that increases the likelihood of detecting faults, and of detecting the most severe faults earliest in the testing life cycle. Test case prioritization techniques have proved beneficial for improving regression testing activities. While code-coverage-based prioritization has been studied extensively, cost-effective prioritization based on requirements has received little attention so far. Hence, in this paper we propose a model for system-level Test Case Prioritization (TCP) from the software requirements specification, aimed at improving user satisfaction with quality software in a cost-effective manner and at improving the rate of severe fault detection. The proposed model prioritizes the system test cases based on six factors: customer priority, changes in requirements, implementation complexity, usability, application flow, and fault impact. The proposed prioritization technique was evaluated in three phases with student projects and two sets of industrial projects, and the results show convincingly that it improves the rate of severe fault detection.
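A minimal sketch of the six-factor prioritization idea follows. The factor scales and the equal weighting are assumptions on our part, since the abstract does not specify how the factors are combined:

```python
# Sketch of requirement-based prioritization with the six factors named in
# the abstract. Factor scores and weights are illustrative assumptions; the
# paper's actual scales and weighting scheme may differ.

FACTORS = ("customer_priority", "requirement_change", "implementation_complexity",
           "usability", "application_flow", "fault_impact")

def weight(test_case, weights):
    """Weighted sum of the six factor scores for one test case."""
    return sum(weights[f] * test_case[f] for f in FACTORS)

def prioritize(test_cases, weights):
    return sorted(test_cases, key=lambda tc: weight(tc, weights), reverse=True)

weights = {f: 1.0 for f in FACTORS}            # equal weights, as an assumption
tc1 = dict(id="TC1", customer_priority=9, requirement_change=2,
           implementation_complexity=5, usability=3, application_flow=4, fault_impact=8)
tc2 = dict(id="TC2", customer_priority=4, requirement_change=7,
           implementation_complexity=2, usability=6, application_flow=5, fault_impact=3)
print([tc["id"] for tc in prioritize([tc1, tc2], weights)])   # ['TC1', 'TC2']
```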
Test metrics help analyze the current level of maturity in testing and indicate how to proceed with testing activities by allowing us to set goals and predict future trends. The objective of test metrics is to capture the planned and actual quantities: the effort, time, and resources required to complete all phases of development of a software project. Test case prioritization is an effective and practical technique in regression testing: it schedules test cases in an order of precedence that increases their ability to meet performance goals such as code coverage and rate of fault detection. In this paper, we present a new metric based on varying requirement priorities, test case priorities, test case execution times, and fault severities. A case study illustrates that the rate of "units-of-test-case-priority-satisfied-per-unit-test-case-time" can be increased, with improvements in testing quality and customer satisfaction. To assess the practicality of our approach, we apply it to a realistic example from industrial projects. We also summarize a test process measurement project of the TECHZONE™ (a software development concern with a testing department) test teams, and analyze the effectiveness of a set of metrics for cost, time, and quality in measuring the quality of the test process based on the results of the proposed metrics.
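The following sketch computes one plausible reading of the "units-of-test-case-priority-satisfied-per-unit-test-case-time" rate: cumulative priority satisfied divided by cumulative execution time after each test. The exact definition in the paper may differ, and the data are invented:

```python
# Sketch of a priority-per-time rate for an ordered test suite: after each
# executed test, report total priority satisfied divided by total time spent.

def priority_per_time(ordered_tests):
    """ordered_tests: list of (priority, execution_time) pairs in run order."""
    total_priority = total_time = 0.0
    rates = []
    for priority, exec_time in ordered_tests:
        total_priority += priority
        total_time += exec_time
        rates.append(total_priority / total_time)
    return rates

suite = [(8, 2.0), (5, 1.0), (2, 4.0)]   # higher-priority, cheaper tests first
print(priority_per_time(suite))           # rate after each executed test
```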
Cluster test selection is a successful new approach for selecting a subset of the existing test suite in regression testing. In this paper, program slicing is introduced to improve the efficiency and effectiveness of cluster test selection techniques. A static slice is computed on the modified code, and the execution profile of each test case is filtered by the program slice to highlight the parts of the software affected by the modification; we call this slice filtering. Slice filtering reduces the data dimensions for cluster analysis, so the cost of cluster test selection drops dramatically. The experimental results show that slice filtering reduces the cost of cluster test selection significantly and also modestly improves its effectiveness. Cluster test selection with filtering therefore scales better to large software.
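A minimal sketch of slice filtering follows: each test's execution profile is intersected with the static slice before clustering. Grouping identical filtered profiles stands in for a full cluster analysis, and the data are invented:

```python
# Sketch of slice filtering before cluster test selection: each test's
# execution profile (set of covered statements) is intersected with the
# static slice of the modification, shrinking the data to be clustered.

from collections import defaultdict

def slice_filter(profiles, slice_stmts):
    """Keep only the covered statements that lie in the program slice."""
    return {t: frozenset(p & slice_stmts) for t, p in profiles.items()}

def cluster_and_select(filtered):
    """Cluster tests with equal filtered profiles; pick one per non-empty cluster."""
    clusters = defaultdict(list)
    for t, profile in filtered.items():
        clusters[profile].append(t)
    return [members[0] for profile, members in clusters.items() if profile]

profiles = {"t1": {1, 2, 3}, "t2": {2, 3, 9}, "t3": {7, 8}}
slice_stmts = {2, 3}                     # statements affected by the change
print(cluster_and_select(slice_filter(profiles, slice_stmts)))  # ['t1']
```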
Regression testing is important for maintaining software quality, but its cost is relatively high. Test case prioritization is one way to reduce this cost: prioritization techniques sort the test cases for regression testing based on their importance. In this paper, we design and implement a test case prioritization method based on the location of a change. The method consists of three steps: (1) clustering test cases, (2) prioritizing the clusters with respect to their relevance to a code change, and (3) prioritizing test cases within each cluster based on metrics. We propose a metric for measuring test case importance based on Requirement Complexity, Code Complexity, and Code Coverage. To evaluate our method, we apply it to a launch interceptor problem program and measure the inclusiveness and precision of the test-case clusters with respect to specific code changes. Our results show that the proposed change-based prioritization method increases the likelihood of executing the more relevant test cases earlier.
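The three steps might be sketched as follows; the equal-weight combination of the three metric components and the overlap-based notion of cluster relevance are assumptions, since the abstract does not define them:

```python
# Sketch of the three-step method: (1) cluster tests, (2) order clusters by
# overlap with the changed code, (3) order tests within each cluster by a
# combined importance metric. Weights and data are illustrative.

def importance(tc):
    """Combined metric from requirement complexity, code complexity, coverage."""
    return tc["req_complexity"] + tc["code_complexity"] + tc["coverage"]

def prioritize(clusters, changed_lines):
    def relevance(cluster):
        return len(set().union(*(tc["lines"] for tc in cluster)) & changed_lines)
    ordered = []
    for cluster in sorted(clusters, key=relevance, reverse=True):
        ordered += sorted(cluster, key=importance, reverse=True)
    return [tc["id"] for tc in ordered]

c1 = [dict(id="t1", lines={1, 2}, req_complexity=3, code_complexity=2, coverage=5),
      dict(id="t2", lines={2, 3}, req_complexity=4, code_complexity=4, coverage=3)]
c2 = [dict(id="t3", lines={9}, req_complexity=1, code_complexity=1, coverage=1)]
print(prioritize([c1, c2], changed_lines={2, 3}))   # ['t2', 't1', 't3']
```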
Regression testing is essential for ensuring software quality during software evolution. Two widely used regression testing techniques, test case selection and prioritization, are used to maximize the value of the continuously growing test suite. However, few works consider the two techniques together, which limits the practical usefulness of the independently studied techniques. In the presence of changes during program evolution, regression testing is usually conducted by selecting the test cases that cover the impact results of the changes, seldom considering false positives in the covered information, which reduces the effectiveness of such techniques. In this paper, we propose ComboRT, an approach that combines test case selection and prioritization to directly generate a ranked list of test cases. It is based on the impact results predicted by the change impact analysis (CIA) technique FCA–CIA, which generates a ranked list of impacted methods. Test cases that cover these impacted methods are included in the new test suite. Since each method predicted by FCA–CIA is assigned an impact factor value corresponding to the probability that the method is impacted, test cases are then ordered according to the impact factor values of the methods they cover. Empirical studies on four Java-based software systems demonstrate that ComboRT can be effectively used for regression testing of object-oriented Java-based software systems during their evolution.
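A sketch of the ComboRT idea follows, under two assumptions of ours: that FCA–CIA's output can be modelled as a map from impacted methods to impact-factor values, and that a test's rank is the sum of the factors of the impacted methods it covers:

```python
# Sketch of combined selection and prioritization: tests covering any
# impacted method are selected, then ranked by the summed impact factors
# of the methods they cover. The sum-based ranking rule is an assumption.

def combo_rt(impact_factors, coverage):
    """impact_factors: {method: factor}; coverage: {test: set of methods}.
    Returns selected tests, highest combined impact first."""
    scores = {t: sum(impact_factors.get(m, 0.0) for m in methods)
              for t, methods in coverage.items()}
    selected = [t for t, s in scores.items() if s > 0]
    return sorted(selected, key=scores.get, reverse=True)

impact_factors = {"A.m1": 0.9, "B.m2": 0.4}           # from the CIA step
coverage = {"t1": {"A.m1", "B.m2"}, "t2": {"B.m2"}, "t3": {"C.m3"}}
print(combo_rt(impact_factors, coverage))              # ['t1', 't2']
```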
Regression testing is a very time-consuming and expensive testing activity, and many test case prioritization techniques have been proposed to speed it up. Previous studies show that no single technique is always best. The random strategy, as the simplest one, is not always bad; in particular, when a test suite has high fault detection capability, it can produce good results. Nevertheless, due to its randomness, it is not always as satisfactory as expected. In this context, we present a test case prioritization approach that uses a fixed-size-candidate-set adaptive random testing algorithm to reduce the effect of randomness and improve fault detection effectiveness. The distance between pairs of test cases is assessed by exclusive OR. We designed and conducted empirical studies on eight C programs to validate the effectiveness of the proposed approach. The experimental results, confirmed by statistical analysis, indicate that our approach is more effective than the random and total greedy prioritization techniques in terms of fault detection effectiveness. Although it has fault detection effectiveness comparable to the ART-based and additional greedy techniques, its time cost is much lower, making it considerably more cost-effective.
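A minimal sketch of fixed-size-candidate-set adaptive random prioritization follows, using the Hamming (exclusive-OR) distance over binary coverage vectors and the usual max-min candidate selection rule of adaptive random testing; the candidate-set size and the data are illustrative:

```python
# Each round, draw k random candidates and pick the one whose minimum
# XOR/Hamming distance to the already-prioritized tests is largest.

import random

def xor_distance(a, b):
    """Hamming distance between two equal-length binary coverage vectors."""
    return sum(x ^ y for x, y in zip(a, b))

def art_prioritize(tests, k=10, seed=0):
    rng = random.Random(seed)
    remaining = dict(tests)                      # test id -> coverage vector
    first = rng.choice(sorted(remaining))
    order = [first]
    del remaining[first]
    while remaining:
        candidates = rng.sample(sorted(remaining), min(k, len(remaining)))
        best = max(candidates, key=lambda c: min(
            xor_distance(remaining[c], tests[p]) for p in order))
        order.append(best)
        del remaining[best]
    return order

tests = {"t1": [1, 0, 1, 0], "t2": [1, 1, 1, 0], "t3": [0, 1, 0, 1]}
print(art_prioritize(tests, k=2))
```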
Fault localization techniques aim to localize faulty statements using information gathered from both passed and failed test cases. We present a mutation-based fault localization technique called MuSim, which identifies the faulty statement based on its computed proximity to different mutants. We study the performance of MuSim using four different similarity metrics. To satisfactorily measure the effectiveness of our approach, we present a new evaluation metric called Mut_Score. Based on this metric, MuSim is, on average, 33.21% more effective than existing fault localization techniques such as DStar, Tarantula, Crosstab, and Ochiai.
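Here is a sketch of a mutation-based suspiciousness computation in the spirit of MuSim, with Ochiai standing in for the four similarity metrics studied; the rule that a statement's score is the best similarity among its mutants is our assumption, not necessarily the paper's:

```python
# Each mutant's pass/fail vector over the test suite is compared with the
# original program's failure vector; a statement's suspiciousness is the
# best similarity among the mutants generated from it.

import math

def ochiai(u, v):
    """Similarity of two binary vectors (1 = test fails)."""
    both = sum(a and b for a, b in zip(u, v))
    return both / math.sqrt(sum(u) * sum(v)) if both else 0.0

def suspiciousness(failure_vector, mutants):
    """mutants: {statement: [per-mutant fail vectors]} -> ranked statements."""
    scores = {stmt: max(ochiai(failure_vector, mv) for mv in vectors)
              for stmt, vectors in mutants.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

failures = [1, 0, 1]                              # tests 1 and 3 fail
mutants = {"s1": [[1, 0, 1]], "s2": [[0, 1, 0]]}  # mutant kill vectors
print(suspiciousness(failures, mutants))          # s1 ranks first
```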
Regression testing is a practice that ensures a System Under Test (SUT) still works as expected after changes have been implemented. The simplest approach for regression testing is Retest-all, which consists of re-executing the entire Test Suite (TS) on the changed version of the SUT. Retest-all can be expensive when a SUT and its TS grow in size and, if resources are insufficient, may be impracticable to apply. A Test Suite Reduction (TSR) approach aims to overcome these issues by reducing the size of TSs while preserving their fault-detection capability. In this paper, we introduce and validate an approach for TSR based on a multi-objective evolutionary algorithm, namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II). The approach seeks to reduce TSs by maximizing both statement coverage and diversity of the test cases in the reduced TSs, while minimizing the size of the reduced TSs. We named this approach Genetic Algorithm for teSt SuitE Reduction (GASSER). To assess GASSER, we conducted an experiment on 19 versions of four software systems from a public dataset, the Software-artifact Infrastructure Repository (SIR). We compared GASSER with nine baseline approaches based on the size of the reduced TSs and their fault-detection capability. The most important take-away is that GASSER reduces the size of the TSs more than the baseline approaches, with a non-significant effect on their fault-detection capability. The results of our empirical assessment suggest that multi-objective evolutionary algorithms, and NSGA-II in particular, might represent a viable means of dealing with TSR.
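The sketch below encodes GASSER's three objectives on a bit-vector representation of a candidate reduced suite, together with a plain Pareto-dominance check; the full NSGA-II machinery (non-dominated sorting, crowding distance, genetic operators) is omitted, and the diversity measure shown is one plausible choice rather than the paper's:

```python
# Three objectives on a candidate reduced suite encoded as a bit vector:
# minimize size, maximize statement coverage, maximize pairwise diversity
# (here: symmetric difference of coverage sets, as an assumption).

def objectives(bits, stmt_cov):
    """Return (size, -coverage, -diversity), all to be minimized."""
    chosen = [t for t, b in zip(sorted(stmt_cov), bits) if b]
    covered = set().union(*(stmt_cov[t] for t in chosen)) if chosen else set()
    diversity = sum(len(stmt_cov[a] ^ stmt_cov[b])
                    for i, a in enumerate(chosen) for b in chosen[i + 1:])
    return (len(chosen), -len(covered), -diversity)

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

stmt_cov = {"t1": {1, 2}, "t2": {2, 3}, "t3": {1, 2, 3}}
p = objectives((0, 0, 1), stmt_cov)   # just t3: smaller but less diverse
q = objectives((1, 1, 0), stmt_cov)   # t1 and t2: larger but more diverse
print(p, q, dominates(p, q))          # neither dominates the other
```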
This paper discusses an object-based software development and maintenance environment, Opusdei, built and used for several years at Hitachi Software Engineering (HSK); since 1994, the University of Minnesota has also been involved in the Opusdei project. Industrial software is usually large, has many versions, undergoes frequent changes, and is developed concurrently by multiple programmers. Opusdei was designed to handle the various problems inherent in such industrial environments. In Opusdei, all information needed for development is stored in a uniform representation in a central repository, and the various documents and views of the software artifacts can be generated automatically using the tool repository. Opusdei's innovative capabilities are (1) uniform representation of software artifacts, (2) maintenance of inter-relations and traceability among software artifacts, (3) tool coordination and integration using tool composition scenarios, and (4) automatic documentation and version control. Tool coordination and composition have been discussed in the literature as a possible way to make software development environments more intelligent; Opusdei's uniform representation of software artifacts and tools is an essential first step in addressing these issues. Opusdei has been operational for several years and has been used in many large software development projects. The productivity gains reported for some of these projects using Opusdei ranged from 50% to 90%.
Regression testing is an important and often costly software maintenance activity. Retesting the software with the existing test suite whenever the system is modified, in order to regain confidence in its correctness, is called regression testing. Regression test suites are often too large to re-execute within the given time and cost constraints. Reordering the test suite according to appropriate criteria, such as code, branch, condition, or fault coverage, is known as test suite prioritization. We can also select a subset of the original test suite on the basis of some criteria, which is often called regression test selection. The research problem that arises is the choice of technique for selecting and prioritizing according to one or more of the chosen criteria. Ant Colony Optimization (ACO) is one such technique, used by Singh et al. to solve the time-constrained test suite selection and prioritization problem using Fault Exposing Potential (FEP). In this paper, we propose improvements to the existing algorithm, along with an analysis of its time complexity. The results obtained made a convincing case for implementing the technique, and an implementation of the proposed algorithm is demonstrated. The tool was run repeatedly on sample programs while varying the time constraint. The analysis shows the usefulness and effectiveness of the ACO technique for test suite selection and prioritization.
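As a rough illustration, the following sketch applies a basic ACO loop to time-constrained selection and prioritization guided by FEP; the parameters and the pheromone update rule are illustrative rather than those of Singh et al.:

```python
# Ants greedily build test orderings under a time budget, with selection
# probability proportional to pheromone times fault-exposing potential (FEP).

import random

def ant_colony(tests, budget, ants=10, rounds=20, evaporation=0.1, seed=0):
    """tests: {id: (fep, time)} -> best order found within the budget."""
    rng = random.Random(seed)
    pheromone = {t: 1.0 for t in tests}
    best, best_fep = [], 0.0
    for _ in range(rounds):
        for _ in range(ants):
            order, spent = [], 0.0
            candidates = set(tests)
            while candidates:
                feasible = [t for t in sorted(candidates)
                            if spent + tests[t][1] <= budget]
                if not feasible:
                    break
                weights = [pheromone[t] * tests[t][0] for t in feasible]
                t = rng.choices(feasible, weights)[0]
                order.append(t)
                spent += tests[t][1]
                candidates.remove(t)
            fep = sum(tests[t][0] for t in order)
            if fep > best_fep:
                best, best_fep = order, fep
        for t in pheromone:                       # evaporate, then reinforce
            pheromone[t] *= 1.0 - evaporation
        for t in best:
            pheromone[t] += 1.0
    return best

tests = {"t1": (0.9, 3.0), "t2": (0.5, 1.0), "t3": (0.7, 2.0)}
print(ant_colony(tests, budget=4.0))
```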