  Bestsellers

  • SOFTWARE RELIABILITY ISSUES UNDER OPERATIONAL AND TESTING CONSTRAINTS

    Software reliability plays an important role in assuring the quality of software. To ensure software reliability, the software is tested thoroughly during the testing phase. The time invested in the testing phase, or the optimal software release time, depends on the level of reliability to be achieved. There are two different concepts related to software reliability, viz., testing reliability and operational reliability. In this paper, we compare both types of software reliability to determine the optimal testing time of the software so as to minimize the total expected software maintenance cost. We consider software comprising a number of clusters of modules, each having a different number of errors and a different failure rate. A hyperexponential model is employed for analyzing software reliability growth. Parameter estimation using the maximum likelihood estimation technique is also discussed. Numerical illustrations are presented to explore the effect of various parameters on reliability and maintenance cost. It is noticed that the operational reliability concept should be adopted for the software testing time problem.
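
    As a rough illustration of the growth model described above, here is a minimal Python sketch, assuming purely hypothetical cluster parameters and not necessarily the paper's exact formulation: each cluster of modules contributes its own exponential error-detection curve, and conditional reliability follows from the mean value function.

```python
import math

# Hypothetical clusters of modules: each cluster starts with `errors` faults
# that are detected at rate `rate` during testing (hyperexponential growth).
clusters = [
    {"errors": 40, "rate": 0.05},
    {"errors": 25, "rate": 0.01},
    {"errors": 10, "rate": 0.002},
]

def mean_value(t):
    """Expected number of errors detected by testing time t."""
    return sum(c["errors"] * (1.0 - math.exp(-c["rate"] * t)) for c in clusters)

def reliability(x, t):
    """Probability of no failure in (t, t + x] after testing up to time t."""
    return math.exp(-(mean_value(t + x) - mean_value(t)))

if __name__ == "__main__":
    for t in (50, 100, 200, 400):
        print(f"t={t:4d}  detected={mean_value(t):6.1f}  R(10|t)={reliability(10, t):.3f}")
```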

  • A FUZZY LOGIC BASED APPROACH FOR SOFTWARE TESTING

    How to provide cost-effective strategies for Software Testing has been one of the research focuses in Software Engineering for a long time. Many researchers in Software Engineering have addressed the effectiveness and quality metrics of Software Testing, and many interesting results have been obtained. However, one issue of paramount importance in software testing — the intrinsically imprecise and uncertain relationships within testing metrics — is left unaddressed. To this end, a new quality and effectiveness measurement based on fuzzy logic is proposed. Related issues such as the software quality features and fuzzy reasoning for test project similarity measurement are discussed, which can deal with quality and effectiveness consistency between different test projects. Experiments were conducted to verify the proposed measurement using real data from actual software testing projects. Experimental results show that the proposed fuzzy-logic-based metric is effective and efficient in measuring and evaluating the quality and effectiveness of test projects.

  • Guided Intelligent Hyper-Heuristic Algorithm for Critical Software Application Testing Satisfying Multiple Coverage Criteria

    This paper proposes a novel algorithm that combines symbolic execution and data flow testing to generate test cases satisfying multiple coverage criteria of critical software applications. The coverage criteria considered are data flow coverage as the primary criterion, with software safety requirements and equivalence partitioning as sub-criteria. The characteristics of the subjects used for the study include high-precision floating-point computation and iterative programs. The work proposes an algorithm that aids the tester in automated test data generation, satisfying multiple coverage criteria for critical software. The algorithm adapts itself and selects different heuristics based on program characteristics. The algorithm has an intelligent agent as its decision support system to accomplish this adaptability. The intelligent agent uses a knowledge base to select different low-level heuristics based on the current state of the problem instance during each generation of genetic algorithm execution. The knowledge base mimics the expert’s decision in choosing the appropriate heuristics. The algorithm outperforms the compared approaches by accomplishing 100% data flow coverage for all subjects. In contrast, the simple genetic algorithm, random testing and a hyper-heuristic algorithm could accomplish a maximum of 83%, 67% and 76.7%, respectively, for the subject program with high complexity. The proposed algorithm also covers the other criteria, namely equivalence partition coverage and software safety requirements, with fewer iterations. The results reveal that test cases generated by the proposed algorithm are also effective in fault detection, with 87.2% of mutants killed, compared to a maximum of 76.4% of mutants killed for the complex subject with test cases of other methods.

  • TEST SCENARIO GENERATION BASED ON FORMAL SPECIFICATION AND USAGE PROFILE

    This paper presents a method for test scenario generation based on formal specifications and usage profiles. It is a major component of a framework for testing object-oriented programs. In this framework, the requirements of a software system are formally specified. The anticipated application of the system is expressed in a usage profile, which is a state model that indicates the dynamic behavior of the system and execution probabilities for the behaviors. The state model is used as a guide to derive the anticipated operation scenarios. An enhanced state transition diagram is used to represent the state model, which incorporates hierarchy, usage and parameter information. Since the number of feasible scenarios can be extremely large, probability and importance criteria are used to select the most probable and important scenarios.
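
    A minimal Python sketch of deriving operation scenarios from a probability-annotated state model, in the spirit of the abstract above; the usage profile, its states and the cut-off on scenario length are all hypothetical.

```python
# Hypothetical usage profile: each state maps to (next state, execution
# probability) pairs. "End" marks the end of an operation scenario.
usage_profile = {
    "Start":     [("Browse", 0.7), ("Search", 0.3)],
    "Browse":    [("AddToCart", 0.4), ("End", 0.6)],
    "Search":    [("Browse", 0.5), ("End", 0.5)],
    "AddToCart": [("Checkout", 0.8), ("End", 0.2)],
    "Checkout":  [("End", 1.0)],
    "End":       [],
}

def scenarios(state="Start", prob=1.0, path=None, max_len=8):
    """Enumerate scenarios (state sequences) together with their probabilities."""
    path = (path or []) + [state]
    if state == "End" or len(path) >= max_len:
        yield path, prob
        return
    for nxt, p in usage_profile[state]:
        yield from scenarios(nxt, prob * p, path, max_len)

if __name__ == "__main__":
    ranked = sorted(scenarios(), key=lambda sp: sp[1], reverse=True)
    for path, p in ranked[:5]:            # keep the five most probable scenarios
        print(f"{p:.3f}  " + " -> ".join(path))
```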

  • AN OBJECT-BASED DATA FLOW TESTING APPROACH FOR WEB APPLICATIONS

    In recent years, Web applications have grown rapidly because of their ability to provide online information access to anyone, at any time, around the world. As Web applications become more complex, there is a growing concern about their quality and reliability. This paper extends traditional data flow testing techniques to Web applications. Several data flow issues in analyzing HyperText Markup Language (HTML) documents in Web applications are discussed. An object-based data flow testing approach is presented. The approach is based on a test model that captures data flow test artifacts of Web applications. In the test model, each entity of a Web application is modeled as an object. The data flow information of the functions within an object or across objects is then captured using various flow graphs. Based on the object-based test model, data flow test cases for a Web application can be systematically and selectively generated at five different levels.

  • AUTOMATED GENERATION OF TEST TRAJECTORIES FOR EMBEDDED FLIGHT CONTROL SYSTEMS

    Automated generation of test cases is a prerequisite for fast testing. Whereas research in automated test data generation has addressed the creation of individual test points, test trajectory generation has attracted limited attention. In simple terms, a test trajectory is defined as a series of data points, with each (possibly multidimensional) point relying upon the value(s) of previous point(s). Many embedded systems use data trajectories as inputs, including closed-loop process controllers, robotic manipulators, nuclear monitoring systems, and flight control systems. For these systems, testers can either handcraft test trajectories, use input trajectories from older versions of the system or, perhaps, collect test data in a high-fidelity system simulator. While these are valid approaches, they are expensive and time-consuming, especially if the assessment goals require many tests.

    We developed a framework for expanding a small, conventionally developed set of test trajectories into a large set suitable, for example, for system safety assurance. Statistical regression is the core of this framework. The regression analysis builds a relationship between controllable independent variables and closely correlated dependent variables, which represent test trajectories. By perturbing the independent variables, new test trajectories are generated automatically. Our approach has been applied in the safety assessment of a fault tolerant flight control system. Linear regression, multiple linear regression, and autoregressive techniques are compared. The performance metrics include the speed of test generation and the percentage of "acceptable" trajectories, measured by the domain specific reasonableness checks.
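
    The core regression idea above can be sketched in a few lines of Python. Everything here is synthetic and purely illustrative (the signals, the simple linear model, and the bounds used as a reasonableness check), not the flight-control data or regression models used in the paper.

```python
import numpy as np

# Synthetic reference trajectories: u(t) is a controllable independent input,
# y(t) a closely correlated dependent trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
u = np.sin(0.5 * t)
y = 2.0 * u + 0.1 * rng.normal(size=t.size)

# Simple linear regression y ~ slope*u + intercept fitted to the reference data.
slope, intercept = np.polyfit(u, y, deg=1)

def expand(n_trajectories=5, noise=0.05, bounds=(-3.0, 3.0)):
    """Perturb u, push it through the fitted model, and keep only trajectories
    passing a (hypothetical) domain reasonableness check on the output range."""
    accepted = []
    for _ in range(n_trajectories):
        u_new = u + noise * rng.normal(size=u.size)
        y_new = slope * u_new + intercept
        if bounds[0] <= y_new.min() and y_new.max() <= bounds[1]:
            accepted.append((u_new, y_new))
    return accepted

if __name__ == "__main__":
    print(f"fitted model: y = {slope:.2f}*u + {intercept:.2f}")
    print(f"accepted {len(expand())} of 5 generated trajectories")
```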

  • AUTOMATIC TEST DATA GENERATION FOR PROGRAM PATHS USING GENETIC ALGORITHMS

    A new technique and tool are presented for test data generation for path testing. They are based on the dynamic technique and on a Genetic Algorithm, which evolves a population of input data towards reaching and solving the predicates along the program paths. We improve the performance of test data generation by using past input data to compose the initial population for the search. An experiment was conducted to compare the performance of the technique with that of random data generation.
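
    A minimal sketch of the idea, assuming a hypothetical program path whose predicates are x > y and x + y == 100 (not a subject from the paper): the GA minimizes a summed branch distance, and the initial population can be seeded with past input data, as the abstract suggests.

```python
import random

# Hypothetical program path: its predicates require x > y and x + y == 100.
def branch_distances(x, y):
    d1 = 0 if x > y else (y - x + 1)      # distance to satisfying x > y
    d2 = abs(x + y - 100)                 # distance to satisfying x + y == 100
    return d1 + d2

def evolve(pop_size=50, generations=200, seed_pop=None):
    """GA search for input data covering the path; fitness 0 means covered."""
    rng = random.Random(1)
    pop = list(seed_pop or [])            # past input data seeds the population
    while len(pop) < pop_size:
        pop.append((rng.randint(-500, 500), rng.randint(-500, 500)))
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distances(*ind))
        if branch_distances(*pop[0]) == 0:
            return pop[0]                 # all path predicates solved
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            (x1, _), (_, y2) = rng.sample(parents, 2)
            child = (x1, y2)              # crossover: one gene from each parent
            if rng.random() < 0.3:        # mutation: small random nudge
                child = (child[0] + rng.randint(-5, 5), child[1] + rng.randint(-5, 5))
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda ind: branch_distances(*ind))

if __name__ == "__main__":
    print("best input found:", evolve(seed_pop=[(70, 20)]))
```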

  • WHAT INFORMATION IS RELEVANT WHEN SELECTING SOFTWARE TESTING TECHNIQUES?

    One of the main problems in software testing is the development of a suitable set of test cases so that the effectiveness of the test is maximised with a minimum number of test cases. A lot of testing techniques are now available for developing test cases. However, some of them are misused, others are never used and only a few are applied again and again. When developers have to decide which testing technique(s) they should use in a project, they have little (if any) experiential information about the available testing techniques, their usefulness and, in general, how well suited they are to the project. This paper presents the results of developing a characterization scheme for test technique selection. When instantiated for different techniques, the scheme should provide developers with enough information to choose the technique best suited to their project. Thus, their decisions would be based on sound knowledge of the techniques, instead of perceptions, suppositions and assumptions.

  • Towards Industrially Relevant Fault-Proneness Models

    Estimating software fault-proneness early, i.e., predicting the probability that software modules will be faulty, can help in reducing the costs and increasing the effectiveness of software analysis and testing. The many available static metrics provide important information, but none of them can be deterministically related to software fault-proneness. Fault-proneness models seem to be an interesting alternative, but work on such models still suffers from a lack of experimental validation.

    This paper discusses barriers and problems in using software fault-proneness in industrial environments, proposes a method for building software fault-proneness models based on logistic regression and cross-validation that meets industrial needs, and provides some experimental evidence of the validity of the proposed approach.
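
    A minimal sketch of a logistic-regression fault-proneness model assessed with cross-validation, in the spirit of the proposal above; the module metrics and fault labels are synthetic, and scikit-learn is used purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic module metrics, purely for illustration.
rng = np.random.default_rng(42)
n_modules = 200
loc   = rng.integers(50, 2000, n_modules)    # lines of code
cyclo = rng.integers(1, 60, n_modules)       # cyclomatic complexity
churn = rng.integers(0, 40, n_modules)       # recent change count
X = np.column_stack([loc, cyclo, churn]).astype(float)

# Synthetic ground truth: bigger, more complex, more frequently changed
# modules are more likely to be faulty (plus noise).
logit = 0.002 * loc + 0.05 * cyclo + 0.08 * churn - 4.0
y = (rng.random(n_modules) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Logistic-regression fault-proneness model, assessed by 5-fold cross-validation.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```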

  • USING GENETIC ALGORITHMS AND DECISION TREE INDUCTION TO CLASSIFY SOFTWARE FAILURES

    This paper describes two laboratory experiments designed to evaluate a failure-pursuit strategy for system-level testing. In the first experiment, two GAs are used to automatically generate test suites that are rich in failure-causing test cases. Their performance is compared to random generation. The resulting test suites are then used to train a series of decision trees, producing rules for classifying other test cases. Finally, the performance of the classification rules is evaluated empirically. The results indicate that the combination of GA-based test case generation and decision tree induction can produce rules with high predictive accuracy that can assist human testers in diagnosing the cause of system failures.

  • A GRAMMAR-GUIDED GENETIC PROGRAMMING FRAMEWORK CONFIGURED FOR DATA MINING AND SOFTWARE TESTING

    Genetic Programming (GP) is a powerful software induction technique that can be applied to solve a wide variety of problems. However, most researchers develop tailor-made GP tools for solving specific problems. These tools generally require significant modifications in their kernel to be adapted to other domains. In this paper, we explore the Grammar-Guided Genetic Programming (GGGP) approach as an alternative to overcome this limitation. We describe a GGGP-based framework, named Chameleon, that can be easily configured to solve different problems. We explore the use of Chameleon in two domains not usually addressed by works in the literature: the task of mining relational databases and the software testing activity. The presented results point out that the grammar-guided approach helps to obtain more generic GP frameworks and that such frameworks can contribute to the explored domains.

  • RESTRICTED RANDOM TESTING: ADAPTIVE RANDOM TESTING BY EXCLUSION

    Restricted Random Testing (RRT) is a new method of testing software that improves upon traditional Random Testing (RT) techniques. Research has indicated that failure patterns (portions of an input domain which, when executed, cause the program to fail or reveal an error) can influence the effectiveness of testing strategies. For certain types of failure patterns, it has been found that a widespread and even distribution of test cases in the input domain can be significantly more effective at detecting failure compared with ordinary RT. Testing methods based on RT, but which aim to achieve even and widespread distributions, have been called Adaptive Random Testing (ART) strategies. One implementation of ART is RRT. RRT uses exclusion zones around executed, but non-failure-causing, test cases to restrict the regions of the input domain from which subsequent test cases may be drawn. In this paper, we introduce the motivation behind RRT, explain the algorithm and detail some empirical analyses carried out to examine the effectiveness of the method. Two versions of RRT are presented: Ordinary RRT (ORRT) and Normalized RRT (NRRT). The two versions share the same fundamental algorithm, but differ in their treatment of non-homogeneous input domains. Investigations into the use of alternative exclusion shapes are outlined, and a simple technique for reducing the computational overheads of RRT, prompted by the alternative exclusion shape investigations, is also explained. The performance of RRT is compared with RT and another ART method based on maximized minimum test case separation (DART), showing a marked improvement over RT and a very favorable comparison with DART.
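
    A minimal sketch of the exclusion-zone mechanism in a two-dimensional, homogeneous input domain (so closer to ORRT than NRRT); the domain, the target exclusion ratio and the failure region below are all hypothetical.

```python
import math
import random

DOMAIN = (0.0, 100.0)        # square input domain [0, 100] x [0, 100]
TARGET_RATIO = 1.5           # total nominal exclusion area / domain area

def is_failure(x, y):
    """Hypothetical block failure pattern hidden in the input domain."""
    return 60.0 <= x <= 65.0 and 20.0 <= y <= 25.0

def rrt(max_tests=5000, seed=7):
    rng = random.Random(seed)
    executed = []            # non-failure-causing test cases run so far
    area = (DOMAIN[1] - DOMAIN[0]) ** 2
    for n in range(1, max_tests + 1):
        # Radius chosen so the circular zones jointly claim TARGET_RATIO of the domain.
        radius = math.sqrt(TARGET_RATIO * area / (math.pi * max(len(executed), 1)))
        for _ in range(1000):        # bounded resampling until outside all zones
            x, y = rng.uniform(*DOMAIN), rng.uniform(*DOMAIN)
            if all(math.hypot(x - ex, y - ey) > radius for ex, ey in executed):
                break
        if is_failure(x, y):
            return n                 # F-measure: tests used to find the first failure
        executed.append((x, y))
    return None

if __name__ == "__main__":
    print("tests needed to reveal the failure:", rrt())
```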

  • IMPROVING CLASS FIREWALL REGRESSION TEST SELECTION BY REMOVING THE CLASS FIREWALL

    One regression test selection technique proposed for object-oriented programs is the Class firewall regression test selection technique, which selects for regression testing the test cases that exercise changed classes and classes depending on changed classes. However, in empirical studies of the application of the technique, we observed that another technique found the same defects, selected fewer tests and required a simpler, less costly analysis. This technique, which we refer to as the Change-based regression test selection technique, is essentially the Class firewall technique with the class firewall removed. In this paper we formulate a hypothesis stating that these empirical observations are not incidental, but an inherent property of the Class firewall technique. We prove that the hypothesis holds for Java in a stable testing environment, and conclude that the effectiveness of the Class firewall regression testing technique can be improved, without sacrificing its defect detection capability, by removing the class firewall.
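
    A minimal sketch contrasting the two selection rules on a hypothetical system: the Class firewall rule selects tests covering changed classes and their (transitive) dependents, while the Change-based rule drops the firewall and selects only tests covering the changed classes themselves.

```python
# Hypothetical coverage and dependency data for a small system.
coverage = {
    "testOrderTotal":    {"Order", "Money"},
    "testInvoicePrint":  {"Invoice", "Order"},
    "testMoneyAdd":      {"Money"},
    "testCustomerName":  {"Customer"},
}
depends_on = {
    "Invoice":  {"Order"},
    "Order":    {"Money"},
    "Customer": set(),
    "Money":    set(),
}
changed = {"Money"}

def firewall(changed_classes):
    """Changed classes plus all classes that (transitively) depend on them."""
    fw = set(changed_classes)
    grew = True
    while grew:
        grew = False
        for cls, deps in depends_on.items():
            if cls not in fw and deps & fw:
                fw.add(cls)
                grew = True
    return fw

def select(classes):
    """Select the tests that exercise at least one of the given classes."""
    return {t for t, covered in coverage.items() if covered & classes}

if __name__ == "__main__":
    print("class-firewall selection:", sorted(select(firewall(changed))))
    print("change-based selection:  ", sorted(select(changed)))
```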

  • ON FAVOURABLE CONDITIONS FOR ADAPTIVE RANDOM TESTING

    Recently, adaptive random testing (ART) has been developed to enhance the fault-detection effectiveness of random testing (RT). It has been known in general that the fault-detection effectiveness of ART depends on the distribution of failure-causing inputs, yet this understanding has remained coarse and lacking in precise detail. In this paper, we conduct an in-depth investigation into the factors related to the distribution of failure-causing inputs that have an impact on the fault-detection effectiveness of ART. This paper gives a comprehensive analysis of the favourable conditions for ART. Our study contributes to the knowledge of ART and provides useful information for testers to decide when it is more cost-effective to use ART.

  • AUTOMATED TEST CODE GENERATION FROM CLASS STATE MODELS

    This paper presents an approach to the automated generation of executable test code from class models represented by UML protocol state machines. It supports several coverage criteria for state models, including state coverage, transition coverage, and basic and extended round-trip coverage. It allows the tester to add and modify detailed test parameters (e.g., actual arguments for method invocations and implementation-specific environments) if necessary. When the state model is modified due to requirements changes, the hand-crafted test parameters, if still valid, are automatically reused. This reduces the effort of regenerating tests for modified models. In addition to test code, we also automatically generate state wrapper aspects in AspectJ, which facilitate comparing actual object states to expected states during test execution. This enables automated pass/fail verdicts for test cases without the need to modify the source code of the class under test. We present two examples for which executable test code is generated. They demonstrate the reuse of test parameters and the testing of object interactions, respectively.
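
    A minimal sketch of generating transition-coverage call sequences from a protocol state machine and emitting test skeletons; the Connection model, method names and emitted code style are hypothetical, not the paper's tool or notation.

```python
from collections import deque

# Hypothetical protocol state machine for a Connection class.
transitions = [
    ("Closed", "open",  "Open"),
    ("Open",   "send",  "Open"),
    ("Open",   "close", "Closed"),
]
INITIAL = "Closed"

def path_to(state):
    """Shortest call sequence from the initial state to `state` (BFS)."""
    queue, seen = deque([(INITIAL, [])]), {INITIAL}
    while queue:
        cur, calls = queue.popleft()
        if cur == state:
            return calls
        for src, event, dst in transitions:
            if src == cur and dst not in seen:
                seen.add(dst)
                queue.append((dst, calls + [event]))
    return None

def transition_cover():
    """One test sequence per transition: reach its source state, then fire it."""
    return [path_to(src) + [event] for src, event, _dst in transitions]

def emit_test(name, calls):
    """Emit a hypothetical executable test skeleton for one call sequence."""
    body = "\n".join(f"    sut.{c}()" for c in calls)
    return f"def test_{name}():\n    sut = Connection()\n{body}\n"

if __name__ == "__main__":
    for i, seq in enumerate(transition_cover(), 1):
        print(emit_test(f"transition_{i}", seq))
```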

  • A TEST CASE PRIORITIZATION BASED ON DEGREE OF RISK EXPOSURE AND ITS EMPIRICAL STUDY

    We propose a test case prioritization strategy for risk-based testing, in which risk exposure is employed as the key evaluation criterion. Existing approaches to risk-based testing typically employ risk exposure values as assessed by the tester. In contrast, we employ exposure values that have been determined by experts during the risk assessment stage of the risk management process. If a given method produces greater accuracy in fault detection, that approach is considered more valuable for software testing. We demonstrate the value of our proposed risk-based testing method in this sense through its application.
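
    A minimal sketch of the prioritization criterion, assuming hypothetical risk items and tests: each test is scored by the summed risk exposure (expert-assessed probability times impact) of the risks it covers, and tests are scheduled in descending order of that score.

```python
# Expert-assessed risks: risk item -> (probability of failure, impact).
risk_items = {
    "payment-overcharge": (0.10, 9),
    "login-lockout":      (0.30, 4),
    "report-typo":        (0.60, 1),
}
# Hypothetical mapping of test cases to the risk items they cover.
test_cases = {
    "test_checkout_rounding":  ["payment-overcharge"],
    "test_failed_logins":      ["login-lockout"],
    "test_report_labels":      ["report-typo"],
    "test_checkout_and_login": ["payment-overcharge", "login-lockout"],
}

def exposure(test):
    """Sum of risk exposures (probability * impact) of the risks a test covers."""
    return sum(p * impact for p, impact in (risk_items[r] for r in test_cases[test]))

if __name__ == "__main__":
    for t in sorted(test_cases, key=exposure, reverse=True):
        print(f"{exposure(t):5.2f}  {t}")
```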

  • AUTOMATIC VERIFICATION OF OPTIMIZATION ALGORITHMS: A CASE STUDY OF A QUADRATIC ASSIGNMENT PROBLEM SOLVER

    Metamorphic testing is a technique for the verification of software output without a complete testing oracle. Mathematical optimization, implemented in software, is a problem for which verification can often be challenging. In this paper, we apply metamorphic testing to one such optimization problem, the quadratic assignment problem (QAP). From simple observations of the properties of the QAP, we describe how to derive a number of metamorphic relations useful for verifying the correctness of a QAP solver. We then compare the effectiveness of these metamorphic relations, in "killing" mutant versions of an exact QAP solver, to a simulated oracle. We show that metamorphic testing can be as effective as the simulated oracle for killing mutants. We examine the relative effectiveness of different metamorphic relations, both singly and in combination, and conclude that combining metamorphic relations can be significantly more effective than using a single relation.
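
    One concrete metamorphic relation for a QAP solver can be sketched as follows (this particular relation is an illustration, not necessarily one of the relations studied in the paper): consistently relabelling the facilities, i.e., permuting the rows and columns of the flow matrix, must leave the optimal assignment cost unchanged.

```python
from itertools import permutations

def qap_cost(perm, flow, dist):
    """Cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]] for i in range(n) for j in range(n))

def solve_qap(flow, dist):
    """Exact solver by exhaustive search (fine for tiny instances)."""
    n = len(flow)
    return min(qap_cost(p, flow, dist) for p in permutations(range(n)))

def relabel(flow, relabelling):
    """Apply the same permutation to the rows and columns of the flow matrix."""
    n = len(flow)
    return [[flow[relabelling[i]][relabelling[j]] for j in range(n)] for i in range(n)]

if __name__ == "__main__":
    # Hypothetical 4x4 instance.
    flow = [[0, 3, 1, 2], [3, 0, 4, 0], [1, 4, 0, 5], [2, 0, 5, 0]]
    dist = [[0, 2, 4, 3], [2, 0, 1, 6], [4, 1, 0, 2], [3, 6, 2, 0]]
    original  = solve_qap(flow, dist)
    follow_up = solve_qap(relabel(flow, [2, 0, 3, 1]), dist)
    print("metamorphic relation holds:", original == follow_up)
```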

  • COST-COGNIZANT COMBINATORIAL TEST CASE PRIORITIZATION

    Combinatorial testing has been widely used in practice. It is usually assumed that all test cases in a combinatorial test suite will be run to completion. However, in many scenarios where combinatorial testing is needed, such as regression testing, the entire combinatorial test suite is not run completely because of test resource constraints. To improve the efficiency of testing, combinatorial test case prioritization techniques are required. For the regression testing scenario, this paper proposes a new cost-cognizant combinatorial test case prioritization technique, which takes both combination weights and test costs into account. We propose a series of metrics with physical meaning, which assess the combinatorial coverage efficiency of a test suite, to guide the prioritization of combinatorial test cases. Two heuristic test case prioritization algorithms, based on the total and additional techniques respectively, are utilized in our technique. Simulation experiments illustrate some properties and advantages of the proposed technique.
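
    A minimal sketch of a cost-cognizant "additional" prioritization over a pairwise (strength-2) suite: at each step the test that covers the most still-uncovered parameter-value pairs per unit cost is scheduled next. The suite, the costs and the uniform pair weights are hypothetical simplifications of the weighted metrics proposed in the paper.

```python
from itertools import combinations

# Hypothetical pairwise suite: test name -> (parameter values, execution cost).
suite = {
    "t1": (("ie",      "win",   "mysql"),  2.0),
    "t2": (("firefox", "linux", "mysql"),  1.0),
    "t3": (("ie",      "linux", "oracle"), 3.0),
    "t4": (("firefox", "win",   "oracle"), 1.5),
}

def pairs(values):
    """All strength-2 (parameter position, value) pair combinations in one test."""
    return {((i, values[i]), (j, values[j])) for i, j in combinations(range(len(values)), 2)}

def prioritize(suite):
    remaining, order, covered = dict(suite), [], set()
    while remaining:
        def gain(name):
            new = len(pairs(remaining[name][0]) - covered)
            return new / remaining[name][1]          # newly covered pairs per unit cost
        best = max(remaining, key=gain)
        covered |= pairs(remaining[best][0])
        order.append(best)
        del remaining[best]
    return order

if __name__ == "__main__":
    print("execution order:", prioritize(suite))
```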

  • PRIORITIZATION OF COMBINATORIAL TEST CASES BY INCREMENTAL INTERACTION COVERAGE

    Combinatorial interaction testing is a well-recognized testing method, and has been widely applied in practice, often with the assumption that all test cases in a combinatorial test suite have the same fault detection capability. However, when testing resources are limited, an alternative assumption may be that some test cases are more likely to reveal failure, thus making the order of executing the test cases critical. To improve testing cost-effectiveness, prioritization of combinatorial test cases is employed. The most popular approach is based on interaction coverage, which prioritizes combinatorial test cases by repeatedly choosing an unexecuted test case that covers the largest number of uncovered parameter value combinations of a given strength (level of interaction among parameters). However, this approach suffers from some drawbacks. Based on previous observations that the majority of faults in practical systems can usually be triggered with parameter interactions of small strengths, we propose a new strategy of prioritizing combinatorial test cases by incrementally adjusting the strength values. Experimental results show that our method performs better than the random prioritization technique and the technique of prioritizing combinatorial test suites according to test case generation order, and has better performance than the interaction-coverage-based test prioritization technique in most cases.

  • An Approach for Cluster-Based Retrieval of Tests Using Cover-Coefficients

    Retrieving relevant test cases is a recurring theme in software validation. We present an approach for cluster-based retrieval of test cases for software validation. The approach uses a probabilistic notion of coverage among line-based test profiles and can potentially discover groups of test cases executing a small number of unique lines. The distribution of lines across test profiles is analyzed to determine the number of clusters and to generate a clustering structure without any additional user input. We also propose a novel and simple approach to identify test cases that are affected by software changes based on test profiles. It is shown that the clustering structures generated can be used to select affected tests economically and to produce high-quality regression test suites. The approach is applied to four UNIX utility programs from a popular testing benchmark. Our results show that the generated number of clusters and their average sizes closely track their estimates based on test profiles. The retrieval of affected tests using the clustering structure is economical and produces a good-quality regression test suite.
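
    A minimal sketch of a cover-coefficient computation over line-based test profiles (synthetic profiles, and not necessarily the exact formulation used in the paper): c(i, j) estimates the probability of selecting profile j when a random covered line of profile i is drawn and a profile covering that line is then drawn at random; the diagonal "decoupling" values are summed to estimate the number of clusters.

```python
# Synthetic line-based test profiles: each test maps to the set of line ids it executes.
profiles = {
    "t1": {1, 2, 3},
    "t2": {1, 2, 3, 4},
    "t3": {10, 11},
    "t4": {10, 11, 12},
}

tests = sorted(profiles)
lines = set().union(*profiles.values())
line_freq = {l: sum(l in profiles[t] for t in tests) for l in lines}

def cover(i, j):
    """Cover coefficient c(i, j) between the profiles of tests i and j."""
    common = profiles[i] & profiles[j]
    return sum(1.0 / line_freq[l] for l in common) / len(profiles[i])

if __name__ == "__main__":
    n_clusters = sum(cover(t, t) for t in tests)    # sum of decoupling coefficients
    print(f"estimated number of clusters: {n_clusters:.2f}")
    for t in tests:
        print(t, [round(cover(t, u), 2) for u in tests])
```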