Interconnecting computers into clusters or computational grids promises many benefits for users of computational science and engineering, especially in terms of performance and costs. This situation is further supported by programming libraries like MPI and PVM, which are portable across different platforms and allow users to exploit the available computing power. Consequently, the number of applications utilizing these computing structures is steadily increasing. Yet there are also some pitfalls with potentially serious consequences that must not be ignored by software developers. This paper describes some critical issues related to nondeterministic program behavior. With such programs, different executions are observed even though the same input data are provided, leading to the irreproducibility effect, the completeness problem, and the probe effect. The impact of these effects, their significance for software developers, and how they manifest on supercomputer architectures and in cluster environments are discussed. These critical issues need to be pointed out to users in order to raise their understanding and awareness of the problems. While the irreproducibility effect is believed to be sufficiently solved by record&replay mechanisms, existing solutions for the probe effect are only partially successful, and only very few approaches address the completeness problem. A simple solution for the latter is offered by automatic event manipulation and artificial replay, which is however restricted by time and memory constraints. In addition, this solution to the completeness problem also solves the probe effect in nondeterministic parallel programs.
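As a minimal illustration of how the irreproducibility effect arises in message-passing programs (a sketch assuming Python with mpi4py installed, not an example from the paper), a receive with a wildcard source accepts messages in whichever order they happen to arrive, so repeated runs with identical input can observe different orders:

```python
# Minimal sketch (assumes mpi4py); run with e.g. `mpiexec -n 4 python wildcard_recv.py`.
# Rank 0 receives one message from every other rank using a wildcard source.
# The arrival order is nondeterministic, so repeated runs with the same input
# may print different sequences -- the irreproducibility effect.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    order = []
    for _ in range(size - 1):
        status = MPI.Status()
        comm.recv(source=MPI.ANY_SOURCE, tag=0, status=status)
        order.append(status.Get_source())   # record&replay tools log exactly this
    print("receive order:", order)
else:
    comm.send({"payload": rank}, dest=0, tag=0)
```

A record&replay mechanism would log the observed receive order during a recorded run and enforce the same order during replay.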
Modeling languages such as the Unified Modeling Language (UML) or the Systems Modeling Language (SysML), in combination with constraint languages such as the Object Constraint Language (OCL), allow for an abstract description of a system prior to its implementation. But the resulting system models can be highly non-trivial and, hence, errors in the descriptions can easily arise. In particular, overly strong restrictions that lead to an inconsistent model are common. Motivated by this, researchers and engineers have developed methods for the validation and verification of given formal models. However, while these methods efficiently detect the existence of an inconsistency, the designer is usually left alone to identify the reasons for it. In this contribution, we propose an automatic method which efficiently determines reasons explaining the contradiction in an inconsistent UML/OCL model. For this purpose, all constraints causing the contradiction are comprehensibly analyzed. This aids the designer in debugging his/her model.
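The general task of pinpointing which constraints jointly cause an inconsistency can be sketched independently of UML/OCL (a toy illustration, not the paper's method; constraints here are plain Python predicates over a small finite state space, and minimal conflicting subsets are found by brute force):

```python
# Illustrative sketch only: constraints as predicates over integer "attribute" values.
# A subset of constraints is contradictory if no candidate state satisfies all of them.
from itertools import combinations, product

constraints = {
    "C1: x >= 5": lambda s: s["x"] >= 5,
    "C2: x <= 3": lambda s: s["x"] <= 3,
    "C3: y == x": lambda s: s["y"] == s["x"],
}
domain = range(0, 10)  # small finite domain so exhaustive search is feasible

def satisfiable(subset):
    return any(all(c({"x": x, "y": y}) for c in subset)
               for x, y in product(domain, repeat=2))

def minimal_conflicts(cons):
    """Smallest subsets of constraints that cannot be satisfied together."""
    names = list(cons)
    for k in range(1, len(names) + 1):
        found = [set(sub) for sub in combinations(names, k)
                 if not satisfiable([cons[n] for n in sub])]
        if found:
            return found
    return []

print(minimal_conflicts(constraints))   # e.g. [{'C1: x >= 5', 'C2: x <= 3'}]
```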
Meta-programs are generic, incomplete, adaptable programs that are instantiated at construction time to meet specific requirements. Templates and generative techniques are examples of meta-programming techniques. Understanding meta-programs is more difficult than understanding concrete, executable programs. Static and dynamic analysis methods have been applied to ease understanding of programs — can similar methods be used for meta-programs? In our projects, we build meta-programs with a meta-programming technique called XVCL. Meta-programs in XVCL are organized into a hierarchy of meta-components from which the XVCL processor generates concrete, executable programs that meet specific requirements. We developed an automated system that analyzes XVCL meta-programs and presents developers with information that helps them work with meta-programs more effectively. Our system conducts both static and dynamic analysis of a meta-program. An integral part of our solution is a query language, FQL, in which we formulate questions about meta-program properties. An FQL query processor automatically answers a class of queries. The analysis method described in the paper is specific to XVCL. However, the principle of our approach can be applied to other meta-programming systems. We believe readers interested in meta-programming in general will find some of the lessons from our experiment interesting and useful.
We introduce a novel application of feature ranking methods to the fault localization problem. We envision the problem of localizing causes of failures as an instance of ranking program elements, where the elements are conceptualized as features. In this paper, we define features as program statements; in its fine-grained form, however, the idea of program features can refer to any trait of a program. This paper proposes feature ranking-based algorithms. The algorithms analyze execution traces of both passing and failing test cases and extract bug signatures from the failing test cases. The proposed procedure extracts from the bug signatures possible combinations of program elements that are executed together. The feature ranking-based algorithms then order statements according to the suspiciousness of the combinations. When viewed as sequences, the combinations of program elements produced and traced in bug signatures can be used to reason about the longest common subsequence. The longest common subsequence of bug signatures represents the common statements executed by all failing test cases and thus provides a means for identifying statements that contain possible faults. Our evaluation indicates that the proposed feature-based fault localization outperforms existing fault localization ranking schemes.
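The longest-common-subsequence step can be illustrated with a small sketch (hypothetical statement traces, not the paper's algorithm): intersecting the statement sequences of all failing runs keeps, in order, the statements that every failure executed.

```python
# Sketch: longest common subsequence (LCS) of failing-test statement traces.
# Statements are identified here by hypothetical line numbers.
from functools import lru_cache

def lcs(a, b):
    """Classic dynamic-programming LCS of two statement sequences."""
    @lru_cache(maxsize=None)
    def go(i, j):
        if i == len(a) or j == len(b):
            return ()
        if a[i] == b[j]:
            return (a[i],) + go(i + 1, j + 1)
        left, right = go(i + 1, j), go(i, j + 1)
        return left if len(left) >= len(right) else right
    return list(go(0, 0))

failing_traces = [
    [1, 2, 4, 5, 7, 9],   # failing test 1
    [1, 3, 4, 5, 9],      # failing test 2
    [1, 4, 5, 8, 9],      # failing test 3
]

common = failing_traces[0]
for trace in failing_traces[1:]:
    common = lcs(common, trace)
print("statements executed by every failing test:", common)  # [1, 4, 5, 9]
```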
Empirical studies show that coverage-based fault localization techniques are very effective in testing and debugging software applications. It is also a commonly held belief that no software testing technique performs best for all programs with various data structures and complexity. An important research question posed in this paper is whether the type and complexity of faults in a given program have any influence on the performance of these fault localization techniques.
This paper investigates the performance of coverage-based fault localization techniques for different types of faults. We explore and compare the accuracy of these techniques for two large groups of faults often observed in object-oriented programs. First, we explore different types of traditional method-level faults grouped into six categories: arithmetic, relational, conditional, logical, assignment, and shift. We then focus on class-level faults related to object-oriented features and group them into four categories: inheritance, overriding, Java-specific features, and common programming mistakes. The results show that coverage-based fault localization techniques are less effective for class-level faults associated with object-oriented features of programs. We therefore advocate the need for designing more effective fault localization techniques for debugging object-oriented and class-level defects.
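For reference, a representative coverage-based suspiciousness metric is the well-known Tarantula formula (cited here only as background, not as this study's contribution), which scores a statement s from the passed and failed tests that execute it:

\[
\mathrm{susp}(s) =
\frac{\mathit{failed}(s)/\mathit{totalfailed}}
     {\mathit{failed}(s)/\mathit{totalfailed} + \mathit{passed}(s)/\mathit{totalpassed}}
\]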
Fault localization techniques aim to localize faulty statements using information gathered from both passed and failed test cases. We present a mutation-based fault localization technique called MuSim. MuSim identifies the faulty statement based on its computed proximity to different mutants. We study the performance of MuSim using four different similarity metrics. To measure the effectiveness of our proposed approach satisfactorily, we present a new evaluation metric called Mut_Score. Based on this metric, on average, MuSim is 33.21% more effective than existing fault localization techniques such as DStar, Tarantula, Crosstab, and Ochiai.
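A generic mutation-based ranking of this flavor can be sketched as follows (an illustration only; it is not MuSim's exact proximity computation, and Mut_Score is not reproduced here): each mutant has a kill vector over the test suite, and each statement inherits the best similarity between the original program's failure vector and the kill vectors of mutants seeded into that statement.

```python
# Illustrative sketch of mutation-based fault localization (not MuSim itself).
# failure_vector[i] is True if test i fails on the original program.
# kill_vector[i]    is True if test i kills (detects) a given mutant.
import math

def ochiai_similarity(failure_vector, kill_vector):
    """One of several possible similarity metrics between two boolean vectors."""
    both = sum(f and k for f, k in zip(failure_vector, kill_vector))
    fails = sum(failure_vector)
    kills = sum(kill_vector)
    return both / math.sqrt(fails * kills) if fails and kills else 0.0

failure_vector = [True, False, True, False]          # tests 1 and 3 fail

mutants = [  # (statement the mutant was seeded into, its kill vector) -- hypothetical data
    ("line 12", [True, False, True, False]),
    ("line 12", [False, False, True, False]),
    ("line 27", [False, True, False, False]),
]

suspiciousness = {}
for stmt, kill_vector in mutants:
    sim = ochiai_similarity(failure_vector, kill_vector)
    suspiciousness[stmt] = max(suspiciousness.get(stmt, 0.0), sim)

for stmt, score in sorted(suspiciousness.items(), key=lambda kv: -kv[1]):
    print(stmt, round(score, 3))   # "line 12" ranks above "line 27"
```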
Based on system execution traces, this paper presents a dynamic approach for visualizing and debugging timing constraint violations occurring in distributed real-time systems. The system execution traces used for visualization and debugging are collected during the execution of a target program in such a way that its run-time behavior is not interfered with. This is made possible by our non-interference distributed real-time monitoring system, which collects the system's run-time traces by monitoring and fetching the data passing through the internal buses of a target system. After the run-time data has been collected, the visualization and debugging activities proceed. The timing behavior of a target program is visualized as two graphs: the Colored Process Interaction Graph (CPIG) and the Dedicated Colored Process Interaction Graph (DCPIG). The CPIG depicts the timing behavior of a target program by graphically representing interprocess relationships during communication and synchronization. The DCPIG reduces visualization and debugging complexity by focusing on the portion of a target program which corresponds directly or indirectly to an imposed timing constraint. With the help of the CPIG and the DCPIG, a timing analysis method is used to compute system-related timing statistics and analyze the causes of timing constraint violations. A visualization and debugging system, called VDS, has been implemented using OpenWindows on Sun-4/UNIX workstations.
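The timing analysis step can be illustrated with a small sketch (a hypothetical trace format, unrelated to VDS itself): pairing the send and receive events of each message yields latencies that can be checked against an imposed timing constraint.

```python
# Sketch: detect timing-constraint violations in a message trace.
# Each record is (message_id, event_kind, process, timestamp_ms); the data is hypothetical.
trace = [
    ("m1", "send", "P1", 100), ("m1", "recv", "P2", 104),
    ("m2", "send", "P2", 110), ("m2", "recv", "P3", 139),
    ("m3", "send", "P1", 150), ("m3", "recv", "P3", 152),
]
DEADLINE_MS = 20  # imposed end-to-end constraint per message

events = {}
for msg, kind, proc, ts in trace:
    events.setdefault(msg, {})[kind] = (proc, ts)

for msg, e in events.items():
    latency = e["recv"][1] - e["send"][1]
    if latency > DEADLINE_MS:
        print(f"{msg}: {e['send'][0]} -> {e['recv'][0]} took {latency} ms "
              f"(violates {DEADLINE_MS} ms constraint)")
```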
PARFORMAN (PARallel FORMal ANnotation language) is a high-level specification language for expressing intended behavior or known types of error conditions when debugging or testing parallel programs. Models of intended or faulty target program behavior can be succinctly specified in PARFORMAN. These models are then compared with the actual behavior in terms of execution traces of events, in order to localize possible bugs. PARFORMAN can also be used as a general language for expressing computations over target program execution histories.
PARFORMAN is based on a precise model of target program behavior. This model, called H-space (History-space), is formally defined through a set of general axioms about three basic relations, which may or may not hold between two arbitrary events: they may be sequentially ordered (SEQ), they may be parallel (PAR), or one of them might be included in another composite event (IN).
The general notion of composite events is exploited systematically, which makes more powerful and succinct specifications possible. The notion of event grammar is introduced to describe allowed event patterns over a certain application domain or language. Auxiliary composite events such as Snapshots are introduced to define the notion of “occurred at the same time” at suitable levels of abstraction. Finally, patterns and aggregate operations on events are introduced to make specifications short and readable. In addition to debugging and testing, PARFORMAN can also be used to specify profiles and performance measurements.
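The three basic relations can be conveyed with a toy interval-based event model (an intuition-level sketch only, not the formal H-space axioms): an event spans a time interval and may contain sub-events.

```python
# Toy illustration of the SEQ / PAR / IN relations over interval-based events.
# This is not the formal H-space model, just a sketch of the intuition.
from dataclasses import dataclass, field

@dataclass
class Event:
    name: str
    start: float
    end: float
    children: list = field(default_factory=list)   # composite events contain sub-events

def seq(a, b):     # a happens entirely before b
    return a.end <= b.start

def par(a, b):     # neither precedes the other: the intervals overlap
    return not seq(a, b) and not seq(b, a)

def inside(a, b):  # a is part of the composite event b
    return a in b.children or any(inside(a, c) for c in b.children)

recv = Event("receive_request", 0.0, 1.0)
work = Event("compute", 1.5, 4.0)
log  = Event("write_log", 2.0, 3.0)
txn  = Event("handle_request", 0.0, 4.0, children=[recv, work])

print(seq(recv, work))    # True : sequentially ordered (SEQ)
print(par(work, log))     # True : parallel (PAR)
print(inside(recv, txn))  # True : included in a composite event (IN)
```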
Reasoning with ontologies is a challenging task, especially for non-logic experts. When checking whether an ontology contains rules that contradict each other, current description logic reasoners can only provide a list of the unsatisfiable concepts. Figuring out why these concepts are unsatisfiable, which rules cause the conflicts, and how to resolve them is left entirely to the ontology modeler. The problem becomes even more challenging in the case of medium-sized or large ontologies, because an unsatisfiable concept may cause many of its neighboring concepts to be unsatisfiable.
The goal of this article is to empower ontology engineering with a user-friendly reasoning mechanism. We propose a pattern-based reasoning approach, which offers nine patterns of constraint contradictions that lead to unsatisfiability in Object-Role Modeling (ORM) models. The novelty of this approach is not merely that constraint contradictions are detected, but mainly that it provides the causes of contradictions and suggestions for resolving them. The approach is implemented in the DogmaModeler ontology engineering tool and tested in building the CCFORM ontology. We discuss that, although this pattern-based reasoning covers most contradictions in practice, it is not complete compared with description logic-based reasoning. We illustrate both approaches, pattern-based and description logic-based, and their implementation in DogmaModeler, and conclude that they complement each other from a methodological perspective.
Forward-chaining rule-based programs, being data-driven, can function in changing environments in which backward-chaining rule-based programs would have problems. But debugging forward-chaining programs can be tedious; to debug a forward-chaining rule-based program, certain ‘historical’ information about the program run is needed. Programmers should be able to request such information directly, instead of having to rerun the program one step at a time or search a trace of run details. As a first step in designing an explanation system for answering such questions, this paper discusses how the ‘historical’ details of a forward-chaining program run can be stored in its Rete inference network, which is used to match rule conditions to working memory. This can be done without seriously affecting the network’s run-time performance. We call this generalization of the Rete network a historical Rete network. Various algorithms for maintaining this network are discussed, along with how it can be used during debugging; a debugging tool, MIRO, that incorporates these techniques is also discussed.
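The core idea of retaining ‘historical’ run information can be sketched without a full Rete network (the naive forward-chaining loop below is only an illustration, not historical Rete or MIRO): every working-memory change is tagged with the rule firing that produced it, so questions like “why is this fact here?” can be answered after the run.

```python
# Sketch: a naive forward-chaining loop that records, for each derived fact,
# which rule firing produced it and from which facts (not a Rete network).
rules = [
    ("r1", lambda wm: {"wet_ground"} if "raining" in wm else set()),
    ("r2", lambda wm: {"slippery"}   if "wet_ground" in wm else set()),
]

working_memory = {"raining"}
history = []          # (cycle, rule, new_facts, facts_present_before_firing)

cycle = 0
changed = True
while changed:
    changed = False
    cycle += 1
    for name, rule in rules:
        new = rule(working_memory) - working_memory
        if new:
            history.append((cycle, name, sorted(new), sorted(working_memory)))
            working_memory |= new
            changed = True

def why(fact):
    """Answer 'why is this fact in working memory?' from the recorded history."""
    for cycle, rule, new, before in history:
        if fact in new:
            return f"{fact}: added in cycle {cycle} by rule {rule}, given {before}"
    return f"{fact}: present initially or never derived"

print(why("slippery"))   # slippery: added in cycle 1 by rule r2, given ['raining', 'wet_ground']
```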
Research on real-time systems now focuses on formal approaches to specifying and analyzing their behavior. Temporal logic is a natural candidate for this since it can specify properties of event and state sequences. However, “pure” temporal logic cannot specify “quantitative” aspects of time. The concepts of eventuality, fairness, etc. are essentially “qualitative” treatments of time; pure temporal logic makes no reference to absolute time. For real-time systems, purely qualitative specification and analysis of time are inadequate. In this paper, we present a modification of temporal logic, Event-based Real-time Logic (ERL), based on our event-based conceptual model. ERL provides a high-level framework for specifying timing properties of real-time systems, and it can be implemented in the Prolog programming language. In our approach to testing and debugging real-time systems, ERL is used to specify both the expected behavior (specification) and the actual behavior (execution traces) of the target system and to verify that the target system achieves the specification. A method is presented for implementing ERL in Prolog for testing and debugging real-time systems.
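The quantitative flavor of such timing properties can be illustrated with a small trace check (a generic sketch in Python, not ERL or its Prolog implementation): a bounded-response property requires every request event to be followed by a response within a fixed deadline.

```python
# Sketch: check a quantitative bounded-response property over an event trace.
# Property: every "request" must be followed by a "response" within 50 ms.
# The trace format and values are hypothetical.
trace = [("request", 0), ("response", 30), ("request", 100), ("response", 180)]
DEADLINE_MS = 50

def check_bounded_response(trace, deadline):
    violations = []
    pending = None                     # timestamp of the unanswered request, if any
    for kind, ts in trace:
        if kind == "request":
            pending = ts
        elif kind == "response" and pending is not None:
            if ts - pending > deadline:
                violations.append((pending, ts))
            pending = None
    return violations

print(check_bounded_response(trace, DEADLINE_MS))   # [(100, 180)]
```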
We introduce a simple microscopic description of software bug dynamics in which users, programmers, and a maintainer of a given program interact through bug creation, detection, and correction. When the program is written from scratch, the first phase of development is characterized by a fast decline of the number of bugs, followed by a slow phase in which most bugs have been fixed and the remaining ones are hence hard to find. Releasing bug fixes immediately speeds up the debugging process, which substantiates the bazaar open-source methodology. We provide a mathematical analysis that supports our numerical simulations. Finally, we apply our model to the history of Linux and determine the existence of a lower bound on the quality of its programmers.
Software running on mobile devices in particular relies increasingly on contextual information such as time and location. Whenever a software product is affected by context, this context has to be replicated for testing and debugging. This paper introduces an external context manipulation interface for a previously developed learning item scheduler. The scheduler, which is based on psychological models, determines when to present a learning item in a learning game based on previous interaction in order to maximize learning efficiency. As inter-presentation intervals can be in the range of days to months, system testing cannot be conducted in a conventional manner. Hence, virtual time hops can be used to fast-forward to any specific point in virtual time, making the software act as if the virtual time were the system time. The approach has proven to be a valuable debugging and testing aid and can be extended to other sources of contextual information.
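The virtual-time-hop idea can be sketched with a simple clock abstraction (an illustration only; the names and interface are hypothetical, not the paper's): the software reads time only through the clock object, and a test can fast-forward that clock by days or months without touching the system time.

```python
# Sketch: a virtual clock that supports "time hops" for testing time-dependent schedulers.
# All names here are illustrative; the real interface in the paper may differ.
from datetime import datetime, timedelta

class VirtualClock:
    def __init__(self):
        self._offset = timedelta(0)

    def now(self):
        """What the application treats as the current time."""
        return datetime.now() + self._offset

    def hop(self, **kwargs):
        """Fast-forward virtual time, e.g. clock.hop(days=30)."""
        self._offset += timedelta(**kwargs)

# A time-dependent check, as a scheduler might perform it:
clock = VirtualClock()
due_at = clock.now() + timedelta(days=7)    # item scheduled for presentation in a week

def is_due(clock, due_at):
    return clock.now() >= due_at

print(is_due(clock, due_at))   # False: the item is not due yet
clock.hop(days=30)             # virtual time hop instead of waiting 30 days
print(is_due(clock, due_at))   # True
```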
Occasionally, it is not possible to debug an application that contains a specific template in the development environment. While the SideLoading feature can overcome this, using it poses significant potential security risks to the application's web sites if not applied appropriately. Many developers thus face a conundrum in deciding whether to enable this feature or not. This paper explores the characteristics and working mechanism of SideLoading applications in depth, proposes clear boundaries for when and where this feature should be turned on or off, and offers detailed code illustrating how the feature can be enabled and disabled on demand.
Spectrum-based fault localization techniques have shown promising results in assisting developers to find the possible locations of faults. These techniques employ the number of failed tests as well as the number of passed tests that cover each statement to distinguish faulty statements from non-faulty statements. However, these techniques essentially assign the same importance to all test cases, which ignores the fault diagnosis ability of individual test cases. In this paper, an approach is proposed to quantify the fault diagnosis ability of failed tests and passed tests. Two fault localization techniques are then proposed, based on this approach, to calculate the suspiciousness of statements for further ranking. Experimental results on the Siemens test suite programs show that the proposed fault localization techniques are significantly more effective than traditional techniques.
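The effect of weighting test cases can be sketched as a small variation of a standard spectrum-based metric (the concrete formula below is illustrative, not the paper's definition): instead of counting the covering tests, each test contributes its weight.

```python
# Sketch: spectrum-based suspiciousness with per-test weights (illustrative formula,
# not the paper's technique). Unweighted Ochiai is the special case of all weights = 1.
import math

def weighted_ochiai(stmt_cov, outcomes, weights):
    """stmt_cov[i]: test i covers the statement; outcomes[i]: True if test i failed."""
    ef = sum(w for c, o, w in zip(stmt_cov, outcomes, weights) if c and o)      # weighted failed & covering
    ep = sum(w for c, o, w in zip(stmt_cov, outcomes, weights) if c and not o)  # weighted passed & covering
    total_f = sum(w for o, w in zip(outcomes, weights) if o)                    # weighted failing tests
    denom = math.sqrt(total_f * (ef + ep))
    return ef / denom if denom else 0.0

outcomes = [True, False, False, True]            # tests 1 and 4 fail
weights  = [1.0, 0.3, 0.3, 1.2]                  # e.g. higher weight = better diagnosis ability

coverage = {                                     # which tests cover each statement
    "s1": [1, 1, 1, 1],
    "s2": [1, 0, 0, 1],
    "s3": [0, 1, 1, 0],
}
ranking = sorted(coverage, key=lambda s: -weighted_ochiai(coverage[s], outcomes, weights))
print(ranking)   # s2 ranks highest: covered by all failing tests and no passing ones
```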
This study proposes an automatic fault localization approach based on support vector machines (SVM). Unlike the usual fault localization approaches, the SVM is applied to classify program statements into two classes. When only one class is predicted, the classification probability is used to rank the statements: the statement with the minimum probability of belonging to that class has the maximum probability of being faulty. Empirical results of applying the SVM to locate faults in the JTCAS program are presented and compared against the results of other algorithms.
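The classification-probability idea can be sketched with scikit-learn (a generic illustration on synthetic coverage data, not the study's actual setup or features): an SVM with probability estimates is trained on per-statement coverage summaries, and its predicted probabilities order the statements.

```python
# Illustrative sketch only (synthetic data, not the study's setup): an SVM with
# probability estimates ranks statements by their predicted probability of being faulty.
import numpy as np
from sklearn.svm import SVC

# One row per statement: (covered-by-failed count, covered-by-passed count), plus a label
# saying whether that statement was faulty, taken from previously diagnosed programs.
X_train = np.array([[9, 1], [8, 2], [7, 3], [9, 0], [8, 1],
                    [1, 9], [0, 10], [2, 8], [1, 8], [0, 9]])
y_train = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # 1 = faulty, 0 = not faulty

clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X_train, y_train)

# Statements of the program under diagnosis, with hypothetical coverage counts.
statements = {"s1": [6, 4], "s2": [2, 9], "s3": [9, 0]}
faulty_col = list(clf.classes_).index(1)

scores = {s: clf.predict_proba([f])[0][faulty_col] for s, f in statements.items()}
for s, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(s, round(p, 3))   # statements with higher fault probability are examined first
```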
This paper presents a new diagnosis algorithm for Prolog programs. Bugs are located by examining the record of the execution trace in a systematic manner, which corresponds to tracing either proof trees or search trees top-down. Human programmers only need to answer "Yes" or "No" to queries issued during the top-down tracing. Moreover, queries about atoms with the same predicates are issued consecutively, so that segments containing bugs are identified more quickly and queries are easier for human programmers to answer. An outline of an implementation of the diagnosis algorithm is given as well.
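The top-down tracing with Yes/No queries can be sketched generically (a toy illustration of algorithmic debugging, not the paper's Prolog system): the recorded proof tree is walked from the root, and the first call whose result is wrong while all of its sub-calls are correct is reported as the location of the bug.

```python
# Toy sketch of top-down algorithmic debugging over a recorded proof tree
# (an illustration, not the paper's Prolog implementation).
# Each node is (query, children); the oracle answers whether the query's result is correct.

def diagnose(node, oracle):
    query, children = node
    if oracle(query):                  # programmer answers "Yes": this subtree is correct
        return None
    for child in children:             # "No": descend top-down into the sub-calls
        buggy = diagnose(child, oracle)
        if buggy is not None:
            return buggy
    return query                       # wrong result, all sub-calls correct -> bug is here

# Hypothetical proof tree: the top-level result and the insert/3 call are wrong.
proof_tree = ("sort([2,1],[2,1])", [
    ("sort([1],[1])", []),
    ("insert(2,[1],[2,1])", []),       # wrong: should be insert(2,[1],[1,2])
])

answers = {"sort([2,1],[2,1])": False, "sort([1],[1])": True, "insert(2,[1],[2,1])": False}
print(diagnose(proof_tree, lambda q: answers[q]))   # -> insert(2,[1],[2,1])
```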