Although numerous researchers have pointed out that object-oriented software is easier to extend than software written in a non-object-oriented style, object-oriented software remains difficult to adapt and maintain. This paper builds on an extension of object-oriented programming called adaptive programming. Adaptive programming allows the programmer to write more extensible software, called adaptive software, without committing to a specific input language. After writing an adaptive program, the programmer selects a specific input language and partially evaluates the program into an executable program. This paper formally studies class dictionaries and informally describes how adaptive programs are partially evaluated by freezing class dictionaries.
A class dictionary is mapped into the classes of an object-oriented programming language such as C++ or CLOS. A class dictionary defines both a set of objects and a set of sentences (a language). We derive a set of restrictions on class dictionaries which permit a simple printing algorithm and its inverse, a parsing algorithm, to be bijections between objects and sentences of the same class.
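As a loose illustration of the print/parse inverse property (not the Demeter class-dictionary notation itself), the following Python sketch defines one construction class and checks that parsing inverts printing; the Point class and its concrete syntax are invented for the example.

```python
# Minimal sketch: one "construction class" whose printing and parsing
# functions are mutually inverse. Hypothetical grammar:
#   Point = "(" <x> Num "," <y> Num ")".
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

def print_point(p: Point) -> str:
    # Each object maps to exactly one sentence of the language.
    return f"({p.x},{p.y})"

def parse_point(s: str) -> Point:
    # Inverse of print_point on well-formed sentences.
    body = s.strip()[1:-1]          # drop "(" and ")"
    xs, ys = body.split(",")
    return Point(int(xs), int(ys))

p = Point(3, 4)
assert parse_point(print_point(p)) == p   # round trip is the identity
```

The restrictions the paper derives are what guarantee this round-trip property for every class in a dictionary, not just for a hand-picked example like this one.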
We review propagation patterns for describing adaptive object-oriented software at a higher level of abstraction than the one used by today’s object-oriented programming languages. A propagation pattern is an adaptive program which defines a family of programs. From the family, we can select a member by choosing a class dictionary.
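The following Python sketch is only an analogy for the idea behind a propagation pattern: a traversal written against a target class rather than a fixed class structure, so that choosing a different class structure selects a different member of the program family. All class names here are invented.

```python
# Analogy for "from Company to Salary": traverse to all Salary objects
# without naming the intermediate classes on the way.
from dataclasses import dataclass, fields

@dataclass
class Salary:
    amount: float

@dataclass
class Employee:
    name: str
    salary: Salary

@dataclass
class Company:
    staff: list

def visit(obj, target, action):
    # Generic traversal: the "program family"; plugging in a concrete
    # class structure selects the member that actually runs.
    if isinstance(obj, target):
        action(obj)
    elif isinstance(obj, list):
        for item in obj:
            visit(item, target, action)
    elif hasattr(obj, "__dataclass_fields__"):
        for f in fields(obj):
            visit(getattr(obj, f.name), target, action)

totals = []
c = Company([Employee("a", Salary(10.0)), Employee("b", Salary(20.0))])
visit(c, Salary, lambda s: totals.append(s.amount))
print(sum(totals))  # 30.0
```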
The theory presented in this paper has been successfully implemented and used in the Demeter Tools/C++. The system consists of a set of tools that facilitate software evolution.
Complex software networks, as a typical kind of man-made complex network, have attracted increasing attention from various fields of science and engineering over the past ten years. With the dramatic increase in the scale and complexity of software systems, it is essential to develop a systematic approach to investigating complex software systems using the theories and methods of complex networks and complex adaptive systems. This paper briefly reviews some recent advances in complex software networks and also develops some novel tools for further analyzing them, covering modeling, analysis, evolution, measurement, and some potential real-world applications. More precisely, the paper first describes some effective modeling approaches for characterizing various complex software systems. Based on these theoretical and practical models, it then introduces some recent advances in analyzing the static and dynamic behaviors of complex software networks, followed by further discussion of potential real-world applications. Finally, the paper outlines some future research topics from an engineering point of view.
This paper presents the software development workbench WSDW (Web structure-oriented Software Development Workbench) together with the tool development language TDL. WSDW is an integrated structure-oriented software environment which contains several tools for software evolution. The integration of tools is achieved by sharing a program representation which is based upon the mathematical concept of relation: the web structure is the basic high level representation of programs within the environment. The TDL language is a structure-oriented language that supports the creation of a wide variety of tools both for software development and maintenance. The elementary statements in a TDL program are web rewriting rules and manipulations of programs are expressed as web transformations. Moreover, to make program transformations more intuitive to the tool programmer, web rewriting rules are expressed graphically. Each tool in WSDW performs a sequence of web transformations and new software tools can be implemented as TDL programs and integrated into WSDW.
A typical software development team leaves behind a large amount of information. This information takes different forms, such as mail messages, software releases, version control logs, and defect reports. softChange is a tool that retrieves this information, analyses and enhances it by finding new relationships within it, and then allows users to navigate and visualize it. The main objective of softChange is to help programmers, their management, and software evolution researchers understand how a software product has evolved since its conception.
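As a hedged sketch of the kind of relationship recovery described, and not softChange's actual implementation, the following fragment links version-control log entries to defect reports by matching ticket identifiers; the log format and bug-ID convention are invented.

```python
# Recover commit-to-defect links by scanning log messages for bug IDs.
import re

logs = [
    {"rev": "1042", "msg": "Fix crash on empty input (bug #317)"},
    {"rev": "1043", "msg": "Refactor parser"},
]
defects = {"317": "NullPointerException in Parser.parse"}

BUG_ID = re.compile(r"bug\s*#(\d+)", re.IGNORECASE)

for entry in logs:
    for bug in BUG_ID.findall(entry["msg"]):
        if bug in defects:
            # A recovered relationship between two artifact types.
            print(f"rev {entry['rev']} -> defect {bug}: {defects[bug]}")
```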
Software evolution is an iterative and incremental process that encompasses the modification and alteration of software models at different levels of abstraction. These modifications are usually performed independently, but the objects to which they are applied are in most cases mutually dependent. Inconsistencies and drift among related artifacts may arise if the effects of an alteration are not properly identified, recorded, and propagated to other dependent models. For large systems, there may be a considerable number of such model dependencies, for which manual extraction is not feasible. In this paper, we introduce an approach for automating the identification and encoding of dependency relations among software models and their elements. The proposed dependency extraction technique first uses association rules to map types between models at different levels of abstraction. Formal concept analysis is then used to identify clusters of model elements that pertain to similar or associated concepts. Model elements that cluster together are considered related by a dependency relation. The technique is used to synchronize business process specifications with the underlying J2EE source code models.
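The clustering step can be illustrated with a minimal formal concept analysis over an invented context of model elements and shared terms; elements that fall in the same concept extent would be treated as dependency-related. This is a sketch of the general FCA mechanism, not of the paper's tooling.

```python
# Formal context: model element -> set of attributes (terms). Invented.
context = {
    "OrderProcess":  {"order", "invoice"},
    "OrderBean":     {"order", "invoice"},
    "ShipTask":      {"order", "shipping"},
    "ShipmentBean":  {"order", "shipping"},
}

def extent(attrs):
    # All elements having every attribute in attrs.
    return {e for e, a in context.items() if attrs <= a}

def intent(elems):
    # All attributes shared by every element in elems.
    sets = [context[e] for e in elems]
    return set.intersection(*sets) if sets else set()

# A formal concept is a pair (E, A) with extent(A) == E and intent(E) == A;
# elements sharing an extent are taken to be dependency-related.
seen = set()
for e in context:
    E = frozenset(extent(intent({e})))
    if E not in seen:
        seen.add(E)
        print(sorted(E), "share", sorted(intent(E)))
```

Run on this toy context, the two process elements cluster with the two source-code beans that mention the same concepts, which is exactly the kind of cross-abstraction dependency the approach aims to encode.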
One regression test selection technique proposed for object-oriented programs is the Class firewall regression test selection technique. It selects regression test cases that test changed classes and classes depending on changed classes. However, in empirical studies of the technique's application, we observed that another technique found the same defects, selected fewer tests, and required a simpler, less costly analysis. This technique, which we refer to as the Change-based regression test selection technique, is essentially the Class firewall technique with the class firewall removed. In this paper we formulate a hypothesis stating that these empirical observations are not incidental but an inherent property of the Class firewall technique. We prove that the hypothesis holds for Java in a stable testing environment, and conclude that the effectiveness of the Class firewall regression testing technique can be improved, without sacrificing its defect detection capability, by removing the class firewall.
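A toy sketch of the two selection rules follows, assuming an invented dependency graph and assuming each test exercises only the class it targets (e.g., because collaborators are stubbed); it shows the mechanics only, not the paper's proof.

```python
# 'depends_on[c]' lists classes c depends on; 'covers[t]' is the set of
# classes test t is assumed to exercise. All names invented.
depends_on = {"A": [], "B": ["A"], "C": ["B"], "D": []}
covers = {"tA": {"A"}, "tB": {"B"}, "tC": {"C"}, "tD": {"D"}}
changed = {"A"}

def firewall(classes):
    # Changed classes plus all classes transitively depending on them.
    result = set(classes)
    grew = True
    while grew:
        grew = False
        for c, deps in depends_on.items():
            if c not in result and result & set(deps):
                result.add(c)
                grew = True
    return result

def select(scope):
    return {t for t, cov in covers.items() if cov & scope}

print(select(firewall(changed)))  # Class firewall:  tA, tB, tC
print(select(changed))            # Change-based:    tA only
```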
Software evolution is inevitable. When a system evolves, certain relationships among software artifacts must be maintained. Requirement traceability is one of the important factors in facilitating software evolution, since it maintains the relationships among artifacts before and after a change is performed. Requirement traceability can, however, be an expensive activity. Many researchers have addressed the problem of requirement traceability, especially to support software evolution activities, yet evaluations of these approaches show that most of them provide only limited support for software evolution. Based on these problems, we have identified three directions that are important for traceability to support software evolution: process automation, procedural simplicity, and achieving the best results. These three directions are addressed in our multifaceted approach to requirement traceability, which utilizes three facets to generate links between artifacts: syntactical similarity matching, link prioritization, and heuristic-list-based processes. This paper proposes the use of this multifaceted approach to traceability generation and recovery in facilitating the software evolution process. A complete experiment has been carried out on a real case study. The results show that utilizing these three facets to generate traceability among artifacts performs better than the existing approach, especially in terms of accuracy.
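The syntactic-similarity facet can be sketched as follows; the Jaccard scoring, the requirement text, and the artifact names are assumptions for illustration, not the paper's actual similarity measure or prioritization scheme.

```python
# Score candidate requirement-to-code links by token overlap, then rank.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

requirements = {"R1": "The user shall reset a forgotten password"}
artifacts = {
    "PasswordResetService.java": "class PasswordResetService reset password user",
    "ReportPrinter.java": "class ReportPrinter print report",
}

for rid, rtext in requirements.items():
    scored = []
    for aid, atext in artifacts.items():
        t1, t2 = tokens(rtext), tokens(atext)
        score = len(t1 & t2) / len(t1 | t2)      # Jaccard similarity
        scored.append((score, aid))
    # Link prioritization: higher-scoring candidates first.
    for score, aid in sorted(scored, reverse=True):
        print(f"{rid} -> {aid}: {score:.2f}")
```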
We present an analysis of the evolution of a Web application project developed with object-oriented technology and an agile process. During development we systematically performed measurements on the source code, using software metrics that have been shown to correlate with software quality, such as the Chidamber and Kemerer suite and the Lines of Code metric. We also computed metrics derived from the class dependency graph, including metrics derived from Social Network Analysis. The application development evolved through phases characterized by different levels of adoption of some key agile practices, namely pair programming, test-based development and refactoring. The evolution of the system's metrics, and their behavior in relation to the level of agile practices adoption, are presented and discussed. We show that, in the reported case study, a few metrics are enough to characterize with high significance the various phases of the project. Consequently, software quality, as measured using these metrics, appears directly related to agile practices adoption.
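A sketch of how such graph-derived metrics can be computed with the networkx library follows; the graph and the particular metric choices are invented, and the study's exact metric suite is not reproduced here.

```python
# Build a class dependency graph and compute simple coupling and SNA
# measures over it. Class names are invented.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Controller", "Service"), ("Service", "Dao"),
    ("Service", "Model"), ("Dao", "Model"), ("View", "Controller"),
])

fan_in = dict(g.in_degree())                  # akin to afferent coupling
fan_out = dict(g.out_degree())                # akin to efferent coupling
betweenness = nx.betweenness_centrality(g)    # an SNA centrality measure

for cls in g:
    print(cls, fan_in[cls], fan_out[cls], round(betweenness[cls], 2))
```

Tracking values like these per development phase is the kind of longitudinal measurement the study performs.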
Software architecture allows us to make many decisions about a software system and to analyze it even before it has been implemented, making planned development possible. Similarly, architecture-based software evolution planning makes planned evolution possible by allowing us to make many decisions about the evolution of a software system and to analyze its evolution at the level of architecture design before the evolution is realized. In this paper, we develop a framework for architecture-based software evolution planning. This is done by defining various foundational terms and concepts, providing a taxonomy of software evolution plans, and then showing how to calculate values for various types of plans. By identifying and defining the constituent foundational concepts, this conceptual framework makes precise the notion of "architecture-based software evolution planning". By developing a value-calculation framework for software evolution plans, it also provides a basis for concrete methods for designing and evaluating evolution plans.
Methods for supporting the evolution of software-intensive systems provide a competitive edge in software engineering, as software is often operated over decades. Empirical research is useful for validating the effectiveness of these methods. However, empirical studies on software evolution are rarely comprehensive and hardly replicable. Collaboration may prevent these shortcomings. We designed CoCoMEP — a platform for supporting collaboration in empirical research on software evolution through shared knowledge. We report lessons learned from the application of the platform in a large research programme.
Change impact analysis (CIA) is an essential method in software maintenance and evolution, and its accuracy and usability play a crucial role in its application. However, most CIAs are coarse-grained, limited to the class and method levels. Although fine-grained CIAs succeed in producing statement-level impact sets, they remain limited without sub-statement-level dependency analysis, leading to low precision. Additionally, their unstructured impact sets make it challenging for users to comprehend the impact content. This paper proposes Hierarchical Change Impact Analysis (HCIA), a hierarchical CIA technique based on a sub-statement-level dependence graph. HCIA performs forward hierarchical program slicing on the change set at five levels: sub-statement, statement, method, class, and package. Based on the program slices, HCIA calculates the impact factor of the impact sets at the five levels to generate the final impact set. In our experiments, we evaluate the relationship between the impact factor and the actually affected code, and determine the most appropriate size of HCIA impact sets. Furthermore, we evaluate HCIA on 10 open-source projects, comparing our approach with popular CIAs at the five levels. The experimental results show that HCIA is more accurate than the popular CIAs.
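A much-simplified sketch of the hierarchical aggregation idea (not HCIA itself, and without its impact factor) is forward reachability over a dependence graph whose nodes carry their enclosing method, class, and package, with the impact set then reported per level. All nodes and locations below are invented.

```python
# Forward "slice" = reachable set in a dependence graph, aggregated
# upward to method, class, and package granularity.
import networkx as nx

dg = nx.DiGraph()
# node -> (package, class, method); edges are dependences. Invented.
loc = {"s1": ("p", "A", "m1"), "s2": ("p", "A", "m2"), "s3": ("p", "B", "m3")}
dg.add_edges_from([("s1", "s2"), ("s2", "s3")])

changed = {"s1"}
impacted = set().union(*(nx.descendants(dg, n) | {n} for n in changed))

print("statement level:", sorted(impacted))
print("method level:   ", sorted({loc[n][2] for n in impacted}))
print("class level:    ", sorted({loc[n][1] for n in impacted}))
print("package level:  ", sorted({loc[n][0] for n in impacted}))
```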
Recently, object-oriented specifications of distributed systems have gained more attention. The object-oriented approach is known for its flexibility in system construction. However, one of the major challenges is to provide facilities for the dynamic modification of such specifications during the development and maintenance process, and current work has not addressed the dynamic modification of specifications of distributed systems. In this paper, we are concerned with formal description techniques that allow for the development and dynamic modification of executable specifications. A two-level model for the evolution of large object-oriented specifications is introduced: the first level deals with the dynamic modification of types (classes), while the second deals with the modification of modules. We have defined a set of structural and behavioral constraints to ensure specification consistency after modification at both levels. To allow dynamic modification of types and modules, we have developed a reflective object-oriented specification language which uses meta-objects to support the modification operations. In this language, types and modules are objects.
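As a loose analogy in Python rather than the paper's specification language: because classes are first-class objects, a type can be modified at run time, with the modification guarded by a consistency check. The particular check shown is invented for illustration.

```python
# Modify a "type" at run time, rejecting modifications that would
# violate a simple structural constraint.
class Account:
    def __init__(self, balance):
        self.balance = balance

def add_method(cls, name, fn, uses):
    # Constraint: the new method may only use attributes that the
    # type's constructor already initialises.
    declared = cls.__init__.__code__.co_names   # attrs assigned in __init__
    for attr in uses:
        if attr not in declared:
            raise TypeError(f"{name} uses undeclared attribute {attr!r}")
    setattr(cls, name, fn)

add_method(Account, "deposit",
           lambda self, amt: setattr(self, "balance", self.balance + amt),
           uses=["balance"])

a = Account(100)
a.deposit(25)
print(a.balance)  # 125
```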
In a large software project, the number of classes, and the dependencies between them, generally increase as the software evolves. The size and scale of the system often make it difficult to identify the important components of a particular software product. To address this problem, we model software as a network, where the classes are the vertices and the dependencies are the edges, and apply K-core decomposition to identify a core subset of vertices as potentially important classes. We study three open-source Java projects over a 10-year period and demonstrate, using different metrics, that the K-core decomposition of the network can help identify the key classes of the corresponding software. Specifically, we show that the vertices with the highest core number represent the important classes, and that the core numbers of classes with similar functionalities follow similar trends over time.
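The core computation itself is standard and can be sketched with networkx; the toy graph below stands in for a class dependency network and is not one of the studied projects.

```python
# Compute core numbers and report the innermost (highest) core.
import networkx as nx

g = nx.Graph()  # class dependencies, direction ignored for coreness
g.add_edges_from([
    ("Core", "Util"), ("Core", "Parser"), ("Util", "Parser"),
    ("Core", "IO"), ("Util", "IO"), ("Parser", "IO"),   # dense cluster
    ("IO", "Logger"),                                    # peripheral
])

core_number = nx.core_number(g)          # k-core index per class
k_max = max(core_number.values())
key_classes = [c for c, k in core_number.items() if k == k_max]
print(f"innermost {k_max}-core:", sorted(key_classes))
# -> the four densely connected classes; "Logger" stays in the 1-core
```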
Nowadays, software development and maintenance are highly distributed processes that involve a multitude of supporting tools and resources. Knowledge relevant for a particular software maintenance task is typically dispersed over a wide range of artifacts in different representational formats and at different abstraction levels, resulting in isolated 'information silos'. An increasing number of task-specific software tools aim to support developers, but this often results in additional challenges, as not every project member can be familiar with every tool and its applicability for a given problem. Furthermore, historical knowledge about successfully performed modifications is lost, since only the result is recorded in versioning systems, but not how a developer arrived at the solution. In this research, we introduce conceptual models for the software domain that go beyond existing program and tool models, by including maintenance processes and their constituents. The models are supported by a pro-active, ambient, knowledge-based environment that integrates users, tasks, tools, and resources, as well as processes and history-specific information. Given this ambient environment, we demonstrate how maintainers can be supported with contextual guidance during typical maintenance tasks through the use of ontology queries and reasoning services.
Software systems have become business-critical for many companies. These systems are usually large and complex. Some have evolved over decades and are therefore known as legacy systems. Legacy systems need to be maintained and evolved due to many factors, including error correction, requirements changes, business rule changes, and structural re-organization. A fundamental problem in maintaining and evolving legacy systems is understanding the subject system. Reverse engineering is the process of analyzing a subject system (a) to identify the system's components and their interrelationships and (b) to create representations of the system in another form or at a higher level of abstraction. In this chapter, we discuss the problems, process, technologies, tools and future directions of reverse engineering.
In this chapter, we motivate and describe the use of rationale knowledge during software development. Rationale methods aim at capturing, representing, and maintaining records about why developers have made the decisions they have. They improve the quality of decisions through clarification of issues and their related tradeoffs. Moreover, they facilitate the understanding and reevaluation of decisions, which is an important prerequisite for managing change during software development. While there are several approaches for dealing with rationale knowledge, the systematic integration of rationale into software engineering processes and tools has yet to happen.
In this chapter, we first introduce the fundamental rationale concepts. Next, we identify the knowledge management tasks that are related to identifying, eliciting, organizing, disseminating, and using rationale knowledge. Based on this, we survey representative rationale methods and illustrate the issues involved with a more detailed example on rationale management for requirements. We conclude with a discussion of open issues and future directions in rationale research.
Although formal methods contribute significantly to improving software quality, their application is limited by rigorous constraints, especially in large systems. As an alternative, partially introducing formal methods avoids the high cost of treating entire projects formally. During software evolution, software quality metrics of the existing system can serve as useful heuristic criteria for selecting the parts that are worth formal treatment. We first select a broad set of candidates with high failure rates or high complexity, and then eliminate those with few customer problems and minimal development or maintenance effort, thereby obtaining a subset of the system for which applying formal development techniques is most likely to pay off. Case studies with three real-world systems are presented to illustrate our approach.
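The two-step selection heuristic can be sketched directly; the per-module metrics, thresholds, and module names below are all invented.

```python
# Step 1: broad candidates; Step 2: eliminate low-payoff modules.
modules = [
    {"name": "billing", "failures": 14, "complexity": 38,
     "customer_problems": 9, "maint_effort": 120},
    {"name": "report",  "failures": 11, "complexity": 41,
     "customer_problems": 1, "maint_effort": 3},
    {"name": "ui",      "failures": 2,  "complexity": 12,
     "customer_problems": 0, "maint_effort": 5},
]

# Step 1: candidates with a high failure rate or high complexity.
candidates = [m for m in modules
              if m["failures"] > 10 or m["complexity"] > 30]

# Step 2: drop modules with few customer problems and little
# development or maintenance effort.
worthwhile = [m for m in candidates
              if m["customer_problems"] >= 5 or m["maint_effort"] >= 50]

print([m["name"] for m in worthwhile])  # ['billing']
```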
Software engineering research has focused primarily on software construction, neglecting software maintenance and evolution. A shift in research from synthesis to analysis can be observed. The process of reverse engineering is introduced as an aid in program understanding: it is concerned with the analysis of existing software systems to make them more understandable for maintenance, re-engineering, and evolution purposes. We present reverse engineering technology developed as part of the Rigi project. The Rigi approach involves the identification of software artifacts in the subject system and the aggregation of these artifacts to form more abstract system representations. Early industrial experience has shown that software engineers using Rigi can quickly build mental models from the discovered abstractions that are compatible with the mental models formed by the maintainers of the underlying software.
This paper investigates a method for confirming software evolution based on Latent Dirichlet Allocation (LDA). LDA analyzes the interdependencies among words, topics, and documents, expressing those interdependencies as probabilities. We adopt LDA to model software evolution by treating each package in the source code as a document, regarding function (method) names, variable names, and comments as words, and computing the probabilities relating the three. By comparing the results with update reports, we can confirm whether a new version of the software is consistent with its update reports.
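A hedged sketch of the modeling step using scikit-learn's LDA implementation follows, with one "document" per source package built from identifier names and comments; the corpus, package names, and topic count are invented.

```python
# Fit LDA over per-package identifier/comment "documents" and print the
# topic distribution of each package.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

packages = {
    "net.app.parser": "parse token grammar tree node parse comment token",
    "net.app.gui":    "window button click render widget layout window",
}

vec = CountVectorizer()
counts = vec.fit_transform(packages.values())

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)   # P(topic | package)

for pkg, dist in zip(packages, doc_topic):
    print(pkg, [round(p, 2) for p in dist])
# Comparing these distributions across two releases against the update
# report indicates whether the documented changes match the code.
```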