The OpenLB project aims at providing an open-source implementation of lattice Boltzmann methods in an object-oriented framework. The code, written in C++, is intended to be used both by application programmers and by developers who may add their own particular dynamics. It supports advanced data structures that accommodate complex geometries and parallel program execution. The programming concepts rely strongly on dynamic genericity, through object-oriented interfaces, as well as static genericity, by means of templates. This design allows a straightforward and intuitive implementation of lattice Boltzmann models with almost no loss of efficiency. The aim of this paper is to introduce the OpenLB project and to describe the underlying structure that makes it a powerful development tool for lattice Boltzmann methods.
Comprehension of an object-oriented (OO) system, its design, and its use of OO features such as aggregation, generalisation, and other forms of association is a difficult task to undertake without the original design documentation for reference. In this paper, we describe the collection of high-level class metrics from the UML design documentation of five industrial-sized C++ systems. Two of the systems studied were libraries of reusable classes. Three hypotheses were tested between these high-level features and low-level class features, namely the number of class methods and attributes, in each of the five systems. A further two conjectures were then investigated to determine the features of key classes in a system and to investigate any differences, in terms of coupling, between the library-based systems and the other systems studied.
Results indicated that, for the three application-based systems, no clear patterns emerged for the hypotheses relating to generalisation. There was, however, a clear positive statistical significance, for all three application-based systems, between aggregation, other types of association, and the number of methods and attributes in a class. Key classes in the three application-based systems tended to contain large numbers of methods, attributes, and associations, and significant amounts of aggregation, but little inheritance. No consistent, identifiable key features could be found in the two library-based systems; both showed a distinct lack of any form of coupling (including inheritance) other than through the C++ friend facility.
As computers are used in nuclear safety systems, security engineering is becoming increasingly important in the nuclear industry. Like all highly technical endeavours, the development of nuclear safety systems is a knowledge-intensive task. Unfortunately, nuclear scientists and software engineers not only lack security knowledge but are also unfamiliar with the new security requirements. Moreover, few young people are studying nuclear science, nuclear engineering, and related fields. Knowledge management can therefore play a central role in encapsulating, storing, and spreading the relevant discipline and knowledge more efficiently in the nuclear industry. In this paper, we propose a security knowledge framework to gather and store security knowledge from regulatory-based security activities. We adopt an object-oriented paradigm, which is easy for software engineers to understand and well suited to expressing tacit and explicit knowledge. The framework aims to decouple platform-independent security knowledge from platform-specific security controls. Finally, an example is presented to demonstrate the feasibility of linking security controls with the knowledge ontology in our framework.
Object-oriented (OO) approaches to software development promised more maintainable and reusable systems, but the complexity resulting from OO features often introduces faults that are difficult to detect or anticipate during the software change process. Thus, the earlier such faults are detected and fixed, the lower the maintenance costs. Several OO metrics have been proposed for assessing the quality of OO design and code, and several empirical studies have been undertaken to validate the impact of OO metrics on fault proneness (FP). The question is: which metrics are useful in measuring the FP of OO classes? We therefore investigate the existing empirical validation of CK + SLOC metrics based on their state of significance, validation, and usefulness. Using the systematic literature review (SLR) methodology over a number of relevant article sources, we identified 29 relevant empirical studies. Further analysis indicates that coupling, complexity, and size measures have a strong impact on the FP of OO classes. Based on these results, we conclude that, when only CK + SLOC metrics are used, they can serve as good predictors for building quality fault models that can assist in focusing resources on the high-risk components liable to cause system failures.
Object-oriented software (OOS) dominates software development today and thus has to be of high quality and maintainable. However, the growing size and complexity of such systems affect both the delivery of high-quality software products and their maintenance. From the perspective of software maintenance, software change impact analysis (SCIA) is used to avoid performing changes in the "dark". Unfortunately, OOS classes are not without faults, and existing SCIA techniques predict only the impact set. The intuition is that if a class is faulty and a change is implemented on it, the risk of software failure increases. To balance these concerns, maintenance should incorporate both impact and fault-proneness (FP) prediction. Therefore, this paper proposes an extended SCIA approach that incorporates both activities. The goal is to provide information that can be used to focus verification and validation efforts on the high-risk classes that would probably cause severe failures when changes are made. This will in turn increase maintenance and testing efficiency and preserve software quality. This study constructed a prediction model using software metrics and fault data from a NASA data set in the public domain. The results obtained were analyzed and presented. Additionally, a tool called Class Change Recommender (CCRecommender) was developed to assist software engineers in computing the risks associated with making a change to any OOS class in the impact set.
This paper discusses an object-oriented software requirements analysis method. The approach adopted here draws a clear distinction between a system's basic structure (i.e. the object model) and its functionalities. The analysis model generated is a description of a problem domain; it consists of a set of primary and secondary objects that characterize the problem domain and a set of pseudo objects that define the functional requirements of a system.
There are two stages of analysis in the proposed method, Object Modelling and Functional Requirements Modelling, the second building upon the first. The aim of the object modelling stage is to derive a model of the problem domain in terms of objects, their classification, and their inter-relationships with one another. The functional requirements modelling stage builds upon this initial object model to complete the requirements analysis specification.
This paper uses a real-life library environment to illustrate how the method can be applied in the specification of an object-oriented software system.
This paper presents a framework for the formal specification of active database systems, and shows how the framework can be used to describe the functionality of three well known example systems, namely Starburst, POSTGRES and Ariel. The framework has been developed using Object-Z to structure specifications in a way that emphasises commonalities and key differences between the designs, and that is readily extensible to support new constructs and systems. Such a formal framework can be used to provide formal descriptions of systems that have previously been described only informally, to compare the functionalities of different systems by contrasting support for fundamental concepts, and as a basis for reasoning about rule bases in the context of different active rule systems. The paper also demonstrates the applicability of object-oriented formal methods to the specification of advanced database functionality.
This paper describes a 3-D visualization method based on the concept of characteristic views (CVs). The idea of characteristic views derives from the observation that the infinitely many possible views of a 3-D object can be grouped into a finite number of equivalence classes such that, within each class, all views are isomorphic in the sense that they have the same line-junction graph. For visualizing changes of scenes in real time, the BSP tree algorithm is known to be efficient in a static environment in which the viewpoint can be changed easily. However, if a scene consists of many objects and each object consists of many polygons, the time complexity of traversing a BSP tree increases rapidly, so the original BSP tree algorithm may no longer be efficient. The method proposed in this paper is object-oriented in the sense that, at the preprocessing stage, the ordering for displaying the objects is determined for all viewpoints. At run time, the objects are displayed according to the pre-calculated ordering for the current viewpoint. In addition, a CV is used as the basic 2-D projected image of a 3-D object.
This paper presents an object-oriented shadow generation algorithm for a large number of convex polyhedra (objects) in a 3-D scene. The shadow volume binary space partitioning (SVBSP) tree algorithm is known to be efficient in a static environment in which the point light source can be moved. However, if a scene consists of many objects and each object consists of many polygons, the time complexity of generating and traversing an SVBSP tree increases rapidly, because the SVBSP tree algorithm deals only with polygons, the components of objects. Furthermore, the SVBSP tree algorithm suffers from polygon-splitting problems, whose cost grows as the number of polygons increases. Our approach is object-oriented in the sense that the object, rather than the polygon, is used as the basic logical unit. In the preprocessing stage, the object ordering for shadow generation is determined for all possible light-source positions. At run time, the shadow detection algorithm is executed and, if necessary, shadow fragments are generated. We also present an approach to retrieving, updating, and displaying a 3-D static or dynamic world consisting of a large number of objects.
O-Raid [1, 2] uses a layered approach to provide support for objects on top of a distributed relational database system called RAID [3]. It reuses the replication controller of RAID to allow replication of simple objects as well as of composite objects. In this paper, we first describe experiments conducted on O-Raid that measure the overheads incurred in supporting objects through a layered implementation and the overheads involved in replicating objects. The overheads are low (e.g. 4 ms for an insert query involving objects). We then present experiments that evaluate three replication strategies for composite objects, namely full replication, selective replication, and no replication, in a two-site and a four-site O-Raid system. For the composite object experiments, the selective replication strategy demonstrated the flexibility of tuning the replication of member objects to the patterns of access. The experimentation was performed in different networking environments (LANs and WANs) to further evaluate the replication schemes. The results indicate that the selective replication scheme has greater benefits in WANs than in LANs.
This paper illustrates an object-oriented programming environment, called the Application Conference Interface (ACI), designed to facilitate the implementation of cooperative information systems. It interfaces developers of cooperative applications with services provided by a software platform called ImagineDesk. The platform offers a rich set of services that developers of cooperative applications can exploit to manage applications, to exchange multimedia data, and to control users' interactions according to their roles. Basically, the ACI provides a set of local abstractions of remote services. These abstractions take the form of local objects that hide the details of the underlying physical network from the application developer. By exploiting the object-oriented paradigm, the ACI confines the host environment and network constraints to a few easily upgradable objects, resulting in a highly system-independent architecture.
This article describes a planning approach based on object representation. A planning domain in OAP (Object-oriented Approach for Planning) consists of a dynamic set of objects, and OAP provides a language for modeling and implementing planning problems. The approach can evolve a domain model from a literal (predicative) representation to an object-based representation, as well as ease the development of planning problems. The goal of OAP is to make it possible to design and develop planning problems like any other software engineering problem, and to allow the application of planning to a larger class of domains by using methods (functions) implemented within the world objects. Planning systems using OAP as their language can be integrated into any existing object-oriented software with only a slight additional effort to transform the system into a planning domain model, which allows planning to be used to solve generic tasks in existing software applications (business, web, …). Planning in real-world systems thus becomes easier to model and implement using all the software engineering facilities offered by object-oriented tools.
This paper presents a multi-stage software design approach for fault-tolerance. In the first stage, a formalism is introduced to represent the behavior of the system by means of a set of assertions. This formalism enables an execution tree (ET) to be generated in which each path from the root to a leaf is, in fact, a well-defined formula (WDF). During the automatic generation of the execution tree, properties such as completeness and consistency of the set of assertions can be verified, and design faults can consequently be revealed. In the second stage, the testing strategy is based on the set of WDFs. This set represents the structural deterministic test for the model of the software system and provides a framework for generating a functional deterministic test for the code implementation of the model. This testing strategy can reveal implementation faults in the program code. In the third stage, the fault-tolerance of the software system against hardware failures is improved such that the design and implementation features obtained from the first two stages are preserved. The proposed approach provides a high level of user-transparency by employing the object-oriented principles of data encapsulation and polymorphism. The reliability of the software system against hardware failures is also evaluated. A tool, named the Software Fault-Injection Tool (SFIT), is developed to estimate the reliability of a software system.
Since the BSP tree algorithm was introduced, many extensions and applications of the original algorithm have been reported. Most of the previous work dealt with polygons that are components of polyhedra. If a scene consists of many polyhedra and they are allowed to move, the management cost of a BSP tree becomes expensive and the size of the tree becomes large. This paper presents an object-oriented BSP tree algorithm that deals more efficiently with a large number of moving polyhedra.
The development of remote sensing technology, especially the availability of high-resolution satellite imagery, has been applied to building recognition, hazard investigation, and rapid pre-evaluation in post-earthquake management. Existing pixel-oriented approaches, which are commonly used for high-resolution satellite imagery, have limitations in information extraction, ground-object classification, and processing speed. This paper presents an object-oriented method to extract earthquake-damaged building information from high-resolution remote sensing imagery of the 5.12 Wenchuan Earthquake. The method segments the whole image into non-intersecting image objects and then classifies these objects to extract damaged and undamaged buildings using image features such as spectral characteristics, textures, shapes, and their contexts. The results show higher classification precision than conventional methods.
In classical artificial intelligence and machine learning, the aim is to teach a program to find the most convenient and efficient way of solving a particular problem. However, these approaches are not suitable for simulating the evolution of human intelligence, since intelligence is a dynamically changing, volatile behavior that greatly depends on the environment an agent is exposed to. In this paper, we present several models of what should be considered when trying to simulate the evolution of intelligence of agents within a given environment. We explain several types of entropies and introduce a dominant-function model. By unifying these models, we explain how and why our ideas can be formally detailed and implemented using object-oriented technologies. The difference between our approach and those described in other papers that also approach evolution from the point of view of entropies is that ours focuses on a general system, modern implementation solutions, and extended models for each component in the system.
Most image processing systems contain a set of predefined data structures which represent images, edges, regions etc. However, there is usually little support for user-defined data types or for extensions of existing types. Furthermore, it is often difficult to store customized data structures in files, or to send them between different processes.
In this article, an object-oriented communication scheme based on persistent objects is presented. It will be demonstrated how such a mechanism can be incorporated into several existing image processing systems and what the advantages of the approach are. In this context, some general principles of object-oriented design, and how they can be applied to so-called computational networks, will also be discussed.
Much as object-oriented programming allows for the creation of more reusable components, it is the reuse of the design of an application that is most promising for attaining the goals of reusability. Object-oriented frameworks further design-level reuse, in that they allow for reusing the abstract design of an entire application, modelling each major component with an abstract class. Yet, application design based on frameworks remains a difficult endeavor, and a comprehensive approach to represent and document designs based on frameworks is still missing. In our work, we have developed a multi-layered model for framework reuse which comprises reuse objects at different levels of abstraction, most notably, micro-architectures. We have adopted, refined, and integrated novel techniques for the representation and documentation of micro-architectures and frameworks, namely, design patterns, contracts, and motifs. We believe that our approach is a valuable step towards better exploiting the reuse potential of frameworks.