This paper studies the problem of testing shared memory Java implementations to determine whether the memory behavior they provide is consistent. The problem is formulated as one of analyzing memory access traces, and its computational complexity is analyzed. The study shows that the problem is NP-complete, both in the general case and in particular cases in which the number of memory operations per thread, the number of write operations per variable, and the number of variables are restricted.
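As a rough illustration of the verification problem described above (not the paper's own algorithm), the sketch below brute-forces whether a set of per-thread access traces admits any sequentially consistent interleaving; the exponential search space hints at why the decision problem is hard. The operation encoding and the assumption that all variables start at 0 are mine.

```python
def sequentially_consistent(threads):
    """Return True if some interleaving of the per-thread traces is sequentially
    consistent: every read returns the most recent write to that variable
    (assumed initial value 0 if never written)."""
    def search(positions, memory):
        if all(positions[t] == len(threads[t]) for t in range(len(threads))):
            return True
        for t in range(len(threads)):
            if positions[t] == len(threads[t]):
                continue
            op, var, val = threads[t][positions[t]]
            if op == "w":
                saved = memory.get(var, 0)
                memory[var] = val
                positions[t] += 1
                if search(positions, memory):
                    return True
                positions[t] -= 1
                memory[var] = saved
            elif memory.get(var, 0) == val:      # read must see the latest value
                positions[t] += 1
                if search(positions, memory):
                    return True
                positions[t] -= 1
        return False
    return search([0] * len(threads), {})

# Classic example: both threads set their own flag, then read the other's.
# Observing both reads return 0 in the same run is not sequentially consistent.
t1 = [("w", "x", 1), ("r", "y", 0)]
t2 = [("w", "y", 1), ("r", "x", 0)]
print(sequentially_consistent([t1, t2]))  # False
```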
Testing parallel applications on a large number of processors is often impractical. Not only does it require access to scarce compute resources, but tracking down defects with the available debugging tools can be very time consuming. Highly parallel codes should be testable on a single processor, so that a developer's workstation is sufficient for executing and debugging test cases involving millions of processes. Thanks to their superstep structure, Bulk Synchronous Parallel (BSP) programs are well suited to this kind of testing. This paper presents a mocking library for BSPlib that enables testing of fast and complex parallel algorithms at scale.
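The sketch below is a minimal, hypothetical mock of the BSP execution model in Python, not the paper's library or the BSPlib C API: within each superstep all simulated processes run sequentially, and messages are only delivered at the next superstep, so a test can drive thousands of "processes" on one workstation. The function names (`run_bsp`, `send`) are illustrative.

```python
def run_bsp(program, nprocs, supersteps):
    """Execute a BSP-style program sequentially: in every superstep each
    simulated process runs to completion, and messages sent with `send`
    become visible only at the start of the next superstep."""
    inboxes = [[] for _ in range(nprocs)]        # messages visible this superstep
    states = [{} for _ in range(nprocs)]         # private per-process state
    for step in range(supersteps):
        outboxes = [[] for _ in range(nprocs)]   # messages for the next superstep
        for pid in range(nprocs):
            def send(dest, payload):
                outboxes[dest].append((pid, payload))
            program(step, pid, nprocs, states[pid], inboxes[pid], send)
        inboxes = outboxes                       # barrier: exchange messages
    return states

def total_sum(step, pid, nprocs, state, inbox, send):
    # Superstep 0: every process sends its local value to process 0.
    # Superstep 1: process 0 adds up what it received.
    if step == 0:
        state["local"] = pid + 1
        send(0, state["local"])
    elif step == 1 and pid == 0:
        state["total"] = sum(payload for _, payload in inbox)

states = run_bsp(total_sum, nprocs=1000, supersteps=2)
print(states[0]["total"])  # 500500, computed entirely on one processor
```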
This paper presents a structural approach for testing SRAM-based FPGAs that takes the configurability of these flexible devices into account. When SRAM-based FPGA testing is considered, two situations must first be distinguished: the Application-Oriented Test and the Manufacturing-Oriented Test. This paper concentrates on test pattern generation and design for testability (DFT) for an Application-Oriented test of SRAM-based FPGAs.
Reversible logic and quantum-dot cellular automata (QCA) are prospective pillars of quantum computing. These paradigms can potentially reduce the size and power of future chips while maintaining high speed. The RAM cell is a crucial component of computing devices, and designing one with a blend of reversible logic and QCA technology can overcome the limitations of conventional RAM structures. This motivates us to explore the design of a RAM cell using reversible logic in the QCA framework. The performance of a reversible circuit can be improved by employing a resilient reversible gate. This paper presents the design of a QCA-based reversible RAM cell using an efficient, fault-tolerant and low-power reversible gate. First, a novel reversible gate is proposed and implemented in QCA; its layout is designed around a unique multiplexer circuit. A comprehensive analysis of the gate against standard Boolean functions, cost function and power dissipation shows that the proposed gate is 75.43% more cost-effective and 58.54% more energy-efficient than existing reversible gates. To establish its inherent testability, the gate is rigorously tested against various faults and found to be 69.2% fault-tolerant. For all performance parameters, the proposed gate performs considerably better than existing ones. Furthermore, the gate is used to design a reversible D latch and a RAM cell, which are crucial modules of sequential logic circuits. The proposed latch is 45.4% more cost-effective than the previously reported D latch. A QCA-based RAM cell designed with reversible logic is novel and has not been reported earlier in the literature.
Quantum-dot cellular automata (QCA) is among the most promising nanotechnologies for designing digital electronic circuits, offering high switching frequency, low power consumption, small area, high speed and large-scale integration. Recently, much research has addressed the design of reversible logic gates, yet there is still strong demand for high-speed, high-performance and low-area QCA circuits. Reversible circuits have improved notably with developments in complementary metal–oxide–semiconductor (CMOS) and QCA technologies. In QCA systems, it is important that reversible gates communicate reliably with other circuits, so we use efficient approaches to design a 3×3 reversible circuit based on XOR gates. The suggested circuits can be widely used in reversible, high-performance systems. The proposed 3×3 reversible circuit in QCA is composed of 28 cells and occupies only 0.04 μm². Compared to the state of the art, the essential benefits of the suggested reversible gate design are shorter delay, smaller area, higher operating frequency and better performance. Full simulations have been conducted using the QCADesigner software. Additionally, the proposed 3×3 gate is schematized using two XOR gates.
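The defining property behind both of the gate designs above is reversibility: the gate's truth table must be a bijection between input and output vectors. The sketch below checks this for the well-known 3×3 double Feynman (F2G) gate, which can be built from two XOR gates; it is used here only as an illustration and is not necessarily the gate proposed in either paper.

```python
from itertools import product

def double_feynman(a, b, c):
    """Double Feynman (F2G) gate: (A, B, C) -> (A, A^B, A^C)."""
    return a, a ^ b, a ^ c

def is_reversible(gate, width):
    """A gate is reversible iff every distinct input vector maps to a
    distinct output vector, i.e. the truth table is a permutation."""
    outputs = {gate(*bits) for bits in product((0, 1), repeat=width)}
    return len(outputs) == 2 ** width

print(is_reversible(double_feynman, 3))                  # True
print(is_reversible(lambda a, b, c: (a, b, a & c), 3))   # False: information is lost
```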
Rule-based systems are typically tested with a set of inputs that produce known outputs, but this does not reveal how thoroughly the software has been exercised. Traditional test-coverage metrics do not account for the dynamic, data-driven flow of control in rule-based systems, and our literature review found little prior work on coverage metrics for them. This paper proposes test-coverage metrics for rule-based systems, derived from metrics defined in prior work, and presents an industrial-scale case study.
We conducted a case study to evaluate the practicality and usefulness of the proposed metrics. The study applied the metrics to a system of computational fluid-dynamics models built on a rule-based application framework and tested with a regression-test suite. The data-flow structure built by the application framework, together with the regression-test suite, provided the case-study data. The test suite was evaluated against three kinds of coverage. The measurements indicated that complete coverage was not achieved even for the lowest-level definition, and the lists of uncovered rules provided insight into how to improve the test suite. The case study illustrates that structural coverage measures can be used to assess the completeness of rule-based system testing.
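To make the idea concrete, the sketch below computes only the simplest conceivable coverage level, "each rule fired at least once across the suite", and reports the rules never exercised; the three coverage definitions used in the paper are not reproduced here, and the rule names are invented for illustration.

```python
def rule_coverage(all_rules, fired_per_test):
    """Fraction of rules fired at least once by the test suite, plus the
    rules never exercised (candidates for new test cases)."""
    fired = set().union(*fired_per_test) if fired_per_test else set()
    uncovered = sorted(set(all_rules) - fired)
    return len(fired & set(all_rules)) / len(all_rules), uncovered

coverage, missing = rule_coverage(
    all_rules=["update_pressure", "update_velocity", "limit_timestep", "check_cfl"],
    fired_per_test=[{"update_pressure", "update_velocity"},
                    {"update_pressure", "limit_timestep"}],
)
print(f"{coverage:.0%} of rules covered; never fired: {missing}")
# 75% of rules covered; never fired: ['check_cfl']
```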
Mandatory access control (MAC) mechanisms control which users or processes have access to which resources in a system. MAC policies are increasingly specified to facilitate managing and maintaining access control, but specifying them correctly is a very challenging problem. To formally and precisely capture the security properties that MAC should adhere to, MAC models are usually written to bridge the rather wide abstraction gap between policies and mechanisms. In this paper, we propose a general approach to property verification for MAC models. The approach defines a standardized structure for MAC models that supports both property verification and automated test-case generation. The approach expresses MAC models in the specification language of a model checker and generic access control properties in its property language; the model checker is then used to verify the integrity, coverage, and confinement of these properties for the MAC models, and test cases for system implementations of the models are generated via combinatorial covering arrays.
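As a small, hedged illustration of the covering-array idea mentioned above (a simplified greedy construction, not the generation procedure used in the paper), the sketch below builds a 2-way (pairwise) covering array over invented access-control parameters, so every pair of parameter values appears in at least one test.

```python
from itertools import combinations, product

def pairwise_covering_array(factors):
    """Greedy pairwise covering array: repeatedly pick the full combination
    covering the most not-yet-covered value pairs until all pairs are covered."""
    names = list(factors)
    def pairs_of(row):
        return {((names[i], row[i]), (names[j], row[j]))
                for i, j in combinations(range(len(names)), 2)}
    all_rows = list(product(*(factors[n] for n in names)))
    uncovered = set()
    for row in all_rows:
        uncovered |= pairs_of(row)
    tests = []
    while uncovered:
        best = max(all_rows, key=lambda r: len(pairs_of(r) & uncovered))
        tests.append(dict(zip(names, best)))
        uncovered -= pairs_of(best)
    return tests

tests = pairwise_covering_array({
    "subject_level": ["unclassified", "secret", "top_secret"],
    "object_level":  ["unclassified", "secret", "top_secret"],
    "operation":     ["read", "write"],
})
print(len(tests), "tests instead of", 3 * 3 * 2)  # 9 tests instead of 18 exhaustive ones
```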
Software risk stems mainly from poor reliability, yet how to effectively achieve high reliability remains a challenge. This paper puts forward a framework for systematically integrating formal specification, review, and testing, and shows how it can be applied to eliminate errors in the major phases of the software development process and thereby enhance software reliability. In this framework, requirements errors are removed and missing requirements are identified by formalizing requirements into formal specifications whose validity is ensured by rigorous review. The validated specification then serves as a firm foundation for implementation and for rigorous inspection, testing, and walkthrough of the implemented program. We discuss how formalization, review, and testing work together at different levels of software development to improve software reliability by detecting and removing errors in documentation.
Significant effort is being put into developing industrial applications of artificial intelligence (AI), especially those using machine learning (ML) techniques. Despite the intensive support for building ML applications, evaluating, assuring, and improving their quality and dependability remains challenging. The difficulty stems from the unique nature of ML: system behavior is derived from training data rather than from logical design by human engineers. This leads to black-box and intrinsically imperfect implementations that invalidate many principles and techniques of traditional software engineering. In light of this situation, the Japanese industry has jointly worked on a set of guidelines for the quality assurance of AI systems (in the Consortium of Quality Assurance for AI-based Products and Services) from the viewpoint of traditional quality-assurance engineers and test engineers. We report on the second version of these guidelines, which covers a list of quality-evaluation aspects, a catalogue of current state-of-the-art techniques, and domain-specific discussions in five representative domains. The guidelines provide significant insights for engineers in terms of methodologies and designs for tests driven by application-specific requirements.
PARFORMAN (PARallel FORMal ANnotation language) is a high-level specification language for expressing intended behavior or known types of error conditions when debugging or testing parallel programs. Models of intended or faulty target program behavior can be succinctly specified in PARFORMAN. These models are then compared with the actual behavior in terms of execution traces of events, in order to localize possible bugs. PARFORMAN can also be used as a general language for expressing computations over target program execution histories.
PARFORMAN is based on a precise model of target program behavior. This model, called H-space (History-space), is formally defined through a set of general axioms about three basic relations, which may or may not hold between two arbitrary events: they may be sequentially ordered (SEQ), they may be parallel (PAR), or one of them may be included in another composite event (IN).
The general notion of a composite event is exploited systematically, which makes more powerful and succinct specifications possible. The notion of an event grammar is introduced to describe the event patterns allowed over a certain application domain or language. Auxiliary composite events such as Snapshots are introduced to define the notion "occurred at the same time" at suitable levels of abstraction. Finally, patterns and aggregate operations on events are introduced to keep specifications short and readable. In addition to debugging and testing, PARFORMAN can also be used to specify profiles and performance measurements.
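To make the three relations concrete, the sketch below gives one possible interval-based reading of SEQ, PAR, and IN over nested, timestamped events; this is an illustrative interpretation in Python, not PARFORMAN's axiomatic H-space definition, and the event names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A (possibly composite) event with a time interval and nested sub-events."""
    name: str
    start: float
    end: float
    children: list = field(default_factory=list)

def IN(inner, outer):
    """inner is part of the composite event outer (directly or transitively)."""
    return inner in outer.children or any(IN(inner, c) for c in outer.children)

def SEQ(a, b):
    """a is sequentially ordered before b: a ends before b starts."""
    return a.end <= b.start

def PAR(a, b):
    """a and b are parallel: neither precedes nor contains the other."""
    return not SEQ(a, b) and not SEQ(b, a) and not IN(a, b) and not IN(b, a)

send = Event("send", 1.0, 2.0)
recv = Event("receive", 3.0, 4.0)
worker = Event("worker_task", 2.5, 4.5)
run = Event("program_run", 0.0, 5.0, children=[send, recv, worker])

print(SEQ(send, recv), PAR(recv, worker), IN(send, run))  # True True True
```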
Research on real-time systems now focuses on formal approaches to specifying and analyzing their behavior. Temporal logic is a natural candidate, since it can specify properties of event and state sequences. However, "pure" temporal logic cannot express "quantitative" aspects of time: concepts such as eventuality and fairness are essentially "qualitative" treatments of time, and pure temporal logic makes no reference to absolute time. For real-time systems, purely qualitative specification and analysis of time are inadequate. In this paper, we present a modification of temporal logic, Event-based Real-time Logic (ERL), built on our event-based conceptual model. ERL provides a high-level framework for specifying timing properties of real-time systems and can be implemented in the Prolog programming language. In our approach to testing and debugging real-time systems, ERL is used to specify both the expected behavior (specification) and the actual behavior (execution traces) of the target system and to verify that the target system meets the specification. A method for implementing ERL in Prolog for testing and debugging real-time systems is presented.
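The sketch below checks one typical quantitative timing property over an execution trace: every trigger event must be followed by a reaction within a deadline. It mimics the kind of property ERL is meant to express, but the encoding is an illustrative Python sketch rather than ERL's Prolog formulation, and the event names and deadline are invented.

```python
def bounded_response(trace, trigger, reaction, deadline):
    """Return the timestamps of `trigger` events that are NOT followed by a
    `reaction` event within `deadline` time units."""
    violations = []
    for t, name in trace:
        if name == trigger:
            ok = any(other == reaction and t <= t2 <= t + deadline
                     for t2, other in trace)
            if not ok:
                violations.append(t)
    return violations

trace = [(0.000, "sensor_request"), (0.004, "sensor_response"),
         (1.000, "sensor_request"), (1.020, "sensor_response")]
print(bounded_response(trace, "sensor_request", "sensor_response", 0.010))
# [1.0]  -> the second request missed its 10 ms deadline
```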
This paper presents the design and performance of a prototype of a new humanoid arm developed at the LARM2 laboratory of the University of Rome "Tor Vergata". The new arm, called the LARMbot PK arm, is an upper limb designed for the LARMbot humanoid robot. LARMbot is a humanoid robot designed to move freely in open spaces and to adapt to its task environment; its objective is to transport objects weighing a few kilograms in order to facilitate the restocking of workstations, to manage small warehouses, and to perform other tasks feasible for humanoids. The LARMbot PK arm is based on a parallel tripod structure driven by linear actuators to provide high agility of movement. The design uses components that are available on the market or can be 3D printed, offering a quality-to-price ratio well suited to user-oriented humanoid robots. Experimental tests with the built prototype are discussed to demonstrate the capabilities of the proposed solution in terms of agility, autonomy, and power, validating the LARMbot PK arm as a satisfactory choice for the new upper limbs of the LARMbot humanoid robot.
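For readers unfamiliar with parallel tripod structures, the sketch below shows the standard textbook inverse kinematics of such a mechanism: each linear actuator's required length is simply the distance between its base anchor and the corresponding platform anchor for a desired platform pose. The geometry and dimensions are invented and do not reproduce the actual LARMbot PK arm.

```python
import numpy as np

def tripod_leg_lengths(base_pts, platform_pts, platform_center, rotation=np.eye(3)):
    """Generic parallel-tripod inverse kinematics: actuator length i is the
    distance from base anchor i to platform anchor i in the target pose."""
    platform_world = platform_center + platform_pts @ rotation.T
    return np.linalg.norm(platform_world - base_pts, axis=1)

# Hypothetical geometry: three base anchors on a circle, a smaller platform,
# and a target platform centre 30 cm above the base.
base = np.array([[0.050, 0.000, 0.0], [-0.025, 0.043, 0.0], [-0.025, -0.043, 0.0]])
plat = 0.5 * base
target = np.array([0.0, 0.0, 0.30])
print(tripod_leg_lengths(base, plat, target))  # three actuator lengths in metres
```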
This chapter complements the chapters on technical reviews and software reliability engineering in Vol. 1 of the handbook. It is primarily concerned with the verification of code by means of testing, but an example of an informal proof of a program is also given. A practitioner's view of testing is taken throughout, including an overview of how testing is done at Microsoft.
The concept of a Visual Routine is introduced. A description is given of an implemented computer system which can correctly compute eleven common properties and relations in images of simple 2-D geometric shapes. A visual routine programming language is outlined. Issues relevant to the control of visual-routine-based search are discussed. The results of testing the system are reported…