
  Bestsellers

  • Article (No Access)

    COMPLEXITY OF VERIFYING JAVA SHARED MEMORY EXECUTION

    This paper studies the problem of testing shared-memory Java implementations to determine whether the memory behavior they provide is consistent. The problem is formalized as the analysis of memory access traces, and its complexity is examined. The analysis shows that the problem is NP-complete, both in the general case and in particular cases in which the number of memory operations per thread, the number of write operations per variable, and the number of variables are restricted.
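
    As a rough illustration of why this verification problem is hard, the sketch below brute-forces all interleavings of per-thread traces to decide whether a sequentially consistent explanation exists; the search space grows combinatorially with trace length. The trace format is invented for illustration, and this is not the paper's algorithm.

    ```python
    # Brute-force check for sequential consistency of per-thread memory traces.
    # Hypothetical trace format: ('w', var, value) writes, ('r', var, value) reads.
    # The exhaustive search over interleavings illustrates the combinatorial
    # blow-up behind the NP-completeness result; it is not the paper's algorithm.

    def is_sequentially_consistent(threads):
        n = len(threads)

        def search(positions, memory):
            if all(positions[i] == len(threads[i]) for i in range(n)):
                return True  # every operation placed in one legal interleaving
            for i in range(n):
                if positions[i] == len(threads[i]):
                    continue
                op, var, val = threads[i][positions[i]]
                if op == 'r' and memory.get(var, 0) != val:
                    continue  # a read must see the most recent write (or 0)
                new_memory = dict(memory)
                if op == 'w':
                    new_memory[var] = val
                advanced = positions[:i] + (positions[i] + 1,) + positions[i + 1:]
                if search(advanced, new_memory):
                    return True
            return False

        return search((0,) * n, {})

    # Classic litmus test: no single interleaving lets both reads return 0.
    t1 = [('w', 'x', 1), ('r', 'y', 0)]
    t2 = [('w', 'y', 1), ('r', 'x', 0)]
    print(is_sequentially_consistent([t1, t2]))  # False
    ```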

  • Article (No Access)

    Mock BSPlib for Testing and Debugging Bulk Synchronous Parallel Software

    Testing parallel applications on a large number of processors is often impractical. Not only does it require access to scarce compute resources, but tracking down defects with the available debugging tools can be very time consuming. Highly parallel codes should be testable one process at a time, so that a developer's workstation is sufficient for executing and debugging test cases involving millions of processes. Thanks to their supersteps, Bulk Synchronous Parallel (BSP) programs are well suited to this kind of testing. This paper presents a mocking library for BSPlib that enables testing of fast and complex parallel algorithms at scale.
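
    The core idea can be sketched as a single-threaded simulation of BSP supersteps: each simulated process runs to its barrier, and messages become visible only in the next superstep. All names below are invented for illustration; this is not the BSPlib or mock-library API.

    ```python
    # Toy, single-threaded simulation of BSP supersteps: processes run one at a
    # time, and messages sent in superstep k are delivered at the barrier, i.e.
    # become readable in superstep k+1.

    def run_bsp(nprocs, program):
        inboxes = [[] for _ in range(nprocs)]        # messages visible this superstep
        states = [{'pid': pid} for pid in range(nprocs)]
        superstep = 0
        while True:
            outboxes = [[] for _ in range(nprocs)]   # messages sent this superstep
            done = True
            for pid in range(nprocs):                # simulate processes sequentially
                def send(dest, payload):
                    outboxes[dest].append(payload)
                alive = program(states[pid], inboxes[pid], send, superstep)
                done = done and not alive
            if done:
                return states
            inboxes = outboxes                       # barrier: deliver messages
            superstep += 1

    # Example: each process sends its pid to its right neighbour, then sums input.
    def ring(state, inbox, send, superstep):
        if superstep == 0:
            send((state['pid'] + 1) % 4, state['pid'])
            return True                              # still running
        state['received'] = sum(inbox)
        return False                                 # finished

    for s in run_bsp(4, ring):
        print(s)
    ```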

  • Article (No Access)

    Some Aspects of the Test Generation Problem for an Application-Oriented Test of SRAM-Based FPGAs

    This paper presents a structural approach to testing SRAM-based FPGAs that takes into account the configurability of these flexible devices. When SRAM-based FPGA testing is considered, two distinct situations must first be identified: the Application-Oriented Test situation and the Manufacturing-Oriented Test situation. This paper concentrates on test pattern generation and design-for-testability (DFT) for an application-oriented test of SRAM-based FPGAs.

  • Article (No Access)

    QCA-Based RAM Design Using a Resilient Reversible Gate with Improved Performance

    Reversible logic and quantum-dot cellular automata (QCA) are prospective pillars of quantum computing. These paradigms can potentially reduce the size and power of future chips while maintaining high speed. The RAM cell is a crucial component of computing devices, and a RAM cell designed with a blend of reversible logic and QCA technology can surpass the limitations of conventional RAM structures. This motivates us to explore the design of a RAM cell using reversible logic in the QCA framework. The performance of a reversible circuit can be improved by utilizing a resilient reversible gate. This paper presents the design of a QCA-based reversible RAM cell using an efficient, fault-tolerant, and low-power reversible gate. First, a novel reversible gate is proposed and implemented in QCA; its QCA layout is designed using a unique multiplexer circuit. A comprehensive analysis of the gate is then carried out for standard Boolean functions, cost function, and power dissipation, showing that the proposed gate is 75.43% more cost-effective and 58.54% more energy-efficient than existing reversible gates. To establish the inherent testability of the proposed gate, it is rigorously tested against various faults and found to be 69.2% fault-tolerant. Across all performance parameters, the proposed gate performs considerably better than existing ones. Furthermore, the gate is explicitly used to design a reversible D latch and RAM cell, which are crucial modules of sequential logic circuits. The proposed latch is 45.4% more cost-effective than the formerly reported D latch. The design of a QCA-based RAM cell using reversible logic is novel and has not been reported earlier in the literature.
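
    A defining property of any reversible gate is that its truth table is a bijection on the input space. The sketch below checks that property exhaustively, using the standard Fredkin gate as a stand-in, since the abstract does not give the proposed gate's function.

    ```python
    from itertools import product

    # A reversible gate computes a bijection on its input space. This checks
    # that property for the Fredkin gate (controlled swap), used here only as
    # a stand-in for the paper's unspecified proposed gate.

    def fredkin(a, b, c):
        # If the control a is 1, swap b and c; otherwise pass inputs through.
        return (a, c, b) if a else (a, b, c)

    def is_reversible(gate, n_inputs):
        outputs = {gate(*bits) for bits in product((0, 1), repeat=n_inputs)}
        return len(outputs) == 2 ** n_inputs  # bijection iff all outputs distinct

    print(is_reversible(fredkin, 3))                           # True
    print(is_reversible(lambda a, b, c: (a, b, a and b), 3))   # False: AND loses information
    ```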

  • Article (No Access)

    A New Design of a 3×3 Reversible Circuit Based on a Nanoscale Quantum-Dot Cellular Automata

    Quantum-dot cellular automata (QCA) is one of the most promising nanotechnologies for designing digital electronic circuits, offering a higher switching frequency, low power consumption, small area, high speed, and large-scale integration. Recently, much research has addressed the design of reversible logic gates; nevertheless, there is high demand for high-speed, high-performance, low-area QCA circuits. Reversible circuits have improved notably with developments in complementary metal–oxide–semiconductor (CMOS) and QCA technologies. In QCA systems, it is important that reversible gates communicate reliably with other circuits. We therefore use efficient approaches to design a 3×3 reversible circuit based on XOR gates; the suggested circuits can be widely used in reversible and high-performance systems. The suggested architecture for the 3×3 reversible circuit in QCA comprises 28 cells, occupying only 0.04 μm². Compared to the state of the art, the essential benefits of the suggested reversible gate design are shorter delay, smaller area, higher operating frequency, and better performance. Full simulations have been conducted using QCADesigner software. Additionally, the proposed 3×3 gate has been schematized using two XOR gates.
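
    One well-known 3×3 reversible mapping built from exactly two XOR gates is the double Feynman gate (A, A⊕B, A⊕C). The abstract does not give the paper's exact mapping, so the sketch below uses the double Feynman gate purely to illustrate how such a two-XOR circuit can be checked.

    ```python
    from itertools import product

    # The double Feynman gate (a, a^b, a^c): a classic 3x3 reversible mapping
    # with exactly two XOR gates, standing in for the paper's unspecified gate.

    def double_feynman(a, b, c):
        return (a, a ^ b, a ^ c)

    # Reversibility: the map must be a permutation of the 8 input patterns.
    images = [double_feynman(*bits) for bits in product((0, 1), repeat=3)]
    assert len(set(images)) == 8, "not reversible"

    # A two-XOR gate of this shape is also its own inverse: applying it twice
    # recovers the original inputs.
    for bits in product((0, 1), repeat=3):
        assert double_feynman(*double_feynman(*bits)) == bits
    print("double Feynman gate is reversible and self-inverse")
    ```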

  • Article (No Access)

    EXPLOITING CHAOTIC DYNAMICS FOR A-D CONVERTER TESTING

    In this paper we discuss the possible use of chaotic signals for testing analog-to-digital converters (ADCs), with particular reference to the well-known Code Density Test (CDT, also called the Histogram Test). In particular, we discuss the implementation of a chaos-based discrete-time noise generator circuit and provide a theoretical analysis of its statistical characterization. The implementation of the chaos-based device is discussed with reference to a generic hardware architecture, taking into account the nonidealities introduced by noise and by the variability of circuit parameters. Based on this device, we propose a method for generating noisy samples that are distributed, over a target subinterval of the circuit output range, according to a probability density function (pdf) that can be made arbitrarily close to the ideal uniform pdf, in exchange for an acceptable reduction in the generation rate of uniformly distributed samples. Theoretical results, supported by two experiments, confirm the reliability of the proposed solution, showing that chaotic systems can be an alternative to traditional methods for generating the signals used in the Code Density Test of ADCs.
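
    The Code Density Test itself is simple to state: drive the converter with samples of known (ideally uniform) distribution, histogram the output codes, and estimate per-code differential nonlinearity (DNL) as the ratio of observed to ideal counts, minus one. The sketch below applies this to a hypothetical 3-bit quantizer with deliberately unequal code widths; it illustrates the test, not the paper's chaotic generator.

    ```python
    import random

    # Code Density (Histogram) Test sketch: feed uniform samples to an "ADC",
    # histogram the codes, and estimate DNL_k = h_k / h_ideal - 1.
    # The quantizer below is a made-up 3-bit ADC with non-uniform code widths.

    def adc3(x, thresholds):
        """Return the code of the first threshold interval containing x."""
        for code, t in enumerate(thresholds):
            if x < t:
                return code
        return len(thresholds)

    # Code 3 is twice the ideal width; codes 4 and 5 are half the ideal width.
    thresholds = [0.125, 0.25, 0.375, 0.625, 0.6875, 0.75, 0.875]

    N = 200_000
    hist = [0] * 8
    for _ in range(N):
        hist[adc3(random.random(), thresholds)] += 1

    ideal = N / 8
    for code, h in enumerate(hist):
        print(f"code {code}: DNL = {h / ideal - 1:+.2f}")  # ~+1.0 wide, ~-0.5 narrow
    ```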

  • Article (No Access)

    USING RULE STRUCTURE TO EVALUATE THE COMPLETENESS OF RULE-BASED SYSTEM TESTING: A CASE STUDY

    Rule-based systems are typically tested using a set of inputs that produce known outputs. However, one does not know how thoroughly the software has been exercised: traditional test-coverage metrics do not account for the dynamic, data-driven flow of control in rule-based systems, and our literature review found little prior work on coverage metrics for them. This paper proposes test-coverage metrics for rule-based systems derived from metrics defined in prior work, and presents an industrial-scale case study.

    We conducted a case study to evaluate the practicality and usefulness of the proposed metrics. The case study applied the metrics to a system for computational fluid-dynamics models built on a rule-based application framework. These models were tested using a regression-test suite. The data-flow structure built by the application framework, together with the regression-test suite, provided the case-study data. The test suite was evaluated against three kinds of coverage. The measurements indicated that complete coverage was not achieved, even under the lowest-level definition. Lists of rules not covered provided insight into how to improve the test suite. The case study illustrates that structural coverage measures can be used to gauge the completeness of rule-based system testing.
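
    At its simplest, such a metric compares the set of rules observed firing during the test run against the full rule base, and reports the rules never exercised. The rule names and log format below are invented for illustration.

    ```python
    # Minimal rule-coverage computation in the spirit of the proposed metrics:
    # coverage = |rules fired during the regression run| / |rule base|,
    # plus the list of rules never fired (candidates for new test cases).

    rule_base = {"init_mesh", "set_boundary", "refine_cell", "check_cfl",
                 "advance_step", "write_output", "handle_restart"}

    # Rules observed firing across the whole regression-test suite:
    fired = {"init_mesh", "set_boundary", "advance_step", "write_output"}

    coverage = len(fired & rule_base) / len(rule_base)
    uncovered = sorted(rule_base - fired)

    print(f"rule coverage: {coverage:.0%}")       # rule coverage: 57%
    print("rules never fired:", ", ".join(uncovered))
    ```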

  • Article (No Access)

    MODEL CHECKING FOR VERIFICATION OF MANDATORY ACCESS CONTROL MODELS AND PROPERTIES

    Mandatory access control (MAC) mechanisms control which users or processes have access to which resources in a system. MAC policies are increasingly specified to facilitate managing and maintaining access control, but specifying the policies correctly is a very challenging problem. To formally and precisely capture the security properties that MAC should adhere to, MAC models are usually written to bridge the rather wide gap in abstraction between policies and mechanisms. In this paper, we propose a general approach to property verification for MAC models. The approach defines a standardized structure for MAC models, providing for both property verification and automated generation of test cases. It expresses MAC models in the specification language of a model checker and generic access control properties in its property language, uses the model checker to verify the integrity, coverage, and confinement of these properties for the MAC models, and finally generates test cases via combinatorial covering arrays for the system implementations of the models.
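
    To make the flavor of such a property concrete, the sketch below brute-forces one generic MAC property, the Bell-LaPadula "no read up" rule, over a toy policy. A real model checker would verify this over the full transition system; the levels, subjects, and the deliberately buggy policy here are invented.

    ```python
    from itertools import product

    # Brute-force stand-in for model checking one generic MAC property:
    # "no read up" (a subject may not read an object at a higher level).

    LEVELS = {"public": 0, "confidential": 1, "secret": 2}
    subjects = {"alice": "secret", "bob": "public"}
    objects_ = {"memo": "confidential", "plan": "secret"}

    def may_read(subject, obj):
        # The policy under test: intentionally buggy, it ignores levels for "memo".
        if obj == "memo":
            return True
        return LEVELS[subjects[subject]] >= LEVELS[objects_[obj]]

    violations = [(s, o) for s, o in product(subjects, objects_)
                  if may_read(s, o) and LEVELS[subjects[s]] < LEVELS[objects_[o]]]
    print("no-read-up violations:", violations)  # [('bob', 'memo')]
    ```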

  • Article (No Access)

    A FRAMEWORK FOR INTEGRATING FORMAL SPECIFICATION, REVIEW, AND TESTING TO ENHANCE SOFTWARE RELIABILITY

    Software risk stems mainly from poor reliability, but how to achieve high reliability effectively remains a challenge. This paper puts forward a framework for systematically integrating formal specification, review, and testing, and shows how it can be applied to eliminate errors in the major phases of the software development process and thereby enhance software reliability. In this framework, requirements errors can be removed and missing requirements identified by formalizing requirements into formal specifications whose validity is ensured by rigorous review. The validated specification can then serve as a firm foundation for implementation and for rigorous inspection, testing, and walkthrough of the implemented program. We discuss how formalization, review, and testing work together at different levels of software development to improve software reliability by detecting and removing errors in documentation.

  • Article (Open Access)

    Guidelines for Quality Assurance of Machine Learning-Based Artificial Intelligence

    Significant effort is being put into developing industrial applications of artificial intelligence (AI), especially those using machine learning (ML) techniques. Despite the intensive support for building ML applications, challenges remain in evaluating, assuring, and improving their quality and dependability. The difficulty stems from the unique nature of ML: system behavior is derived from training data, not from logical design by human engineers. This leads to black-box and intrinsically imperfect implementations that invalidate many principles and techniques of traditional software engineering. In light of this situation, the Japanese industry has jointly worked on a set of guidelines for the quality assurance of AI systems (in the Consortium of Quality Assurance for AI-based Products and Services) from the viewpoint of traditional quality-assurance and test engineers. We report on the second version of these guidelines, which cover a list of quality evaluation aspects, a catalogue of current state-of-the-art techniques, and domain-specific discussions in five representative domains. The guidelines provide significant insights for engineers in terms of methodologies and designs for tests driven by application-specific requirements.
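
    One oracle-free technique of the kind such catalogues typically include is metamorphic testing: instead of asserting exact outputs (which ML systems rarely permit), one asserts a relation between outputs on related inputs. The classifier below is a trivial stand-in, not anything from the guidelines themselves.

    ```python
    # Metamorphic testing sketch: assert a relation between outputs on
    # transformed inputs rather than an exact expected output.

    def classify(image):
        """Toy 'model': brightness-threshold classifier on a grid of pixels."""
        mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
        return "bright" if mean > 0.5 else "dark"

    def mirror(image):
        return [list(reversed(row)) for row in image]

    # Metamorphic relation: horizontally mirroring an image must not change
    # the predicted class.
    tests = [
        [[0.9, 0.8], [0.7, 0.9]],
        [[0.1, 0.2], [0.3, 0.1]],
    ]
    for img in tests:
        assert classify(img) == classify(mirror(img)), "metamorphic relation violated"
    print("metamorphic relation held on all test inputs")
    ```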

  • Article (No Access)

    PARFORMAN—AN ASSERTION LANGUAGE FOR SPECIFYING BEHAVIOR WHEN DEBUGGING PARALLEL APPLICATIONS

    PARFORMAN (PARallel FORMal ANnotation language) is a high-level specification language for expressing intended behavior or known types of error conditions when debugging or testing parallel programs. Models of intended or faulty target program behavior can be succinctly specified in PARFORMAN. These models are then compared with the actual behavior in terms of execution traces of events, in order to localize possible bugs. PARFORMAN can also be used as a general language for expressing computations over target program execution histories.

    PARFORMAN is based on a precise model of target program behavior. This model, called H-space (History-space), is formally defined through a set of general axioms about three basic relations which may or may not hold between two arbitrary events: they may be sequentially ordered (SEQ), they may be parallel (PAR), or one may be included in another composite event (IN).

    The general notion of a composite event is exploited systematically, making more powerful and succinct specifications possible. The notion of an event grammar is introduced to describe the allowed event patterns over a certain application domain or language. Auxiliary composite events such as Snapshots are introduced to define the notion "occurred at the same time" at suitable levels of abstraction. Finally, patterns and aggregate operations on events are introduced to make specifications short and readable. In addition to debugging and testing, PARFORMAN can also be used to specify profiles and performance measurements.
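
    A small executable reading of the three H-space relations is sketched below: events carry (start, end) intervals and optional containment in a composite event, so SEQ means one event ends before the other starts, IN means containment, and PAR covers the remaining overlapping cases. Timestamped intervals are an assumption made here for illustration; the paper defines the relations axiomatically.

    ```python
    # Toy model of the SEQ / PAR / IN relations over trace events.

    class Event:
        def __init__(self, name, start, end, parent=None):
            self.name, self.start, self.end, self.parent = name, start, end, parent

    def relation(a, b):
        if a.parent is b or b.parent is a:
            return "IN"
        if a.end <= b.start or b.end <= a.start:
            return "SEQ"
        return "PAR"

    send = Event("send", 0, 2)
    recv = Event("recv", 3, 5)
    loop = Event("loop", 0, 10)
    iter1 = Event("iter1", 1, 4, parent=loop)

    print(relation(send, recv))   # SEQ
    print(relation(send, iter1))  # PAR (overlapping, no containment)
    print(relation(iter1, loop))  # IN
    ```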

  • Article (No Access)

    AN EVENT-BASED REAL-TIME LOGIC FOR THE SPECIFICATION AND ANALYSIS OF REAL-TIME SYSTEMS

    Research on real-time systems now focuses on formal approaches to specifying and analyzing their behavior. Temporal logic is a natural candidate, since it can specify properties of event and state sequences. However, "pure" temporal logic cannot express the "quantitative" aspects of time: concepts such as eventuality and fairness are essentially "qualitative" treatments of time, and pure temporal logic makes no reference to absolute time. For real-time systems, purely qualitative specification and analysis of time are inadequate. In this paper, we present a modification of temporal logic, Event-based Real-time Logic (ERL), based on our event-based conceptual model. ERL provides a high-level framework for specifying the timing properties of real-time systems. In our approach to testing and debugging real-time systems, ERL is used to specify both expected behavior (the specification) and actual behavior (execution traces) of the target system, and to verify that the target system meets its specification. A method for implementing ERL in the Prolog programming language for testing and debugging real-time systems is presented.
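
    The quantitative flavor of such a check can be sketched as follows: verify over an execution trace that every 'request' event is followed by a 'response' within a deadline. This is written in Python rather than the paper's Prolog, and the event names and trace format are invented for illustration.

    ```python
    # ERL-style quantitative timing check over an execution trace:
    # every 'request' must be answered by a 'response' within DEADLINE_MS.

    DEADLINE_MS = 50

    trace = [  # (timestamp_ms, event)
        (0, "request"), (30, "response"),
        (100, "request"), (180, "response"),   # late: 80 ms > 50 ms
    ]

    def check_deadline(trace, deadline):
        violations, pending = [], None
        for t, ev in trace:
            if ev == "request":
                pending = t
            elif ev == "response" and pending is not None:
                if t - pending > deadline:
                    violations.append((pending, t))
                pending = None
        return violations

    print(check_deadline(trace, DEADLINE_MS))  # [(100, 180)]
    ```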

  • Article (No Access)

    QUALITY OF SERVICE PREDICTION USING FUZZY LOGIC AND RUP IMPLEMENTATION FOR PROCESS ORIENTED DEVELOPMENT

    In a competitive business landscape, large organizations such as insurance companies and banks are under high pressure to innovate, improve, and differentiate their products and services while continuing to reduce the time-to-market of new product introductions. Because disconnected systems exist within an enterprise, generating a single view of the customer across systems and over time is critical. Therefore, to increase revenue and optimize costs, it is important to build enterprise systems aligned closely with the business requirements by reusing existing systems. When building distributed applications, it is important to adopt proven processes such as the Rational Unified Process (RUP) to mitigate risks and increase system reliability. This paper presents experiences in developing applications in Java Enterprise Edition (JEE) with a customized RUP, adopted in an onsite-offshore development model along with ISO 9001 and SEI CMM Level 5 standards. It describes an RUP-based approach to achieving increased reliability, higher productivity, and lower defect density, along with competitiveness through cost-effective custom software solutions. Early qualitative software-reliability prediction is performed using fuzzy expert systems, which estimate the expected number of defects in the software prior to experimental testing. The predicted results are then compared with the actual values obtained during testing.
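
    The fuzzy prediction step can be sketched as a tiny fuzzy inference: fuzzify one input with triangular membership functions, apply rules mapping it to defect-density levels, and defuzzify with a weighted centroid. The single input, membership shapes, and rule consequents below are all invented and far simpler than a real fuzzy expert system.

    ```python
    # Minimal fuzzy-expert sketch of early defect prediction: one input
    # (requirement complexity, 0-10) -> crisp defect-density estimate.

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def predict_defects_per_kloc(complexity):
        # Fuzzify: degree of membership in LOW / MEDIUM / HIGH complexity.
        mu = {
            "low": tri(complexity, -5, 0, 5),
            "medium": tri(complexity, 0, 5, 10),
            "high": tri(complexity, 5, 10, 15),
        }
        # Rule consequents: assumed defect density for each complexity level.
        consequent = {"low": 2.0, "medium": 5.0, "high": 9.0}
        total = sum(mu.values())
        return sum(mu[k] * consequent[k] for k in mu) / total

    print(predict_defects_per_kloc(3.0))  # 3.8, between the LOW and MEDIUM levels
    ```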

  • Article (No Access)

    Two-Dimensional Generalized Framework to Determine Optimal Release and Patching Time of a Software

    Demand for highly reliable software is increasing day by day, which in turn has increased the pressure on software firms to deliver reliable software quickly. Ensuring high reliability requires prolonged testing, which consumes more resources and is not feasible in the current market situation. To overcome this, software firms provide patches after the software release to fix remaining bugs and give users a better product experience. A patch, or update/fix, is a small piece of software that repairs bugs; with such patches, organizations enhance the performance of the software. Delivering patches after release, however, demands extra effort and resources, which is costly and hence uneconomical for firms. Moreover, an early patch release might fix bugs improperly, while a delayed release increases the chance of failures during the operational phase. Determining the optimal patch release time is therefore imperative. To address these issues, we formulate a two-dimensional, time- and effort-based cost model to determine the optimal release and patch times of software such that the total cost is minimized. The proposed model is validated on a real-life data set.
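
    The shape of such an optimization can be sketched with a toy model: an exponential software reliability growth curve for cumulative faults found, costs for pre-release testing, patch development, and field failures, minimized by grid search over release time T and patch time P. All parameters below are invented, and the paper's two-dimensional (time and effort) model is richer than this.

    ```python
    import math

    # Toy release/patch optimization with an exponential SRGM
    # m(t) = a * (1 - exp(-b * t)) for cumulative faults detected by time t.

    a, b = 100.0, 0.05          # total faults, detection rate (assumed)
    C_TEST, C_PATCH, C_FIELD = 10.0, 30.0, 300.0  # cost rates / cost per fault
    HORIZON = 200.0             # end of the software's market life

    def m(t):
        return a * (1 - math.exp(-b * t))

    def total_cost(T, P):
        field_faults = m(HORIZON) - m(P)          # faults surviving the patch
        return C_TEST * T + C_PATCH * (P - T) + C_FIELD * field_faults

    best = min((total_cost(T, P), T, P)
               for T in range(10, 150, 5)
               for P in range(T + 5, 200, 5))
    print(f"cost={best[0]:.0f} at release T={best[1]}, patch P={best[2]}")
    ```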

  • Article (No Access)

    Cost-Reliability-Optimal Release Time of Software with Patching Considered

    The testing life cycle poses the problem of achieving a high level of software reliability while also achieving an optimal release time. To enhance reliability, retain the software's market potential, and reduce testing cost, an enterprise needs to know when to release the software and when to stop testing. To achieve this, enterprises usually release their product early and then release patches subsequently. Software patching is a process through which enterprises debug, update, or enhance their software; a patch is a piece of software designed to update a program or its supporting data to fix or improve it. Used as a debugging process, patching enables an optimal release, increasing the reliability of the software while reducing the economic overhead of testing. Today, due to the diverse and distributed nature of software, its journey in the market is dynamic, making patching an inherent aspect of testing. Researchers have worked on minimizing testing cost, but reliability has so far not been considered in models for optimal time scheduling with patching. In this paper, we address testing cost, software release time, and a desired reliability level together, proposing a reliability growth model that incorporates software patching to make the software system reliable and cost-effective. The numerical illustration is based on a real-life software failure data set.
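
    The reliability side can be sketched as follows: with an exponential reliability growth curve, the probability of no failure in the next x time units after release at T is R(x|T) = exp(-(m(T+x) - m(T))), and post-release patching can be modeled crudely as debugging continuing at a reduced rate. Parameters and the patch-rate assumption below are invented; the paper's model is more detailed.

    ```python
    import math

    # Exponential SRGM with a crude patching phase: after release, remaining
    # faults keep being removed, but at a fraction of the testing rate.

    a, b = 100.0, 0.05
    PATCH_RATE = 0.5    # assumed fraction of the detection rate kept after release

    def m(t, release=None):
        if release is None or t <= release:
            return a * (1 - math.exp(-b * t))
        m_rel = a * (1 - math.exp(-b * release))
        remaining = a - m_rel
        return m_rel + remaining * (1 - math.exp(-b * PATCH_RATE * (t - release)))

    def reliability(x, T):
        # P(no failure in (T, T+x]) for a nonhomogeneous Poisson process.
        return math.exp(-(m(T + x, release=T) - m(T, release=T)))

    # Earliest release time giving R(10 | T) >= 0.95 under continued patching:
    T = next(t for t in range(1, 400) if reliability(10, t) >= 0.95)
    print(T, reliability(10, T))
    ```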

  • Article (No Access)

    A MULTI-STAGED SOFTWARE DESIGN APPROACH FOR FAULT TOLERANCE

    This paper presents a multi-staged software design approach for fault tolerance. In the first stage, a formalism is introduced to represent the behavior of the system by means of a set of assertions. This formalism enables an execution tree (ET) to be generated in which each path from the root to a leaf is a well-defined formula (WDF). During the automatic generation of the execution tree, properties such as completeness and consistency of the set of assertions can be verified, revealing design faults. In the second stage, the testing strategy is based on the set of WDFs: this set represents a structural deterministic test for the model of the software system and provides a framework for generating a functional deterministic test for the code implementation of the model, revealing implementation faults in the program code. In the third stage, the fault tolerance of the software system against hardware failures is improved in a way that preserves the design and implementation features obtained from the first two stages. The proposed approach provides a high level of user transparency by employing the object-oriented principles of data encapsulation and polymorphism. The reliability of the software system against hardware failures is also evaluated, using a tool named the Software Fault-Injection Tool (SFIT) developed to estimate the reliability of a software system.
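
    A fault-injection experiment of the general kind SFIT performs can be sketched as follows: emulate transient hardware faults by flipping a random bit in an intermediate value, guard the computation with an acceptance test plus a retry (a minimal recovery block), and estimate delivered reliability as the fraction of correct final results. The workload, fault model, and acceptance test below are all invented; this is not the SFIT tool itself.

    ```python
    import random

    # Toy fault-injection reliability estimate with a minimal recovery block.

    def faulty_sum(values, p_fault):
        total = 0
        for v in values:
            if random.random() < p_fault:
                v ^= 1 << random.randrange(16)   # emulate a transient bit flip
            total += v
        return total

    def tolerant_sum(values, p_fault):
        for _ in range(2):                       # primary try, then one retry
            result = faulty_sum(values, p_fault)
            if 0 <= result <= len(values) * max(values):  # crude acceptance test
                return result
        return result

    values = list(range(1, 101))
    golden = sum(values)
    trials = 20_000
    ok = sum(tolerant_sum(values, p_fault=0.001) == golden for _ in range(trials))
    print(f"estimated reliability under injected faults: {ok / trials:.3f}")
    ```

    Note how the crude range check catches large corruptions (high-bit flips) and triggers the retry, while small corruptions slip through; imperfect acceptance tests are exactly why the delivered reliability must be measured rather than assumed.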

  • Article (No Access)

    DESIGNING COMPONENT TEST PLANS FOR SYSTEM RELIABILITY VIA MATHEMATICAL PROGRAMMING

    For prediction or verification of system reliability, it is often necessary to test individually the components that comprise the system. The question then arises of how the total test effort should be allocated among the components so as to minimize test costs. This paper describes the role of mathematical programming in obtaining optimal test plans. The problem is formulated using the notions of producer's and consumer's risks from traditional acceptance sampling plans. Examples are given for different distributions of component failure times and for a series and a parallel system.
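
    One classical instance of this allocation problem has a closed-form solution: with zero-failure testing, n_i tests of component i demonstrate reliability beta^(1/n_i) at confidence 1-beta, a series system demonstrates the product of those bounds, and minimizing total cost sum(c_i * n_i) under that constraint yields n_i proportional to 1/sqrt(c_i). The costs and targets below are invented examples, and the paper's mathematical-programming formulation is more general.

    ```python
    import math

    # Zero-failure test allocation for a series system:
    # constraint sum(1/n_i) <= K with K = ln(R_target)/ln(beta) (both logs < 0);
    # the Lagrangian solution of the continuous relaxation gives
    # n_i = (sum_j sqrt(c_j)) / (K * sqrt(c_i)), rounded up to integers.

    beta = 0.10               # consumer's risk: demonstrate at 90% confidence
    R_target = 0.95           # required demonstrated system reliability
    costs = [4.0, 1.0, 9.0]   # per-test cost of each component (assumed)

    K = math.log(R_target) / math.log(beta)
    s = sum(math.sqrt(c) for c in costs)
    n = [math.ceil(s / (K * math.sqrt(c))) for c in costs]

    demonstrated = math.prod(beta ** (1 / ni) for ni in n)
    print("tests per component:", n)
    print(f"total cost: {sum(c * ni for c, ni in zip(costs, n)):.0f}")
    print(f"demonstrated system reliability: {demonstrated:.3f} >= {R_target}")
    ```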

  • Article (No Access)

    FEATURES

      The Appearance and Development of Commercial Laboratories in China.

      Independent Medical Laboratories in China - A Sunrise Industry under the Circumstances of Healthcare Reformation.

      Tracing the Rise of KingMed and its Future Route - A Correspondence with Hongbo Li.

      Establishment of IML Quality Managerial System in China.

      The Collaboration between IML and Community Medical Hospitals: Supplementary Service with Tests, Technologies and Beyond.

      The Collaboration between IML and Major Medical Institutions - Supplement Service with Esoteric Testing.

      Notice from Ministry of Health on Printing and Distributing "Basic Standards for Medical Laboratory (on Trial)" - Ministry of Health of the People's Republic of China.

  • Article (No Access)

    EYE ON CHINA

      Yak genome provides new insights into high altitude adaptation.

      Gentris and Shanghai Institutes of Preventative Medicine expand collaboration.

      Chinese researchers identify rice gene enhancing quality, productivity.

      Quintiles opens new Center of Excellence in Dalian to support innovative drug development.

      BGI demonstrated genomic data transfer at nearly 10 gigabits per second between US and China.

      Quintiles deepens investment in China - New Quintiles China Headquarters and local lab testing solution announced.

      Beike earns AABB Accreditation for cord blood and cord tissue banking.

      Epigenomic differences between newborns and centenarians provide insight into the understanding of aging.

  • Article (No Access)

    FEATURES

      Metabolic Syndrome and Diabetes: Current Asian Perspectives.

      A Crisis in the Development of Antibiotics.

      The Marketing of Unapproved Stem Cell Products: An Industry-wide Challenge.

      Draining the Goodwill of Science – The Direct-to-Consumer Genetic Testing Industry in East Asia.

      Biodiesel – From Lab to Industry.

      The Appearance and Development of Commercial Laboratories in China.

      Cord Blood Banking – To Go Public or Stay Private.

      Open Source – The Future of Drug Discovery.

      VACCINES – Where are we headed?

      Leveraging on External Expertise.