
  Bestsellers

  • Article (No Access)

    REDUNDANCY AND THE JUSTIFICATION FOR FOURTH NORMAL FORM IN RELATIONAL DATABASES

    The relationship between the absence of redundancy in relational databases and fourth normal form (4NF) is investigated. A relation scheme is defined to be redundant if there exists a legal relation defined over it which has at least two tuples that are identical on the attributes in a functional dependency (FD) or multivalued dependency (MVD) constraint. Depending on whether the dependencies in a set of constraints or the dependencies in the closure of the set are used, two different types of redundancy are defined. It is shown that the two types of redundancy are equivalent and that their absence in a relation scheme is equivalent to the 4NF condition.
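The redundancy definition above can be sketched directly in code. The following is a minimal illustration (the helper name and sample relation are invented, not from the paper): a relation is redundant with respect to an FD if two tuples agree on every attribute appearing in the dependency.

```python
# Sketch of the redundancy test: a relation is redundant w.r.t. a dependency
# X -> Y if at least two tuples are identical on all attributes in X and Y.
def is_redundant(rows, fd):
    """rows: list of dicts (tuples); fd: (lhs, rhs) attribute-name tuples."""
    lhs, rhs = fd
    dep_attrs = sorted(set(lhs) | set(rhs))
    seen = set()
    for row in rows:
        key = tuple(row[a] for a in dep_attrs)
        if key in seen:          # a second tuple repeats the same dependency facts
            return True
        seen.add(key)
    return False

# FD Course -> Teacher: two enrolments both record the fact "DB is taught
# by Smith", so the scheme stores redundant information.
rows = [
    {"Student": "Ann", "Course": "DB", "Teacher": "Smith"},
    {"Student": "Bob", "Course": "DB", "Teacher": "Smith"},
]
print(is_redundant(rows, (("Course",), ("Teacher",))))   # True
```

Decomposing the scheme so that Course -> Teacher holds in its own relation removes this repetition, which is the intuition the paper formalizes for 4NF.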

  • Article (No Access)

    REDUNDANCY ISSUES IN SOFTWARE AND HARDWARE SYSTEMS: AN OVERVIEW

    Redundancy is a widely used technique for building computing systems that continue to operate satisfactorily in the presence of faults in hardware and software components. The principal objective of applying redundancy is to achieve reliability goals subject to techno-economic constraints. Owing to the abundance of applications in both industrial and military organizations, especially in embedded fault-tolerant systems such as telecommunication, distributed computer systems, and automated manufacturing systems, the reliability and dependability measures of redundant computer-based systems have become attractive features for system designers and production engineers. However, even with the best design of redundant computer-based systems, software and hardware failures may still occur through many failure mechanisms, leading to serious consequences such as large economic losses and risk to human life. The objective of the present survey article is to discuss key aspects, failure consequences, and methodologies of redundant systems, along with software and hardware redundancy techniques developed at the reliability engineering level. The methodological steps required to build a block diagram composed of components in different configurations, as well as Markov and non-Markov state transition diagrams representing the structure of the system, are elaborated. Furthermore, we describe the reliability of a specific redundant system and compare it with a non-redundant system to demonstrate the tractability of the proposed models and their performance analysis.
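As a numeric illustration of the redundant-versus-non-redundant comparison the survey describes (a generic textbook model with exponential lifetimes, not the paper's specific Markov analysis; the failure rate and mission time are invented):

```python
import math

def series_reliability(rates, t):
    # a series system works only if every component works
    return math.exp(-sum(rates) * t)

def parallel_reliability(rates, t):
    # a parallel-redundant system fails only when every unit has failed
    prob_all_fail = 1.0
    for lam in rates:
        prob_all_fail *= (1.0 - math.exp(-lam * t))
    return 1.0 - prob_all_fail

lam, t = 0.01, 50.0                 # failures/hour, mission time in hours
single = math.exp(-lam * t)         # non-redundant baseline
duplex = parallel_reliability([lam, lam], t)
print(f"single: {single:.4f}  duplex: {duplex:.4f}")
# single: 0.6065  duplex: 0.8452
```

The duplicated unit raises mission reliability from about 0.61 to about 0.85 at the cost of doubling the hardware, which is the techno-economic trade-off the survey refers to.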

  • Article (No Access)

    Unsupervised Feature Selection Based on Spectral Clustering with Maximum Relevancy and Minimum Redundancy Approach

    In this paper, two novel approaches for unsupervised feature selection are proposed based on spectral clustering. In the first proposed method, spectral clustering is applied over the features, and the cluster centers, together with their nearest neighbors, are selected. These features have minimal similarity (redundancy) among themselves since they belong to different clusters. Next, the samples of the data set are clustered with spectral clustering, and the samples of each cluster are assigned a specific pseudo-label. According to the obtained pseudo-labels, the information gain of each feature is then computed, which secures maximum relevancy. Finally, the intersection of the features selected in the two previous steps is taken, which simultaneously guarantees both maximum relevancy and minimum redundancy. The second proposed approach is very similar to the first; its only, but significant, difference is that it selects one feature from each cluster and sorts all the features by their relevancy. By appending the selected features to a sorted list and excluding them from the next step, the algorithm continues with the remaining features until all features have been appended to the sorted list. Both proposed methods are compared with state-of-the-art methods, and the obtained results confirm the performance of the proposed approaches, especially the second one.
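The relevancy/redundancy intersection in the first method can be sketched as follows (all data, the threshold, and the cluster assignments are invented; a real implementation would obtain the feature clusters and pseudo-labels from spectral clustering rather than hard-coding them):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    # reduction in label entropy after observing the (discretised) feature
    base = entropy(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for fv, l in zip(feature_values, labels) if fv == v]
        cond += len(subset) / len(labels) * entropy(subset)
    return base - cond

pseudo_labels = [0, 0, 1, 1]              # from clustering the samples
features = {                              # discretised feature columns
    "f1": [0, 0, 1, 1],                   # tracks the pseudo-labels
    "f2": [0, 1, 0, 1],                   # uninformative
    "f3": [1, 1, 0, 0],
}
cluster_representatives = {"f1", "f2"}    # one feature per feature-cluster
relevant = {f for f in features
            if info_gain(features[f], pseudo_labels) > 0.5}
selected = cluster_representatives & relevant   # non-redundant AND relevant
print(selected)                                 # {'f1'}
```

Here `f3` is relevant but redundant (it mirrors `f1`'s cluster), and `f2` is non-redundant but irrelevant; only `f1` survives the intersection.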

  • Article (No Access)

    OPEN QUESTIONS IN COMPUTATIONAL MOTOR CONTROL

    Computational motor control covers all applications of quantitative tools to the study of the biological movement control system. This paper provides a review of the field in the form of a list of open questions. After an introduction in which we define computational motor control, we describe: a Turing-like test for motor intelligence; internal models, inverse models, forward models, feedback error learning, and the distal teacher; time representation and adaptation to delay; intermittent control strategies; equilibrium hypotheses and threshold control; the spatiotemporal hierarchy of wide-sense adaptation, i.e., feedback, learning, adaptation, and evolution; optimization-based models for trajectory formation and optimal feedback control; and motor memory, past and future; we conclude with the virtue of redundancy. Each section starts with a review of the relevant literature and a few more specific studies addressing the open question, and ends with speculations about the possible answer and its implications for motor neuroscience. The review aims to cover the topic concisely from the author's perspective, with emphasis on learning mechanisms and the various structures and limitations of internal models.

  • Article (No Access)

    THE UNIQUENESS THEOREM FOR COMPLEX-VALUED NEURAL NETWORKS WITH THRESHOLD PARAMETERS AND THE REDUNDANCY OF THE PARAMETERS

    This paper proves the uniqueness theorem for 3-layered complex-valued neural networks in which the threshold parameters of the hidden neurons can take non-zero values. That is, if a 3-layered complex-valued neural network is irreducible, then the 3-layered complex-valued neural network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of the network's learnable parameters.

  • Article (No Access)

    An Image Encryption Algorithm Based on Information Hiding

    To resolve the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed, built on the "one-time pad" idea. A random parameter is introduced to ensure a different keystream for each encryption, giving the scheme "one-time pad" characteristics and improving its security substantially without a significant increase in algorithm complexity. The random parameter is embedded into the ciphered image with information hiding technology, which avoids a separate negotiation for its transport and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen-plaintext attack, differential attack, and divide-and-conquer attack, and that ciphered images have good statistical properties.
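A toy sketch of the two ideas, not the paper's actual cipher: a fresh random parameter drives a keyed keystream on each encryption, and the parameter is then hidden in the least-significant bits of the first ciphertext bytes so it never has to be transmitted separately. The SHA-256 keystream and all function names are assumptions for illustration; the paper uses a chaotic keystream.

```python
import random, hashlib

def keystream(key: bytes, nonce: int, n: int) -> bytes:
    # counter-mode hash keystream; the nonce is the per-encryption parameter
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(pixels: bytes, key: bytes) -> bytes:
    nonce = random.getrandbits(32)            # fresh random parameter each call
    ks = keystream(key, nonce, len(pixels))
    cipher = bytearray(p ^ k for p, k in zip(pixels, ks))
    for i in range(32):                       # hide the 32-bit nonce in the
        bit = (nonce >> i) & 1                # LSBs of the first 32 bytes
        cipher[i] = (cipher[i] & 0xFE) | bit
    return bytes(cipher)

def decrypt(cipher: bytes, key: bytes) -> bytes:
    nonce = sum((cipher[i] & 1) << i for i in range(32))  # extract the nonce
    ks = keystream(key, nonce, len(cipher))
    plain = bytearray(c ^ k for c, k in zip(cipher, ks))
    for i in range(32):
        plain[i] &= 0xFE      # these LSBs carried the nonce, not image data
    return bytes(plain)
```

In this simplification the first 32 pixels lose their least-significant bit to the hidden parameter (imperceptible in an image); the paper's scheme embeds the parameter without that loss.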

  • Article (No Access)

    OPTIMAL COMPONENT SELECTION OF COTS BASED SOFTWARE SYSTEM UNDER CONSENSUS RECOVERY BLOCK SCHEME INCORPORATING EXECUTION TIME

    Computer-based systems have increased dramatically in scope, complexity, and pervasiveness. Most industries are highly dependent on computers for their basic day-to-day functioning. Safe and reliable software operation is an essential requirement for many systems across different industries. The number of functions to be included in a software system is decided during software development. Any software system must be constructed so that execution can resume even after a failure occurs, with minimal loss of data and time. Software systems that can continue execution even in the presence of faults are called fault-tolerant software. When a failure occurs, one of the redundant software modules is executed, preventing system failure. Fault-tolerant software systems are usually developed by integrating COTS (commercial off-the-shelf) software components; the motivation for using COTS components is that they reduce overall system development cost and time. In this paper, reliability models for fault-tolerant consensus recovery blocks are analyzed. In the first optimization model, we formulate a joint optimization problem in which reliability maximization of the software system and execution-time minimization for each function are considered under a budgetary constraint. In the second model, the issue of compatibility among the alternatives available for different modules is discussed. Numerical illustrations are provided to demonstrate the developed models.
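The first optimization model can be illustrated with a brute-force sketch (the module data, budget, and lexicographic tie-breaking are invented for illustration; the paper formulates this as a mathematical program rather than enumeration):

```python
from itertools import product

# one COTS alternative per module: (reliability, cost, execution_time)
modules = [
    [(0.90, 3, 5), (0.95, 5, 6)],
    [(0.85, 2, 4), (0.92, 4, 3)],
]
budget = 8

best = None
for choice in product(*modules):          # one alternative chosen per module
    cost = sum(c for _, c, _ in choice)
    if cost > budget:                     # budgetary constraint
        continue
    rel = 1.0
    for r, _, _ in choice:
        rel *= r                          # series system: all modules must work
    time = sum(t for _, _, t in choice)
    key = (rel, -time)                    # maximise reliability, then speed
    if best is None or key > best[0]:
        best = (key, choice)
print(best[1])
```

With these numbers the cheap-but-reliable pairing `((0.90, 3, 5), (0.92, 4, 3))` wins: the highest-reliability combination exceeds the budget, so the optimizer trades off as the model intends.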

  • Article (No Access)

    A STUDY OF THE ASSOCIATION BETWEEN DOWNSIZING AND INNOVATION DETERMINANTS

    Using survey data from UK firms that engaged in downsizing, this paper explores the link between downsizing and innovation determinants. We suggest that this relationship is contingent upon the speed with which downsizing is implemented and its severity. Overall, the results confirm our general proposition and shed new light on the relationship between downsizing and both innovation enhancers and barriers to innovation.

  • Chapter (No Access)

    A General Framework for the Analysis of Sets of Constraints

    This paper is about the analysis of sets of constraints, with no further assumptions. We explore the relationship between the minimal representation problem and a certain set covering problem of Boneh. This provides a framework that shows the connection between minimal representations, irreducible infeasible systems, minimal infeasibility sets, as well as other attributes of the preprocessing of mathematical programs. The framework facilitates the development of preprocessing algorithms for a variety of mathematical programs. As some such algorithms require random sampling, we present results to identify those sets of constraints for which all information can be sampled with nonzero probability.
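The sampling connection can be sketched for trivial one-dimensional constraints (the constraints, sampling range, and "sole violator" rule below are illustrative assumptions, not the chapter's formal development): a constraint that is never the only one violated by a sampled point is doing no work of its own, while a constraint that is the sole violator at some point is necessary to the representation.

```python
import random

# toy constraint set over a single variable x
constraints = {
    "c1": lambda x: x >= 0,
    "c2": lambda x: x <= 10,
    "c3": lambda x: x <= 20,   # implied by c2, hence redundant
}

random.seed(1)
necessary = set()
for _ in range(2000):
    x = random.uniform(-5, 25)
    violated = [name for name, f in constraints.items() if not f(x)]
    if len(violated) == 1:     # this constraint alone excludes the point
        necessary.add(violated[0])
print(sorted(necessary))       # ['c1', 'c2']
```

Whenever `c3` is violated (x > 20), `c2` is violated too, so `c3` is never a sole violator and the sampler correctly flags only `c1` and `c2`; the chapter's set-covering framework generalizes this observation.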

  • Chapter (No Access)

    A New Solution of Distributed Disaster Recovery Based on Raptor Code

    To address the large cost, low data availability under multi-node storage, and poor intrusion tolerance of traditional disaster recovery based on simple copying, this paper puts forward a distributed disaster recovery scheme based on Raptor codes. The article introduces the principle of Raptor codes, analyses their coding advantages, and gives a comparative analysis between this solution and traditional solutions in terms of redundancy, data availability, and intrusion tolerance. The results show that the distributed disaster recovery solution based on Raptor codes achieves higher data availability as well as better intrusion tolerance under the premise of lower redundancy.
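A rough numeric comparison in the spirit of that analysis (node availability, replica count, and code parameters are invented; an ideal k-of-n fountain-code model is used, whereas a real Raptor code needs slightly more than k fragments to decode):

```python
from math import comb

def replication_availability(p, copies):
    # data survives if any one of the full copies is on a live node
    return 1 - (1 - p) ** copies

def erasure_availability(p, k, n):
    # data is recoverable when at least k of the n fragments are on live nodes
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9                                   # per-node availability
rep = replication_availability(p, 3)      # storage redundancy factor 3.0
ec = erasure_availability(p, 4, 8)        # storage redundancy factor 8/4 = 2.0
print(f"replication: {rep:.6f}  erasure: {ec:.6f}")
# replication: 0.999000  erasure: 0.999568
```

The (4, 8) code delivers higher availability than triple replication while storing only 2x the data instead of 3x, illustrating the "higher availability at lower redundancy" claim; spreading fragments across more nodes also means no single node holds a full copy, which is the intrusion-tolerance benefit.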