
  Bestsellers

  • Article (No Access)

    MODELLING AND PERFORMANCE EVALUATION OF THE CIRCULATING MULTISEQUENCER, THE MULTI-TOKENS AND THE CONSENSUS ALGORITHMS IN A REAL TIME DISTRIBUTED TRANSACTIONAL SYSTEM

    In a real-time distributed transactional system, customers generate transactions that are scheduled for execution on different servers. The transactions have temporal constraints and must be executed before their deadlines. To schedule these transactions, the circulating multisequencer, the multi-tokens and the consensus algorithms have been considered to obtain a global view of the system. In this paper, mathematical models are developed to obtain the average stay time of a transaction within the system. These models introduce a bulk-arrival M/G/1 station with K classes of customers, where bulks are served according to the FIFO discipline and customers (actions) are scheduled according to EDF within a group, with the HOL discipline for the algorithm's operations. The response-time distribution is also computed. This allows us to determine the minimum relative deadline to assign to a generated transaction so as to guarantee a given probability p that the transaction does not miss its deadline; the system is then called p-feasible. This study makes it possible to determine the number of tokens to use in the multi-tokens algorithm for a given number of servers, and shows that the circulating multisequencer algorithm gives the best results.
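Given the response-time distribution, the p-feasibility criterion above reduces to taking the p-quantile of the response times as the minimum relative deadline. A minimal sketch over an empirical sample (the function name is ours, not the paper's):

```python
import math

def min_relative_deadline(response_times, p):
    """Smallest relative deadline D with P(response time <= D) >= p,
    i.e. the p-quantile of the empirical response-time sample."""
    xs = sorted(response_times)
    k = math.ceil(p * len(xs)) - 1      # order statistic covering mass p
    return xs[max(k, 0)]
```

Any transaction assigned a relative deadline of at least this value misses its deadline with probability at most 1 − p under the sampled distribution.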

  • Article (No Access)

    MOBILE AGENT BASED CHECKPOINTING WITH CONCURRENT INITIATIONS

    Traditional message passing based checkpointing and rollback recovery algorithms perform well for tightly coupled systems. In wide-area distributed systems these algorithms may suffer from large overhead due to message-passing delay and network traffic. Mobile agents offer an attractive option for designing checkpointing schemes for wide-area distributed systems. The network topology is assumed to be arbitrary. Processes are mobile-agent enabled. When a process wants to take a checkpoint, it simply creates one mobile agent. Concurrent initiations by multiple processes are allowed. Synchronization and the creation of a consistent global state (CGS) for checkpointing are managed by the mobile agent(s). In the worst case, for k concurrent initiations among n processes, the checkpointing algorithm requires a total of O(kn) hops by all the mobile agents. A mobile agent carries O(n/k) size data on average.

  • Article (No Access)

    SELF-STABILIZING COMPUTATION OF 3-EDGE-CONNECTED COMPONENTS

    A self-stabilizing algorithm is a distributed algorithm that can start from any initial (legitimate or illegitimate) state and eventually converge to a legitimate state in finite time without being assisted by any external agent. In this paper, we propose a self-stabilizing algorithm for finding the 3-edge-connected components of an asynchronous distributed computer network. The algorithm stabilizes in O(dnΔ) rounds and every processor requires O(nlogΔ) bits, where Δ(≤ n) is an upper bound on the degree of a node, d(≤ n) is the diameter of the network, and n is the total number of nodes in the network. These time and space complexities are at least a factor of n better than those of the previously best-known self-stabilizing algorithm for 3-edge-connectivity. The result of the computation is kept in a distributed fashion by assigning, upon stabilization of the algorithm, a component identifier to each processor which uniquely identifies the 3-edge-connected component to which the processor belongs. Furthermore, the algorithm is designed in such a way that its time complexity is dominated by that of the self-stabilizing depth-first search spanning tree construction, in the sense that any improvement made in the latter automatically improves the time complexity of the algorithm.

  • Article (No Access)

    ASSUME-GUARANTEE REASONING WITH LOCAL SPECIFICATIONS

    We investigate assume-guarantee reasoning for global specifications consisting of conjunctions of local specifications. We present a sound and complete assume-guarantee methodology that enables us to establish properties of a composite system by checking local specifications of its individual modules. We illustrate our approach with an example from the field of network congestion control, where different agents are responsible for controlling packet flow across a shared infrastructure. In this context we derive an assume-guarantee system for network stability and show that it reasons efficiently about any number of agents, any initial flow configuration, and any topology of bounded degree.

  • Article (No Access)

    A THREE-ROUND ADAPTIVE DIAGNOSTIC ALGORITHM IN A DISTRIBUTED SYSTEM MODELED BY DUAL-CUBES

    Problem diagnosis in large distributed computer systems and networks is a challenging task that requires fast and accurate inferences from huge volumes of data. In this paper, the PMC diagnostic model is considered, based on the diagnostic approach of end-to-end probing technology. A probe is a test transaction whose outcome depends on some of the system's components; diagnosis is performed by selecting appropriate probes and analyzing the results. In the PMC model, every computer can execute a probe to test dedicated system components. Furthermore, any test result reported by a faulty probe station is unreliable, while a test result reported by a fault-free probe station is always correct. The aim of diagnosis is to locate all faulty components in the system based on the collection of test results. A dual-cube DC(n) is an (n + 1)-regular spanning subgraph of a (2n + 1)-dimensional hypercube. It uses n-dimensional hypercubes as building blocks and retains the main desirable properties of the hypercube, so that it is suitable as a topology for distributed systems. In this paper, we first show that the diagnosability of DC(n) is n + 1 and then show that adaptive diagnosis is possible using at most 22n+1 + n tests for a 22n+1-node distributed system modeled by dual-cubes DC(n) in which at most n + 1 processes are faulty. Furthermore, we propose an adaptive diagnostic algorithm for the DC(n) and show that it diagnoses the DC(n) in three testing rounds and at most 22n+1 + O(n3) tests, where each node is scheduled for at most one test in each round.
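The dual-cube's structure can be sketched directly from the description above: a (2n + 1)-bit address whose class bit selects which n-bit field carries the node's n hypercube edges, plus one cross edge that flips the class bit. The bit layout below is our assumption for illustration; the paper may order the fields differently:

```python
def dc_neighbors(u, n):
    """Neighbors of node u in the dual-cube DC(n) on 2**(2*n + 1) nodes.
    Assumed layout: bit 2n is the class bit; class-0 nodes take their n
    cube edges in the low n bits, class-1 nodes in the next n bits."""
    cls = (u >> (2 * n)) & 1
    shift = 0 if cls == 0 else n
    nbrs = [u ^ (1 << (shift + i)) for i in range(n)]  # hypercube edges
    nbrs.append(u ^ (1 << (2 * n)))                    # cross edge
    return nbrs
```

Every edge flips exactly one address bit, which is what makes DC(n) a spanning subgraph of the (2n + 1)-dimensional hypercube, and every node has exactly n + 1 neighbours, matching the stated regularity.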

  • Article (No Access)

    A NOTE ON LINEARIZABILITY AND THE GLOBAL TIME AXIOM

    The assumption of the existence of global time, which significantly simplifies the analysis of distributed systems, is generally safe, since most conclusions obtained under the global time axiom can be transferred to the frame where no such assumption is made. In this note, we show that the compositionality of linearizability, the well-known correctness condition for concurrent objects, does not obey this transfer rule: we present a simple non-linearizable system composed of two objects that are individually linearizable.

  • Article (No Access)

    CLOUDS: A NEW PLAYGROUND FOR THE XTREEMOS GRID OPERATING SYSTEM

    The emerging cloud computing model has recently gained a lot of interest both from commercial companies and from the research community. XtreemOS is a distributed operating system for large-scale wide-area dynamic infrastructures spanning multiple administrative domains. XtreemOS, which is based on the Linux operating system, has been designed as a Grid operating system providing native support for virtual organizations. In this paper, we discuss the positioning of XtreemOS technologies with regard to cloud computing. More specifically, we investigate a scenario where XtreemOS could help users take full advantage of clouds in a global environment including their own resources and cloud resources. We also discuss how the XtreemOS system could be used by cloud service providers to manage their underlying infrastructure. This study shows that the XtreemOS distributed operating system is a highly relevant technology in the new era of cloud computing where future clouds seamlessly span multiple bare hardware providers and where customers extend their IT infrastructure by provisioning resources from different cloud service providers.

  • Article (No Access)

    Analysis of Distributed Token Circulation Algorithm with Faulty Random Number Generator

    Randomization is a technique for improving the efficiency and computability of distributed computing. In this paper, we investigate the fault tolerance of distributed computing against faults of random number generators. We introduce the RNG (Random Number Generator) fault as a new class of faults: a random number generator on an RNG-faulty process deterministically outputs the same number. This paper is the first work to consider faults of randomness in distributed computing.

    We investigate the role of randomization by observing the impact of RNG-faults on performance of a self-stabilizing token circulation algorithm on unidirectional n-node ring networks. In the analysis, we assume there exist nf (0 ≤ nf ≤ n−1) RNG-faulty nodes and each RNG-faulty node always transfers a token to the next node. Our results are threefold: (1) We derive the upper bound on the expected convergence time in the case of nf = n − 1. (2) Our simulation result shows that the expected convergence time is maximum when nf = n − 1. (3) We derive the expected token circulation time for each nf (0 ≤ nf ≤ n − 1).
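As a toy illustration of the fault model (not the paper's self-stabilizing algorithm), one can simulate a single token on a unidirectional ring where a correct node forwards the token with probability 1/2 per step, while an RNG-faulty node, whose generator deterministically repeats one value, always forwards it:

```python
import random

def circulation_time(n, faulty, seed=None):
    """Steps for one token to complete a full lap of a unidirectional
    n-node ring.  Nodes in `faulty` are RNG-faulty and always forward;
    correct nodes forward with probability 1/2.  (Illustrative model.)"""
    rng = random.Random(seed)
    pos, steps, moves = 0, 0, 0
    while moves < n:                    # one full lap = n forward hops
        steps += 1
        if pos in faulty or rng.random() < 0.5:
            pos = (pos + 1) % n
            moves += 1
    return steps
```

In this simplified model a lap takes 2(n − nf) + nf steps in expectation, since each correct node holds the token for a geometrically distributed number of steps with mean 2; faults here speed circulation up, while their real cost in the paper lies in convergence from illegitimate states.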

  • Article (No Access)

    Efficient Communication Induced Checkpointing Protocol for Broadcast Network-based Distributed Systems

    This paper proposes an enhanced Fully Informed Communication-Induced Checkpointing (FI-CIC) protocol that greatly improves the likelihood of detecting Z-cycle-free patterns, with no extra control messages, by effectively exploiting the advantageous features of the broadcast network compared with the original FI-CIC protocol. Experimental results show that our protocol outperforms the previous one in terms of the number of forced checkpoints per process.

  • Article (No Access)

    Scalable Sender-Based Message Logging Protocol with Little Communication Overhead for Distributed Systems

    The inherent shortcoming of conventional Sender-Based Message Logging (SBML) protocols is that they require additional control-message interactions per application message to satisfy the always-no-orphans condition in case of sequential failures. In this paper, a scalable SBML protocol is introduced that lowers the communication overhead by handling the sequence of messages each process has received since its last send as a batch. The protocol enables a process to delay reporting receive sequence numbers to the senders until the first message it wants to send, and then to perform the collective update with each sender using only one control-message exchange. Experimental results show that our protocol outperforms the previous one in terms of the number of control messages generated.
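The overhead reduction can be seen with a toy count (an idealised model, not the protocol itself): a conventional SBML process pays one control exchange per received application message, whereas the batched scheme defers and pays one exchange per distinct sender at its next send event:

```python
def control_messages(deliveries, batched):
    """Control exchanges a process owes its senders for the application
    messages in `deliveries` (a list of sender ids received between two
    consecutive send events).  Idealised count for illustration only."""
    # batched: one deferred exchange per distinct sender;
    # conventional: one exchange per received message.
    return len(set(deliveries)) if batched else len(deliveries)
```

For a batch of five messages from three senders, the conventional count is 5 and the batched count is 3; the gap widens as traffic from each sender grows.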

  • Article (No Access)

    MPCNet: Smart Contract-Based Multiparty Computing Network for Federated Learning

    Stepping into the era of big data, with more resources shared, machine learning algorithms are more likely to derive better solutions, and complicated computations can be finished in a shorter time. Existing work on multiparty computing mainly focuses on how to perform the computation once the partners are given, but hardly considers the process by which the partners find each other. In this work, we propose a multiparty computing network (MPCNet) framework in which agents propose tasks and collaborate: R3 Corda is harnessed to establish a blockchain platform on which a convener can look for partners, and a crowdsourcing process verifies the validity of the convener's proposal and the partners' applications. Furthermore, a reward mechanism is proposed to motivate verifiers to participate. Once all the agents joining the computing task are confirmed, they communicate with each other to perform it, following the plan laid out in the proposed smart contract. Experimental results demonstrate the feasibility, usability, and scalability of our approach.

  • Article (No Access)

    STING Algorithm Used English Sentiment Classification in a Parallel Environment

    Sentiment classification is significant in everyday life, in political activities, in commodity production, and in commercial activities. In this research, we propose a new model for Big Data sentiment classification in a parallel network environment. Our model uses the STING Algorithm (SA) from the data-mining field for English document-level sentiment classification with Hadoop Map (M)/Reduce (R), based on the 90,000 English sentences of the training data set, in a Cloudera parallel network environment, a distributed system. To our knowledge, no prior scientific study is similar to this one. Our model can classify the sentiment of millions of English documents with a short execution time in the parallel network environment. We tested the model on the 25,000 English documents of the testing data set and achieved 61.2% accuracy. The English training data set comprises 45,000 positive and 45,000 negative English sentences.

  • Article (Open Access)

    Application of Numerical Inverse Laplace Transform Methods for Simulation of Distributed Systems with Fractional-Order Elements

    The paper presents a computationally efficient, less conventional method for modeling and simulating distributed systems with lossy transmission lines (TLs), including multiconductor ones. The method is devised from 1D and 2D Laplace transforms, which facilitates incorporating fractional-order elements and frequency-dependent parameters. This is made possible by the development of effective numerical inverse Laplace transforms (NILTs) of one and two variables, the 1D NILT and 2D NILT. The paper shows that in systems operating at high frequencies, the frequency dependencies of the system ought to be included in the model. Additionally, it shows that incorporating fractional-order elements in the modeling of distributed-parameter systems compensates for losses along the wires, provides higher degrees of flexibility for optimization, and produces more accurate and authentic models of such systems. The simulations are performed in the Matlab environment and are effectively algorithmized.
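A standard 1D NILT in this spirit is the Gaver-Stehfest method, which inverts F(s) using only real-axis samples. The sketch below is our choice of method for illustration; the paper develops its own 1D and 2D NILT algorithms:

```python
from math import factorial, log

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k (N even; N = 12-16 works in doubles)."""
    h = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            s += (j ** h) * factorial(2 * j) / (
                factorial(h - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        V.append((-1) ** (h + k) * s)
    return V

def nilt(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F at t > 0:
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    a = log(2.0) / t
    return a * sum(Vk * F(k * a) for k, Vk in enumerate(stehfest_weights(N), 1))
```

For instance, inverting F(s) = 1/(s + 1) at t = 1 reproduces e^(-1) ≈ 0.3679 to several digits with N = 12; the method suits smooth, non-oscillatory time functions such as lossy TL step responses.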

  • Article (No Access)

    An On-Board Task Scheduling Method Based on Evolutionary Optimization Algorithm

    In order to meet the requirements of task scheduling in an on-board distributed computing environment, an evolutionary optimization scheduling method for on-board tasks was proposed based on the on-board dynamic heterogeneous computing resource model and task model. In this method, task scheduling priority coding was used to adapt to dynamic changes of computing resources. A heuristic critical path for comprehensive index balance was adopted to implement the evaluation, so as to achieve a balance between task makespan, power consumption and reliability. A multi-group neighborhood search strategy was applied to keep the algorithm from falling into local optima and to reduce the complexity of the scheduling algorithm. A fault-tolerant strategy of convergence monitoring and stop-loss reconfiguration was designed for scheduling-unit fault tolerance, and a scheduling strategy based on active and backup redundant subtasks was used for computing-unit fault tolerance. The simulation results showed that this method efficiently handles on-board distributed computing resources and achieves optimal scheduling of on-board tasks.

  • Article (No Access)

    Maximum Load Consumption Capacity Maintenance of Distributed Storage Devices Based on Time-Varying Neurodynamic Algorithm

    A charge and discharge management scheme is proposed under which the electric energy stored in distributed storage devices converges to a consistent level. This consistency helps maintain the maximum load capacity and maximum consumption capacity of the distributed storage devices. The charging and discharging process is formulated as a time-varying optimization problem, and the proposed algorithm responds in real time to the time-varying parameters of the storage devices. The time-varying neurodynamic algorithm obtains time-varying optimal solution trajectories that give the optimal charging and discharging strategy in real time. In addition, the proposed approach focuses on the privacy protection of device data: each device can calculate its discharging or charging power by communicating only with its connected neighbour nodes. Numerical simulations verify the effectiveness of the scheme, showing that it makes the energy stored in each device converge and maintains the maximum load capacity or maximum consumption capacity of the whole distributed storage system.
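The convergence to a consistent stored-energy level is, at its core, a distributed averaging consensus. The sketch below uses a plain time-invariant averaging step (the paper's algorithm is a time-varying neurodynamic one); each device updates using only its connected neighbours' values, so no device needs global information:

```python
def consensus_step(energy, neighbors, eps=0.2):
    """One synchronous round of distributed averaging: each storage device
    nudges its stored energy toward its neighbours' values.  On a connected
    symmetric graph with small eps, all values converge to the common mean,
    the 'consistent' level the scheme maintains.  (Illustrative sketch.)"""
    return [x + eps * sum(energy[j] - x for j in neighbors[i])
            for i, x in enumerate(energy)]
```

On a 4-device ring starting from energies (10, 2, 6, 4), repeated application drives every device to the mean 5.5 while the total stored energy is conserved at each step.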

  • Article (No Access)

    TOWARDS WEB-BASED COMPUTING

    In a problem solving environment for geometric computing, a graphical user interface, or GUI, for visualization has become an essential component for geometric software development. In this paper we describe a visualization system, called GeoJAVA, which consists of a GUI and a geometric visualization library that enables the user or algorithm designer to (1) execute and visualize an existing algorithm in the library or (2) develop new code over the Internet. The library consists of geometric code written in C/C++. The GUI is written using the Java programming language. Taking advantage of the socket classes and system-independent application programming interfaces (APIs) provided with the Java language, GeoJAVA offers a platform-independent environment for distributed geometric computing that combines Java and C/C++. Users may remotely join a "channel" or discussion group in a location-transparent manner to do collaborative research. The visualization of an algorithm, a C/C++ program located locally or remotely and controlled by a "floor manager", can be viewed by all the members in the channel through a visualization sheet called GeoJAVASheet. A chat box is also provided to enable dialogue among the members. Furthermore, this system not only allows visualization of pre-compiled geometric code, but also serves as a web-based programming environment where the user may submit geometric code, compile it with the libraries provided by the system, and visualize it directly over the web, sharing it with other users immediately.

  • Article (No Access)

    Dependability Analysis of Homogeneous Distributed Software/Hardware Systems

    With the increasing demand for high availability in safety-critical systems such as banking, military, nuclear, and aircraft systems, to mention a few, reliability analysis of distributed software/hardware systems continues to be a focus of research. The reliability of a homogeneous distributed software/hardware system (HDSHS) with a k-out-of-n : G configuration and no load-sharing nodes has been analyzed; in practice, however, the system load is shared among the working nodes of a distributed system. In this paper, the dependability analysis of an HDSHS with load-sharing nodes is presented. This distributed system has a load-sharing k-out-of-(n + m) : G configuration. A Markov model for the HDSHS is developed. The failure-time distribution of the hardware is represented by the accelerated failure time model. Software faults are detected during software testing and removed upon failure; the Jelinski–Moranda software reliability model is used. Maintenance personnel can repair the system upon both software and hardware failure. Dependability measures such as reliability, availability and mean time to failure are obtained. The effect of load-sharing hosts on the system hazard function and system reliability is presented. Furthermore, an availability comparison of our results with those in the literature is presented.
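For static, independent node reliabilities, the k-out-of-n : G structure has a closed binomial form; the paper's Markov model generalises this baseline to load sharing and repair, but the static formula anchors the configuration's meaning:

```python
from math import comb

def k_out_of_n_reliability(n, k, p):
    """Probability that at least k of n i.i.d. nodes (each up with
    probability p) are working: the static k-out-of-n:G reliability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

For example, a 1-out-of-2 : G system (simple redundancy) with p = 0.9 gives reliability 1 − 0.1² = 0.99, while a 3-out-of-3 : G (series) system with p = 0.5 gives 0.125.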

  • Article (No Access)

    An Unsupervised Gradient-Based Approach for Real-Time Log Analysis From Distributed Systems

    We consider the problem of real-time log anomaly detection for distributed systems with deep neural networks by unsupervised learning. There are two challenges in this problem: detection accuracy and analysis efficacy. To tackle them, we propose GLAD, a simple yet effective approach for mining anomalies in distributed systems. To ensure detection accuracy, we exploit the gradient features in a well-calibrated deep neural network and analyze anomalous patterns within log files. To improve the analysis efficacy, we further integrate a one-class support vector machine (SVM) into the anomaly analysis, which significantly reduces the cost of delineating the anomaly decision boundary. This integration addresses both accuracy and efficacy in real-time log anomaly detection. Also, since the anomaly analysis is based upon unsupervised learning, it significantly reduces the extra data-labeling cost. We conduct a series of experiments showing that GLAD has the best overall performance, balanced between accuracy and efficiency, which implies an advantage in tackling practical problems. The results also reveal that GLAD enables effective anomaly mining and consistently outperforms state-of-the-art methods on both recall and F1 scores.

  • Article (No Access)

    Loosely-Specified Query Processing in Large-Scale Information Systems

    Challenging issues for processing queries specified over large-scale information spaces (for example, Digital Libraries or the World Wide Web) include the diversity of the information sources in terms of their structures, query interfaces and search capabilities, as well as the dynamics of sources continuously being added, removed or upgraded. In this paper, we give an innovative solution for query planning in such environments. The foundation of our solution is the Dynamic Information Integration Model (DIIM) which supports the specification of not only content but also capabilities of resources without requiring the establishment of a uniform integration schema. Besides the development of the DIIM model, contributions of this paper include:

    (1) the introduction of the notion of fully specified queries that are semantically equivalent to a loosely-specified query;

    (2) a translation algorithm of a loosely-specified query into a set of semantically equivalent feasible query plans that are consistent with the binding patterns of query templates of the individual sources (capability descriptions in DIIM) and with interrelationships between information sources (expressed as join constraints in DIIM); and

    (3) a search restriction algorithm for optimizing query processing by pruning the search space into the relevant subspace of a query. The plans obtained by the proposed query planning process, which is composed of the search restriction and translation algorithms, can be shown to be semantically equivalent to the initial loosely-specified input query.
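The binding-pattern consistency requirement in contribution (2) can be illustrated with a toy planner (our simplification of DIIM capability descriptions, not the paper's algorithm): a source ordering is feasible only if every source's required input attributes are bound, either by the query's constants or by outputs of sources called earlier:

```python
from itertools import permutations

def feasible_plans(sources, given):
    """Enumerate source orderings whose binding patterns are satisfiable.
    `sources` maps name -> (required input attrs, produced output attrs);
    `given` lists attributes the query binds up front.  (Toy model.)"""
    plans = []
    for order in permutations(sources):
        bound, ok = set(given), True
        for name in order:
            inputs, outputs = sources[name]
            if not set(inputs) <= bound:   # binding pattern violated
                ok = False
                break
            bound |= set(outputs)          # outputs become bound
        if ok:
            plans.append(order)
    return plans
```

For example, with a hypothetical source `books` that requires `author` to be bound and a source `authors` that produces it, only the ordering calling `authors` first survives; a search-restriction step would prune the infeasible orderings before this enumeration.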
