  • Article (No Access)

    Monitor-Based Spiking Recurrent Network for the Representation of Complex Dynamic Patterns

    Neural networks are powerful computational tools that mimic the human brain to solve realistic problems. Since spiking neural networks are a type of brain-inspired network, a novel spiking system, the Monitor-based Spiking Recurrent Network (MbSRN), is derived in this paper to learn and represent patterns. This network provides a computational framework for memorizing targets using a simple dynamic model that maintains biological plasticity. Based on a recurrent reservoir, the MbSRN introduces a mechanism called a 'monitor' to track the components of the state space online during training and to self-sustain the complex dynamics during testing. The network's firing spikes are optimized to represent the target dynamics according to the accumulation of the membrane potentials of the units. A stability analysis of the monitor, conducted by limiting the coefficient penalty in the loss function, verifies that the network has good anti-interference performance under neuron loss and noise. Results on several realistic tasks show that the MbSRN not only achieves a high goodness-of-fit to the target patterns but also maintains good spiking efficiency and storage capacity.
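    The sketch below is not the authors' MbSRN; it is only a generic leaky integrate-and-fire reservoir with an offline least-squares linear readout, meant to illustrate the reservoir-plus-readout idea the abstract describes. All sizes, time constants, the connection density and the 5 Hz sine target are assumptions made purely for illustration; the paper's online 'monitor' would replace the offline fit at the end.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed toy sizes and constants (not taken from the paper).
    N, T, dt = 200, 2000, 1e-3           # neurons, time steps, step size (s)
    tau_m, tau_s = 20e-3, 20e-3          # membrane and synaptic time constants
    v_thresh, v_reset = 1.0, 0.0

    # Fixed sparse recurrent reservoir; only the linear readout is trained.
    mask = rng.random((N, N)) < 0.1
    J = rng.normal(0.0, 1.5 / np.sqrt(0.1 * N), (N, N)) * mask
    bias = rng.uniform(0.9, 1.8, N)      # heterogeneous drive so units spike
    target = np.sin(2 * np.pi * 5 * dt * np.arange(T))   # example 5 Hz target

    v = np.zeros(N)                      # membrane potentials
    r = np.zeros(N)                      # filtered spike trains (synaptic traces)
    R = np.zeros((T, N))                 # trace history used to fit the readout

    for t in range(T):
        v += dt / tau_m * (-v + J @ r + bias)         # leaky integration
        spikes = v >= v_thresh
        v[spikes] = v_reset
        r += dt / tau_s * (-r)                        # exponential trace decay
        r[spikes] += 1.0                              # add spike impulses
        R[t] = r

    # Offline least-squares readout; an online tracker (e.g. recursive least
    # squares) could be substituted to mimic training-stage monitoring.
    w_out = np.linalg.lstsq(R, target, rcond=None)[0]
    print("readout mean-squared fit error:", np.mean((R @ w_out - target) ** 2))
    ```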

  • Article (No Access)

    A MONTE CARLO STUDY OF THE STORAGE CAPACITY AND EFFECTS OF THE CORRELATIONS IN q-STATE POTTS NEURON SYSTEM

    The storage capacity of the Potts neural network is studied using Monte Carlo techniques. It is observed that Kanter's critical storage capacity formula fits our data well. Increasing the correlation between the initial patterns reduces the storage capacity; this reduction is proportional to the percentage of correlations and inversely proportional to the number of orientations that the Potts neurons can possess.
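    A hedged sketch of the kind of Monte Carlo retrieval experiment described above: store P random q-state patterns with Kanter-style couplings, corrupt a stored pattern, run the retrieval dynamics, and measure the overlap. All sizes and the corruption level are assumptions; the couplings are evaluated through pattern overlaps rather than an explicit coupling tensor, and the small self-interaction correction is neglected for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed toy parameters: N Potts neurons with q states, P random patterns.
    N, q, P = 200, 3, 20
    patterns = rng.integers(0, q, size=(P, N))        # xi[mu, i] in {0,...,q-1}

    def local_fields(state):
        """Kanter-style Potts fields, computed via pattern overlaps."""
        # m[mu] = (1/N) * sum_j (q * delta(s_j, xi_j^mu) - 1)
        m = (q * (patterns == state) - 1).sum(axis=1) / N            # (P,)
        onehot = patterns[:, :, None] == np.arange(q)[None, None, :]  # (P, N, q)
        # h[i, k] = sum_mu (q * delta(k, xi_i^mu) - 1) * m[mu]
        return np.einsum('p,pik->ik', m, q * onehot - 1)

    def retrieve(mu, flip_frac=0.2, sweeps=10):
        """Start from a corrupted copy of pattern mu and iterate the dynamics."""
        s = patterns[mu].copy()
        noisy = rng.random(N) < flip_frac
        s[noisy] = rng.integers(0, q, noisy.sum())
        for _ in range(sweeps):
            s = local_fields(s).argmax(axis=1)        # parallel update
        # Overlap with the stored pattern, normalised to 1 for perfect recall.
        return (q * (s == patterns[mu]) - 1).mean() / (q - 1)

    overlaps = [retrieve(mu) for mu in range(P)]
    print(f"load alpha = P/N = {P/N:.2f}, mean retrieval overlap = {np.mean(overlaps):.3f}")
    ```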

  • Article (No Access)

    STORAGE CAPACITY OF EXTREMELY DILUTED HOPFIELD MODEL

    The storage capacity of the extremely diluted Hopfield model is studied using Monte Carlo techniques. In this work, instead of diluting the synapses according to a given distribution, the dilution is obtained systematically by retaining only the synapses with dominant contributions. It is observed that, with this dilution method, the critical storage capacity of the system increases with decreasing number of synapses per neuron, almost reaching the value obtained from mean-field calculations. It is also shown that the increase in the storage capacity of the diluted system depends on the storage capacity of the fully connected Hopfield model and on the fraction of diluted synapses.
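    A minimal sketch, under assumed sizes, of the dilution idea: build the usual Hebbian couplings, keep only the largest-magnitude synapses of each neuron, and compare retrieval quality as the number of synapses per neuron shrinks. The synchronous updates and the particular keep-counts are simplifications for illustration, not the paper's exact protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed toy Hopfield network: N binary +/-1 neurons, P random patterns.
    N, P = 400, 20
    xi = rng.choice([-1, 1], size=(P, N))
    J = (xi.T @ xi).astype(float) / N        # Hebbian couplings
    np.fill_diagonal(J, 0.0)

    def dilute_dominant(J, keep_per_neuron):
        """Keep only the largest-magnitude synapses of each neuron."""
        Jd = np.zeros_like(J)
        for i in range(J.shape[0]):
            top = np.argsort(np.abs(J[i]))[-keep_per_neuron:]
            Jd[i, top] = J[i, top]
        return Jd

    def overlap_after_recall(J, mu, flip_frac=0.2, sweeps=10):
        """Corrupt pattern mu, iterate the dynamics, return the final overlap."""
        s = xi[mu].astype(float).copy()
        flip = rng.random(N) < flip_frac
        s[flip] *= -1
        for _ in range(sweeps):
            s = np.sign(J @ s)
            s[s == 0] = 1
        return float(s @ xi[mu]) / N

    for k in (N - 1, 100, 40):               # from fully connected to diluted
        Jk = J if k == N - 1 else dilute_dominant(J, k)
        m = np.mean([overlap_after_recall(Jk, mu) for mu in range(P)])
        print(f"synapses per neuron = {k:4d}, mean overlap = {m:.3f}")
    ```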

  • Article (No Access)

    CHARACTERIZING ONE-LAYER ASSOCIATIVE NEURAL NETWORKS WITH OPTIMAL NOISE-REDUCTION ABILITY

    In this paper, we describe an optimal learning algorithm for designing one-layer neural networks by means of global minimization. Taking the properties of a well-defined neural network into account, we derive a cost function that quantitatively measures the goodness of the network. The connection weights are determined by the gradient descent rule so as to minimize the cost function. The optimal learning algorithm is formulated as either an unconstrained or a constrained minimization problem. It ensures the realization of each desired associative mapping with the best noise-reduction ability in the sense of optimization.

    We also investigate analytically the storage capacity of the neural network, the degree of noise reduction for a desired associative mapping, and the convergence of the learning algorithm. Finally, a large number of computer-experiment results are presented.
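    The following is not the paper's cost function; it is a generic margin-based cost for a one-layer associative mapping, minimized by plain gradient descent, with every size, the margin kappa and the learning rate assumed purely for illustration. It shows the shape of the procedure: define a quantitative goodness measure for each desired mapping, then follow its gradient.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Assumed toy task: map M binary input patterns to binary output patterns.
    N_in, N_out, M = 64, 16, 10
    X = rng.choice([-1.0, 1.0], size=(M, N_in))
    Y = rng.choice([-1.0, 1.0], size=(M, N_out))

    W = 0.01 * rng.normal(size=(N_out, N_in))
    kappa, lr = 1.0, 0.05                 # assumed margin and learning rate

    for epoch in range(500):
        # Margin of each desired output unit: y * (W x) should exceed kappa.
        margins = Y * (X @ W.T)                       # shape (M, N_out)
        viol = np.maximum(0.0, kappa - margins)       # hinge-style cost terms
        # Gradient of 0.5 * sum(viol**2) with respect to W.
        grad = -(viol * Y).T @ X
        W -= lr * grad / M

    print("minimum margin after training:", (Y * (X @ W.T)).min())
    ```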

  • Article (No Access)

    IMPROVING DATA AVAILABILITY USING HYBRID REPLICATION TECHNIQUE IN PEER-TO-PEER ENVIRONMENTS

    Replication is an important technique in peer-to-peer environments, where it increases data availability and accessibility for users despite site or communication failures. However, determining how many replicas to create and where to place them are the major issues. This paper proposes a hybrid replication model for fixed and mobile networks in order to achieve high data availability. In the fixed network, data are replicated synchronously in a diagonal manner on a logical grid structure, while in the mobile network, data are replicated asynchronously based on the sites commonly visited by each user. In comparison to previous techniques, the diagonal replication technique (DRG) on the fixed network requires lower communication cost per operation while providing higher data availability, which is preferable for large systems.
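    This is not the paper's protocol, only a toy illustration under assumed parameters of why placing replicas on the diagonal of an n x n logical grid is attractive: it needs n copies instead of n*n, while a majority quorum of those copies still yields high availability when each site is up with probability p.

    ```python
    from math import comb

    def diagonal_sites(n):
        """Replica placement: only the diagonal sites of an n x n logical grid."""
        return [(i, i) for i in range(n)]

    def availability(n, p_alive):
        """P(at least a majority of the n diagonal replicas are alive)."""
        quorum = n // 2 + 1
        return sum(comb(n, k) * p_alive ** k * (1 - p_alive) ** (n - k)
                   for k in range(quorum, n + 1))

    n, p = 5, 0.9          # assumed grid size and per-site availability
    print("replica sites:", diagonal_sites(n))
    print(f"diagonal copies = {n}, sites in the full grid = {n * n}")
    print(f"majority-quorum availability at p = {p}: {availability(n, p):.4f}")
    ```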

  • Chapter (No Access)

    A MODULAR APPROACH TO STORAGE CAPACITY

    Modularity is a valuable principle for analysing and synthesising large systems. This chapter gives an overview of how to apply this principle to the assessment of the storage capacity of RAM-based neural networks. The storage capacity of a network is a function of the storage capacity of its component neurons and subnetworks. This modular approach allows storage capacity to be treated independently at different levels: a model for the single neuron and a model for each architecture. The technique is illustrated on the major limited-storage-capacity architectures in use nowadays, the general neural unit (GNU) and the pyramid, and on a composition of them. The results fit well both with existing architecture-dependent theories and with the experimental data currently available in the literature, with the added advantages of simplicity and flexibility afforded by modularity. The approach is based on treating the collision of information during training as a probabilistic process.
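    A minimal sketch of the probabilistic building block mentioned in the last sentence, under the simplifying assumption that each training pattern addresses one of the 2**n locations of a single RAM neuron uniformly at random; the collision probability then takes the classic birthday-problem form. Parameters are illustrative only.

    ```python
    from math import prod

    def no_collision_probability(n_inputs, t_patterns):
        """P(t training patterns all address distinct locations of a RAM neuron)."""
        locations = 2 ** n_inputs
        return prod(1 - i / locations for i in range(t_patterns))

    for n, t in [(8, 10), (8, 30), (12, 30)]:
        p_collision = 1 - no_collision_probability(n, t)
        print(f"n = {n:2d} inputs, t = {t:2d} patterns -> collision prob = {p_collision:.3f}")
    ```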