  • Article · No Access

    DOES ABNORMAL SPINAL RECIPROCAL INHIBITION LEAD TO CO-CONTRACTION OF ANTAGONIST MOTOR UNITS? A MODELING STUDY

    It has been suggested that co-contraction of antagonist motor units, possibly due to abnormal disynaptic Ia reciprocal inhibition, is responsible for Parkinsonian rigidity. A neural model of Parkinson's disease bradykinesia is extended to incorporate the effects of spindle feedback on key cortical cells and to examine the effects of dopamine depletion on spinal activity. Simulation results show that although reciprocal inhibition is reduced in the dopamine-depleted case, this reduction does not lead to co-contraction of antagonist motor neurons. Implications for Parkinsonian rigidity are discussed.
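    Below is a minimal rate-model sketch of the disynaptic Ia reciprocal-inhibition motif the abstract refers to; the two-pool structure, gains, time constant, and drive levels are illustrative assumptions and are not taken from the paper's model. In this deliberately simplified circuit, weakening the inhibitory gain lets the antagonist pool remain active, which is the naive expectation that the paper's simulations call into question.

```python
import numpy as np

# Illustrative two-pool rate model: each motoneuron pool receives excitatory
# drive and disynaptic inhibition proportional to the opposing pool's drive.
# All parameter values are assumptions for illustration only.
def relu(x):
    return np.maximum(x, 0.0)

def simulate(drive_ago, drive_ant, w_inhib=1.5, tau=0.05, dt=1e-3, steps=500):
    """Return steady-state firing rates of the [agonist, antagonist] pools."""
    mn = np.zeros(2)
    drive = np.array([drive_ago, drive_ant])
    for _ in range(steps):
        ia = relu(drive[::-1])                  # Ia interneurons driven by the opposing pool
        target = relu(drive - w_inhib * ia)     # excitation minus reciprocal inhibition
        mn += dt / tau * (target - mn)          # first-order rate dynamics
    return mn

print(simulate(1.0, 0.5, w_inhib=1.5))  # normal inhibition: antagonist silenced
print(simulate(1.0, 0.5, w_inhib=0.3))  # reduced inhibition: both pools active
```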

  • Article · No Access

    A Fly-Inspired Mushroom Bodies Model for Sensory-Motor Control Through Sequence and Subsequence Learning

    Classification and sequence learning are key capabilities that living beings use to extract complex information from the environment for behavioral control. The insect world is full of examples in which the presentation time of specific stimuli shapes the behavioral response. Building on previously developed neural models inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is presented here from the perspective of Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons that models the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and to trace the subsequences present in the provided input. A sensitivity analysis with respect to parameter variation and noise is reported, and experiments on a roving robot demonstrate the capabilities of the architecture when used as a neural controller.
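    The following toy sketch illustrates the general idea of sequence and subsequence recall through learned transitions; the dictionary-based representation and the stimulus labels are illustrative assumptions and do not reproduce the spiking Mushroom Bodies architecture described in the abstract.

```python
from collections import defaultdict

def learn_transitions(sequence):
    """Record which stimulus follows which during training."""
    next_of = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        if b not in next_of[a]:
            next_of[a].append(b)
    return next_of

def recall(next_of, start, length):
    """Reconstruct a (sub)sequence from a starting stimulus."""
    out = [start]
    while len(out) < length and next_of.get(out[-1]):
        out.append(next_of[out[-1]][0])
    return out

trained = ["A", "B", "C", "D"]          # hypothetical stimulus labels
model = learn_transitions(trained)
print(recall(model, "A", 4))            # full learned sequence
print(recall(model, "B", 3))            # subsequence starting mid-way
```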

  • Article · Open Access

    Evaluation of Spiking Neural Nets-Based Image Classification Using the Runtime Simulator RAVSim

    Spiking Neural Networks (SNNs) help achieve brain-like efficiency and functionality by building neurons and synapses that mimic the human brain’s transmission of electrical signals. However, optimal SNN implementation requires a precise balance of parametric values. To design such ubiquitous neural networks, a graphical tool for visualizing, analyzing, and explaining the internal behavior of spikes is crucial. Although some popular SNN simulators are available, these tools do not allow users to interact with the neural network during simulation. To this end, we have introduced the first runtime interactive simulator, called Runtime Analyzing and Visualization Simulator (RAVSim), developed to analyze and dynamically visualize the behavior of SNNs, allowing end-users to interact, observe output concentration reactions, and make changes directly during the simulation. In this paper, we present RAVSim with the current implementation of runtime interaction using the LIF neural model with different connectivity schemes, an image classification model using SNNs, and a dataset creation feature. Our main objective is to investigate binary classification using SNNs with RGB images. We created a feed-forward network using the LIF neural model for an image classification algorithm and evaluated it using RAVSim. The algorithm classifies faces with and without masks, achieving an accuracy of 91.8% with 1000 neurons in a hidden layer, a mean squared error (MSE) of 0.0758, and an execution time of approximately 10 min on the CPU. The experimental results show that using RAVSim not only increases network design speed but also accelerates user learning capability.
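    As a point of reference for the LIF neuron model mentioned in the abstract, here is a minimal leaky integrate-and-fire simulation; the parameter values and the constant input current are illustrative assumptions, not RAVSim settings.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: forward-Euler integration of
# dv/dt = (-(v - v_rest) + R*I) / tau_m, with a threshold-and-reset rule.
def simulate_lif(i_input, dt=1e-3, tau_m=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Integrate the membrane voltage and return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_t in enumerate(i_input):
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m   # leaky integration
        if v >= v_thresh:                               # threshold crossing
            spikes.append(step * dt)
            v = v_reset                                 # reset after the spike
    return spikes

# A constant 2 nA input for 200 ms produces a regular spike train.
current = np.full(200, 2e-9)
print(simulate_lif(current))
```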

  • Article · No Access

    A SIMPLE NEURON NETWORK BASED ON HEBB'S RULE

    A weighted mechanism in neural networks is studied. This paper focuses on the behavior of neurons in an area of the brain. Our model can reproduce the power-law behavior and finite-size effects of neural avalanches. The probability density functions (PDFs) of the neural avalanche size at different times (lattice sizes) have fat tails with a q-Gaussian shape and the same value of the parameter q in the thermodynamic limit. These two kinds of behavior show that our neural model reproduces self-organized critical behavior, and the robustness of the PDFs demonstrates the stability of that self-organized criticality. In addition, a scaling relation for the avalanche waiting time has been found.
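    For context, a minimal sketch of a plain Hebbian weight update (each weight increased in proportion to the co-activation of its two neurons) is shown below; the learning rate, network size, and random binary activity are illustrative assumptions and do not reproduce the paper's weighted avalanche mechanism.

```python
import numpy as np

# Plain Hebbian update on a small fully connected network:
# w_ij <- w_ij + eta * x_i * x_j for each presented activity pattern x.
rng = np.random.default_rng(0)
n_neurons = 8
eta = 0.01
weights = np.zeros((n_neurons, n_neurons))

for _ in range(100):
    x = rng.integers(0, 2, n_neurons).astype(float)   # random binary activity pattern
    weights += eta * np.outer(x, x)                    # strengthen co-active pairs
np.fill_diagonal(weights, 0.0)                         # no self-connections

print(weights.round(2))
```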