It has been suggested that co-contraction of antagonist motor units, perhaps due to abnormal disynaptic Ia reciprocal inhibition, is responsible for Parkinsonian rigidity. A neural model of Parkinson's disease bradykinesia is extended to incorporate the effects of spindle feedback on key cortical cells and to examine the effects of dopamine (DA) depletion on spinal activities. Simulation results show that although reciprocal inhibition is reduced in the DA-depleted case, this reduction does not lead to co-contraction of antagonist motor neurons. Implications for Parkinsonian rigidity are discussed.
Classification and sequence learning are capabilities that living beings use to extract complex information from the environment for behavioral control. The insect world is full of examples in which the presentation time of specific stimuli shapes the behavioral response. Building on previously developed neural models inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is presented here from the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed by resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons that models the insect Mushroom Bodies neuropil. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis with respect to parameter variation and noise is reported, and experiments on a roving robot demonstrate the capabilities of the architecture when used as a neural controller.
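The sequence-reconstruction capability described above can be sketched, very loosely, as a learned transition map replayed from a cue. This is an illustrative abstraction only: it omits the spiking and resonant machinery of the actual architecture, and all names and data here are hypothetical.

```python
# Loose sketch of sequence learning and reconstruction: store the
# first-order transitions of a training sequence, then replay them from a
# cue. Replaying from a mid-sequence cue traces a subsequence, echoing the
# behavior described in the abstract. Not the paper's spiking architecture.

def learn_transitions(sequence):
    """Map each element to its successor in the training sequence."""
    return {a: b for a, b in zip(sequence, sequence[1:])}

def replay(transitions, cue, length):
    """Reconstruct up to `length` elements starting from `cue`."""
    out = [cue]
    while len(out) < length and out[-1] in transitions:
        out.append(transitions[out[-1]])
    return out

trans = learn_transitions(["A", "B", "C", "D"])
print(replay(trans, "A", 4))  # full sequence: ['A', 'B', 'C', 'D']
print(replay(trans, "B", 3))  # traced subsequence: ['B', 'C', 'D']
```

Starting the replay from a later cue recovers only the tail of the sequence, which is the simplest analogue of tracing subsequences present in the input.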
Spiking Neural Networks (SNNs) help achieve brain-like efficiency and functionality by building neurons and synapses that mimic the human brain's transmission of electrical signals. However, optimal SNN implementation requires a precise balance of parametric values. To design such ubiquitous neural networks, a graphical tool for visualizing, analyzing, and explaining the internal behavior of spikes is crucial. Although some popular SNN simulators are available, these tools do not allow users to interact with the neural network during simulation. To this end, we have introduced the first runtime interactive simulator, called Runtime Analyzing and Visualization Simulator (RAVSim), developed to analyze and dynamically visualize the behavior of SNNs, allowing end-users to interact, observe output concentration reactions, and make changes directly during the simulation. In this paper, we present RAVSim with the current implementation of runtime interaction using the LIF neural model with different connectivity schemes, an image classification model using SNNs, and a dataset creation feature. Our main objective is to investigate binary classification using SNNs with RGB images. We created a feed-forward network using the LIF neural model for an image classification algorithm and evaluated it using RAVSim. The algorithm classifies faces with and without masks, achieving an accuracy of 91.8% using 1000 neurons in a hidden layer, an MSE of 0.0758, and an execution time of ∼10 min on the CPU. The experimental results show that using RAVSim not only increases network design speed but also accelerates user learning capability.
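The LIF neural model that RAVSim's runtime interaction is built around can be sketched in a few lines. The parameter values below (membrane time constant, resistance, thresholds) are illustrative assumptions, not those used in the paper or tool.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: Euler-integrate
#   dV/dt = (-(V - V_rest) + R_m * I) / tau
# and emit a spike whenever V crosses threshold, then reset.
# All parameter values are illustrative, not RAVSim's defaults.

def simulate_lif(current, t_max=0.1, dt=1e-4, tau=0.02,
                 v_rest=-65e-3, v_reset=-65e-3, v_thresh=-50e-3, r_m=1e7):
    v = v_rest
    spikes = []                                # spike times in seconds
    for step in range(int(t_max / dt)):
        v += dt * (-(v - v_rest) + r_m * current) / tau
        if v >= v_thresh:                      # threshold crossing
            spikes.append(step * dt)
            v = v_reset                        # reset after spike
    return spikes

# A suprathreshold constant drive produces regular spiking;
# zero input leaves the neuron silent.
print(len(simulate_lif(2e-9)) > 0)   # True
print(len(simulate_lif(0.0)))        # 0
```

With a 2 nA drive the steady-state depolarization (R_m · I = 20 mV) exceeds the 15 mV gap to threshold, so the neuron fires repeatedly; with no input it never leaves rest.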
A weighted mechanism in neural networks is studied. This paper focuses on the behavior of neurons in a brain area. Our model can regenerate the power-law behaviors and finite-size effects of neural avalanches. The probability density functions (PDFs) of neural avalanche size, computed at different lattice sizes, have fat tails with a q-Gaussian shape and the same value of the parameter q in the thermodynamic limit. These two kinds of behaviors show that our neural model reproduces self-organized critical behavior, and the robustness of the PDFs demonstrates the stability of the self-organized criticality. A scaling relation for the avalanche waiting times is also found.
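The q-Gaussian shape mentioned above is the Tsallis generalization of the Gaussian; a short sketch makes the fat-tail property concrete. The inverse-width parameter beta and the evaluation points are arbitrary illustrative choices, not the paper's fitted values.

```python
import math

# Unnormalized q-Gaussian density:
#   p_q(x) ∝ [1 + (q - 1) * beta * x**2] ** (-1 / (q - 1)),
# which recovers the ordinary Gaussian exp(-beta * x**2) as q -> 1.
# For q > 1 the tails decay as a power law, i.e. "fat tails".

def q_gaussian(x, q, beta=1.0):
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta * x * x)         # Gaussian limit
    base = 1.0 + (q - 1.0) * beta * x * x
    if base <= 0.0:                            # compact support when q < 1
        return 0.0
    return base ** (-1.0 / (q - 1.0))

# At x = 3 the q = 1.5 density is far heavier than the Gaussian tail:
print(q_gaussian(3.0, 1.5) > q_gaussian(3.0, 1.0))  # True
```

At x = 3 the q = 1.5 value is (1 + 0.5·9)^(-2) ≈ 0.033, versus exp(-9) ≈ 1.2e-4 for the Gaussian, which is why a single q can describe the fat-tailed avalanche-size PDFs across lattice sizes.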