We studied the dynamics of a neural network that has both recurrent excitatory and random inhibitory connections. Neurons became active when a relatively weak transient excitatory signal was presented, and this activity was sustained by the recurrent excitatory connections. The sustained activity stopped when a strong transient signal was presented or when neurons were disinhibited. The random inhibitory connections modulated the activity patterns of neurons so that the patterns evolved over time without recurring. Hence, the passage of time between the onsets of the two transient signals was represented by the sequence of activity patterns. We then applied this model to trace eyeblink conditioning, which is mediated by the hippocampus. We regarded the network as CA3 of the hippocampus and considered an output neuron corresponding to a neuron in CA1. The activity pattern of the output neuron resembled the experimentally observed activity of CA1 neurons during trace eyeblink conditioning.
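A minimal sketch of this kind of dynamics in Python/NumPy (not the authors' code): a weak transient pulse ignites activity, recurrent excitation sustains it, and the active pattern drifts step by step so that the pattern sequence encodes elapsed time. All parameter values, the k-winners-take-all firing rule, and the fatigue term are illustrative assumptions; in the paper the random inhibitory connections, not fatigue, drive the pattern drift.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, T = 200, 20, 30                    # network size, active cells, steps (assumed)

W = 1.5 * rng.random((N, N)) / N         # recurrent excitatory weights (assumed)
W -= rng.random((N, N)) / N              # random inhibitory connections (assumed)

x = np.zeros(N)                          # binary activity vector
adapt = np.zeros(N)                      # slow fatigue term: an assumption of this
                                         # sketch that keeps the pattern drifting
                                         # instead of settling into an attractor
history = []
for t in range(T):
    drive = W @ x - 0.05 * adapt
    if t == 0:
        drive += 0.1 * rng.random(N)     # relatively weak transient excitatory signal
    winners = np.argsort(drive)[-k:]     # k-winners-take-all firing (assumed rule)
    x = np.zeros(N)
    x[winners] = 1.0
    adapt = 0.8 * adapt + x
    history.append(x.copy())

# Overlap with the initial pattern decays step by step, so a downstream readout
# (e.g., a CA1-like output neuron) can decode elapsed time from the pattern.
print([int(history[0] @ h) for h in history])
```

Running this prints a monotonically shrinking overlap with the initial pattern, illustrating how a non-recurring sequence of activity patterns can serve as a clock between the two transient signals.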
Neurons are the fundamental units of the brain and nervous system. Developing good models of neurons is important not only to neurobiology but also to computer science and many other fields. The McCulloch and Pitts neuron model is the most widely used neuron model, but it has long been criticized as oversimplified with respect to the properties of real neurons and the computations they perform. Meanwhile, it has become widely accepted that dendrites play a key role in the overall computation performed by a neuron. However, modeling dendritic computation and assigning the right synapses to the right dendrites remain open problems in the field. Here, we propose a novel dendritic neural model (DNM) that captures the essence of the known nonlinear interactions among inputs to the dendrites. In the model, each input is connected to branches through a distance-dependent nonlinear synapse, and each branch performs a simple multiplication on its inputs. The soma then sums the weighted products from all branches and produces the neuron's output signal. We show that the DNM can account for the rich nonlinear dendritic response, the powerful nonlinear computational capability of neurons, and many known neurobiological phenomena of neurons and dendrites. Furthermore, we show that the model can learn and develop an internal structure appropriate for a particular task, such as the locations and types of synapses on the dendritic branches. We demonstrate this on a linearly nonseparable problem, a real-world benchmark problem (Glass classification), and the directional selectivity problem.
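A minimal sketch of the DNM forward pass as described above, in Python/NumPy: sigmoidal synapses feed dendritic branches, each branch multiplies its synaptic outputs, and the soma sums the branches. The sigmoid gain, the soma threshold, and the hand-set two-branch parameters (chosen here to solve XOR, an instance of the linearly nonseparable problem) are illustrative assumptions, not trained values from the paper.

```python
import numpy as np

K = 10.0                                         # sigmoid gain (assumed)

def synapse(x, w, theta):
    """Nonlinear sigmoidal synapse from an input to a dendritic branch.
    The sign of w and the value of theta determine whether the synapse
    acts excitatory, inhibitory, or effectively constant."""
    return 1.0 / (1.0 + np.exp(-K * (w * x - theta)))

def dnm(x, W, Theta):
    """x: inputs (n,); W, Theta: per-synapse parameters (branches, n).
    Each branch multiplies its synaptic outputs (nonlinear dendritic
    interaction); the soma sums the branches and applies a sigmoid."""
    branches = np.prod(synapse(x[None, :], W, Theta), axis=1)
    v = branches.sum()                           # summation at the soma
    return 1.0 / (1.0 + np.exp(-K * (v - 0.5)))  # soma threshold 0.5 (assumed)

# Hand-set parameters for XOR: branch 0 detects (x1 AND NOT x2),
# branch 1 detects (NOT x1 AND x2); their sum realizes XOR.
W = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])
Theta = np.array([[ 0.5, -0.5],
                  [-0.5,  0.5]])

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    y = dnm(np.array(x, dtype=float), W, Theta)
    print(x, "->", round(float(y), 3))           # high for (0,1)/(1,0), low otherwise
```

Because each branch computes a product of synaptic sigmoids, a single model neuron separates the XOR classes that no single McCulloch-Pitts unit can; in the paper these synaptic parameters are learned rather than hand-set.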