We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the concept of plasticity itself, so the nature of the updates is left unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities into a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity, or of real synaptic weights, does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
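To fix intuition, the following is a minimal sketch of the kind of model discussed above: a saturated-linear recurrent network whose weight matrix is overwritten between time steps by an arbitrary bi-valued plasticity schedule. The function names and the random choice of updates are illustrative assumptions, not the paper's construction.

```python
# Illustrative sketch only: a recurrent network whose weights are rewritten
# between steps by an (arbitrary, here random) bi-valued plasticity schedule.
# Names and parameters are hypothetical, not the paper's formalism.
import numpy as np

rng = np.random.default_rng(0)

def step(x, W):
    """One update of a saturated-linear recurrent network with weights W."""
    return np.clip(W @ x, 0.0, 1.0)

def bi_valued_update(shape, values=(0.0, 1.0)):
    """Plasticity restricted to two admissible synaptic values: every weight
    is set to one of the two values by an external update schedule."""
    return np.where(rng.integers(0, 2, size=shape) == 1, values[1], values[0])

n = 5
x = rng.random(n)
W = rng.random((n, n))
for t in range(3):
    x = step(x, W)                 # neural dynamics under the current weights
    W = bi_valued_update(W.shape)  # the architecture evolves between steps
print(x)
```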
Spiking neural P systems (SNP systems) are a class of distributed and parallel computation models inspired by the way neurons process information through spikes, where the integrate-and-fire behavior of neurons and the distribution of produced spikes are both achieved by spiking rules. In this work, a novel mechanism for separately describing the integrate-and-fire behavior of neurons and the distribution of produced spikes, together with a novel variant of SNP systems named evolution-communication SNP (ECSNP) systems, is proposed. More precisely, the integrate-and-fire behavior of neurons is achieved by spike-evolution rules, and the distribution of produced spikes is achieved by spike-communication rules. The computational power of ECSNP systems is then examined, and it is demonstrated that ECSNP systems are Turing universal as number-generating devices. Furthermore, the computational power of ECSNP systems in a restricted form, where the quantity of spikes in each neuron does not exceed some constant throughout a computation, is also investigated, and it is shown that such restricted ECSNP systems can characterize only the family of semilinear number sets. These results show that the capacity of neurons for information storage (i.e. the quantity of spikes) has a critical impact on the computational power that ECSNP systems can achieve.
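To make the two rule types concrete, here is a toy, deterministic sketch of a neuron with a spike-evolution rule and a spike-communication rule; the class and method names are hypothetical and do not follow the paper's formal ECSNP syntax.

```python
# Toy illustration of the two rule types, under simplifying assumptions
# (deterministic rule application, unit time steps); not the formal ECSNP syntax.
class Neuron:
    def __init__(self, spikes=0):
        self.spikes = spikes

    def evolve(self, consume, produce):
        """Spike-evolution rule: if at least `consume` spikes are present,
        consume them and produce `produce` spikes inside the neuron."""
        if self.spikes >= consume:
            self.spikes += produce - consume

    def communicate(self, targets, amount):
        """Spike-communication rule: send `amount` spikes to each target."""
        if self.spikes >= amount * len(targets):
            self.spikes -= amount * len(targets)
            for t in targets:
                t.spikes += amount

n1, n2 = Neuron(spikes=4), Neuron()
n1.evolve(consume=2, produce=3)   # integrate-and-fire behaviour of the neuron
n1.communicate([n2], amount=1)    # distribution of the produced spikes
print(n1.spikes, n2.spikes)       # 4 1
```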
Spiking neural P systems (abbreviated as SNP systems) are models of computation that mimic the behavior of biological neurons. Spiking neural P systems with communication on request (abbreviated as SNQP systems) are a recently developed class of SNP systems in which a neuron actively requests spikes from neighboring neurons instead of passively receiving them. It is already known that small SNQP systems with four unbounded neurons can achieve Turing universality; here, 'unbounded' means that the number of spikes in a neuron is not capped. This work investigates how the computation capability of SNQP systems depends on the number of unbounded neurons. Specifically, we prove that (1) SNQP systems composed entirely of bounded neurons characterize the family of finite sets of numbers; (2) SNQP systems containing two unbounded neurons can generate the family of semilinear sets of numbers; (3) SNQP systems containing three unbounded neurons can generate non-semilinear sets of numbers. Moreover, we show constructively that SNQP systems with two unbounded neurons can compute the operations of Boolean logic gates, i.e., the OR, AND, NOT, and XOR gates. These theoretical findings demonstrate that the number of unbounded neurons is a key parameter influencing the computation capability of SNQP systems.
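As a plain illustration of the 'communication on request' idea and of a spike-count encoding of Boolean values (not the paper's actual gate constructions), a request rule can be sketched as follows.

```python
# Minimal sketch: a neuron pulls spikes from its neighbours instead of
# receiving them passively, and the pulled spike count is read off as a
# Boolean value. Illustration of the encoding only, not the paper's proofs.
def request(requester, sources, amount_each):
    """Query rule: the requester takes `amount_each` spikes from each
    source that currently holds at least that many."""
    got = 0
    for s in sources:
        if s["spikes"] >= amount_each:
            s["spikes"] -= amount_each
            got += amount_each
    requester["spikes"] += got
    return got

# Encode inputs as spike counts (1 spike = True, 0 spikes = False).
a = {"spikes": 1}
b = {"spikes": 0}
out = {"spikes": 0}

pulled = request(out, [a, b], amount_each=1)
print("OR :", pulled >= 1)   # at least one source held a spike
print("AND:", pulled == 2)   # both sources held a spike
```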
In this paper the Bipolar Random Network is described, which constitutes an extension of the Random Neural Network model and exhibits autoassociative memory capabilities. This model is characterized by the existence of positive and negative nodes and by the symmetrical behavior of positive and negative signals circulating in the network. The network's ability to act as an autoassociative memory is examined, and several techniques are developed for the storage and reconstruction of patterns. These approaches are either based on properties of the network or constitute adaptations of existing neural network techniques. The performance of the network under the proposed schemes has been investigated through experiments showing very good storage and reconstruction capabilities. Moreover, the scheme exhibiting the best behavior appears to outperform other well-known associative neural network models, achieving capacities that exceed 0.5n, where n is the size of the network.
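For readers unfamiliar with the autoassociative setting, the following is a generic Hebbian storage-and-recall baseline (a standard Hopfield-style scheme, not one of the Bipolar Random Network techniques developed in the paper); it only illustrates what storage and reconstruction of patterns mean here.

```python
# Generic autoassociative-memory baseline for illustration only; the Bipolar
# Random Network's own storage schemes are not reproduced here.
import numpy as np

def store(patterns):
    """Hebbian outer-product storage of bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def reconstruct(W, probe, steps=10):
    """Iterative recall from a corrupted probe."""
    x = probe.astype(float).copy()
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(1)
P = rng.choice([-1, 1], size=(3, 32))               # three stored patterns, n = 32
W = store(P)
noisy = P[0] * rng.choice([1, 1, 1, -1], size=32)   # flip roughly 25% of the bits
print(np.array_equal(reconstruct(W, noisy), P[0]))  # does recall match the stored pattern?
```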
The ability of nonlinear dynamical systems to process incoming information is a key problem in many fundamental and applied sciences. Information processing by computation with attractors (steady states, limit cycles and strange attractors) has been the subject of many publications. In this paper, we discuss a new direction in information dynamics, based on neurophysiological experiments, that can be applied to the explanation and prediction of many phenomena in living biological systems and to the design of new paradigms in neural computation. This new concept is the Winnerless Competition (WLC) principle. The main point of this principle is the transformation of incoming identity or spatial inputs into identity-temporal output, based on the intrinsic switching dynamics of the neural system. In the presence of stimuli, the switching sequence, whose geometrical image in phase space is a heteroclinic contour, depends uniquely on the incoming information. The key problem in realizing the WLC principle is robustness against noise combined, simultaneously, with sensitivity of the switching to the incoming input. In this paper we prove two theorems about the stability of the sequential switching and give several examples of WLC networks that illustrate the coexistence of sensitivity and robustness.
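A standard way to illustrate the WLC principle numerically is a small generalized Lotka-Volterra network with asymmetric inhibition; the parameter values below are illustrative assumptions and are not taken from the paper.

```python
# Illustrative three-unit Lotka-Volterra-type WLC network (parameters chosen
# only to satisfy the usual asymmetric-inhibition conditions for a
# heteroclinic contour; they are not from the paper).
import numpy as np

sigma = np.array([1.0, 1.0, 1.0])       # growth rates (stimulus-dependent in general)
rho = np.array([[1.0, 2.0, 0.5],        # asymmetric inhibition: rho_ij != rho_ji
                [0.5, 1.0, 2.0],
                [2.0, 0.5, 1.0]])

def deriv(a):
    """da_i/dt = a_i * (sigma_i - sum_j rho_ij a_j)"""
    return a * (sigma - rho @ a)

a = np.array([0.6, 0.3, 0.1])
dt, T = 0.01, 20_000
trace = np.empty((T, 3))
for t in range(T):
    a = np.maximum(a + dt * deriv(a), 1e-12)   # Euler step, keep activities non-negative
    trace[t] = a
print(trace[::4000].round(3))   # snapshots: the dominant unit switches sequentially
```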
Recently, it has been shown experimentally [Mainen & Sejnowski, 1995] that, in contrast to the lack of precision in spike timing associated with flat (dc) stimuli, neocortical neurons of rats respond reliably to weak input fluctuations resembling synaptic activity. This has led these authors to suggest that, in spite of the high variability of interspike intervals found in cortical activity, the mechanism of spike generation in neocortical neurons has a low level of intrinsic noise. In this work we approach the problem of spike timing using the well-known FitzHugh–Nagumo (FHN) model of neuronal dynamics and find that here, too, fluctuating stimuli allow more reliable temporal coding than constant suprathreshold signals. This result is associated with the characteristics of a phenomenological stochastic bifurcation taking place in the noisy FHN model.
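Below is a minimal numerical sketch of the kind of comparison described above, using the standard FHN equations with illustrative parameter and noise values (an assumption, not the paper's protocol): spike times are collected across trials with independent intrinsic noise, driven either by a constant suprathreshold current or by a frozen fluctuating one.

```python
# Sketch of the reliability comparison: standard FHN equations, intrinsic
# noise on v, illustrative parameters (not taken from the paper).
import numpy as np

def fhn_spike_times(drive, xi, dt=0.01, eps=0.08, a=0.7, b=0.8, sig=0.1, thresh=1.0):
    """Euler-Maruyama integration of dv/dt = v - v^3/3 - w + I(t),
    dw/dt = eps*(v + a - b*w); returns upward threshold-crossing times."""
    v, w = -1.2, -0.6
    spikes, above = [], False
    for t in range(len(drive)):
        dv = v - v**3 / 3.0 - w + drive[t]
        dw = eps * (v + a - b * w)
        v = v + dt * dv + sig * np.sqrt(dt) * xi[t]
        w = w + dt * dw
        if v > thresh and not above:
            spikes.append(t * dt)
        above = v > thresh
    return spikes

rng = np.random.default_rng(2)
T, trials, k = 50_000, 10, 5
dc = np.full(T, 0.5)                                              # constant suprathreshold drive
fluct = 0.5 + np.repeat(0.3 * rng.standard_normal(T // 100), 100) # frozen fluctuating drive

def kth_spike_jitter(drive):
    """Spread of the k-th spike time across trials with independent intrinsic noise."""
    times = [fhn_spike_times(drive, rng.standard_normal(T))[k] for _ in range(trials)]
    return float(np.std(times))

print("constant drive   :", kth_spike_jitter(dc))
print("fluctuating drive:", kth_spike_jitter(fluct))
```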
Causal reasoning is a hard task that cognitive agents perform reliably and quickly. A particular class of causal reasoning that raises several difficulties is the cancellation class. Cancellation occurs when a set of causes (hypotheses) cancel each other's explanation with respect to a given effect (observation). For example, a cloudy sky may suggest rainy weather, whereas a shiny sky may suggest the absence of rain. In this work we extend a recent neural model to handle cancellation interactions. Simulation results are very satisfactory and should encourage further research.
Age-related macular degeneration and retinitis pigmentosa are two of the most common diseases that cause degeneration of the outer retina, which can lead to visual impairments up to and including blindness. Vision restoration is an important goal for which several different research approaches are currently being pursued. We are concerned with restoration via retinal prosthetic devices. Prostheses can be implanted intraocularly or extraocularly, which leads to different categories of devices. Cortical Prostheses and Optic Nerve Prostheses are examples of extraocular solutions, while Epiretinal Prostheses and Subretinal Prostheses are examples of intraocular solutions. Some prostheses that have been successfully implanted and tested in animals as well as humans can restore basic visual function but still have limitations. This paper gives an overview of the current state of the art of Retinal Prostheses and compares the advantages and limitations of each type. The purpose of this review is thus to summarize the current technologies and approaches used in developing Retinal Prostheses and thereby lay a foundation for future designs and research directions.