We investigate the memory structure and retrieval of the brain and propose a hybrid neural network of addressable and content-addressable memory, which is a special database model that can memorize and retrieve any piece of information (a binary pattern) both addressably and content-addressably. The architecture of this hybrid neural network is hierarchical and takes the form of a tree of slabs, each consisting of binary neurons arranged in the same array. Simplex memory neural networks are considered as the slabs of basic memory units, distributed on the terminal vertices of the tree. Theoretical analysis shows that the hybrid neural network can be constructed with Hebbian and competitive learning rules, and that some other important characteristics of its learning and memory behavior are also consistent with those of the brain. Moreover, we demonstrate the hybrid neural network on a set of ten binary numeral patterns.
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed ∞OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
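As background for the Hebb/Oja family of rules named above, the following is a minimal sketch of Oja's classical single-unit rule for extracting the first principal component, which the MHO algorithm modifies; the data, learning rate, and dimensions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Zero-mean synthetic data whose dominant variance lies along the first axis
X = rng.normal(size=(10000, 3)) * np.array([3.0, 1.0, 0.5])

w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.005  # assumed learning rate

for x in X:
    y = w @ x                    # linear neuron output
    w += eta * y * (x - y * w)   # Hebbian term y*x, stabilized by Oja's decay y^2 * w

# w should end up approximately unit length, aligned with the dominant axis
```

The update is local in the sense emphasized by the abstract: each weight change needs only the presynaptic input, the postsynaptic output, and the weight itself.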
The well-known Cohen-Grossberg network is modified to include second-order neural interconnections and a learning component. Sufficient conditions are obtained for the existence of a globally exponentially stable equilibrium. The model provides a two-fold generalization of the Cohen-Grossberg network in the sense that if one removes the learning component, one gets a network with second-order synaptic interactions; if both the learning component and the second-order interactions are removed, the model reduces to the standard Cohen-Grossberg network.
We derive coupled on-line learning rules for the singular value decomposition (SVD) of a cross-covariance matrix. In coupled SVD rules, the singular value is estimated alongside the singular vectors, and the effective learning rates for the singular vector rules are influenced by the singular value estimates. In addition, we use a first-order approximation of Gram-Schmidt orthonormalization as decorrelation method for the estimation of multiple singular vectors and singular values. Experiments on synthetic data show that coupled learning rules converge faster than Hebbian learning rules and that the first-order approximation of Gram-Schmidt orthonormalization produces more precise estimates and better orthonormality than the standard deflation method.
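For context, the following sketches the classical cross-coupled Hebbian baseline that coupled SVD rules are compared against: an online rule that drives a pair of vectors (u, v) toward the dominant singular pair of the cross-covariance matrix of two data streams. The data model, learning rate, and iteration count are assumptions for illustration, not the paper's coupled rules.

```python
import numpy as np

rng = np.random.default_rng(1)
# Paired streams x (R^3) and y (R^2); with white x, E[x y^T] = M^T,
# whose dominant singular pair is (e1, e1) with singular value 3
M = np.array([[3.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
u = rng.normal(size=3); u /= np.linalg.norm(u)
v = rng.normal(size=2); v /= np.linalg.norm(v)
eta = 0.002  # assumed learning rate

for _ in range(20000):
    x = rng.normal(size=3)
    yv = M @ x + 0.1 * rng.normal(size=2)
    a, b = u @ x, v @ yv
    u += eta * b * (x - a * u)   # each vector's update is gated by the partner's output
    v += eta * a * (yv - b * v)

# u and v should align with the dominant left/right singular vectors (e1, e1)
```

The coupled rules of the abstract additionally estimate the singular value online and use it to shape the effective learning rates, which is what speeds up convergence relative to this baseline.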
In this research, a novel family of learning rules called Beta Hebbian Learning (BHL) is thoroughly investigated as a way to extract information from high-dimensional datasets by projecting the data onto low-dimensional (typically two-dimensional) subspaces, improving on existing exploratory methods by providing a clear representation of the data's internal structure. BHL applies a family of learning rules derived from the Probability Density Function (PDF) of the residual, based on the beta distribution. This family of rules may be called Hebbian in that all of them use a simple multiplication of the output of the neural network by some function of the residuals after feedback. The derived learning rules can be linked to an adaptive form of Exploratory Projection Pursuit, and on artificial distributions the networks perform as the theory suggests they should: the use of different learning rules derived from different PDFs allows the identification of “interesting” dimensions (as far from the Gaussian distribution as possible) in high-dimensional datasets. This novel algorithm, BHL, has been tested on seven artificial datasets to study the behavior of its parameters, and was later applied successfully to four real datasets, comparing its results, in terms of performance, with other well-known exploratory and projection models such as Maximum Likelihood Hebbian Learning (MLHL), Locally Linear Embedding (LLE), Curvilinear Component Analysis (CCA), Isomap and Neural Principal Component Analysis (Neural PCA).
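To make the "output times a function of the residual after feedback" structure concrete, here is a sketch of one comparison model the abstract names, Maximum Likelihood Hebbian Learning; the residual function sign(e)|e|^(p-1) shown here is MLHL's choice, and BHL replaces it with a function derived from the beta PDF. The learning rate, exponent, and data are illustrative assumptions.

```python
import numpy as np

def mlhl_step(W, x, eta=0.01, p=1.5):
    """One MLHL update (illustrative sketch).

    y = W x is the projection, e = x - W^T y is the residual after feedback;
    the weight change is the output y times a function of the residual.
    """
    y = W @ x
    e = x - W.T @ y
    W += eta * np.outer(y, np.sign(e) * np.abs(e) ** (p - 1))
    return W

rng = np.random.default_rng(2)
W = 0.1 * rng.normal(size=(2, 5))   # project 5-D data onto a 2-D subspace
for x in rng.normal(size=(1000, 5)):
    W = mlhl_step(W, x)
```

Different choices of the residual function correspond to different assumed residual PDFs, which is how the family seeks projections that are as non-Gaussian ("interesting") as possible.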
This paper describes the Bipolar Random Network, an extension of the Random Neural Network model that exhibits autoassociative memory capabilities. This model is characterized by the existence of positive and negative nodes and the symmetrical behavior of positive and negative signals circulating in the network. The network's ability to act as an autoassociative memory is examined, and several techniques are developed for the storage and reconstruction of patterns. These approaches are either based on properties of the network or constitute adaptations of existing neural network techniques. The performance of the network under the proposed schemes has been investigated through experiments showing very good storage and reconstruction capabilities. Moreover, the scheme exhibiting the best behavior seems to outperform other well-known associative neural network models, achieving capacities that exceed 0.5n, where n is the size of the network.
We study a model of learning and recall of phase patterns in a system of two or three coupled BVP oscillators with a time delay δ. The coupling strengths are modulated by the Hebbian learning rule. Assuming a first-order approximation, we calculate the optimal condition on δ for exact recall by applying phase dynamics theory. When α = 0, where α represents the coupling from activator to inhibitor, the correlation between the learned and the retrieved phase patterns depends on δ. When α = 1, exact recall is achieved independently of δ. The results can be explained by phase dynamics theory.
An example of homeostasis is temperature regulation at a desired level; this physiological process preserves a stable biological environment. A control-theory-based model lets a biomedical engineer understand the complex operation of thermoregulation by converting general information into knowledge, and can be integrated to see how systemic parameters influence the entire system. How thermal inputs are organized in the hypothalamus to activate thermoregulatory responses to heat and cold stimuli, and how the widely accepted set-point hypothesis for the regulation of body temperature holds up from a control systems point of view, are, however, not entirely known. There are circumstances (e.g. fever) in which the presumed set-point mechanism appears to break down. This paper evaluates a novel set-level adaptive optimal thermal control paradigm inspired by Hebbian covariance synaptic adaptation, previously proposed on the basis of its ability to predict the homeostatic respiratory system. It introduces a Hebbian feedback covariance learning (HFCL) concept in order to bring a neuronal network into the analysis of the thermoregulation system. Hebbian theory is concerned with how neurons connect among themselves to form engrams. The passive-active mathematical model for simulating human thermoregulation during exercise was compared in cool, warm, and hot environments, and then translated into MATLAB to predict thermoregulation. The two-node core-and-shell model predictions are comparable with observed thermoregulation responses from the existing literature. The thermoregulation changes with the proportionality constant and the sensitivity of the receptors. Reasonably general agreement with the measured mean group data of earlier laboratory exercise studies was obtained for peak temperature, although the model tended to overpredict core body temperature.
One impact of the introduction of television, according to widely held views, is an undermining of traditional values and social organization. In this study, we simulate this process by representing social communication as a Random Boolean Network in which the individuals are nodes and each node's state represents an opinion (yes/no) about some issue. Television is modelled as having a direct link to every node in the network. Two scenarios were considered. First, we found that, except in the most well-connected networks, television rapidly breaks down cohesion (agreement in opinion). Second, the introduction of Hebbian learning leads to a polarizing effect: one subgroup strongly retains the original opinion, while a splinter group adopts the contrary opinion. The system displays criticality with respect to connectivity and the level of exposure to television. More generally, the results suggest that patterns of communication in networks can help to explain a wide variety of social phenomena.
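A minimal version of the setup above can be sketched as follows; the abstract does not specify the node update function, so the majority-style rule, network size, connectivity, and television weight here are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 100, 4            # nodes and random neighbors per node (assumed)
tv_weight = 2.0          # strength of the television link (assumed)
tv_opinion = 1           # the opinion television broadcasts to every node

neighbors = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
state = rng.integers(0, 2, size=N)   # initial yes/no opinions

for _ in range(50):
    # each node leans toward the majority of its neighbors plus the TV signal
    influence = state[neighbors].sum(axis=1) + tv_weight * tv_opinion
    state = (influence > (K + tv_weight) / 2).astype(int)

cohesion = max(state.mean(), 1 - state.mean())   # fraction holding the majority opinion
```

Sweeping K and tv_weight in such a model is how one would probe the criticality with respect to connectivity and exposure that the study reports.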
We study an adaptive controller that adjusts its internal parameters by self-organization of its interaction with the environment. We show that the parameter changes that occur in this low-level learning process can themselves provide a source of information to a higher-level context-sensitive learning mechanism. In this way, the context is interpreted in terms of the concurrent low-level learning mechanism. The dual learning architecture is studied in realistic simulations of a foraging robot and of a humanoid hand that manipulates an object. Both systems are driven by the same low-level scheme but use the second-order information in different ways. While the low-level adaptation continues to follow a set of rigid learning rules, the second-order learning modulates the elementary behaviors and affects the distribution of the sensory inputs via the environment.
Adaptability is one of the main characteristics of bio-inspired control units for anthropomorphic robotic hands. This characteristic provides artificial hands with the ability to learn new motions and to improve the accuracy of known ones. This paper presents a method to train spiking neural networks (SNNs) to control anthropomorphic fingers using proprioceptive sensors and Hebbian learning. Inspired by physical guidance (PG), the proposed method eliminates the need for complex processing of natural hand motions. To validate the proposed concept, we implemented an electronic SNN that learns to control two opposing fingers, actuated by shape memory alloys, using the output of neuromorphic flexion and force sensors. Learning occurs when the untrained neural paths triggered by a command signal activate concurrently with the sensor-specific neural paths that accompany the motion detected by the flexion sensors. The results show that an SNN with a few neurons connects, by synaptic potentiation, the input neurons activated by the command signal to the output neurons activated during the passive finger motions. This mechanism is validated for grasping: the SNN is trained to flex the index finger and thumb simultaneously when a push button is pressed. The proposed concept is suitable for implementing the neural control units of anthropomorphic robots that are able to learn motions by PG, given a proper sensor configuration.
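The core learning event described above, potentiation when the command path and the sensor path fire together during guided motion, can be sketched with a simple coincidence-based Hebbian rule; the spike windows, learning rate, and soft bound below are illustrative assumptions, not the implemented electronic SNN.

```python
import numpy as np

# Binary spike trains over one training episode: the command neuron fires
# while the finger is moved passively, so the flexion-sensor path fires too.
steps = 200
command = np.zeros(steps, dtype=int)
sensor = np.zeros(steps, dtype=int)
command[50:150] = 1      # command signal active (assumed window)
sensor[50:150] = 1       # passive motion detected during the same window

w = 0.0                  # command -> motor-output synapse
eta, w_max = 0.05, 1.0   # assumed learning rate and soft bound
for t in range(steps):
    if command[t] and sensor[t]:     # co-activation: Hebbian potentiation
        w += eta * (w_max - w)       # soft-bounded weight growth

# once w exceeds the motor neuron's firing threshold, the command alone
# can drive the flexion that was previously produced by physical guidance
```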
Associative learning plays a major role in the formation of the internal dynamic engine of an adaptive system or a cognitive robot. Interaction with the environment can provide a sparse and discrete set of sample correlations of input-output incidences. These incidences of associative data points can provide useful hints for capturing the underlying mechanisms that govern the system's behavioral dynamics. In many approaches to this problem of learning a system's input-output relation, a set of previously prepared data points must be presented to the learning mechanism as training data before useful estimations can be obtained. Moreover, data coding is usually based on symbolic or non-implicit representation schemes. In this paper, we propose an incremental learning mechanism that can bootstrap from a state of complete ignorance of any representative sample associations. In addition, the proposed system provides a novel mechanism for representing data in a nonlinear manner through the fusion of self-organizing maps and Gaussian receptive fields. Our architecture is based solely on cortically inspired techniques of coding and learning: Hebbian plasticity and adaptive populations of neural circuitry for stimulus representation.
We define a neural network that captures the components of the problem's data space using an emergent arrangement of receptive-field neurons that self-organize incrementally in response to sparse experiences of system-environment interactions. These learned components are correlated using a process of Hebbian plasticity that relates major components of the input space to those of the output space. The viability of the proposed mechanism is demonstrated through multiple experimental setups drawn from real-world regression and robotic-arm sensory-motor learning problems.
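The combination described in these two abstracts, Gaussian receptive-field population coding on each space plus Hebbian correlation between the two populations, can be sketched as below; for brevity, fixed grids of receptive-field centers stand in for the incrementally self-organized maps, and the target relation y = 2x, grid sizes, and widths are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def rf_activation(centers, x, sigma=0.3):
    """Normalized Gaussian receptive-field population code for a point x."""
    d = np.linalg.norm(centers - np.asarray(x), axis=1)
    a = np.exp(-d**2 / (2 * sigma**2))
    return a / a.sum()

in_centers = np.linspace(0, 1, 10).reshape(-1, 1)    # input-space units
out_centers = np.linspace(0, 2, 10).reshape(-1, 1)   # output-space units

H = np.zeros((10, 10))                 # Hebbian association matrix
for _ in range(500):                   # sparse sampled incidences of y = 2x
    x = rng.uniform(0, 1)
    a_in = rf_activation(in_centers, [x])
    a_out = rf_activation(out_centers, [2 * x])
    H += np.outer(a_out, a_in)         # co-activation strengthens the link

def recall(x):
    a_in = rf_activation(in_centers, [x])
    a_out = H @ a_in
    return float(out_centers.ravel() @ (a_out / a_out.sum()))   # population decoding
```

Recall decodes the output population activity back to a value, so recall(0.5) should land near 1.0 for this assumed relation.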
This paper addresses the influence of acute severe stress or extreme emotion using a Network-Oriented modeling methodology. An adaptive temporal-causal network model is an approach for addressing complex phenomena that cannot be, or can hardly be, examined in a real-world experiment. In the first phase, the suppression of existing network connections as a consequence of acute stress is modeled; in the second phase, the suppression is relaxed after some time, and new learning of decision making in the presence of stress begins.
In this paper, the challenge for dynamic network modeling of how the emerging behavior of an adaptive network can be related to characteristics of the adaptive network's structure is addressed. By applying network reification, the adaptation structure is modeled in a declarative manner as a subnetwork of a reified network extending the base network. This construction can be used to model and analyze any adaptive network in a neat and declarative manner, where the adaptation principles are described by declarative mathematical relations and functions in reified temporal-causal network format. In different examples, it is shown how certain adaptation principles known from the literature can easily be formulated in such a declarative reified temporal-causal network format. The paper's main focus, how emerging adaptive network behavior relates to network structure, is addressed, among other means, by a number of theorems of the form “properties of reified network structure characteristics imply emerging adaptive behavior properties”. In such theorems, classes of networks are considered that satisfy certain network structure properties concerning connectivity and aggregation characteristics. Results include, for example, that under some conditions on the network structure characteristics, all states eventually get the same value. Similar analysis methods are applied to reification states, in particular for the adaptation principles for Hebbian learning and for bonding by homophily. Here results include how certain properties of the aggregation characteristics of the network structure of the reified network for Hebbian learning entail behavioral properties relating to the maximal final values of the adaptive connection weights.
Similarly, results are discussed on how properties of the aggregation characteristics of the reified network structure for bonding by homophily entail behavioral properties relating to clustering and community formation in a social network.
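The "maximal final value" result for Hebbian learning can be illustrated with a commonly used Hebbian adaptation principle with persistence factor mu from the temporal-causal network literature; the specific combination function and parameter values below are assumptions for illustration, not necessarily the exact ones of this paper.

```python
# Hebbian adaptation with persistence factor mu (illustrative sketch):
#   dW/dt = eta * ( V1*V2*(1 - W) - (1 - mu)*W )
# where V1, V2 are the activation values of the connected states.
# Setting the derivative to zero gives the equilibrium (maximal final) value:
#   W* = V1*V2 / (V1*V2 + 1 - mu)
eta, mu, dt = 0.5, 0.9, 0.1   # assumed speed, persistence, and step size
V1 = V2 = 1.0                 # both connected states fully activated
W = 0.0
for _ in range(2000):
    W += eta * (V1 * V2 * (1 - W) - (1 - mu) * W) * dt

W_star = V1 * V2 / (V1 * V2 + 1 - mu)   # closed-form equilibrium, about 0.909 here
```

This shows the flavor of the theorems: a property of the aggregation characteristics (here, the persistence mu) directly bounds the final value the adaptive connection weight can reach.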
Hebbian learning of the synaptic weight W_ij between the i-th and the j-th neurons lies at the heart of Artificial Neural Networks (ANN), whose collective dynamics is re-derived herewith in terms of the ANN's effect upon the single net input of the i-th neuron, u_i(t_{n+1}) = K(t_n) σ(u_i(t_n)) − θ_i, where we have defined u_i(t_n) = Σ_j W_ij(t_n) v_j(t_n), with threshold θ_i. This single-neuron dynamics is similar to the classical technique used in solving the many-body problem with a single-body Green's function. However, the proposed model is nonlinear, exact and separable because of the product nature of the Hebbian law W_ij ≈ v_i v_j, where v_i is the output firing rate, v_i = σ(u_i), determined by the net input u_i and a nonlinear sigmoidal function. Moreover, the derived equation is identical to an iterative map generalizing the population logistic map. In the case of an ANN, however, every neuron map has a different slope function, K(t_n) = Σ_{j≠i} σ(u_j(t_n)) v_j(t_n), reflecting the dynamic learning process. Such an ANN Green's function provides the exact diagnosis needed, e.g. checking for contrast reversal, limit cycles, chaos etc., by changing the (habituation) threshold value θ_i, when the N-shaped sigmoidal function [1,2,6] is used to generate a chaotic ANN (CNN). We postulate that such a chaotic set of innumerable realizations forms a fuzzy membership function (FMF), exemplified by fuzzy feature maps of eyes, nose, etc., for invariant face classification. In addition, CNN has an interesting technology application: the image compression block artifact can be quickly softened with a negative threshold value, viewed as a reward for higher firing rates.
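Since the abstract states that the derived equation generalizes the population logistic map, the following sketch iterates that special case to show the two regimes it exploits, a stable fixed point versus chaos; the parameter values are the standard textbook ones, not taken from the paper.

```python
def logistic(r, x0, n):
    """Iterate the population logistic map x -> r x (1 - x) for n steps."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# r = 2.8: a stable fixed point x* = 1 - 1/r, analogous to a quiescent neuron map
x_stable = logistic(2.8, 0.2, 500)

# r = 3.9: chaotic regime; trajectories stay bounded in [0, 1] but nearby
# initial conditions diverge, the behavior used to build the chaotic ANN (CNN)
a = logistic(3.9, 0.20000, 100)
b = logistic(3.9, 0.20001, 100)
```

In the ANN case each neuron's map carries its own slope function K(t_n), so the network is a coupled family of such maps rather than a single one.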
Spike-timing-dependent plasticity (STDP) is a learning algorithm that is simple, biologically plausible, and powerful. Hence, one would expect STDP (likely in combination with other learning algorithms) to be a key component in cortical models of higher cognitive functions, such as language comprehension or production. Such models would need to involve multiple brain areas and bidirectional links between representations in those different areas. However, STDP is an asymmetrical learning algorithm (in contrast to classical Hebbian learning, which is symmetrical). This makes the acquisition of bilateral connections between two neurons almost impossible and bilateral connections between representations very challenging. Here, we propose a solution based on specific connectivity patterns. Then, using numerical simulations, we show that our approach allows STDP to create strong bidirectional links between representations. Finally, we compare our architecture to neuroanatomical data.
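The asymmetry at issue can be made concrete with the standard exponential STDP pair-based window; the amplitudes and time constant below are typical illustrative values, not the paper's.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# The same physical event, neuron A firing 5 ms before neuron B, strengthens
# the A->B synapse but weakens B->A, since B->A sees post-before-pre timing:
dw_forward = stdp_dw(5.0)    # positive: A->B potentiation
dw_backward = stdp_dw(-5.0)  # negative: B->A depression
```

This is exactly why naive STDP erodes one direction of any would-be bidirectional link, and why the connectivity patterns proposed in the paper are needed.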