Recent experimental studies of hetero-synaptic interactions in various systems have revealed a role for signaling in plasticity, challenging the conventional understanding of Hebb's rule. It has also been found that activity plays a major role in plasticity, with neurotrophins acting as molecular signals that translate activity into structural changes. Furthermore, a role for synaptic efficacy in biasing the outcome of competition has recently been revealed. Motivated by these experimental findings, we present a model for the development of simple-cell receptive field structure based on competitive hetero-synaptic interactions for neurotrophins combined with cooperative hetero-synaptic interactions in the spatial domain. We find that, with a proper balance between competition and cooperation, the inputs from the two populations (ON/OFF) of LGN cells segregate starting from a homogeneous state, yielding segregated ON and OFF subregions in the simple-cell receptive field. Our modeling study supports the experimental findings, suggesting roles for synaptic efficacy and for spatial signaling. With this model we obtain simple-cell RFs even for positively correlated activity of ON and OFF cells. We also compare different mechanisms for computing the response of a cortical cell and study their possible role in the sharpening of orientation selectivity. We find that the degree of selectivity improvement in individual cells varies from case to case, depending on the RF structure and the type of sharpening mechanism.
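For concreteness, the following minimal sketch illustrates the kind of plasticity rule this abstract describes: a Hebbian term drives synaptic growth, a Gaussian kernel over retinotopic positions implements spatial cooperation, and divisive normalization implements competition for a limited, neurotrophin-like resource. All function and parameter names are illustrative; this is an abstraction of the model class, not the paper's exact equations.

```python
import numpy as np

def update_weights(w, pre, post, positions, sigma=1.0, lr=0.01):
    """One plasticity step combining Hebbian growth, spatial cooperation,
    and competitive (divisive) normalization over all synapses onto one cell.

    w         : (N,) weights of ON/OFF LGN inputs onto a single cortical cell
    pre       : (N,) presynaptic activities
    post      : scalar postsynaptic activity
    positions : (N, 2) retinotopic positions of the inputs
    """
    hebb = pre * post                                        # correlation-driven growth
    # Cooperation: spread the Hebbian signal to spatially neighboring synapses.
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    kernel = np.exp(-d2 / (2 * sigma ** 2))
    w = np.clip(w + lr * (kernel @ hebb), 0.0, None)
    # Competition: the total synaptic resource (neurotrophin-like) is conserved.
    return w / (w.sum() + 1e-12)
```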
In biological systems, proprioceptive signals are acquired through distributed receptive fields rather than through dedicated encoders at each joint. In robotics, a single, accurate sensor output per link (the encoder) is commonly used to track position and velocity. Interfacing bio-inspired control systems based on spiking neural networks that emulate the cerebellum with conventional robots is not a straightforward task. It is therefore necessary to translate this one-dimensional measure (the encoder output) into a multidimensional space (the inputs of a spiking neural network), i.e., from an analog space into a distributed population code expressed in spikes, in order to connect, for instance, a spiking cerebellar architecture. This paper analyzes how evolved receptive fields (optimized for information transmission) can efficiently generate a sensorimotor representation that facilitates its discrimination from other "sensorimotor states". This can be seen as an abstraction of the functionality of the Cuneate Nucleus (CN) in a robot-arm scenario. We model the CN as a spiking neuron population that codes in time according to the response of mechanoreceptors during a multi-joint movement in the robot's joint space. We propose an encoding scheme that takes into account the relative spike timing of the signals propagating from peripheral nerve fibers to second-order somatosensory neurons. Because of the enormous number of possible encodings, we apply an evolutionary algorithm to evolve the sensory receptive field representation from a random to an optimized encoding. Following the nature-inspired analogy, evolved configurations have been shown to outperform simple hand-tuned configurations and other homogenized configurations based on the solution provided by the optimization engine (the evolutionary algorithm). We use artificial evolutionary engines as the optimization tool to cope with the nonlinear responses of the receptive fields.
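A common way to realize such a translation is latency coding through overlapping Gaussian receptive fields: each neuron covers part of the joint range, and stronger activation yields an earlier spike. The sketch below illustrates this scheme for a single joint angle; the function and parameter names are assumptions for illustration, not the paper's evolved configuration.

```python
import numpy as np

def encode_angle(theta, n_neurons=8, theta_min=-np.pi, theta_max=np.pi, t_max=20.0):
    """Encode a joint angle into relative spike times via Gaussian receptive fields.

    Each neuron has a preferred angle; the closer the stimulus is to that angle,
    the stronger its activation and the earlier it fires (latency coding).
    Returns spike times in milliseconds (np.inf means "no spike").
    """
    centers = np.linspace(theta_min, theta_max, n_neurons)
    sigma = (theta_max - theta_min) / (n_neurons - 1)         # receptive field width
    activation = np.exp(-0.5 * ((theta - centers) / sigma) ** 2)
    spike_times = t_max * (1.0 - activation)                   # strong activation -> early spike
    spike_times[activation < 0.05] = np.inf                    # weakly driven neurons stay silent
    return spike_times

# Example: encode a 0.3 rad joint angle into 8 spike latencies
print(encode_angle(0.3))
```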
Recently, deep convolutional neural networks have led to noticeable improvements in image classification and have been used to transfer the artistic style of images. Gatys et al. proposed using a learned Convolutional Neural Network (CNN) architecture, VGG, to transfer image style, but the backpropagation process suffers from a heavy computational load. This paper addresses these problems by simplifying the computation of chains of derivatives, accelerating the computation of adjustments, and efficiently choosing weights for the different energy functions. The experimental results show that the proposed solutions improve computational efficiency and make the adjustment of the weights for the energy functions easier.
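For reference, the energy functions and weights in question are the standard Gatys-style content and style losses. The sketch below shows how the weighted total energy is typically assembled with a pretrained VGG-19; the layer indices and the weights alpha/beta are illustrative choices, and the paper's acceleration techniques are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER, STYLE_LAYERS = 21, [0, 5, 10, 19, 28]   # commonly used vgg.features indices

def extract(x):
    """Collect the content feature map and the style feature maps in one pass."""
    content, styles = None, []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == CONTENT_LAYER:
            content = x
        if i in STYLE_LAYERS:
            styles.append(x)
    return content, styles

def gram(f):
    """Normalized Gram matrix of a feature map (assumes batch size 1)."""
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def total_loss(generated, content_img, style_img, alpha=1.0, beta=1e3):
    """Weighted sum of the content and style energies; alpha and beta are the
    energy-function weights whose tuning the abstract refers to."""
    gc, gs = extract(generated)
    cc, _ = extract(content_img)
    _, ss = extract(style_img)
    content_loss = F.mse_loss(gc, cc)
    style_loss = sum(F.mse_loss(gram(a), gram(b)) for a, b in zip(gs, ss))
    return alpha * content_loss + beta * style_loss
```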
In modern convolutional neural network (CNN)-based object detectors, the extracted features are not well suited for multi-scale detection, and all bounding boxes are simply ranked by their classification scores in non-maximum suppression (NMS). To address these problems, we propose a novel one-stage detector named receptive field fusion RetinaNet. First, a receptive field fusion module is proposed to extract richer multi-scale features by fusing feature maps with various receptive fields. Second, joint-confidence-guided NMS is proposed to optimize the post-processing stage of object detection; it introduces a localization confidence into NMS and uses the joint confidence as the ranking basis. According to our experimental results, a significant improvement in mean average precision (mAP) can be achieved on average compared with state-of-the-art algorithms.
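A minimal sketch of joint-confidence-guided NMS is given below, assuming the joint confidence is formed as the product of classification and localization confidences (the exact combination rule is not specified in the abstract, so this choice is an assumption).

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def joint_confidence_nms(boxes, cls_scores, loc_scores, iou_thr=0.5):
    """Rank boxes by the joint confidence (classification x localization)
    instead of the classification score alone, then apply greedy NMS."""
    joint = cls_scores * loc_scores            # joint confidence used as the ranking basis
    order = np.argsort(-joint)
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps <= iou_thr]
    return keep
```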
The brain processes information about the environment via neural codes. The neural ideal was introduced recently as an algebraic object that can be used to better understand the combinatorial structure of neural codes. Every neural ideal has a particular generating set, called the canonical form, that directly encodes a minimal description of the receptive field structure intrinsic to the neural code. On the other hand, for a given monomial order, any polynomial ideal is also generated by its unique (reduced) Gröbner basis with respect to that monomial order. How are these two types of generating sets — canonical forms and Gröbner bases — related? Our main result states that if the canonical form of a neural ideal is a Gröbner basis, then it is the universal Gröbner basis (that is, the union of all reduced Gröbner bases). Furthermore, we prove that this situation — when the canonical form is a Gröbner basis — occurs precisely when the universal Gröbner basis contains only pseudo-monomials (certain generalizations of monomials). Our results motivate two questions: (1) When is the canonical form a Gröbner basis? (2) When the universal Gröbner basis of a neural ideal is not a canonical form, what can the non-pseudo-monomial elements in the basis tell us about the receptive fields of the code? We give partial answers to both questions. Along the way, we develop a representation of pseudo-monomials as hypercubes in a Boolean lattice.
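To make the central objects concrete, the sketch below spells out what a pseudo-monomial is and the vanishing test that places it in the neural ideal of a code; it follows the standard setup from the neural ideal literature rather than the paper's Gröbner basis computations, and the example code is a toy placeholder.

```python
def pseudo_monomial(sigma, tau):
    """Return the evaluation map of the pseudo-monomial
    prod_{i in sigma} x_i * prod_{j in tau} (1 - x_j),
    for disjoint index sets sigma and tau."""
    assert set(sigma).isdisjoint(tau)
    return lambda v: all(v[i] == 1 for i in sigma) and all(v[j] == 0 for j in tau)

def vanishes_on_code(f, code):
    """A pseudo-monomial encodes a receptive field relation of the code
    exactly when it evaluates to zero on every codeword."""
    return all(not f(v) for v in code)

# Toy code on 3 neurons: neuron 1 never fires without neuron 0,
# so x_1(1 - x_0) vanishes on the code (receptive field of cell 1 inside that of cell 0).
C = [(0, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
f = pseudo_monomial(sigma={1}, tau={0})
print(vanishes_on_code(f, C))   # True
```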
We propose that the Magno (M)-channel filter, belonging to the extended classical receptive field (ECRF) model, provides "vision at a glance" by performing smoothing with edge preservation. We compare the performance of the M-channel filter with the well-known bilateral filter in achieving such "vision at a glance", which is akin to image preprocessing in the computer vision domain. We find that, at higher noise levels, the M-channel filter performs better than the bilateral filter in reducing noise while preserving edge details. The M-channel filter is also significantly simpler, and therefore faster, than the bilateral filter. Overall, the M-channel filter enables us to model, simulate and better understand some of the initial mechanisms in the visual pathway, while simultaneously providing a fast, biologically inspired algorithm for digital image preprocessing.
Traffic sign recognition is a vital part of any driver assistance system, helping it make complex driving decisions based on the detected traffic signs. Traffic sign detection (TSD) is essential in adverse weather conditions or when the vehicle is being driven on hilly roads. Traffic sign recognition is a complex computer vision problem because the signs generally occupy only a very small portion of the entire image. Considerable research has addressed this problem, but performance is still not satisfactory. The goal of this paper is to propose a deep learning architecture that can be deployed on embedded platforms for driver assistance systems with limited memory and computing resources, without sacrificing detection accuracy. The architecture applies several architectural modifications to a well-known Convolutional Neural Network (CNN) architecture for object detection. It uses a trainable Color Transformer Network (CTN) with the existing CNN architecture to make the system invariant to illumination and lighting changes, and a feature fusion module to detect small traffic signs accurately. In the proposed work, a receptive field calculation is used to choose the number of convolutional layers used for prediction and the right scales for the default bounding boxes. The architecture is deployed on the Jetson Nano GPU embedded development board for performance evaluation at the edge, and it has been tested on the well-known German Traffic Sign Detection Benchmark (GTSDB) and the Tsinghua-Tencent 100K dataset. The architecture requires only 11 MB of storage, almost ten times less than previous architectures. It has one sixth the parameters of the best-performing architecture and 50 times fewer floating-point operations (FLOPs). It achieves a running time of 220 ms on a desktop GPU and 578 ms on the Jetson Nano, which also compares favorably with other similar implementations, and it achieves comparable accuracy in terms of mean average precision (mAP) on both datasets.
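The receptive field bookkeeping used to pick prediction layers and default-box scales follows the standard recurrence for stacked convolution and pooling layers; a minimal sketch is shown below (this mirrors the usual calculation, not the paper's specific code).

```python
def receptive_field(layers):
    """Compute the receptive field size and effective stride (jump) of the output
    of a stack of conv/pool layers.

    layers: list of (kernel_size, stride) tuples, in order from input to output.
    Uses the standard recurrences  r_out = r_in + (k - 1) * j_in,  j_out = j_in * s.
    """
    r, j = 1, 1
    for k, s in layers:
        r = r + (k - 1) * j
        j = j * s
    return r, j

# Example: three 3x3 convs (stride 1), a 2x2 max-pool (stride 2), then another 3x3 conv.
# Useful for matching default-box scales to what a given prediction layer can "see".
print(receptive_field([(3, 1), (3, 1), (3, 1), (2, 2), (3, 1)]))   # (12, 2)
```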
Retinal amacrine cells regulate the activity of retinal ganglion cells, the output neurons to higher visual centers, through the cellular mechanism of lateral inhibition in the inner plexiform layer (IPL). The electrical properties of gap junction networks between amacrine cells in the IPL were investigated using combined techniques of intracellular recording, Lucifer yellow and Neurobiotin injection, dual patch-clamp recording and high-voltage electron microscopy in isolated retinas of cyprinid fish. Six types of gap-junctionally connected amacrine cells were classified after their responses to light flashes were recorded. Among them, the gap junction networks of three types of amacrine cells were studied with structure-function correlation analysis, and the cellular morphology of the intercellular connections between these three homologous cell classes was characterized. The interconnections between laterally extending dendrites in the IPL were localized at the dendritic tip terminals, and all three cell types showed tip-to-tip dendrodendritic connections within the homologous cell population. High-voltage as well as conventional electron microscopy revealed gap junctions between the dendritic tips of Neurobiotin-coupled cells. The receptive field properties of these amacrine cells were examined by displacing a slit of light at increasing distances from the recording sites in the dorsal intermediate region of the retina; receptive field size, space constant, response latency and conduction velocity were measured. The spatial and temporal properties of the receptive fields were symmetric along the horizontally extending dendrites in the dorsal retina. Simultaneous dual patch-clamp recordings revealed that the lateral gap junction connections between homologous amacrine cells form bidirectional electrical synapses that pass Na+ spikes. These results demonstrate that bidirectional electrical transmission in the gap junction networks of these amacrine cells is symmetric along the lateral gap junction connections between horizontally extending dendrites. Lateral inhibition regulated by amacrine cells in the IPL thus appears to be associated with the directional extension of the dendrites and the orientation of the dendrodendritic gap junctions.
Gap junctions are intercellular channels composed of the subunit protein connexin and subserve electrotonic transmission between connected neurons. Retinal amacrine cells, as well as horizontal cells of the same class, are homologously connected by gap junctions. The gap junctions between these neurons extend their receptive fields and may increase their inhibitory postsynaptic effects in the retina. In the present study, we investigated whether the gap junctions between these neurons are modulated by internal messengers. The permeability of the gap junctions was examined by the diffusion of intracellularly injected biotinylated tracers, biocytin or Neurobiotin, into neighboring cells, since gap junctions are freely permeable to these molecules. Lucifer Yellow (4%) together with biocytin or Neurobiotin (6%) was injected intracellularly into horizontal cells and amacrine cells in isolated retinas of carp, goldfish and Japanese dace following electrophysiological identification. Under control conditions, the tracer spread from the recorded cells into many neighboring cells. Superfusion of the retinas with dopamine (100 μM) suppressed diffusion of the tracer into neighboring horizontal cells, but not into neighboring amacrine cells. Intracellular injection of cyclic AMP (300 mM) completely blocked diffusion of the tracer into neighboring horizontal cells and amacrine cells. In contrast, superfusion of the retinas with 8-bromo-cyclic AMP (2 mM), a membrane-permeable cyclic AMP analog, still permitted the tracer to diffuse into neighboring horizontal cells and amacrine cells. Intracellular injection of cyclic GMP (300 mM) blocked diffusion between neighboring horizontal cells, but did not suppress diffusion between amacrine cells. These results show that the permeability of gap junctions between amacrine cells is regulated by high intracellular cyclic AMP concentrations, but not by intracellular cyclic GMP, by applied dopamine, or by extracellular application of a cyclic AMP analog that produces only low intracellular concentrations. The present study suggests that these laterally oriented inhibitory interneurons, horizontal cells and amacrine cells, express different connexins, which may be differentially regulated by these messengers.
Responses from two types of orientation-selective units of retinal origin were recorded extracellularly from their axon terminals in the medial sublaminae of the tectal retinorecipient layer of the immobilized cyprinid fish Carassius gibelio. Excitatory and inhibitory interactions in the receptive field were analyzed with two narrow stripes of optimal orientation flashed synchronously, one in the center and the other in different parts of the periphery. The general pattern of results was that the influence of the remote peripheral stripe was inhibitory, irrespective of the polarity of each stripe (light or dark). In this regard, the orientation-selective ganglion cells of the fish retina differ from the classical orientation-selective complex cells of the mammalian cortex, where remote paired stripes of opposite polarity (one light and one dark) interact in a facilitatory fashion. A consequence of this difference may be weaker lateral inhibition in the latter case in response to stimulation by periodic gratings, which may contribute to better spatial frequency tuning in the visual cortex.
Associative learning plays a major role in the formation of the internal dynamic engine of an adaptive system or a cognitive robot. Interaction with the environment can provide a sparse and discrete set of sample correlations of input–output incidences. These incidences of associative data points can provide useful hints for capturing the underlying mechanisms that govern the system's behavioral dynamics. In many approaches to this problem of learning a system's input–output relation, a set of previously prepared data points has to be presented to the learning mechanism as training data before useful estimates can be obtained. Moreover, data coding is usually based on symbolic or non-implicit representation schemes. In this paper, we propose an incremental learning mechanism that can bootstrap from a state of complete ignorance of any representative sample associations. In addition, the proposed system provides a novel mechanism for nonlinear data representation through the fusion of self-organizing maps and Gaussian receptive fields. Our architecture is based solely on cortically inspired techniques of coding and learning: Hebbian plasticity and adaptive populations of neural circuitry for stimulus representation.
We define a neural network that captures the components of the problem's data space using an emergent arrangement of receptive-field neurons that self-organize incrementally in response to sparse experiences of system–environment interactions. These learned components are correlated through Hebbian plasticity, which relates the major components of the input space to those of the output space. The viability of the proposed mechanism is demonstrated through multiple experimental setups drawn from real-world regression and robotic-arm sensorimotor learning problems.
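The sketch below illustrates the general idea under simplifying assumptions: self-organizing units with Gaussian receptive fields encode the input and output spaces, and a Hebbian matrix associates co-active populations incrementally from sparse samples. The class and parameter names are hypothetical, and the actual architecture described in the paper is more elaborate.

```python
import numpy as np

class GaussianSOM:
    """A small self-organizing map whose units act as Gaussian receptive fields."""
    def __init__(self, n_units, dim, sigma=0.5, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.uniform(-1, 1, size=(n_units, dim))
        self.sigma, self.lr = sigma, lr

    def activate(self, x):
        d2 = ((self.centers - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2 * self.sigma ** 2))            # population code for x

    def adapt(self, x):
        """Move the best-matching unit (and its receptive field) toward the sample."""
        bmu = np.argmin(((self.centers - x) ** 2).sum(axis=1))
        self.centers[bmu] += self.lr * (np.asarray(x) - self.centers[bmu])

def hebbian_step(W, a_in, a_out, lr=0.05):
    """Strengthen connections between co-active input and output populations."""
    return W + lr * np.outer(a_out, a_in)

# Incremental learning from a few sparse (x, y) experiences, e.g. samples of y = x**2.
in_map, out_map = GaussianSOM(20, 1), GaussianSOM(20, 1)
W = np.zeros((20, 20))
for x, y in [(-0.5, 0.25), (0.0, 0.0), (0.8, 0.64)]:
    in_map.adapt([x]); out_map.adapt([y])
    W = hebbian_step(W, in_map.activate([x]), out_map.activate([y]))

# Prediction: drive the output population through W and read out its center of mass.
a = W @ in_map.activate([0.4])
y_hat = (out_map.centers[:, 0] @ a) / (a.sum() + 1e-9)
print(y_hat)
```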
Poggio and Edelman have shown that, for each object, there exists a smooth mapping from an arbitrary view to its standard view and that this mapping can be learnt from a sparse data set. They demonstrated that such a mapping function can be well approximated by a Gaussian Generalized Radial Basis Function (GRBF) network. In this paper, we apply this network to the recognition of three kinds of hand-shape changes: grasp, stroke and flap. During the learning stage, we introduce a structural learning algorithm into the GRBF network in order to find a small, essential network structure for the recognition task. Finally, we design a system that captures the motion of the hand for this GRBF network using a data glove.
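For readers unfamiliar with GRBF networks, the sketch below shows the basic fitting and prediction steps for a Gaussian RBF mapping learnt from a sparse set of examples. The structural learning algorithm described in the paper (which searches for a small network structure) is not included, and all data here are toy placeholders.

```python
import numpy as np

def fit_grbf(X, Y, centers, sigma, reg=1e-6):
    """Fit a Gaussian Generalized Radial Basis Function (GRBF) network: with fixed
    centers and width, the output weights follow from regularized least squares."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))                      # (n_samples, n_centers)
    return np.linalg.solve(Phi.T @ Phi + reg * np.eye(len(centers)), Phi.T @ Y)

def grbf_predict(X, centers, sigma, W):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ W

# Toy example in the spirit of Poggio and Edelman: learn a smooth mapping from
# a sparse set of "arbitrary view" feature vectors to their "standard view".
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                 # sparse example views
Y = np.tanh(X @ rng.normal(size=(5, 5)))     # corresponding standard views (toy mapping)
centers = X[:10]                             # a subset of examples as Gaussian centers
W = fit_grbf(X, Y, centers, sigma=1.0)
print(np.abs(grbf_predict(X, centers, sigma=1.0, W=W) - Y).mean())   # training residual
```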
Classical conditioning rapidly produces enduring, frequency-specific modification of receptive fields (RFs) in the auditory cortex (ACx) that favors processing of the frequency of the conditioned stimulus (CS). Responses to the CS are increased whereas responses to the pre-training best frequency (BF) and other frequencies are decreased; tuning is often completely shifted so that the frequency of the CS becomes the BF. Such plasticity is observed both for single-tone and for two-tone discrimination training. CS-specific RF plasticity may be reversed by extinction training. Sensitization training produces only general increases in responsiveness, and habituation produces frequency-specific decreases of responses within the RF. Tuning shifts similar to those produced by conditioning can be produced by iontophoretic application of muscarinic agonists or cholinesterase antagonists to the ACx, and pairing one tone with application of ACh to the auditory cortex produces receptive field plasticity that is specific to the frequency of the paired tone. Dual medial geniculate (MG) input to the auditory cortex consists of a frequency-specific, non-plastic nucleus (MGv) and a broadly tuned, plastic nucleus (MGm). A preliminary model of receptive field plasticity and behavioral learning is presented. It links MGv and MGm influences on the auditory cortex with cholinergic neuromodulation and makes several predictions, some of which have recently been supported.