FRACTAL TRANSITION IN CONTINUOUS RECURRENT NEURAL NETWORKS
Abstract
A theory for continuous dynamical systems stochastically excited by temporal external inputs is presented. The theory suggests that the dynamics of continuous-time recurrent neural networks (RNNs) is generally characterized by a set of continuous trajectories with a fractal-like structure in hyper-cylindrical phase space. We refer to this dynamics as the fractal transition. In this paper, three types of numerical experiments are discussed to investigate the learning process and noise effects in terms of the fractal transition. First, to analyze how an RNN learns desired input–output transformations, a simple example with a single state was examined in detail. A fractal structure similar to a Cantor set was clearly observed during the learning process. This finding sheds light on learning in RNNs: it suggests that learning is a process of adjusting the fractal dimension. Second, the effects of input noise on the fractal structure were investigated. The results show that small-scale hierarchical structures are destroyed by noise. Third, using a network with twenty states, we show that the fractal transition is a universal characteristic of RNNs driven by switching inputs.
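The Cantor-set-like structure mentioned above can be illustrated with a minimal sketch. As an assumption (the paper's actual model is a continuous-time RNN), we use a discrete-time stand-in: a single contracting state driven by an input that switches randomly between two values. With contraction rate 1/3 and inputs {0, 2/3}, the two resulting maps send the unit interval onto [0, 1/3] and [2/3, 1], so after a transient the orbit never visits the open middle third, accumulating on the middle-thirds Cantor set.

```python
import random

def switching_orbit(steps=20000, a=1/3, inputs=(0.0, 2/3), seed=0):
    """Orbit of a single contracting state x <- a*x + u, where the
    input u switches randomly between the two given values.
    NOTE: a discrete-time stand-in for the paper's continuous-time
    single-state RNN, chosen so the Cantor structure is exact."""
    rng = random.Random(seed)
    x = 0.5
    orbit = []
    for t in range(steps):
        x = a * x + rng.choice(inputs)
        if t > 100:           # discard the initial transient
            orbit.append(x)
    return orbit

orbit = switching_orbit()
# After the transient, no state falls inside the excluded middle third
# (1/3, 2/3): each switching input maps [0, 1] into [0, 1/3] or [2/3, 1].
gap = [x for x in orbit if 1/3 < x < 2/3]
print(len(gap))  # 0
```

Iterating this construction at finer scales excludes the middle third of each remaining subinterval as well, which is why the visited states form a hierarchical, self-similar set rather than a continuum; noise on the input, as studied in the second experiment, blurs exactly these small-scale gaps.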