Fully recurrent networks have proven themselves to be very useful as associative memories and as classifiers. However, they are generally based on units that have binary states. The effect of this is that data to be processed, consisting of vectors in R^n, have to be converted to vectors in {0, 1}^m with m much larger than n, since binary encoding based on positional notation is not feasible. This implies a large increase in the number of components. This effect can be lessened by allowing more states for each unit in our network.
This paper describes two effective learning algorithms for a network whose units take the dot product of the input with a weight vector, followed by a tanh transformation and a discretization transformation in the form of rounding or truncation. The units have states in {0, 0.1, 0.2, …, 0.9, 1} rather than in {0, 1} or {-1, 1}. The result is a much larger state space for a given number of units and size of connection matrix. Two convergent learning algorithms for training such a network to store fixed points, or attractors, are proposed. The network exhibits the properties desirable in an associative memory, such as limit cycles of length 1, attraction to the closest attractor, and few transitions required to reach attractors. Since stored memories can be used to represent prototypes of patterns, the network is useful for pattern classification: a pattern to be classified is entered, and its class is that of the prototype to which it is attracted.
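For concreteness, the Python sketch below illustrates one way such a unit update could be realized: a dot product with a weight matrix, a tanh squashing, and rounding onto the eleven states {0, 0.1, …, 1}, iterated until a fixed point is reached. The rescaling of the tanh output from (-1, 1) into [0, 1], the gain parameter, the synchronous update schedule, and the random weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def quantize(x, levels=11):
    """Round values in [0, 1] onto the grid {0, 0.1, ..., 0.9, 1}."""
    return np.round(x * (levels - 1)) / (levels - 1)

def update(state, W, gain=1.0):
    """One synchronous update: dot product, tanh, then discretization.

    The shift/scale of tanh output into [0, 1] before rounding is an
    assumption made here so that states land on {0, 0.1, ..., 1}.
    """
    net = W @ state                        # dot product of weights with current state
    activation = np.tanh(gain * net)       # squashing into (-1, 1)
    activation = (activation + 1.0) / 2.0  # rescale into (0, 1)
    return quantize(activation)            # discretize to the eleven allowed states

def run_to_attractor(state, W, max_steps=100):
    """Iterate updates until the state stops changing (limit cycle of length 1)."""
    for _ in range(max_steps):
        new_state = update(state, W)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Example: recall from a noisy probe with a small random (untrained) network.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 8))
probe = quantize(rng.uniform(size=8))
print(run_to_attractor(probe, W))
```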