
A DISCRETE FULLY RECURRENT NETWORK OF MAX PRODUCT UNITS FOR ASSOCIATIVE MEMORY AND CLASSIFICATION

    https://doi.org/10.1142/S0129065702001138
    Cited by: 1 (Source: Crossref)

    This paper defines the truncated normalized max product operation for the transformation of states of a network and provides a method for solving a set of equations based on this operation. The operation serves as the transformation for the set of fully connected units in a recurrent network that might otherwise consist of linear threshold units. Component values of the state vector and outputs of the units take on values in the set {0, 0.1, …, 0.9, 1}. The result is a much larger state space, for a given number of units and size of connection matrix, than for a network based on threshold units. Since the operation defined here can form the basis of transformations in a recurrent network with a finite number of states, fixed points or cycles are possible, and a network based on this operation can be used as an associative memory or pattern classifier, with fixed points taking on the role of prototypes. Discrete fully recurrent networks have proven very useful as associative memories and classifiers. However, they are often based on units with binary states, so data consisting of vectors in ℜ^n must be converted to vectors in {0, 1}^m with m much larger than n, since binary encoding based on positional notation is not feasible. This implies a large increase in the number of components. The effect can be lessened by allowing more states for each unit, as in the network proposed here. As the simulations show, the proposed network demonstrates the properties desirable in an associative memory very well.
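To make the idea concrete, the sketch below gives one plausible reading of the transformation named in the abstract; the paper's exact definition is not reproduced here, so the decomposition into a max product step, a normalization step, and a truncation onto the grid {0, 0.1, …, 0.9, 1} is an assumption, as is the fixed-point recall loop.

```python
import numpy as np

def truncated_normalized_max_product(W, x):
    """Hypothetical reconstruction of the operation (the paper's exact
    definition may differ):
      1. max product:  y_i = max_j W[i, j] * x[j]
      2. normalize so the largest component equals 1
      3. truncate each component down to a multiple of 0.1
    """
    y = np.max(W * x, axis=1)   # max over j of W[i, j] * x[j]
    m = y.max()
    if m > 0:
        y = y / m               # largest component becomes 1
    return np.floor(y * 10) / 10  # snap down onto {0, 0.1, ..., 1}

def recall(W, x, max_iters=1000):
    """Iterate the transformation from an input pattern.  Because the
    state space is finite, the trajectory must reach a fixed point or a
    cycle; this sketch stops at a fixed point (a stored prototype) or
    after max_iters steps."""
    for _ in range(max_iters):
        y = truncated_normalized_max_product(W, x)
        if np.array_equal(y, x):
            break  # fixed point reached
        x = y
    return x
```

For example, with a connection matrix whose diagonal dominates, an input near a prototype is driven to a fixed point on the grid after a few iterations; with other choices of W the trajectory may instead enter a cycle, which the loop above simply runs through until `max_iters`.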