
A GENERAL UPDATING RULE FOR DISCRETE HOPFIELD-TYPE NEURAL NETWORK WITH TIME-DELAY AND THE CORRESPONDING SEARCH ALGORITHM

DOI: https://doi.org/10.1142/S1469026801000329
Cited by: 1 (Source: Crossref)

In this paper, the Hopfield neural network with delay (HNND) is studied as an optimizing computational model. Two general updating rules for networks with delay (GURD), characterized by dynamic thresholds, are given based on Hopfield-type neural networks with delay for optimization problems. It is proved that under any sequence of updating modes, the GURD converges monotonically to a stable state of the network. The diagonal elements of the connection matrix are shown to have an important influence on the convergence process, and they determine the relationship between local maxima of the energy function and the stable states of the network. All ordinary discrete Hopfield neural network (DHNN) algorithms are instances of the GURD. It is shown that the convergence conditions of the GURD may be relaxed in applications; for instance, the requirement of nonnegative diagonal elements of the connection matrix can be removed from the original convergence theorem. A new updating mode and restrictive conditions guarantee that the network reaches a local maximum of the energy function with a step-by-step algorithm, and the convergence rate improves markedly compared with other methods. When the delay term is treated as a noise disturbance, the step-by-step algorithm demonstrates its efficiency and high convergence rate. Experimental results support the proposed algorithm.
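The abstract does not reproduce the GURD itself, so as a rough illustration only, the following sketch shows a standard asynchronous discrete Hopfield update extended with a hypothetical one-step delay term. The matrix names (`W`, `D`), the delay structure, and the thresholds are assumptions for illustration, not the paper's notation; with the delay matrix set to zero and a nonnegative diagonal, this reduces to the classical DHNN rule whose energy is monotonically non-decreasing.

```python
import numpy as np

# Hypothetical sketch: asynchronous discrete Hopfield-type update with a
# one-step delayed contribution D @ x_prev. Not the paper's exact GURD.

def energy(W, theta, x):
    # Energy maximized by the classical asynchronous rule:
    # E(x) = 0.5 * x^T W x - theta^T x  (W symmetric)
    return 0.5 * x @ W @ x - theta @ x

def step(W, D, theta, x_now, x_prev, i):
    # Update neuron i from the current state plus a delayed term.
    h = W[i] @ x_now + D[i] @ x_prev - theta[i]
    x_next = x_now.copy()
    x_next[i] = 1.0 if h >= 0 else 0.0
    return x_next

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
W = (A + A.T) / 2            # symmetric connection matrix
np.fill_diagonal(W, 0.5)     # nonnegative diagonal (classical convergence condition)
D = 0.05 * rng.standard_normal((n, n))  # small delay term, acting like a noise disturbance
theta = rng.standard_normal(n)

x_prev = rng.integers(0, 2, n).astype(float)
x = x_prev.copy()
for t in range(200):
    i = t % n                # cyclic (serial) updating mode
    x_new = step(W, D, theta, x, x_prev, i)
    x_prev, x = x, x_new
print("final energy:", energy(W, theta, x))
```

With `D = 0` this is exactly the serial DHNN update, and each accepted flip can only increase `energy` when the diagonal of `W` is nonnegative; the delay term perturbs that monotone ascent, which is why the paper's restrictive conditions on the updating mode matter.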
