
A HYBRID LEARNING RULE FOR A FEEDFORWARD NETWORK

    https://doi.org/10.1142/S0218213094000236

    In feedforward networks trained with the classic backpropagation algorithm introduced by Rumelhart et al., the weights are modified according to the method of steepest descent. The goal of this weight modification is to minimise the error in the network outputs over a given training set.
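    The steepest-descent update can be made concrete in a short sketch. The Python fragment below is not from the paper; the one-hidden-layer architecture, sigmoid activations, squared error, and learning rate eta are illustrative assumptions. It performs one backpropagation step in which every weight moves against its error gradient with a single global step size:

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def backprop_step(W1, W2, x, t, eta=0.5):
            """One steepest-descent update on a one-hidden-layer network
            with sigmoid units and squared error E = 0.5 * ||y - t||^2."""
            # forward pass
            h = sigmoid(W1 @ x)                       # hidden activations
            y = sigmoid(W2 @ h)                       # network outputs
            # backward pass: propagate the error signals layer by layer
            delta2 = (y - t) * y * (1.0 - y)          # output-layer deltas
            delta1 = (W2.T @ delta2) * h * (1.0 - h)  # hidden-layer deltas
            # steepest descent: one fixed step size eta for all weights
            W2 = W2 - eta * np.outer(delta2, h)
            W1 = W1 - eta * np.outer(delta1, x)
            return W1, W2

    The single global step size eta is exactly what the improvements discussed next replace.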

    Building on Jacobs’ work, we point out drawbacks of steepest descent and suggest improvements to it. These yield a feedforward network that adjusts its weights according to a (globally convergent) parallel coordinate descent method.
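    The abstract does not spell the method out, but the Jacobs work it builds on is the delta-bar-delta scheme, in which every weight keeps its own learning rate that grows additively while successive gradients agree in sign and shrinks multiplicatively when they flip. A minimal Python sketch of one such parallel per-coordinate step follows; the constants kappa, phi, and theta are illustrative, not the paper's:

        import numpy as np

        def delta_bar_delta_step(w, grad, eta, bar_delta,
                                 kappa=0.01, phi=0.5, theta=0.7):
            """One parallel coordinate-descent step with per-weight
            learning rates, in the spirit of Jacobs' delta-bar-delta."""
            agree = grad * bar_delta
            eta = np.where(agree > 0, eta + kappa, eta)  # signs agree: speed up
            eta = np.where(agree < 0, eta * phi, eta)    # sign flipped: slow down
            w = w - eta * grad                  # all coordinates updated in parallel
            bar_delta = (1.0 - theta) * grad + theta * bar_delta  # smoothed gradient
            return w, eta, bar_delta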

    We then combine this parallel coordinate descent with a hybrid learning rule built from the δ-rule and its momentum version. To adjust the parameters of this rule we use a Sugeno/Takagi fuzzy controller; only four rules are needed for the control process. Because a Sugeno/Takagi controller is used, no defuzzification has to be performed, and the controller works faster than conventional ones.
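    A zero-order Takagi/Sugeno design makes the "no defuzzification" point visible: each rule fires with a crisp strength and contributes a constant consequent, and the controller output is simply their weighted average. The four-rule sketch below is illustrative only; the abstract does not give the actual rule base, so the error/change-of-error inputs, membership functions, and consequents are assumptions. It returns a multiplicative correction for the learning parameters:

        import numpy as np

        def ts_controller(error, d_error):
            """Zero-order Takagi/Sugeno controller with four rules.
            Inputs are assumed normalised to [0, 1]; the output is the
            firing-strength-weighted average of constant consequents,
            so no separate defuzzification step is required."""
            low = lambda x: float(np.clip(1.0 - x, 0.0, 1.0))   # "small"
            high = lambda x: float(np.clip(x, 0.0, 1.0))        # "large"
            # (firing strength via product t-norm, constant consequent)
            rules = [
                (low(error) * low(d_error),   1.2),  # small error, small change: grow step
                (low(error) * high(d_error),  1.0),  # small error, large change: hold
                (high(error) * low(d_error),  1.0),  # large error, small change: hold
                (high(error) * high(d_error), 0.5),  # large error, large change: shrink
            ]
            strengths = np.array([s for s, _ in rules])
            consequents = np.array([c for _, c in rules])
            return float(strengths @ consequents / (strengths.sum() + 1e-12))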

    We conclude that this algorithm is well suited for fast training and offers global convergence.