
  • Article (No Access)

    AN MLP TRAINING ALGORITHM TAKING INTO ACCOUNT KNOWN ERRORS ON INPUTS AND OUTPUTS

    A training algorithm is introduced that takes into account a priori known errors on both the inputs and outputs of an MLP network. The new cost function introduced for this case is based on a linear approximation of the network function over the input distribution for a given input pattern. Update formulas, in the form of the gradient of the new cost function, are given for an MLP network, together with expressions for the Hessian matrix, which is later used to calculate error bars in a Bayesian framework. The error bars thus derived are discussed in relation to the more commonly used width of the target posterior predictive distribution. It is also shown that taking known input uncertainties into account in the way suggested in this article has a strong regularizing effect on the solution.
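
    To make the construction concrete, the following is a minimal PyTorch sketch of an "effective variance" cost of the kind described above: the network is linearised around each input pattern, so the known input variances propagate through the input-output Jacobian and combine with the known output variances to weight the squared error. This is an illustrative reconstruction under stated assumptions, not the article's exact cost function; the architecture, the negative-log-likelihood form of the loss, and all names (net, effective_variance_loss, the toy data) are assumptions.

    ```python
    # A minimal sketch (not the article's exact formulation) of an "effective
    # variance" cost for an MLP with a priori known input variances sigma_x^2
    # and output variances sigma_t^2. Linearising the network around each input,
    # f(x + e) ~= f(x) + J e, so the residual variance for a pattern becomes
    # sigma_t^2 + J diag(sigma_x^2) J^T, which is used to weight the squared error.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))  # small MLP, scalar output

    def effective_variance_loss(x, t, var_x, var_t):
        """Gaussian negative-log-likelihood style cost with input noise propagated
        through a first-order (linear) approximation of the network function."""
        x = x.clone().requires_grad_(True)
        y = net(x).squeeze(-1)                          # f(x), shape (N,)
        # Per-pattern Jacobian dy/dx, shape (N, d); valid because y[i] depends only on x[i]
        J = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
        eff_var = var_t + (J ** 2 * var_x).sum(dim=1)   # sigma_t^2 + J diag(sigma_x^2) J^T
        return (((y - t) ** 2) / eff_var + torch.log(eff_var)).mean()

    # Toy usage: 64 two-dimensional patterns with known per-pattern variances
    x = torch.randn(64, 2)
    t = torch.sin(x[:, 0]) + 0.1 * torch.randn(64)
    var_x = torch.full((64, 2), 0.05)   # known input error variances
    var_t = torch.full((64,), 0.01)     # known output error variances

    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = effective_variance_loss(x, t, var_x, var_t)
        loss.backward()
        opt.step()
    ```

    In this sketch the Jacobian term is also where a regularizing effect shows up: patterns at which the network function is steep receive a larger effective variance, which discourages overly sharp fits.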

  • Chapter (No Access)

    Orthogonal Tree Codes for Communication in the Presence of White Gaussian Noise

    This paper describes a convolutional encoder for generating tree codes whose distinct codewords are orthogonal over the constraint length of the code. The performance of this class of codes is analyzed and the error probability is shown to decrease exponentially with the energy-to-noise ratio over the constraint length period of the code. The performance is compared with well-known results for orthogonal block codes and shown to be considerably superior to the latter. Asymptotic results are also obtained which coincide with results for the class of very noisy memoryless channels.
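
    For a rough feel of the orthogonal block-code baseline mentioned in the comparison, the following Monte Carlo sketch estimates the error probability of detecting one of M orthogonal signals in white Gaussian noise by correlation; it illustrates the exponential decay with energy-to-noise ratio but is not an implementation of the chapter's convolutional tree code. The Hadamard construction of the orthogonal set, the value M = 16, and the function names are assumptions.

    ```python
    # Monte Carlo sketch of the orthogonal block-code baseline (assumptions:
    # M orthogonal signals taken as rows of a Hadamard matrix, coherent
    # correlation detection). Not the chapter's convolutional tree code.
    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(0)

    def block_error_rate(M, Es_over_N0, trials=20000):
        """Estimate P(error) for ML detection of one of M orthogonal signals
        in additive white Gaussian noise."""
        H = hadamard(M) / np.sqrt(M)    # M orthonormal signal vectors
        Es = 1.0                        # signal energy (normalised)
        N0 = Es / Es_over_N0
        errors = 0
        for _ in range(trials):
            m = rng.integers(M)
            r = np.sqrt(Es) * H[m] + rng.normal(0.0, np.sqrt(N0 / 2), size=M)
            if np.argmax(H @ r) != m:   # correlate against all signals, pick the largest
                errors += 1
        return errors / trials

    for snr_db in (0, 2, 4, 6, 8):
        snr = 10 ** (snr_db / 10)
        print(f"Es/N0 = {snr_db} dB  ->  P(error) ~ {block_error_rate(16, snr):.4f}")
    ```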