A training algorithm is introduced that takes into account a priori known errors on both the inputs and the outputs of an MLP network. The new cost function introduced for this case is based on a linear approximation of the network function over the input distribution for a given input pattern. Update formulas, in the form of the gradient of the new cost function, are given for an MLP network, together with expressions for the Hessian matrix. The Hessian is later used to calculate error bars in a Bayesian framework. The error bars thus derived are discussed in relation to the more commonly used width of the posterior predictive distribution of the targets. It is also shown that taking known input uncertainties into account in the way suggested in this article has a strong regularizing effect on the solution.
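As a rough illustration of the idea, the sketch below implements one common form of such a linearized error-in-variables cost: the input uncertainty is propagated through the network's input Jacobian to give an effective output variance per pattern, which then weights the squared residual. This is a minimal sketch under assumed conventions (scalar-output MLP, Gaussian noise, diagonal per-pattern input covariance `sigma_x2`); all names are illustrative and the exact cost in the article may differ in detail.

```python
# Sketch: linearized cost for known input/output errors, assuming
#   s_eff^2 = sigma_y^2 + g^T diag(sigma_x^2) g,   g = df/dx,
# i.e. input noise propagated to the output via a first-order expansion.
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Single-hidden-layer MLP with tanh units (illustrative architecture)."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)
    return (W2 @ h + b2)[0]  # scalar output

def cost(params, X, t, sigma_x2, sigma_y2):
    """Negative log-likelihood under the linearized noise model."""
    def per_pattern(x, tn, sx2):
        f = mlp(params, x)
        g = jax.grad(mlp, argnums=1)(params, x)   # df/dx at this pattern
        s_eff2 = sigma_y2 + jnp.sum(g**2 * sx2)   # propagated effective variance
        return (tn - f) ** 2 / s_eff2 + jnp.log(s_eff2)
    return 0.5 * jnp.sum(jax.vmap(per_pattern)(X, t, sigma_x2))

# The article's update formulas correspond to the analytic gradient of the
# cost w.r.t. the weights; the Hessian (used there for Bayesian error bars)
# can likewise be obtained by automatic differentiation:
grad_cost = jax.grad(cost)
hess_cost = jax.hessian(cost)
```

Note how the per-pattern weight `1 / s_eff2` shrinks the influence of residuals wherever the network is steep relative to the known input noise, which is one way to see the regularizing effect the abstract describes.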