A training algorithm is introduced that takes into account a priori known errors on both the inputs and the outputs of an MLP network. The new cost function introduced for this case is based on a linear approximation of the network function over the input distribution for a given input pattern. Update formulas, in the form of the gradient of the new cost function, are given for an MLP network, together with expressions for the Hessian matrix, which is later used to calculate error bars in a Bayesian framework. The error bars thus derived are discussed in relation to the more commonly used width of the posterior predictive distribution of the targets. It is also shown that accounting for known input uncertainties in the way suggested in this article has a strong regularizing effect on the solution.
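To illustrate the idea behind such a cost function, the following is a minimal sketch, not the authors' implementation: assuming Gaussian input noise with known covariance, a first-order Taylor expansion of the network output around each input pattern propagates the input uncertainty into an effective output variance that rescales the squared error. The single-output architecture and all function names (mlp_forward, effective_variance_loss) are illustrative assumptions.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer MLP with tanh units; returns output and input Jacobian."""
    h = np.tanh(W1 @ x + b1)            # hidden activations
    y = W2 @ h + b2                     # scalar output (W2 has shape (1, H))
    J = (W2 * (1.0 - h**2)) @ W1        # dy/dx by the chain rule, shape (1, D)
    return y.item(), J.ravel()

def effective_variance_loss(X, t, sigma_t2, Sigma_x, params):
    """Cost based on a linear propagation of input noise.

    For each pattern, the input covariance Sigma_x is mapped through the
    network's local Jacobian J, giving an effective output variance
    sigma_eff^2 = sigma_t^2 + J Sigma_x J^T that rescales the squared error;
    the log term comes from the corresponding Gaussian negative log-likelihood.
    """
    W1, b1, W2, b2 = params
    total = 0.0
    for x, tn in zip(X, t):
        y, J = mlp_forward(x, W1, b1, W2, b2)
        sigma_eff2 = sigma_t2 + J @ Sigma_x @ J   # linearized input-noise term
        total += (y - tn) ** 2 / sigma_eff2 + np.log(sigma_eff2)
    return 0.5 * total

# Toy usage: 2-D inputs with known isotropic input noise.
rng = np.random.default_rng(0)
D, H, N = 2, 5, 20
params = (rng.normal(size=(H, D)), np.zeros(H), rng.normal(size=(1, H)), np.zeros(1))
X = rng.normal(size=(N, D))
t = np.sin(X[:, 0])
print(effective_variance_loss(X, t, sigma_t2=0.01, Sigma_x=0.05 * np.eye(D), params=params))
```

Note how the input-noise term grows where the network is locally steep, which penalizes sharp fits and produces the regularizing effect described above.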
This paper describes a convolutional encoder for generating tree codes whose distinct codewords are orthogonal over the constraint length of the code. The performance of this class of codes is analyzed, and the error probability is shown to decrease exponentially with the energy-to-noise ratio over the constraint-length period of the code. The performance is compared with well-known results for orthogonal block codes and is shown to be considerably superior. Asymptotic results are also obtained that coincide with known results for the class of very noisy memoryless channels.
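As a concrete sketch of this kind of construction (an assumption about the general scheme, not necessarily the paper's exact encoder): a K-stage shift register whose contents select one of 2^K mutually orthogonal branch words, taken here as rows of a Hadamard matrix, so that two encoded paths are orthogonal over every branch where their register contents differ. The function name orthogonal_conv_encode and the parameter K are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def orthogonal_conv_encode(bits, K=3):
    """Toy orthogonal convolutional encoder.

    The K-bit shift-register state indexes a row of a 2^K x 2^K Hadamard
    matrix, whose rows are mutually orthogonal. One +/-1 branch word of
    length 2^K is emitted per input bit.
    """
    H = hadamard(2 ** K)                            # mutually orthogonal rows
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & (2 ** K - 1)   # shift the new bit into the register
        out.append(H[state])                        # emit the branch word for this state
    return np.concatenate(out)

# Two input streams differing in one bit produce branch words that are
# orthogonal on every branch where the register contents differ.
c1 = orthogonal_conv_encode([1, 0, 1, 1, 0])
c2 = orthogonal_conv_encode([1, 1, 1, 1, 0])
seg = slice(8, 16)                 # second branch: the states differ here
print(int(c1[seg] @ c2[seg]))      # 0: orthogonal branch words
```

Because diverged paths occupy different states for a full constraint length, their codeword segments accumulate zero correlation over that span, which is the property the error-probability analysis exploits.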