Extensions to N-Tuple Theory
The following chapters describe extensions to the methods used in RAM-based systems. As with all neural network methods, there is a continual aim to improve performance and to compare the methods with other techniques.
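To ground the discussion, the sketch below illustrates the conventional binary N-tuple (WISARD-style) scheme that these chapters build on: fixed random N-tuples of input bits address RAM nodes whose visited locations are set during training, and classification counts how many nodes respond. The class name, parameters and Python code are illustrative assumptions only and do not reproduce any particular chapter's method.

```python
import random
from collections import defaultdict

class BinaryNTupleClassifier:
    """Illustrative binary N-tuple (RAM-based) classifier; a generic sketch,
    not the method of any specific chapter."""

    def __init__(self, input_bits, n, num_tuples, seed=0):
        rng = random.Random(seed)
        # Fixed random mapping of input bit positions into N-tuples.
        self.tuples = [rng.sample(range(input_bits), n) for _ in range(num_tuples)]
        # One set of "written" RAM addresses per (class label, tuple index).
        self.rams = defaultdict(set)

    def _address(self, pattern, positions):
        # Pack the selected input bits into an integer RAM address.
        addr = 0
        for p in positions:
            addr = (addr << 1) | pattern[p]
        return addr

    def train(self, pattern, label):
        # Writing a 1 into each addressed location is the whole training step.
        for t, positions in enumerate(self.tuples):
            self.rams[(label, t)].add(self._address(pattern, positions))

    def classify(self, pattern, labels):
        # Score each class by how many RAM nodes hold a 1 at the addressed location.
        scores = {c: sum(self._address(pattern, pos) in self.rams[(c, t)]
                         for t, pos in enumerate(self.tuples))
                  for c in labels}
        return max(scores, key=scores.get)
```

Training is a single pass of memory writes, which is why questions of storage capacity, saturation and generalisation, taken up in the chapters below, are central to this class of network.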
The first five chapters describe new methods for the analysis of N-tuple systems that allow the networks to be used more effectively. The first chapter, by Morciniec and Rohwer, presents a thorough comparison of RAM-based methods with other neural networks, clearly demonstrating that RAM-based networks are at least as good as a wide range of other networks and statistical methods on a range of complex and well-known benchmark problems. The next chapter shows that RAM-based networks, although commonly thought of as binary networks, are capable of using continuous inputs in the domain of image processing. The chapter by Howells, Bisset and Fairhurst describes, in general terms, how RAM-based networks that use the GSN learning methods may be compared with, and integrated with, other RAM-based methods. Jorgensen, Christensen and Liisberg show how well-known cross-validation methods and information-theoretic techniques can be used to reduce the size of RAM networks and, in the process, improve their accuracy. Finally, a very valuable insight into the calculation of the storage capacity of a wide range of RAM-based networks is given by Adeodato and Taylor; their general solution permits the capacity of G-RAM, pRAM and GSN networks to be estimated.
The final three chapters in this section describe new RAM methods that extend the basic capabilities of the networks. The chapter by Morciniec and Rohwer shows how to deal with zero-weighted locations in weighted forms of RAM-based networks, which are normally handled in an ad hoc fashion. Although a principled approach (based on the Good-Turing density estimation method) is presented, it is shown that using very small default values works well. The chapter also contrasts binary and weighted RAM-based approaches. The next chapter, by Neville, shows how a version of the back-propagation algorithm can be used to train RAM networks, relating RAM methods closely to weighted neural network systems and showing how back-propagation can be accelerated using RAM-based methods. The chapter by Jorgensen shows how the use of negative weights in the storage locations improves recognition success for handwritten text classification. Finally, Howells, Bisset and Fairhurst explain how the BCN architecture can be improved by allowing each neuron to hold more information about the patterns it is classifying (resulting in the GCN architecture) and by adding a degree of confidence (resulting in the PCN architecture).
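To make the zero-location issue concrete, the hypothetical fragment below scores one class of a frequency-weighted N-tuple discriminator and substitutes a small default probability for locations never visited during training, so that empty locations do not force the log-score to minus infinity. The smoothing value and the scoring rule are assumptions for illustration only; they are not the Good-Turing procedure or the chapter's experimental setup.

```python
import math
from collections import Counter

def discriminator_score(pattern, tuples, counts, n_train, default=1e-4):
    """Log-likelihood-style score for one class of a frequency-weighted
    N-tuple discriminator (illustrative only). counts[t] is a Counter of
    training-set address frequencies for tuple t; zero-frequency (unvisited)
    locations are given a small default probability instead of log(0)."""
    score = 0.0
    for t, positions in enumerate(tuples):
        addr = 0
        for p in positions:
            addr = (addr << 1) | pattern[p]
        freq = counts[t][addr]                      # 0 if never written
        prob = freq / n_train if freq else default  # small default value
        score += math.log(prob)
    return score
```

The binary scheme sketched earlier corresponds to the special case in which any non-zero frequency contributes a fixed score of one and an unvisited location contributes nothing.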