RAM-based networks are a class of methods for building pattern recognition systems. Unlike other neural network methods, they train very rapidly and can be implemented in simple hardware. This important book presents an overview of the subject and the latest work by a number of researchers in the field of RAM-based networks.
https://doi.org/10.1142/9789812816849_fmatter
The following sections are included:
https://doi.org/10.1142/9789812816849_others01
This section introduces the reader to some of the major RAM based methods. The first paper presents an overview of RAM based methods up to 1994, covering the most well known architectures in the field. It is followed by an overview of the MAGNUS system, introduced by Igor Aleksander, who has been one of the major influences on the development of RAM based systems over the last 30 years. The paper shows how RAM based systems, in the form of WISARD, relate to MAGNUS, a system designed to explore the possibility of machines that react in an intelligent way to sensory data. The paper by De Carvalho, Fairhurst and Bisset describes a form of RAM learning called GSNf, which allows the training of multi-layered RAM based systems, and compares the various forms of the GSN method. The final paper introduces the AURA RAM based network, which extends the RAM based method for use in rule based systems, an unusual application for neural networks but one that exploits the speed and flexibility of the RAM based approach.
https://doi.org/10.1142/9789812816849_0001
This chapter describes the interrelationships between the different types of RAM based neural networks. Starting from their origins in the N tuple networks of Bledsoe and Browning, it describes how each network architecture differs and the basic function of each, covering the MRD, ADAM, AURA, PLN, pRAM, GSN and TIN architectures. As such, the chapter introduces many of the networks discussed later in the book.
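As a concrete illustration of the common starting point of these architectures, a minimal WISARD-style n-tuple discriminator can be sketched as follows. This is an illustrative assumption of the general scheme (class name, tuple size and addressing are not taken from any particular chapter): random n-tuples of a binary input address one-bit RAMs, training sets the addressed locations, and the response is the number of RAMs that recognise the pattern.

```python
import random

class NTupleDiscriminator:
    """Hypothetical minimal sketch of a WISARD-style discriminator: a bank of
    one-bit RAMs, each addressed by a random n-tuple of the binary input.
    Assumes input_size is divisible by n."""

    def __init__(self, input_size, n=4, seed=0):
        rng = random.Random(seed)
        indices = list(range(input_size))
        rng.shuffle(indices)
        # Partition the input into random n-tuples; each tuple addresses one RAM.
        self.tuples = [indices[i:i + n] for i in range(0, input_size, n)]
        self.rams = [set() for _ in self.tuples]  # stored addresses per RAM

    def _address(self, pattern, tup):
        # Interpret the n sampled bits as a RAM address.
        return sum(pattern[i] << k for k, i in enumerate(tup))

    def train(self, pattern):
        # One-shot learning: set the addressed location in every RAM.
        for ram, tup in zip(self.rams, self.tuples):
            ram.add(self._address(pattern, tup))

    def score(self, pattern):
        # Response = number of RAMs whose addressed location has been set.
        return sum(self._address(pattern, tup) in ram
                   for ram, tup in zip(self.rams, self.tuples))
```

In a full classifier one such discriminator is trained per class, and an unknown pattern is assigned to the class whose discriminator responds most strongly.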
https://doi.org/10.1142/9789812816849_0002
This chapter reviews a progression of weightless systems from the WISARD single-layer pattern recogniser to recent work on MAGNUS, a state machine designed to store sensory experience in its state structure. The stress is on algorithmic effects, that is, the effect of mapping these systems into conventional processors as virtual neural machines. The chapter briefly reviews the changes from the generalisation of discriminators in WISARD to the generalising RAM (G-RAM) as currently used in MAGNUS systems. This leads to the introduction of MACCON (the Machine Consciousness Toolbox), a flexible version of MAGNUS which runs on current PC operating systems.
https://doi.org/10.1142/9789812816849_0003
GSNf is a Boolean neural network designed for pattern recognition tasks. This chapter presents the learning algorithms which have been proposed to train GSNf architectures and compares their performance as key parameters are changed. The algorithms are evaluated against each other in terms of training time, saturation, learning conflicts and correct recognition rates.
https://doi.org/10.1142/9789812816849_0004
The ADAM binary neural network, which has been used for image analysis applications, is constructed around a central component termed a Correlation Matrix Memory (CMM). A recent re-examination of the CMM has led to the development of the Advanced Uncertain Reasoning Architecture (AURA). AURA inherits many useful characteristics from ADAM, but is intended for applications requiring the manipulation of symbolic knowledge. This chapter shows how the AURA architecture has been developed from ADAM and explains its method of operation. The chapter also outlines the use of AURA in symbolic processing applications, and highlights some of the ways in which the AURA approach is superior to other methods.
https://doi.org/10.1142/9789812816849_others02
The following chapters describe extensions to the methods used in RAM based systems. As with all neural network methods, there is a continual aim to improve the performance of the methods and to undertake comparisons with other techniques.
The first five chapters describe new methods for the analysis of N tuple systems that allow the networks to be used more effectively. The first chapter, by Morciniec and Rohwer, presents a thorough comparison of RAM based methods with other neural networks, clearly demonstrating that RAM based networks are at least as good as a wide range of other networks and statistical methods on a range of complex and well known benchmark problems. The next chapter shows that RAM based networks, although commonly thought of as binary networks, are capable of using continuous inputs in the domain of image processing. The chapter by Howells, Bisset and Fairhurst describes, in general terms, how RAM based networks that use the GSN learning methods may be compared with, and integrated with, other RAM based methods. Jorgensen, Christensen and Liisberg show how well known cross-validation and information-theoretic techniques can be used to reduce the size of RAM networks and, in the process, improve their accuracy. Finally, a very valuable insight into the calculation of the storage capacity of a wide range of RAM based networks is given by Adeodato and Taylor; their general solution permits the capacity of G-RAM, pRAM and GSN networks to be estimated.
The final three chapters in section 2 describe new RAM methods which extend the basic abilities of the networks. The chapter by Morciniec and Rohwer shows how to deal with zero weighted locations in weighted forms of RAM based networks, which are normally handled in an ad-hoc fashion. Although a principled approach is presented (based on the Good-Turing density estimation method), it is shown that using very small default values is a good method; the chapter also contrasts binary and weighted RAM based approaches. The next chapter, by Neville, shows how a version of the Backpropagation algorithm can be used to train RAM networks, relating the RAM methods closely to weighted neural network systems and showing how Backpropagation can be accelerated using RAM based methods. The chapter by Jorgensen shows how the use of negative weights in the storage locations improves recognition success for handwritten text classification. Finally, Howells, Bisset and Fairhurst explain how the BCN architecture can be improved by allowing each neuron to hold more information about the patterns it is classifying (which results in the GCN architecture) and by the addition of a degree of confidence (which results in the PCN architecture).
https://doi.org/10.1142/9789812816849_0005
The n-tuple recognition method was tested on 11 large real-world data sets and its performance compared with that of 23 other classification algorithms. On 7 of these, the results show no systematic performance gap between the n-tuple method and the others. Evidence was found to support a possible explanation for why the n-tuple method yields poor results for certain datasets. Preliminary empirical results of a study of the confidence interval (the difference between the two highest scores) are also reported. These suggest a counter-intuitive correlation between the confidence interval distribution and the overall classification performance of the system.
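The confidence measure studied here is simple to state; a minimal sketch of it follows (the function name and the representation of class scores are illustrative assumptions, not the chapter's notation):

```python
def confidence_interval(scores):
    """The difference between the two highest class scores, as studied in the
    chapter. `scores` maps class label -> discriminator response (a number)."""
    top = sorted(scores.values(), reverse=True)
    return top[0] - (top[1] if len(top) > 1 else 0)
```

A small interval means the winning class barely beat the runner-up, which is why its distribution is a candidate indicator of overall system performance.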
https://doi.org/10.1142/9789812816849_0006
Weightless neural networks have been used in pattern recognition vision systems for many years. The operation of these networks requires that binary values be produced from the input data, and the simplest method of achieving this is to generate a logic '1' if a given sample from the input data exceeds some threshold value, and a logic '0' otherwise. If, however, the lighting of the scene being observed changes, then the input data 'appears' very different. Various methods have been proposed to overcome this problem, but so far there have been no detailed comparisons of these methods indicating their relative performance and practicalities. In this chapter the results are given of some initial tests of the different methods using real world data.
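The contrast under study can be sketched minimally as follows. The fixed-threshold coder is the simple method described above; the median-based coder is one hypothetical illumination-tolerant alternative, shown here only to illustrate the problem (the chapter's actual candidate methods may differ):

```python
def threshold_encode(samples, threshold):
    """Simplest binarisation: logic '1' where a sample exceeds a fixed
    threshold, '0' otherwise. Sensitive to global lighting changes."""
    return [1 if s > threshold else 0 for s in samples]

def rank_encode(samples):
    """Illustrative illumination-tolerant alternative (an assumption, not a
    method from the chapter): threshold each sample at the median of the
    window, so a uniform brightness shift leaves the code unchanged."""
    med = sorted(samples)[len(samples) // 2]
    return [1 if s > med else 0 for s in samples]
```

Under a uniform brightness shift the fixed-threshold code changes while the rank-based code does not, which is the failure mode the chapter's comparisons address.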
https://doi.org/10.1142/9789812816849_0007
A framework is introduced for reasoning about RAM-based weightless neural networks, using networks built from the Goal-Seeking Neuron (GSN) as an example. The framework is then generalised so that it encompasses all architectures within the RAM-based neural network paradigm. It may form the basis of a simple logic for RAM-based networks, allowing formal comparison of the performance of various architectures and the development of provably optimal solutions within given constraints.
https://doi.org/10.1142/9789812816849_0008
It is shown that it is simple to perform a cross-validation test on a training set when using RAM based Neural Networks. This is relevant for measuring the network generalisation capability (robustness). An information measure combining an extended concept of cross-validation with Shannon information is proposed. We describe how this measure can be used to select the input connections of the network. The task of recognising handwritten digits is used to demonstrate the capability of the selection strategy.
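The reason cross-validation on the training set is cheap for RAM based networks is that training only increments tallies, so a sample can be "removed" by decrementing its tallies, scored, and restored, with no retraining pass. A minimal leave-one-out sketch under that assumption (the function names and data layout are illustrative, not the chapter's):

```python
from collections import Counter

def address(pattern, tup):
    # Interpret the sampled bits as a RAM address.
    return sum(pattern[i] << k for k, i in enumerate(tup))

def loo_errors(samples, tuples):
    """Leave-one-out error count for a tally-based n-tuple net.
    samples: list of (binary pattern, class label); tuples: index groups."""
    labels = {lab for _, lab in samples}
    tallies = {lab: [Counter() for _ in tuples] for lab in labels}
    for pat, lab in samples:                      # train: increment tallies
        for j, tup in enumerate(tuples):
            tallies[lab][j][address(pat, tup)] += 1
    errors = 0
    for pat, lab in samples:
        for j, tup in enumerate(tuples):          # remove this sample
            tallies[lab][j][address(pat, tup)] -= 1
        scores = {l: sum(ts[j][address(pat, tup)] > 0
                         for j, tup in enumerate(tuples))
                  for l, ts in tallies.items()}
        if max(scores, key=scores.get) != lab:
            errors += 1
        for j, tup in enumerate(tuples):          # restore it
            tallies[lab][j][address(pat, tup)] += 1
    return errors
```

Each held-out test costs only a handful of counter updates, which is what makes the cross-validation-based robustness measure practical.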
https://doi.org/10.1142/9789812816849_0009
Modularity is a valuable principle in analysing and synthesising large systems. This chapter gives an overview of how to apply this principle to the assessment of the storage capacity of RAM-based neural networks. The storage capacity of a network is a function of the storage capacity of its component neurons and subnetworks. This modular approach allows the independent treatment of storage capacity at different levels: a model for the single neuron and a model for each architecture. The technique is illustrated on the major limited-capacity architectures in use today, the general neural unit (GNU) and the pyramid, and on a composition of them. The results fit well both with the existing architecture-dependent theories and with the experimental data currently available in the literature, with the added simplicity and flexibility afforded by modularity. The approach treats the collision of information during training as a probabilistic process.
https://doi.org/10.1142/9789812816849_0010
We present results concerning the application of the Good-Turing (GT) estimation method to the frequentist n-tuple system. We show that the Good-Turing method can, to a certain extent, rectify the Zero Frequency Problem by providing, within a formal framework, improved estimates of small tallies. We also show that it leads to better tuple system performance than Maximum Likelihood Estimation (MLE). However, preliminary experimental results suggest that replacing zero tallies with an arbitrary constant close to zero before MLE yields better performance than a GT system.
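The simple zero-tally fix evaluated here can be sketched as follows (the scoring form and the value of `epsilon` are illustrative assumptions; the chapter's experiments use its own constants):

```python
import math

def class_log_score(tallies, addresses, epsilon=1e-6):
    """Log-probability-style score for one class in a frequentist n-tuple net:
    the sum over RAMs of log relative frequencies, with zero tallies replaced
    by a small arbitrary constant rather than left at zero.
    tallies: per-RAM dicts {address: count}; addresses: per-RAM address of
    the test pattern."""
    log_p = 0.0
    for counts, addr in zip(tallies, addresses):
        total = sum(counts.values())
        p = counts.get(addr, 0) / total if total else 0.0
        # Zero Frequency Problem: an unseen address would give log(0);
        # substitute the small default instead.
        log_p += math.log(p if p > 0 else epsilon)
    return log_p
```

Without the substitution a single unseen address would veto the whole class; with it, unseen addresses merely penalise the score, which is the behaviour being compared against Good-Turing smoothing.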
https://doi.org/10.1142/9789812816849_0011
The chapter outlines recent research that enables one to train digital "Higher Order" sigma-pi artificial neural networks using pre-calculated, constrained look-up tables of Backpropagation delta changes. By utilising digital units that have sets of quantised site-values (i.e. weights), one may also quantise the sigmoidal activation-output function, so that the output function may likewise be pre-calculated. The research presented shows that with weights quantised to 128 levels these units can achieve accuracies of better than one percent for target output functions in the range Y ∈ [0,1]. This is equivalent to an average Mean Square Error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. The sigma-pi units are RAM based and as such are hardware-realisable units which may be implemented in microelectronic technology. The chapter presents a development of a sigma-pi node which provides high accuracy outputs using the cubic node's methodology of storing quantised weights (site-values) in locations held in RAM-based units. The networks presented are trained with the Backpropagation regime, which may be implemented on-line in hardware. One of the novelties of this work is that it shows how the bounded, quantised site-values (weights) of sigma-pi nodes make the training of these neurocomputing systems relatively simple and very fast.
https://doi.org/10.1142/9789812816849_0012
A strategy for adding inhibitory weights to RAM based nets has been developed. As a result, a more robust net with lower error rates can be obtained. In the chapter we describe how the inhibition factors can be learned with a one-shot learning scheme. The main strategy is to obtain inhibition values that minimise the error rate obtained in a cross-validation test performed on the training set. The inhibition technique has been tested on the task of recognising handwritten digits. The results obtained match the best error rates reported in the literature.
https://doi.org/10.1142/9789812816849_0013
This chapter introduces a novel networking strategy for RAM-based Neurons which significantly improves the training and recognition performance of such networks whilst maintaining the generalisation capabilities achieved in previous network configurations. A number of different architectures are introduced each using the same underlying principles.
https://doi.org/10.1142/9789812816849_others03
For any approach to be worth studying, demonstrable proof of its utility on practical problems is essential. This section contains a number of such practical studies. All the applications are in image processing, the traditional area for the successful use of RAM based methods (as in their original use). The main reason for this is that the methods scale well to the large input data sizes found in image analysis problems. The final paper examines the implementation of ADAM, a RAM based network for image analysis, on a parallel system of transputers.
The first paper, by O'Keefe and Austin, shows their use in finding features in fax images, a problem that exploits the methods' potentially fast processing and noise tolerance, since the images come from typical fax machines. In addition, it illustrates how RAM based methods compare with traditional object recognition methods.
Texture recognition is examined by Hepplewhite and Stonham, who introduce a novel pre-processing method and compare a number of existing N tuple pre-processing methods for this task.
RAM based networks are particularly suitable for small mobile robots as shown by Bishop, Keating and Mitchell, who demonstrate that a compound, insect like, eye can be created and used to control a simple robot.
Feature analysis is a vital part of machine vision explored by Clarkson and Ding. They show how a pRAM based network can be used to find features in a fingerprint recognition system. In addition, they show how noise injection can be used to improve performance of the method.
The use of colour in the detection of danger labels is investigated by Linneberg, Andersen, Jorgensen and Christensen, who demonstrate the power of the N tuple method to solve real problems.
The problems involved in exploring complex images using saccadic image scanning methods are examined by Ntourntoufis and Stonham. They extend the MAGNUS network presented in section one to deal with multiple objects in a 'Kitchen World' scene, illustrating that the iconic internal representations used in MAGNUS can be used to control image understanding systems.
Finally, handwritten text is examined in the chapter by De Carvalho and Bisset, where the SOFT and GSN RAM based methods are combined in a modular approach to a difficult classification problem.
https://doi.org/10.1142/9789812816849_0014
An essential part of image analysis is the location and identification of objects within the image. Noise and clutter make this identification problematic, and the size of the image may present computational difficulties. To overcome these problems, a window onto the image is used to focus onto small areas. Conventionally, it is still necessary to know the size of the object to be searched for in order to select a window of the correct size. A method is described for object location and classification which allows the use of a small window to identify large objects in the image. The window focusses on features in the image, and an associative memory recalls evidence for objects from these features, avoiding the necessity of knowing the dimensions of the objects to be detected.
https://doi.org/10.1142/9789812816849_0015
A novel approach to real time texture recognition, derived from the n-tuple method of Bledsoe and Browning, is presented for use in industrial applications. A wealth of texture recognition methods is currently available; however, few have the computational tractability needed in an automated environment. Methods based on the nth order co-occurrence spectrum are discussed, together with their shortcomings, before a new method is presented which uses nth order co-occurrence statistics to describe texture edge maps instead of pixel intensity values. The resulting co-occurrence representation of the texture can be classified by established statistical methods or weightless neural networks. Finally the new method is applied to the problems of texture classification and segmentation.
https://doi.org/10.1142/9789812816849_0016
The Department of Cybernetics has recently developed some simple robot insects which can move around an environment they perceive through simple sensors. Suitable sensors currently implemented include proximity switches, active and passive infra red detectors, ultrasonics and a simple compound eye. This chapter describes how such an eye, linked to a simple weightless neural network, can be used to give an estimate of position within a complex environment. Such information could be used by the insect to generate more intricate behaviours.
https://doi.org/10.1142/9789812816849_0017
A fingerprint processing and identification system based on pRAM neural networks is described. Firstly, a condensed numerical representation of a fingerprint is obtained which comprises a set of local directional images in two dimensions. Secondly, the core and delta points are identified, which are used as points of registration for the matching process. Finally, the input fingerprint can be matched with a set of reference fingerprints and the system displays the 10 best matching samples. The fingerprint is converted into a matrix of 17 × 17 directional images which are quantised to 8 levels. A two layer 6-pRAM pyramidal neural network was designed and trained by a reinforcement self-organising algorithm with an adaptive learning rate and Gaussian noise injection. The recognition rate is 86.4% for the best match. However, the system displays the 10 best matching samples so that these samples can then be manually inspected which is optimum for practical applications.
https://doi.org/10.1142/9789812816849_0018
The detection of spatial and temporal relations in a scene is best illustrated by considering a mobile robot which can change the visual image it receives by moving the position of its visual sensor. The problem is then to show how such a robot can utilise its actions, moving the sensor, to get proper indexing of the visual information and so encode spatial relationships linguistically. The preliminary problem addressed in this chapter is a simpler one. The robot's visual sensor is directed to specific locations in a two-dimensional scene where various objects are located. For each object, a linguistic label is provided, describing the class of the object in question. The robot's task is to learn to search in the scene for objects by name. A novel two-phase configuration of a MAGNUS (Multi-Automata of General Neural UnitS) weightless neural system is used to carry out the investigation. A training procedure which enables the network to perform the given task optimally is presented.
https://doi.org/10.1142/9789812816849_0019
This chapter describes and evaluates a completely integrated Boolean neural network architecture, in which a self-organising Boolean neural network (SOFT) is used as a front-end processor to a feedforward Boolean neural network based on goal-seeking principles (GSNf). To this end, it discusses the advantages of the integrated SOFT-GSNf over GSNf alone, showing its increased effectiveness in the classification of postcode numerals extracted from mail envelopes.
https://doi.org/10.1142/9789812816849_0020
An image processing system that automatically locates danger labels on the rear of containers is presented. The system uses RAM based neural networks to locate and classify labels after a pre-processing step involving non-linear filtering and RGB to HSV conversion. Results on images recorded at the container terminal in Esbjerg are presented.
https://doi.org/10.1142/9789812816849_0021
This text proposes an efficient method for implementing the ADAM binary neural network on a message-passing parallel computer. Performance figures obtained by testing an implementation on a Transputer based system are presented. It is shown that simulation of an ADAM network can be efficiently sped up by parallel computation, and that overlapping computation with communication further improves performance.
https://doi.org/10.1142/9789812816849_0022
This chapter describes techniques for the hardware implementation of a Correlation Matrix Memory (CMM), a fundamental element of a binary neural network. The training and recall of a CMM based system is explained, prior to the hardware description of the dedicated processing platform for binary neural networks, C-NNAP. The C-NNAP architecture provides processing rates nearly eight times faster than a modern 64-bit workstation. It hosts a dedicated FPGA processor that performs the recall operation. The data flow through a multiple board system is also described, which will provide an even more powerful processing platform.
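The CMM training and recall operations referred to here are standard, and a minimal pure-software sketch helps fix ideas before the hardware description (the thresholding convention below is one common choice, not necessarily the one C-NNAP implements):

```python
def cmm_train(M, x, y):
    """Hebbian one-shot training of a binary Correlation Matrix Memory:
    OR the outer product of binary input x and binary output y into matrix M
    (M is a list of rows, one row per input bit)."""
    for i, xi in enumerate(x):
        if xi:
            for j, yj in enumerate(y):
                M[i][j] |= yj

def cmm_recall(M, x, k):
    """Recall: sum the rows of M selected by the set bits of x, then
    threshold the column sums at k (here taken as the number of set bits
    in the stored input patterns)."""
    sums = [sum(M[i][j] for i, xi in enumerate(x) if xi)
            for j in range(len(M[0]))]
    return [1 if s >= k else 0 for s in sums]
```

Both operations reduce to bitwise ORs and population counts over binary vectors, which is precisely what makes the FPGA implementation described in the chapter attractive.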
https://doi.org/10.1142/9789812816849_bmatter