
  • Article (No Access)

    Design and Implementation of a Spiking Neural Network with Integrate-and-Fire Neuron Model for Pattern Recognition

    In contrast to conventional artificial neural networks (ANNs), spiking neural networks (SNNs) operate based on temporal coding approaches. In the proposed SNN, the number of neurons, the neuron model, the encoding method, and the learning algorithm are described clearly. It is also shown that optimizing the SNN parameters on a physiological basis and maximizing the information they convey leads to a more robust network. In this paper, inspired by the “center-surround” structure of the receptive fields in the retina and the amount of overlap between them, a robust SNN is implemented. It is based on the Integrate-and-Fire (IF) neuron model and uses time-to-first-spike coding to train the network with a newly proposed method. The Iris and MNIST datasets were employed to evaluate the performance of the proposed network, whose accuracy, with 60 input neurons, was 96.33% on the Iris dataset. The network was trained in only 45 iterations, indicating a reasonable convergence rate. For the MNIST dataset, when the gray level of each pixel was taken as input to the network, 600 input neurons were required and the accuracy was 90.5%. Next, 14 structural features were used as input, so the number of input neurons decreased to 210 and the accuracy increased to 95%, meaning that an SNN with fewer input neurons and good performance was implemented. The ABIDE1 dataset was also applied to the proposed SNN. Of the 184 samples, 79 belong to healthy subjects and 105 to subjects with autism. One characteristic that can differentiate these two classes is the entropy of the data, so Shannon entropy was used for feature extraction. Applying these features to the proposed SNN, an accuracy of 84.42% was achieved after only 120 iterations, which compares well with recent results.
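
    The abstract above centers on an Integrate-and-Fire neuron driven by time-to-first-spike encoded inputs. The minimal Python sketch below illustrates only that combination; the non-leaky update, the parameter values, and the function names are assumptions made for illustration, not the authors' implementation.

        import numpy as np

        def time_to_first_spike(intensity, t_max=100.0):
            # Stronger inputs fire earlier: map an intensity in [0, 1] to a spike time.
            return t_max * (1.0 - np.clip(intensity, 0.0, 1.0))

        def if_neuron_first_spike(spike_times, weights, threshold=1.0, t_max=100.0, dt=1.0):
            # Non-leaky Integrate-and-Fire: accumulate the weights of incoming spikes
            # and return the time of the neuron's first output spike (None if silent).
            v = 0.0
            for t in np.arange(0.0, t_max + dt, dt):
                arrived = (spike_times >= t) & (spike_times < t + dt)
                v += np.sum(weights[arrived])
                if v >= threshold:
                    return t
            return None

        intensities = np.array([0.9, 0.2, 0.6])   # e.g. normalized pixel gray levels
        weights = np.array([0.5, 0.1, 0.6])       # illustrative synaptic weights
        print(if_neuron_first_spike(time_to_first_spike(intensities), weights))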

  • Article (No Access)

    A FUZZY RECURRENT ARTIFICIAL NEURAL NETWORK (FRANN) FOR PATTERN CLASSIFICATION

    This paper proposes a recurrent neural network of fuzzy units, which may be used for approximating a hetero-associative mapping and also for pattern classification. Since classification is concerned with set membership, and objects generally belong to sets to various degrees, a fuzzy network seems a natural choice for classification. In the network proposed here, each fuzzy unit defines a fuzzy set, and the unit determines the degree to which its input vector lies in that fuzzy set. The fuzzy unit may be compared to a perceptron, in which the input vector is compared to the unit's weight vector by taking their dot product. In the case of the fuzzy unit, the resulting membership value is compared to a threshold. Training of a fuzzy unit is based on an algorithm for solving linear inequalities, similar to the method used for Ho-Kashyap recording, and the whole network is trained by training each unit separately. The training algorithm is tested on representations of letters of the alphabet and their noisy versions. The simulation results obtained are very promising.
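
    To illustrate the contrast the abstract draws between a fuzzy unit and a perceptron, the sketch below thresholds a membership degree instead of a dot product. The Gaussian-shaped membership function and all parameter values are assumptions made for this illustration; they are not specified by the paper here.

        import numpy as np

        def perceptron_unit(x, w, bias=0.0):
            # Classic perceptron: threshold the dot product of input and weight vector.
            return 1 if np.dot(x, w) + bias > 0 else 0

        def fuzzy_unit(x, w, threshold=0.5, width=1.0):
            # Degree to which x lies in the fuzzy set whose prototype is w
            # (Gaussian shape assumed), thresholded to a crisp decision.
            membership = float(np.exp(-np.sum((x - w) ** 2) / (2.0 * width ** 2)))
            return membership, 1 if membership >= threshold else 0

        x = np.array([0.9, 0.1])
        w = np.array([1.0, 0.0])
        print(perceptron_unit(x, w), fuzzy_unit(x, w))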

  • Article (No Access)

    ROBUST FUZZY REGRESSION ANALYSIS USING NEURAL NETWORKS

    Several neural-network-based methods have been applied to nonlinear fuzzy regression analysis by various investigators. The performance of these methods deteriorates significantly when outliers exist in the training data set. In this paper, we propose a training algorithm for fuzzy neural networks with general fuzzy-number weights, biases, inputs, and outputs for the computation of nonlinear fuzzy regression models. First, we define a cost function based on the concept of possibility of fuzzy equality between the fuzzy output of the fuzzy neural network and the corresponding fuzzy target. Next, a training algorithm is derived from the cost function in a manner similar to the back-propagation algorithm. Last, we examine the ability of our approach by computer simulations on numerical examples. The simulation results show that the proposed algorithm is able to reduce the effect of outliers.
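
    The cost function mentioned above rests on the possibility of fuzzy equality between a fuzzy output and a fuzzy target. As a hedged illustration, the sketch below evaluates Pos(A = B) = sup_x min(mu_A(x), mu_B(x)) for two triangular fuzzy numbers and uses 1 - Pos as a cost; the triangular representation and the numeric values are assumptions, not the paper's exact formulation.

        import numpy as np

        def tri_membership(x, a, b, c):
            # Triangular fuzzy number (a, b, c): support [a, c], peak at b.
            return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                         (c - x) / (c - b + 1e-12)), 0.0)

        def possibility_of_equality(A, B, n_grid=2001):
            # Pos(A = B) = sup_x min(mu_A(x), mu_B(x)), evaluated on a dense grid.
            grid = np.linspace(min(A[0], B[0]), max(A[2], B[2]), n_grid)
            return float(np.max(np.minimum(tri_membership(grid, *A),
                                           tri_membership(grid, *B))))

        output = (1.8, 2.0, 2.3)            # illustrative fuzzy network output
        target = (2.1, 2.4, 2.6)            # illustrative fuzzy target
        cost = 1.0 - possibility_of_equality(output, target)   # quantity to minimize
        print(cost)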

  • Article (No Access)

    AN INTELLIGENT SOFT MEASUREMENT METHOD FOR PREDICTING PARAMETERS

    An intelligent soft measurement and information processing method for predicting the parameters of a process control system is proposed. The process neural network (PNN) is a configuration of artificial neural network put forward in recent years. Existing PNN training algorithms based on function orthogonal basis expansion are discussed; however, their convergence rate is comparatively low. An improved algorithm that raises the training speed of such PNNs for soft measurement is investigated. By adding a normalization rule to the original algorithm and introducing a momentum term and an automatic learning-rate adjustment method for the network weight functions, the training time of the PNN learning algorithm is reduced, and good results are demonstrated by simulation on a wastewater treatment system.
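
    A process neuron aggregates a time-varying input signal through a weight function, and the algorithms above expand both in an orthogonal function basis. The sketch below shows one way such an expansion can reduce the time integral to a weighted dot product of coefficients; the Legendre basis, the truncation order, and the example signals are assumptions for illustration only, not the paper's algorithm.

        import numpy as np
        from numpy.polynomial import legendre

        def expand(f, order, n_grid=400):
            # Project a signal f(t) on [-1, 1] onto the first `order` Legendre
            # polynomials and return the expansion coefficients.
            t = np.linspace(-1.0, 1.0, n_grid)
            coeffs = []
            for k in range(order):
                basis = legendre.Legendre.basis(k)(t)
                # <f, P_k> / <P_k, P_k>, approximated by simple Riemann sums.
                coeffs.append(np.sum(f(t) * basis) / np.sum(basis * basis))
            return np.array(coeffs)

        # Input signal x(t) and weight function w(t) share one basis, so the time
        # integral of x(t) * w(t) reduces to a weighted dot product of coefficients.
        order = 6
        x_coeffs = expand(lambda t: np.sin(np.pi * t) + 0.5, order)
        w_coeffs = expand(lambda t: 0.8 * t ** 2, order)
        norms = np.array([2.0 / (2 * k + 1) for k in range(order)])   # <P_k, P_k> on [-1, 1]
        aggregated_input = float(np.sum(x_coeffs * w_coeffs * norms))
        print(aggregated_input)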

  • Article (Open Access)

    A Framework for Selection of Training Algorithm of Neuro-Statistic Model for Prediction of Pig Breeds in India

    Various training algorithms are used in artificial neural networks for updating the weights during training. However, the selection of an appropriate training algorithm depends on the input–output mapping of the dataset for which the network is constructed. In this paper, a framework consisting of five modules is proposed to select the optimal training algorithm for predicting pig breeds from their images. Individual pig images from five pig breeds were captured using the built-in camera of a mobile phone, and the contour of each pig was segmented from its image by a hue-based segmentation algorithm. In the Statistical Parameter and Color Component Retrieval Module, statistical parameters such as entropy, standard deviation, variance, mean, median, and mode, together with color properties such as hue, saturation, and value (HSV), are extracted from the content of each segmented image. The values of all extracted parameters are then passed to the Training Algorithm Selection Module. In this module, a fitting neural network with different numbers of hidden neurons is run on the values extracted from the pig images to map them to their breeds. Ten training algorithms are applied to the same extracted dataset separately for five epochs each, keeping the other network parameters constant. The mean square error (MSE) and correlation coefficient (R) on the validation set are calculated after the weights and biases of each neuron connection are adjusted. One of the ten training algorithms, together with a suitable number of hidden neurons, is selected by comparative analysis for the lowest MSE and highest R on the validation set. The fitting network with the selected training algorithm is then run on the same extracted dataset until the stopping condition is reached. Finally, the test-set images are fed into the network, and in the Breed Prediction Module the network output is assigned to the class corresponding to each pig breed. The proposed framework predicted breeds with 96.00% accuracy on a test set of 50 images. It may be concluded that the neuro-statistic neural network model can be used for breed prediction of pigs from images of individual animals.
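
    The Training Algorithm Selection Module described above compares candidate training setups by validation MSE and correlation coefficient R. The sketch below mimics that comparison loop in Python; because the original framework evaluates ten training algorithms of a fitting network, the scikit-learn solvers and hidden-layer sizes used here are merely stand-ins, and the synthetic data, names, and shapes are assumptions.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.metrics import mean_squared_error

        def select_training_setup(X_train, y_train, X_val, y_val,
                                  solvers=("lbfgs", "sgd", "adam"),
                                  hidden_sizes=(5, 10, 20)):
            # Train each candidate setup briefly, then keep the one with the
            # lowest validation MSE (ties broken by higher correlation R).
            best = None
            for solver in solvers:
                for h in hidden_sizes:
                    net = MLPRegressor(hidden_layer_sizes=(h,), solver=solver,
                                       max_iter=5, random_state=0)   # 5 epochs per trial
                    net.fit(X_train, y_train)
                    pred = net.predict(X_val)
                    mse = mean_squared_error(y_val, pred)
                    r = np.corrcoef(y_val, pred)[0, 1]
                    if best is None or (mse, -r) < (best[0], -best[1]):
                        best = (mse, r, solver, h)
            return best   # (MSE, R, chosen solver, chosen hidden-neuron count)

        # Example usage on synthetic data standing in for extracted image features.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 14))
        y = X @ rng.normal(size=14)
        print(select_training_setup(X[:80], y[:80], X[80:], y[80:]))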

  • Chapter (No Access)

    Genetic Algorithm-based Predictive Control for Nonlinear Processes

    Genetic algorithms (GAs) are known to find an optimal value with higher probability than descent-based nonlinear programming methods for optimization problems. Accordingly, a GA-based optimization technique is adopted in the paper to obtain optimal future control inputs for predictive control systems. For reliable future predictions of a process, we identify the underlying process with an NNARX model structure that consists of a regressor vector and a set of parameters containing all the weights of the neural network. To reduce the size of the neural network, we determine the elements of the regressor vector based on the Lipschitz index and a criterion. The Gauss-Newton-based Levenberg-Marquardt method is used to estimate the parameters because of its robustness and superlinear rate of convergence. Since most industrial processes are subject to constraints, we handle the input-output constraints by modifying some genetic operators and/or using a penalty strategy in the GA-based predictive control. Furthermore, we extend the control scheme to multi-input, multi-output nonlinear dynamical systems. Computer simulations are given to show the effectiveness of the GA-based predictive control method compared with the adaptive GPC algorithm.
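
    The chapter combines an identified NNARX predictor with a GA search over future control inputs and a penalty strategy for constraints. The sketch below illustrates that loop with a toy one-step predictor standing in for the NNARX model; the GA operators, parameter values, and the predictor itself are assumptions for illustration, not the chapter's implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        def nnarx_predict(y_prev, u_prev, u):
            # Placeholder nonlinear one-step-ahead model y(k+1) = f(y(k), u(k-1), u(k)).
            return 0.8 * y_prev + 0.3 * np.tanh(u) + 0.1 * u_prev

        def fitness(u_seq, y0, u0, setpoint, u_min=-1.0, u_max=1.0, penalty=100.0):
            # Predicted tracking error over the horizon plus a penalty for
            # violating the input constraints, as mentioned in the abstract.
            y, u_prev, cost = y0, u0, 0.0
            for u in u_seq:
                y = nnarx_predict(y, u_prev, u)
                cost += (setpoint - y) ** 2
                cost += penalty * (max(0.0, u - u_max) + max(0.0, u_min - u))
                u_prev = u
            return cost

        def ga_control(y0, u0, setpoint, horizon=5, pop=40, gens=60, mut=0.1):
            population = rng.uniform(-1.0, 1.0, size=(pop, horizon))
            for _ in range(gens):
                costs = np.array([fitness(ind, y0, u0, setpoint) for ind in population])
                parents = population[np.argsort(costs)[: pop // 2]]   # truncation selection
                mates = parents.copy()
                rng.shuffle(mates)
                alpha = rng.uniform(size=(pop // 2, 1))
                children = alpha * parents + (1 - alpha) * mates      # arithmetic crossover
                children += mut * rng.normal(size=children.shape)     # Gaussian mutation
                population = np.vstack([parents, children])
            best = population[np.argmin([fitness(ind, y0, u0, setpoint) for ind in population])]
            return best[0]   # receding horizon: apply only the first control input

        print(ga_control(y0=0.0, u0=0.0, setpoint=0.5))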