To improve the accuracy of predicting the effect of college students' "Internet+" mass entrepreneurship and innovation practice, a prediction method based on an evolutionary algorithm is proposed. By comprehensively considering students' family, school, personal, and social backgrounds, a comprehensive evaluation index is constructed, and the objective function is set to minimize the distance between the questionnaire weights and the ideal weights. To keep the evolutionary algorithm from getting stuck in local optima, a simulated annealing strategy is introduced to optimize the mutation process, enhancing the algorithm's performance. Experimental verification shows that this method selects evaluation indicators accurately, achieves high prediction accuracy, and solves quickly. It has been successfully applied in multiple universities, providing an effective tool for evaluating and improving the effectiveness of college students' entrepreneurship and innovation practice.
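A minimal sketch of the simulated-annealing-assisted mutation step described above, assuming a real-valued gene encoding, a Gaussian perturbation, and a Metropolis acceptance rule (the paper's exact encoding and cooling schedule are not specified here):

```python
import math
import random

def sa_mutation(individual, fitness, temperature, sigma=0.1):
    """Mutate one gene, then accept or reject the offspring with the
    Metropolis criterion so the search can escape local optima."""
    candidate = list(individual)
    i = random.randrange(len(candidate))
    candidate[i] += random.gauss(0.0, sigma)
    delta = fitness(candidate) - fitness(individual)  # minimisation: lower is better
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        return candidate  # improvement, or a worse move accepted probabilistically
    return individual     # rejected: keep the parent unchanged
```

Run inside the EA's mutation loop with a temperature that decreases over generations, so the operator tolerates worsening moves early on and becomes greedy as the population converges.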
In this paper, we present a novel approach that implements a combined methodology for finding an appropriate neural network architecture and weights using an evolutionary least-squares-based algorithm (GALS). The paper focuses on the heuristics of updating weights using an evolutionary least-squares-based algorithm, finding the number of hidden neurons for a two-layer feed-forward neural network, the stopping criterion for the algorithm, and comparisons of the results with other existing methods for searching for optimal or near-optimal solutions in the multidimensional complex search space comprising the architecture and weight variables. We explain how the weight-updating algorithm based on the evolutionary least-squares approach can be combined with the growing-architecture model to find the optimum number of hidden neurons. We also discuss finding a probabilistic solution space as a starting point for the least-squares method, and address the problems involving fitness breaking. We apply the proposed approach to the XOR problem, the 10-bit odd-parity problem, and several real-world benchmark data sets, including the handwriting data set from CEDAR and the breast cancer and heart disease data sets from the UCI ML repository. Comparative results based on classification accuracy and time complexity are discussed.
The k-nearest neighbor method is a classifier based on the evaluation of the distances to each pattern in the training set. The edited version of this method applies the classifier with a subset of the complete training set, from which some of the training patterns are excluded in order to reduce the classification error rate. In recent works, genetic algorithms have been successfully applied to determine which patterns must be included in the edited subset. In this paper, we propose a novel implementation of a genetic algorithm for designing edited k-nearest neighbor classifiers. It includes a novel mean-square-error-based fitness function, a novel clustered crossover technique, and a fast smart mutation scheme. To evaluate the performance of the proposed method, we include results on the breast cancer, diabetes, and letter recognition databases from the UCI machine learning benchmark repository. Both error rate and computational cost are considered in the analysis, and the results show the improvement achieved by the proposed editing method.
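The core of such a GA is the encoding of the edited subset as a binary mask and a fitness function built from the classification error. A minimal sketch, assuming one-dimensional features and a leave-one-out error rate in place of the paper's exact MSE-based fitness:

```python
def knn_predict(x, train_X, train_y, mask, k=3):
    """k-NN vote using only the training patterns kept by the binary mask."""
    kept = [(abs(x - xi), yi) for xi, yi, m in zip(train_X, train_y, mask) if m]
    kept.sort(key=lambda t: t[0])
    votes = [yi for _, yi in kept[:k]]
    return max(set(votes), key=votes.count)

def edit_fitness(mask, train_X, train_y, k=3):
    """Leave-one-out error rate of the edited subset (lower is better)."""
    errors = 0
    for i, (x, y) in enumerate(zip(train_X, train_y)):
        loo = list(mask)
        loo[i] = 0                      # never let a pattern vote for itself
        if not any(loo):
            return 1.0                  # empty subset: worst possible fitness
        if knn_predict(x, train_X, train_y, loo, k) != y:
            errors += 1
    return errors / len(train_X)
```

A GA then evolves the mask directly, with `edit_fitness` as the objective to minimise; the 1-D distance is only for brevity and would be a Euclidean distance over feature vectors in practice.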
The construction of a Spiking Neural Network (SNN), i.e. the choice of an appropriate topology and the configuration of its internal parameters, represents a great challenge for SNN-based applications. Evolutionary Algorithms (EAs) offer an elegant solution to these challenges, and methods capable of exploring both types of search space simultaneously appear to be the most promising. A variety of such heterogeneous optimization algorithms have emerged recently, in particular in the field of probabilistic optimization. In this paper, a literature review of heterogeneous optimization algorithms is presented and an example of probabilistic optimization of SNNs is discussed in detail. The paper provides an experimental analysis of a novel Heterogeneous Multi-Model Estimation of Distribution Algorithm (hMM-EDA). First, practical guidelines for configuring the method are derived, and then the performance of hMM-EDA is compared to state-of-the-art optimization algorithms. Results show that hMM-EDA is a lightweight, fast, and reliable optimization method that requires the configuration of only very few parameters. Its performance on a synthetic heterogeneous benchmark problem is highly competitive and suggests its suitability for the optimization of SNNs.
Artificial Neuron–Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their advantage over the traditional approach, i.e. without artificial astrocytes, has already been demonstrated on classification problems. However, the learning algorithms developed so far depend strongly on a set of glial parameters that are manually tuned for each specific problem. As a consequence, preliminary experiments must be carried out to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased, and problem-dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and makes it possible to test any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGNs and competitive results with ANN approaches.
We consider hexagonal cellular automata with an immediate cell neighbourhood and three cell states. Every cell calculates its next state depending on the integral representation of states in its neighbourhood, i.e., how many neighbours are in each state. We employ evolutionary algorithms to breed local transition functions that support mobile localizations (gliders), and characterize the sets of selected functions in terms of quasi-chemical systems. Analysis of the evolved functions suggests that mobile localizations are likely to emerge in quasi-chemical systems with limited diffusion of one reagent, that only a small number of molecules is required to amplify travelling localizations, and that reactions leading to stationary localizations involve relatively equal amounts of the quasi-chemical species. The techniques developed can be applied to cascading signals in nature-inspired spatially extended computing devices, and to phenomenological studies and the classification of non-linear discrete systems.
In recent years artificial intelligence has achieved great successes, mainly in the fields of expert systems and neural networks. Nevertheless, the road to truly intelligent systems is still obscured: artificial intelligence systems with a broad range of cognitive abilities are not within sight. The limited competence of such systems (brittleness) is identified as a consequence of the top-down design process. The evolution principle of nature, on the other hand, shows an alternative and elegant way to build intelligent systems. We propose to take an evolution engine as the driving force for the bottom-up development of knowledge bases and for the optimization of the problem-solving process. A novel data analysis system for the high-energy physics experiment DELPHI at CERN shows the practical relevance of this idea. The system is able to reconstruct the physical processes after the collision of particles by making use of the underlying Standard Model of elementary particle physics. The evolution engine acts as a global controller of a population of inference engines working on the reconstruction task. By implementing the system on the Connection Machine (Model CM-2) we take full advantage of the inherent parallelization potential of the evolutionary approach.
Recently, research interest in multi-objective optimization has increased remarkably. Most of the proposed methods use a population of solutions that are simultaneously improved in an attempt to approximate the Pareto-optimal front. When the population size increases, the quality of the solutions tends to be better, but the runtime is higher. This paper presents how to apply parallel processing to enhance convergence to the Pareto-optimal front without increasing the runtime. In particular, we present an island-based parallelization of five multi-objective evolutionary algorithms: NSGA-II, SPEA2, PESA, msPESA, and a new hybrid version we propose. Experimental results on a set of test problems indicate that the quality of the solutions tends to improve as the number of islands increases.
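The island model underlying such parallelizations can be illustrated with a simple ring-migration step. This sketch uses a scalar fitness for clarity, whereas the multi-objective algorithms listed would exchange individuals chosen by Pareto rank:

```python
def migrate(islands, n_migrants=2):
    """Ring migration: each island sends copies of its best individuals to
    the next island, replacing that island's worst ones. Individuals are
    (fitness, genome) pairs; lower fitness is better (scalar for clarity)."""
    outgoing = [sorted(isl)[:n_migrants] for isl in islands]
    for i, isl in enumerate(islands):
        isl.sort()                               # worst individuals last
        isl[-n_migrants:] = outgoing[(i - 1) % len(islands)]
    return islands
```

Between migration events each island evolves independently, which is what lets the model improve solution quality without lengthening the runtime of any single population.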
In this paper, we present a novel sequential sampling methodology for solving multi-objective optimization problems. Random sequential sampling is performed using information from within the non-dominated solution set generated by the algorithm, while resampling is performed using the extreme points of the non-dominated solution set. The proposed approach has been benchmarked against well-known multi-objective optimization algorithms from the literature on a series of problem instances. The proposed algorithm performs at least as well as the alternatives found in the literature on problems whose Pareto front exhibits convexity, nonconvexity, or discontinuity, while producing very promising results on problem instances with multi-modality or a nonuniform distribution of solutions along the Pareto front.
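The non-dominated solution set that drives the sampling can be maintained with a standard Pareto-dominance filter; a minimal sketch, assuming minimisation of all objectives:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Extract the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The extreme points used for resampling are then simply the members of this set that minimise each individual objective.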
Self-adaptation has been frequently employed in evolutionary computation. Angeline defined three distinct adaptive levels: the population, individual, and component levels. Cultural Algorithms have been shown to provide a framework in which to model self-adaptation at each of these levels. Here, we examine the role that different forms of knowledge can play in the self-adaptation process at the population level for evolution-based function optimizers. In particular, we compare the relative performance of normative and situational knowledge in guiding the search process. An acceptance function using a fuzzy inference engine is employed to select acceptable individuals for forming the generalized knowledge in the belief space, and evolutionary programming is used to implement the population space. The results suggest that the use of a cultural framework can produce substantial improvements in execution time and accuracy on a given set of function minimization problems over population-only evolutionary systems.
This paper presents a structural distance-based crossover for neural network classifiers, which is applied as part of a Memetic Algorithm (MA) for simultaneously evolving the structure and weights of neural network models applied to multiclass problems. Previous researchers have shown that this simultaneous evolution is a way to avoid noisy fitness evaluation. The MA incorporates a crossover operator that proves useful for ameliorating the permutation problem of the network representation (i.e. different genotypes can represent the same neural network phenotype), increasing the structural diversity of the individuals and improving the accuracy of the results. Instead of a recombination probability, the crossover operator considers a similarity parameter (the minimum structural distance), which makes it possible to maintain a trade-off between global and local search. The neural network models selected in this work are product-unit neural networks (PUNNs), due to their increasing relevance in classification problems that exhibit high-order relationships among the input variables. The proposed MA is intended to reduce the overtraining problems that can arise on some datasets for this kind of model. The evolutionary system is applied to eight classification benchmarks, and the results of an analysis of variance (ANOVA) show the effectiveness of the structural crossover operator and the capacity of our algorithm to obtain evolved PUNNs with higher classification accuracy than those obtained using other evolutionary techniques. The results are also compared with popular, effective machine learning classification methods, showing competitive performance.
To the best of our knowledge, this paper provides the first hardware implementation of a complete decision tree inference algorithm. Evolving decision trees in hardware is motivated by a significant improvement in evolution time compared to software evolution, and by the efficient use of decision trees in various embedded applications (robotic navigation systems, image processing systems, etc.) where run-time adaptive learning is of particular interest. Several architectures are presented for the hardware evolution of single oblique or nonlinear decision trees and of ensembles composed of oblique or nonlinear decision trees. The proposed architectures are suitable for implementation using both Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs). Results of experiments on 29 datasets from the standard UCI Machine Learning Repository suggest that the FPGA implementations offer a significant improvement in inference time compared with traditional software implementations. In the case of single decision tree evolution, the FPGA implementation of the H_DTS2 architecture has on average a 26 times shorter inference time than the software implementation, whereas the FPGA implementation of the H_DTE2 architecture has on average a 693 times shorter inference time than the software implementation.
An important part of the integrated circuit design process is the channel routing stage, which determines how to interconnect components that are arranged in sets of rows. The channel routing problem has been shown to be NP-complete, thus this problem is often solved using genetic algorithms. The traditional objective for most channel routers is to minimize total area required to complete routing. However, another important objective is to minimize signal propagation delays in the circuit. This paper describes the development of a genetic channel routing algorithm that uses a Pareto-optimal approach to accommodate both objectives. When compared to the traditional channel routing approach, the new channel router produced layouts with decreased signal delay, while still minimizing routing area.
Current works on the generation of combinational logic circuits (CLCs) using evolutionary algorithms (EAs) propose solutions that use field-programmable gate arrays (FPGAs) to accelerate combinational circuit simulation, a step needed to evaluate the level of correctness of each individual circuit. However, current works fail to separate two distinct problems: the EA and the circuit simulator. Treating both as a single problem results in works that address neither properly, restricting solutions to simple circuits and to topologically restrictive circuit simulators, while providing very limited data on the results. In this work, we address the circuit simulator problem exclusively and propose an architecture for fast simulation of n-LUT CLCs of arbitrary topology. The proposed architecture is modular and makes no assumptions about the specific EA it will be used with. We provide detailed performance results for varying circuit dimensions, and these results show that our architecture surpasses other works in both performance and topological flexibility.
Particle swarm optimization (PSO) is introduced to implement a new constructive learning algorithm for training generalized cellular neural networks (GCNNs) for the identification of spatio-temporal evolutionary (STE) systems. The basic idea of the new PSO-based learning algorithm is to successively approximate the desired signal by progressively pursuing relevant orthogonal projections. This new algorithm is thus referred to as the orthogonal projection pursuit (OPP) algorithm, which is similar in mechanism to the conventional projection pursuit approach. A novel two-stage hybrid training scheme is proposed for constructing a parsimonious GCNN model. In the first stage, the orthogonal projection pursuit algorithm is applied to adaptively and successively augment the network, where the adjustable parameters of the associated units are optimized using a particle swarm optimizer. The network model produced in the first stage may be redundant, so in the second stage a forward orthogonal regression (FOR) algorithm, aided by mutual information estimation, is applied to refine and improve the initially trained network. The effectiveness and performance of the proposed method are validated by applying the new modeling framework to a spatio-temporal evolutionary system identification problem.
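The particle swarm optimizer used to tune the unit parameters follows the canonical PSO update rule; a minimal sketch (the inertia weight and acceleration constants are typical textbook values, not the paper's settings):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO iteration: each particle's velocity is pulled
    toward its personal best (pbest) and the swarm's global best (gbest),
    then its position is advanced by the new velocity (in place)."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

In the constructive scheme described above, each particle would encode the adjustable parameters of one candidate unit, with the projection residual as the quantity being minimised.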
This paper compares the performance of Differential Evolution (DE) with the Self-Organizing Migrating Algorithm (SOMA) in the task of optimizing the control of chaos. The main aim is to show that evolutionary algorithms like DE are capable of optimizing chaos control with satisfactory results, and to show how the extreme sensitivity of the chaotic environment makes the quality of the results depend on the selected EA, the construction of the cost function (CF), and any small change in the CF design. The two-dimensional Henon map is used as the model deterministic chaotic system, and two complex targeting cost functions are tested. The evolutionary algorithms DE and SOMA were applied with different strategies; for each strategy, repeated simulations demonstrate the robustness of the method and the constructed CF. Finally, the obtained results are compared with previous research.
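The Henon map and a simple targeting cost can be sketched as follows. The two complex cost functions tested in the paper are not reproduced here, so this distance-to-fixed-point cost over a sequence of parameter perturbations is only an illustrative stand-in:

```python
def henon(x, y, a=1.4, b=0.3):
    """One iteration of the two-dimensional Henon map."""
    return 1.0 - a * x * x + y, b * x

def targeting_cost(perturbations, x0=0.1, y0=0.1,
                   target=(0.631354, 0.189406)):
    """Cost of a candidate control sequence: small additive perturbations
    of the a-parameter steer the orbit, and the cost is the final distance
    to the map's unstable fixed point (for a=1.4, b=0.3)."""
    x, y = x0, y0
    for dp in perturbations:
        x, y = henon(x, y, a=1.4 + dp)
    return ((x - target[0]) ** 2 + (y - target[1]) ** 2) ** 0.5
```

An EA such as DE would then evolve the perturbation sequence to minimise this cost; the chaotic sensitivity means tiny changes in the sequence, or in the cost design, can change the result dramatically.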
Predicting native conformations using computational protein models requires a large number of energy evaluations, even with simplified models such as hydrophobic-hydrophilic (HP) models. Clearly, energy evaluations constitute a significant portion of computational time. We hypothesize that, given the structured nature of the stochastic methods that search for candidate conformations, energy evaluation computations can be cached and reused, saving computational time and effort. In this paper, we present a caching approach and apply it to the 2D triangular HP lattice model. We provide a theoretical analysis and prediction of the expected savings from caching as applied to this model. We conduct experiments using a sophisticated evolutionary algorithm that contains elements of local search, memetic algorithms, and diversity replacement in order to verify our hypothesis and demonstrate the significant savings in computational effort and time that caching can provide.
This paper presents a parallel framework for the solution of multi-objective optimization problems. The framework implements some of the best-known multi-objective evolutionary algorithms. Its plugin-based architecture minimizes the effort end users need to incorporate their own problems and evolutionary algorithms, and facilitates tool maintenance. A wide variety of configuration options can be specified to adapt the software behavior to many different parallel models. An innovation of the framework is a self-adaptive parallel model based on the cooperation of a set of evolutionary algorithms. The aim of the new model is to raise the level of generality at which most current evolutionary algorithms operate; this way, a wider range of problems can be tackled, since the strengths of one algorithm can compensate for the weaknesses of another. The proposed model is a hybrid algorithm that combines a parallel island-based scheme with a hyperheuristic approach, granting more computational resources to those algorithms that show more promising behavior. The flexibility and efficiency of the framework were tested and demonstrated by configuring standard and self-adaptive models for test problems and real-world applications.
As faults are unavoidable in large-scale multiprocessor systems, it is important to be able to determine which units of the system are working and which are faulty. System-level diagnosis is a long-standing, realistic approach to detecting faults in multiprocessor systems; diagnosis is based on the results of tests executed on the system units. In this work we evaluate the performance of evolutionary algorithms applied to the diagnosis problem. Experimental results are presented for both the traditional genetic algorithm (GA) and specialized versions of the GA. We then propose and evaluate specialized versions of Estimation of Distribution Algorithms (EDAs) for system-level diagnosis: the compact GA and Population-Based Incremental Learning, both with and without negative examples. The evaluation used four metrics: the average number of generations needed to find the solution, the average fitness after up to 500 generations, the percentage of runs that reached the optimal solution, and the average time until the solution was found. An analysis of the experimental results shows that the more sophisticated algorithms converge faster to the optimal solution.
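Population-Based Incremental Learning maintains a probability vector over the bits of a candidate fault set and shifts it toward the best sampled solution each generation. A minimal sketch of one generation without negative examples (the learning rate and population size are illustrative, not the paper's settings):

```python
import random

def pbil_step(prob, pop_size, fitness, lr=0.1):
    """One PBIL generation: sample binary strings from the probability
    vector, then move the vector a fraction lr toward the best sample."""
    samples = [[1 if random.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
    best = max(samples, key=fitness)
    return [(1.0 - lr) * p + lr * b for p, b in zip(prob, best)]
```

In the diagnosis setting, each bit would mark one unit as faulty, and the fitness would measure consistency between the candidate fault set and the observed test syndrome.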
Polynomial mutation is widely used as a variation operator in evolutionary optimization algorithms. Previous work on evolutionary algorithms for multi-objective problems introduced two versions of polynomial mutation: a non-highly disruptive version and a highly disruptive version that is better able to escape local optima. This paper examines the two variants and proposes a dynamic version of polynomial mutation. The experimental results show that the proposed adaptive operator performs well with three evolutionary multi-objective algorithms on well-known multi-objective optimization problems in terms of convergence speed, generational distance, and hypervolume.
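One common form of Deb's polynomial mutation for a single real-valued gene is sketched below; the highly and non-highly disruptive formulations differ in how the perturbation delta is shaped near the variable bounds, and a dynamic version as proposed would adapt the distribution index eta over generations (here it is fixed):

```python
import random

def polynomial_mutation(x, low, high, eta=20.0):
    """Polynomial mutation of one gene in [low, high]. A larger
    distribution index eta concentrates offspring near the parent
    (less disruptive); a smaller eta spreads them out."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return min(max(x + delta * (high - low), low), high)
```

The polynomial shape makes small perturbations far more likely than large ones, while the clamp keeps offspring feasible.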