In this paper the performance of Cultural Algorithms-Iterated Local Search (CA-ILS), a new continuous optimization algorithm, is studied empirically on the multimodal test functions proposed in the Special Session on Real-Parameter Optimization of the 2005 Congress on Evolutionary Computation. It is compared with the state-of-the-art methods presented at the Session to determine whether the algorithm is effective on difficult problems. The test results show that CA-ILS may be a competitive method, at least on the tested problems. The results also reveal the classes of problems on which CA-ILS works well and those on which it does not.
Self-adaptation has been employed frequently in evolutionary computation. Angeline [1] defined three distinct adaptive levels: the population, individual, and component levels. Cultural Algorithms have been shown to provide a framework in which to model self-adaptation at each of these levels. Here, we examine the role that different forms of knowledge can play in the self-adaptation process at the population level for evolution-based function optimizers. In particular, we compare the relative performance of normative and situational knowledge in guiding the search process. An acceptance function using a fuzzy inference engine selects acceptable individuals for forming the generalized knowledge in the belief space. Evolutionary programming is used to implement the population space. The results suggest that the use of a cultural framework can produce substantial improvements in execution time and accuracy over population-only evolutionary systems on a given set of function minimization problems.
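The population-space/belief-space interaction described in this abstract can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the acceptance fraction, the Gaussian influence rule, and all function names are assumptions, and the fuzzy acceptance engine is replaced by a simple truncation-based acceptance for brevity.

```python
import random

def cultural_algorithm(fitness, bounds, pop_size=20, generations=100):
    """Minimal Cultural Algorithm sketch (illustrative, not the paper's code).

    The belief space holds two knowledge sources mentioned in the abstract:
    - situational knowledge: the best individual found so far;
    - normative knowledge: a per-dimension interval of promising values.
    """
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    situational = min(pop, key=fitness)          # best exemplar seen so far
    normative = [list(b) for b in bounds]        # promising interval per dimension

    for _ in range(generations):
        # Acceptance function (simplified): the top 20% update the belief space.
        accepted = sorted(pop, key=fitness)[: max(1, pop_size // 5)]
        for d in range(dim):
            vals = [ind[d] for ind in accepted]
            normative[d] = [min(vals), max(vals)]  # shrink/shift the interval
        best = min(accepted, key=fitness)
        if fitness(best) < fitness(situational):
            situational = best                     # situational knowledge only improves

        # Influence function: step sizes scale with the normative interval,
        # step directions point toward the situational exemplar.
        new_pop = []
        for ind in pop:
            child = []
            for d in range(dim):
                lo, hi = normative[d]
                step = abs((hi - lo) * random.gauss(0, 0.3))
                direction = 1 if situational[d] > ind[d] else -1
                child.append(ind[d] + direction * step)
            new_pop.append(child)
        pop = new_pop
    return situational

# Example: minimize the 2-D sphere function on [-5, 5]^2.
best = cultural_algorithm(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```

Because the normative interval contracts around the accepted individuals, the influence function's step sizes shrink as the search focuses, which is one simple way the belief space can self-adapt the mutation strength at the population level.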
Evolutionary computation has been successfully applied in a variety of problem domains and applications. In this paper we discuss the use of a specific form of evolutionary computation known as Cultural Algorithms to improve the efficiency of the subsumption algorithm in semantic networks. We identify two complementary methods of using Cultural Algorithms to solve the problem of re-engineering large-scale dynamic semantic networks in order to optimize the efficiency of subsumption: top-down and bottom-up.
The top-down re-engineering approach improves subsumption efficiency by reducing the number of attributes that must be compared for every node, without affecting the results. We demonstrate that a Cultural Algorithm approach can identify the defining attributes that are most significant for node retrieval. These results are then applied within an existing vehicle assembly process planning application, which uses a semantic network based knowledge base, to improve performance and reduce the complexity of the network. The results obtained by Cultural Algorithms are shown to be at least as good as, and in most cases better than, those obtained by the human developers. The advantage of Cultural Algorithms is especially pronounced for the more complex classes in the network.
The goal of the bottom-up approach is to group the input concepts into new clusters that are most efficient for subsumption and classification. While the resultant subsumption efficiency for the bottom-up approach exceeds that of the top-down approach, it does so by removing the structural relationships that made the network understandable to human observers. Like a Rete network in expert systems, it is a compilation of only those relationships that affect subsumption. A direct comparison of the two approaches shows that bottom-up re-engineering creates a semantic network that is approximately five times more efficient than the top-down approach in terms of the cost of subsumption. In conclusion, we discuss these results and show that some knowledge useful to system users is lost during bottom-up re-engineering, and that the best approach to re-engineering a semantic network requires a combination of the two approaches.
Program understanding plays a very important role in software engineering as an essential part of the widely accepted reuse process. Unfortunately, the lack of general tools for this purpose often prevents users from effectively retrieving software modules for reuse. The primary difficulty is in determining which of the already-available modules performs the desired function. This paper explores the task of extracting functional knowledge from a program object (i.e., a coded module) using an evolutionary-learning technique, cultural algorithms, to automatically learn classification rules for a reuse library. The modules are stored in a reuse library in the PM system using a faceted classification scheme. The goal, therefore, is to learn a set of rules that characterize the semantics of code modules based on their syntactic structure. The most difficult part of the process is identifying those parts of a program that provide specific evidence of the presence of a concept. Here, evolutionary-learning techniques are employed to interactively identify the lines in a module that a user believes provide evidence for the presence of a facet value (concept) in the code. A prototype is used to learn concepts relating to the "list" and "stack" programming concepts.
Regional knowledge is useful in identifying patterns of relationships between variables, and it is particularly important in solving constrained global optimization problems. However, regional knowledge is generally unavailable prior to the optimization search. The questions here are: 1) Is it possible for an evolutionary system to learn regional knowledge during the search instead of having to acquire it beforehand? and 2) How can this regional knowledge be used to expedite evolutionary search? This paper defines regional schemata to provide an explicit mechanism to support the acquisition, storage and manipulation of regional knowledge. In a Cultural Algorithm framework, the belief space "contains" a set of these regional schemata, arranged in a hierarchical architecture, to enable the knowledge-based evolutionary system to learn regional knowledge during the search and apply the learned knowledge to guide the search. This mechanism can guide the optimization search directly, by "pruning" the infeasible regions and "promoting" the promising regions. Engineering problems with nonlinear constraints are tested and the results are discussed. They show that the proposed mechanism has the potential to solve complicated nonlinear constrained optimization problems, as well as other hard problems, e.g., optimization problems with "ridges" in their landscapes.
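The "prune infeasible regions, promote promising regions" idea can be illustrated with a deliberately simplified sketch. Here a flat uniform grid stands in for the paper's hierarchical regional schemata, and plain weighted sampling stands in for the Cultural Algorithm's evolutionary operators; all names and parameters are illustrative assumptions.

```python
import random
from itertools import product

def regional_search(objective, feasible, bounds, splits=4, samples=200):
    """Sketch of regional-knowledge-guided search (not the paper's algorithm).

    Each grid cell records how many feasible/infeasible points it has produced.
    Cells that have yielded only infeasible points are pruned from sampling;
    cells with a feasible history are promoted (sampled more often).
    """
    dim = len(bounds)
    widths = [(hi - lo) / splits for lo, hi in bounds]
    # Per-cell counts: [feasible_hits, infeasible_hits].
    stats = {cell: [0, 0] for cell in product(range(splits), repeat=dim)}
    best, best_val = None, float("inf")

    for _ in range(samples):
        # Keep cells that are untested (i == 0) or have produced feasible points.
        open_cells = [c for c, (f, i) in stats.items() if f > 0 or i == 0]
        weights = [1 + stats[c][0] for c in open_cells]  # promote feasible history
        cell = random.choices(open_cells, weights=weights)[0]
        # Sample a point uniformly inside the chosen cell.
        x = [bounds[d][0] + (cell[d] + random.random()) * widths[d]
             for d in range(dim)]
        if feasible(x):
            stats[cell][0] += 1
            val = objective(x)
            if val < best_val:
                best, best_val = x, val
        else:
            stats[cell][1] += 1
    return best, best_val

# Example: minimize x^2 + y^2 on [0, 1]^2 subject to x + y <= 1.
best, val = regional_search(
    lambda x: x[0] ** 2 + x[1] ** 2,
    lambda x: x[0] + x[1] <= 1.0,
    [(0.0, 1.0), (0.0, 1.0)],
)
```

Even this flat version concentrates effort in feasible cells; the paper's hierarchical schemata refine the same idea by subdividing regions at multiple resolutions rather than using a single fixed grid.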
In this paper we extend the cultural framework previously developed for the Village multi-agent simulation in Swarm to include the emergence of a hub network from two base networks. The first base network is kinship, over which generalized reciprocal exchange is defined, and the second is the economic network, in which agents carry out balanced reciprocal exchange. Agents, or households, are able to procure several resources. We use Cultural Algorithms as a framework for the emergence of social intelligence at both the individual and cultural levels. Agents successful in both networks can promote themselves into the hub network, where they can develop exchange links to other hubs. The collective effect of the hub network is representative of the quality of life in the population and serves as an indicator of the motives behind the mysterious emigration from the region. Knowledge represents the development and use of exchange relationships between agents. The presence of defectors in the hub network improved the resilience of the social system while maintaining the population size observed when no defectors were present.
Cultural Algorithms are self-adaptive computational models that consist of a population space and a belief space. The problem-solving experience of individuals selected from the population space by the acceptance function is generalized and stored in the belief space. This knowledge can then control the evolution of the population component by means of the influence function. Here, we examine the role that different forms of knowledge can play in the self-adaptation process within cultural systems. In particular, we compare various approaches that use normative and situational knowledge in different ways to guide the function optimization process.
The results in this study demonstrate that Cultural Algorithms are a natural framework for self-adaptation and that using a cultural framework to support self-adaptation in Evolutionary Programming can produce substantial performance improvements over population-only systems, as expressed in terms of (1) success ratio, (2) execution CPU time, and (3) convergence (mean best solution), on a given set of 34 function minimization problems. The nature of these improvements and the type of knowledge most effective in producing them depend on the problem's functional landscape. The same held true for the population-only self-adaptive EP systems: each level of self-adaptation (component, individual, and population) outperformed the others for problems with particular landscape features.
One of the major challenges facing Artificial Intelligence in the years to come is the design of trustworthy algorithms. Cultural Algorithms (CAs) are viewed as one framework that can be employed to produce a trustworthy evolutionary algorithm. They contain features to support both sustainable and explainable computation that satisfy the requirements for trustworthy algorithms proposed by Cox [Nine experts on the single biggest obstacle facing AI and algorithms in the next five years, Emerging Tech Brew, January 22, 2021]. Here, two different configurations of CAs are described and compared in terms of their ability to support sustainable solutions over the complete range of dynamic environments, from static through linear and nonlinear to chaotic. The Wisdom of the Crowds (WM) method was selected for one configuration because it has been observed to work in both simple and complex environments and requires little long-term memory. The Common Value Auction (CVA) configuration was selected to represent mechanisms that are more data centric and require more long-term memory.
Both approaches were found to provide sustainable performance across all the dynamic environments tested, from static to chaotic, but, based on the information collected in the Belief Space, they produced this behavior in different ways. First, the topologies they employed differed in in-degree across environment complexities. The CVA approach tended to favor a reduced in-degree/out-degree, while the WM approach exhibited a higher in-degree/out-degree in the best topology for a given environment. These differences reflect the fact that the CVA made more information about the network available to the agents in the Belief Space, whereas the agents in the WM approach had access to less knowledge and therefore needed to spread the knowledge they did have more widely throughout the population.
The goal of this paper is to investigate the applicability of evolutionary algorithms to the design of real-time industrial controllers. Present-day deep learning (DL) is firmly established as a useful tool for addressing many practical problems, which has spurred the development of neural architecture search (NAS) methods to automate the model search activity. CATNeuro is a NAS algorithm based on the graph-evolution concept devised by NeuroEvolution of Augmenting Topologies (NEAT) but driven by a Cultural Algorithm (CA). The CA is a network-based, stochastic optimization framework inspired by problem solving in human cultures. Knowledge distribution (KD) across the network of graph models is key to problem-solving success in such systems. Two alternative mechanisms for KD across the network are employed: one supports cooperation in the network (CATNeuro) and the other competition (WM). To test the viability of each configuration prior to use in the industrial setting, both were applied to the design of a real-time controller for a two-dimensional fighting game. While both were able to beat the AI program that came with the game, the cooperative method performed statistically better. As a result, it was used to track the motion of a trailer (in the lateral and vertical directions) using a camera mounted on the tractor vehicle towing the trailer. In this second real-time application, the CATNeuro configuration was compared with the original (elitist) NEAT method of evolution and was found to perform statistically better in many aspects of the design, including model training loss, model parameter size, and overall model structure consistency. In both scenarios, the performance improvements are attributed to the increased model diversity produced by the interaction of CA knowledge sources, both cooperative and competitive.