We propose Multi-Strategy Coevolving Aging Particles (MS-CAP), a novel population-based algorithm for black-box optimization. In a memetic fashion, MS-CAP combines two components with complementary algorithmic logics. In the first phase, each particle is perturbed independently along each dimension with a progressively shrinking (decaying) radius, and is attracted towards the current best solution with an increasing force. In the second phase, the particles are mutated and recombined according to a multi-strategy approach, in the fashion of the ensemble of mutation strategies in Differential Evolution. The proposed algorithm is tested, at different dimensionalities, on two complete black-box optimization benchmarks proposed at the Congress on Evolutionary Computation in 2010 and 2013. To demonstrate the applicability of the approach, we also apply MS-CAP to train a feedforward neural network modeling the kinematics of an 8-link robot manipulator. The numerical results show that MS-CAP, for the setting considered in this study, tends to outperform state-of-the-art optimization algorithms on a large set of problems, making it a robust and versatile optimizer.
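To make the first phase concrete, the following is a minimal illustrative sketch, not the authors' implementation: the decay factor, the age-based attraction weight, and the absence of bounds handling are all assumptions for illustration.

```python
import numpy as np

# Sketch of the aging-particle phase as summarized above: each particle is
# perturbed per dimension within a shrinking radius and pulled towards the
# current best with a force that grows as the particle ages. The 0.98 decay
# and the linear age/max_age weighting are illustrative assumptions only.

def aging_particle_step(x, best, radius, age, max_age, rng):
    """One perturbation step for a single particle x."""
    trial = x.copy()
    for j in range(len(x)):                 # independent per-dimension move
        trial[j] += rng.uniform(-radius[j], radius[j])
    alpha = age / max_age                   # attraction grows with particle age
    trial = (1.0 - alpha) * trial + alpha * best
    radius *= 0.98                          # progressively shrinking radius
    return trial, radius

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=10)
best = np.zeros(10)
radius = np.full(10, 1.0)
x, radius = aging_particle_step(x, best, radius, age=3, max_age=10, rng=rng)
```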
In this paper, we extend the work of [Tong, J, J Hu and J Hu (2017). Computing equilibrium prices for a capital asset pricing model with heterogeneous beliefs and margin-requirement constraints. European Journal of Operational Research, 256(1), 24–34] and develop various asynchronous algorithms to calculate equilibrium asset prices in a heterogeneous capital asset pricing model. These algorithms are based on different asynchronous updating schemes, such as delayed updating, cyclic updating, fixed-length updating, and random updating. In addition to their potential to improve computational efficiency, these asynchronous updating schemes also reflect several scenarios in financial markets in which investors may receive asset pricing information with varying degrees of delay, and in which their preferences on how and when to rebalance their portfolios may also differ. Proofs of convergence for these algorithms are given. Numerical experiments comparing the algorithms are also provided and show that the asynchronous algorithms work well.
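The flavor of such asynchronous schemes can be conveyed by a small sketch. The price mapping F below is a placeholder contraction, not the paper's pricing model, and the cyclic/random schedule logic is an assumption for illustration.

```python
import numpy as np

# Toy asynchronous fixed-point iteration in the spirit of the updating
# schemes above. F is a stand-in contraction whose fixed point is `target`;
# only a subset of prices is refreshed at each round.

def F(p):
    target = np.array([10.0, 20.0, 30.0])
    return 0.5 * (p + target)               # contraction: fixed point = target

def asynchronous_solve(p, scheme="cyclic", iters=200, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(p)
    for k in range(iters):
        if scheme == "cyclic":               # one coordinate per round
            idx = [k % n]
        elif scheme == "random":             # a random subset per round
            idx = rng.choice(n, size=rng.integers(1, n + 1), replace=False)
        else:                                # synchronous fallback: update all
            idx = range(n)
        new = F(p)
        for i in idx:
            p[i] = new[i]                    # only selected prices are refreshed
    return p

print(asynchronous_solve(np.zeros(3), scheme="random"))
```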
In this paper, we propose a uniform enhancement approach called the smoothing function method, which can be combined with any optimization algorithm to improve its performance. The method has two phases. In the first phase, a smoothing function is constructed using a properly truncated Fourier series. It preserves the overall shape of the original objective function while eliminating many of its local optima, and thus approximates the objective function well. The optimum of the smoothing function is then searched for by an optimization algorithm (e.g., a traditional or evolutionary algorithm); since the smoothed landscape has fewer local optima, this search is much easier. In the second phase, we switch to optimizing the original function for some iterations, using the best solution(s) obtained in phase 1 as the initial point (population). Thereafter, the smoothing function is updated to approximate the original function more accurately.
These two phases are repeated until the best solutions obtained in several successive second phases show no obvious improvement. In this manner, the search for the optimal solution becomes much easier for any optimization algorithm. Finally, we use the proposed approach to enhance two typical optimization algorithms: Powell's direct search algorithm and a simple genetic algorithm. The simulation results on ten challenging benchmarks indicate that the proposed approach can effectively improve the performance of both algorithms.
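A toy one-dimensional sketch of the idea follows. The truncation level (keeping only the lowest Fourier modes) and the grid-based phase 1 search are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Keep only low-order Fourier modes of a multimodal objective: this removes
# many local optima while preserving the overall shape, as described above.

def f(x):
    return x**2 / 50.0 + np.sin(5 * x)       # multimodal test objective

xs = np.linspace(-10, 10, 1024)
spectrum = np.fft.rfft(f(xs))
spectrum[8:] = 0.0                            # truncate the Fourier series
smooth = np.fft.irfft(spectrum, n=len(xs))

# Phase 1: minimize the smoothed landscape (here, simply on the grid).
x0 = xs[np.argmin(smooth)]
# Phase 2: refine on the original objective, starting from the phase-1 result.
res = minimize_scalar(f, bounds=(x0 - 1, x0 + 1), method="bounded")
print(x0, res.x)
```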
The analog integrated circuit industry is under increasing pressure to shorten analog circuit design time. This pressure falls primarily on the analog circuit designers, who in turn demand automated circuit design tools ever more vigorously. Such tools already exist in the form of circuit optimization software packages, but they all suffer from a common ailment: slow convergence. Even taking into account the increasing computational power of modern computers, the convergence times of such optimization tools can range from a few days to weeks. Different authors have tried diverse approaches for speeding up the convergence, with varying success. In this paper, the authors propose a combined optimization algorithm that attempts to improve the speed of convergence by exploiting the positive properties of the underlying optimization methods. The proposed algorithm is tested on a number of test cases, and the convergence results are discussed.
This paper presents a new evolutionary technique named vortex search optimization (VSO) for designing digital two-dimensional (2D) finite impulse response (FIR) filters with improved performance in both the pass-band and stop-band regions. Optimum filter coefficients are calculated by minimizing the deviation of the actual frequency response from the specified (desired) response. The efficiency of the designed filter is measured by several parameters, such as maximum pass-band ripple, maximum stop-band ripple, mean stop-band attenuation, and the time taken to execute the code. The performance of the designed filter is compared with that of various algorithms, such as the real-coded genetic algorithm, particle swarm optimization, the genetic search algorithm, and the hybrid particle swarm optimization-gravitational search algorithm. The comparative study shows significant reductions in pass-band error, stop-band error, and execution time.
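A hedged sketch of the kind of error function such filter-design methods minimize is shown below, reduced to a 1-D linear-phase FIR filter for brevity; the 2-D case replaces the scalar frequency grid with (w1, w2) pairs. The brick-wall desired response and the squared-error criterion are illustrative choices.

```python
import numpy as np

# Deviation of the actual frequency response of an FIR filter (coefficients h)
# from an ideal low-pass response: a typical fitness for an optimizer like VSO.

def filter_error(h, cutoff=0.4, n_grid=128):
    w = np.linspace(0, np.pi, n_grid)
    H = np.abs(np.array([np.sum(h * np.exp(-1j * w_k * np.arange(len(h))))
                         for w_k in w]))            # actual magnitude response
    desired = (w <= cutoff * np.pi).astype(float)   # ideal brick-wall response
    return np.sum((H - desired) ** 2)               # squared deviation

h = np.random.default_rng(1).uniform(-0.5, 0.5, 16)
print(filter_error(h))
```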
This paper proposes a novel algorithm that combines symbolic execution and data flow testing to generate test cases satisfying multiple coverage criteria for critical software applications. The coverage criteria considered are data flow coverage as the primary criterion, with software safety requirements and equivalence partitioning as sub-criteria. The characteristics of the subjects used for the study include high-precision floating-point computation and iterative programs. The work proposes an algorithm that aids the tester in automated test data generation, satisfying multiple coverage criteria for critical software. The algorithm adapts itself, selecting different heuristics based on program characteristics. To accomplish this adaptability, the algorithm has an intelligent agent as its decision support system. The intelligent agent uses a knowledge base to select different low-level heuristics based on the current state of the problem instance during each generation of the genetic algorithm's execution. The knowledge base mimics an expert's decisions in choosing the appropriate heuristics. The algorithm outperforms the alternatives, accomplishing 100% data flow coverage for all subjects; in contrast, a simple genetic algorithm, random testing, and a hyper-heuristic algorithm accomplish at most 83%, 67%, and 76.7%, respectively, for the subject program with high complexity. The proposed algorithm also covers the other criteria, namely equivalence partition coverage and software safety requirements, with fewer iterations. The results reveal that the test cases generated by the proposed algorithm are also effective in fault detection, killing 87.2% of mutants, compared to at most 76.4% of mutants killed for the complex subject with test cases from the other methods.
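A schematic sketch of the knowledge-base-driven heuristic selection is given below. The state features and rules are invented for illustration; the paper's actual knowledge base and heuristics are not specified here.

```python
# A small rule table mapping the observed state of the search (e.g. stalled
# coverage, program traits) to a low-level heuristic for the next GA
# generation, mimicking expert choices as described above.

KNOWLEDGE_BASE = [
    # (condition on search state, heuristic to apply) -- hypothetical rules
    (lambda s: s["stalled_generations"] > 5, "restart_population"),
    (lambda s: s["uses_floating_point"],     "gradient_guided_mutation"),
    (lambda s: s["coverage"] < 0.5,          "diversity_boost_crossover"),
]

def select_heuristic(state, default="standard_ga_operators"):
    for condition, heuristic in KNOWLEDGE_BASE:
        if condition(state):
            return heuristic                 # first matching expert rule wins
    return default

state = {"stalled_generations": 2, "uses_floating_point": True, "coverage": 0.8}
print(select_heuristic(state))               # -> gradient_guided_mutation
```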
In recent years, many intelligent optimization algorithms have been applied to the class integration and test order (CITO) problem, and they have been shown to solve it efficiently. Here, the design of the fitness function is key to generating the optimal solution. To better solve the CITO problem, in this paper we propose a new fitness function that achieves a balanced compromise between different measures (objectives), such as the total number of stubs and the total stubbing complexity. We used several programs to compare and evaluate the different approaches. The experimental results show that our proposed approach is encouraging, to some extent, in solving the CITO problem.
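One simple way to realize such a compromise is a weighted sum of normalized objectives, sketched below. The equal weighting and the normalization by known maxima are assumptions for illustration, not the paper's exact fitness function.

```python
# Balanced fitness for a candidate class integration order: combine the
# number of stubs and the total stubbing complexity after normalization.

def cito_fitness(order, stub_count, stubbing_complexity,
                 max_stubs, max_complexity, w=0.5):
    """Lower is better; `order` is a candidate class integration order."""
    norm_stubs = stub_count(order) / max_stubs
    norm_cplx = stubbing_complexity(order) / max_complexity
    return w * norm_stubs + (1.0 - w) * norm_cplx

# Example with toy objective functions over an order of 4 classes:
print(cito_fitness([2, 0, 3, 1],
                   stub_count=lambda o: 3,
                   stubbing_complexity=lambda o: 7.5,
                   max_stubs=10, max_complexity=20.0))
```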
Collision detection optimization in an event-driven simulation of a multi-particle system is a crucial task that determines the efficiency of the simulation. We present an event-driven simulation algorithm that employs dynamic computational geometry data structures as a tool for collision detection optimization (CDO). The first successful application of the dynamic generalized Voronoi diagram method for collision detection optimization in a system of moving particles is discussed. A comprehensive comparison of four kinetic data structures in d-dimensional space, performed in the framework of an event-driven simulation of a granular-type material system, is supported by experimental results.
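The core event-driven loop can be sketched as follows. The Voronoi-based neighbor filtering from the paper is represented only by a `neighbors` predicate, and the version-counter invalidation scheme is a common implementation choice, assumed here rather than taken from the paper.

```python
import heapq
from itertools import count

# Predicted collision times go into a priority queue; stale predictions are
# skipped via per-particle version counters that change on every collision.

def simulate(particles, predict_collision, neighbors, t_end):
    tie = count()                                    # tie-breaker for equal times
    version = {p: 0 for p in particles}
    events = []

    def schedule(p, q, t_after):
        t = predict_collision(p, q)
        if t is not None and t >= t_after:
            heapq.heappush(events, (t, next(tie), p, q, version[p], version[q]))

    for p in particles:
        for q in neighbors(p):                       # e.g. Voronoi neighbors only
            schedule(p, q, 0.0)
    while events:
        t, _, p, q, vp, vq = heapq.heappop(events)
        if t > t_end:
            break
        if version[p] != vp or version[q] != vq:
            continue                                 # outdated prediction: skip
        version[p] += 1                              # process collision at time t,
        version[q] += 1                              # invalidating old predictions
        for r in (p, q):
            for s in neighbors(r):
                schedule(r, s, t)
```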
In this study, we compare the use of genetic algorithms (GAs) and other forms of heuristic search in the cryptanalysis of short cryptograms. This paper expands on the work presented at FLAIRS-2003, which established the feasibility of a word-based genetic algorithm (GA) for analyzing short cryptograms. The following search heuristics are compared both theoretically and experimentally: hill climbing, simulated annealing, and word-based and frequency-based genetic algorithms. Although the results reported apply to substitution ciphers in general, we focus in particular on short substitution cryptograms, such as the kind found in newspapers and puzzle books; short cryptograms present a more challenging form of the problem. The word-based approach uses a relatively small dictionary of frequent words, while the frequency-based approaches use frequency data for 2-, 3-, and 4-letter sequences. The study shows that all of the optimization algorithms are successful at breaking short cryptograms, but, perhaps more significantly, the most important factor in their success appears to be the choice of fitness measure employed.
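The contrast between the two kinds of fitness measure can be shown in a toy sketch. The dictionary and bigram statistics below are tiny stand-ins, not the study's actual data.

```python
# Word-based score: fraction of decrypted words found in a dictionary.
# Frequency-based score: agreement of letter-pair frequencies with English.

DICTIONARY = {"the", "and", "cat", "sat", "on", "mat"}
ENGLISH_BIGRAMS = {"th": 0.027, "he": 0.023, "an": 0.020, "at": 0.015}

def word_fitness(plaintext):
    words = plaintext.lower().split()
    return sum(w in DICTIONARY for w in words) / max(len(words), 1)

def bigram_fitness(plaintext):
    text = plaintext.lower().replace(" ", "")
    score = 0.0
    for i in range(len(text) - 1):
        score += ENGLISH_BIGRAMS.get(text[i:i + 2], 0.0)
    return score / max(len(text) - 1, 1)

candidate = "the cat sat on the mat"       # a decryption produced by the search
print(word_fitness(candidate), bigram_fitness(candidate))
```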
Collaborative Filtering (CF) is a popular technique employed by Recommender Systems, a term used to describe intelligent methods that generate personalized recommendations. Some of the most efficient approaches to CF are based on latent factor models and nearest neighbor methods, and have received considerable attention in the recent literature. Latent factor models can tackle some fundamental challenges of CF, such as data sparsity and scalability. In this work, we present an optimal scaling framework that addresses these problems using Categorical Principal Component Analysis (CatPCA) for the low-rank approximation of the user-item ratings matrix, followed by a neighborhood formation step. CatPCA is a versatile technique that uses an optimal scaling process in which the original data are transformed so that their overall variance is maximized. We considered both smooth and non-smooth transformations for the observed variables (items): numeric, (spline) ordinal, (spline) nominal, and multiple nominal. The method was extended to handle missing data and to incorporate differential weighting for items. Experiments were executed on three data sets of different sparsity and size (MovieLens 100k, MovieLens 1M, and Jester) to evaluate the aforementioned options in terms of accuracy. A combined approach with a multiple nominal transformation and a "passive" missing data strategy clearly outperformed the other tested options on all three data sets. The results are comparable with those reported for single methods in the CF literature.
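The two-step pipeline can be sketched with truncated SVD standing in for CatPCA (the optimal-scaling transformation itself is beyond this snippet): a low-rank approximation of the ratings matrix, then neighborhood formation in the reduced space. The toy matrix and cosine similarity are assumptions for illustration.

```python
import numpy as np

# (1) low-rank approximation of the user-item ratings matrix,
# (2) cosine-similarity neighborhood formation in the latent space.

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)   # 0 marks a missing rating

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
users = U[:, :k] * s[:k]                    # users in the k-dim latent space

def nearest_neighbors(u, users, n=2):
    norms = np.linalg.norm(users, axis=1) * np.linalg.norm(users[u])
    sims = users @ users[u] / np.maximum(norms, 1e-12)
    sims[u] = -np.inf                       # exclude the user itself
    return np.argsort(sims)[::-1][:n]       # most similar users first

print(nearest_neighbors(0, users))
```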
The use of firefly and fuzzy firefly optimization algorithms has been widely witnessed in clustering techniques, and they are extensively used in applications such as image segmentation. In these algorithms, parameters such as the step factor and attractiveness have been kept constant, which affects the convergence rate and accuracy of the clustering process. Although the fuzzy adaptive firefly algorithm tackled this problem by making those parameters adaptive, issues such as a low convergence rate and the provision of non-optimal solutions remain. To tackle these issues, this paper proposes a novel fuzzy adaptive fuzzy firefly algorithm that significantly improves accuracy and convergence rate compared with existing optimization algorithms. Further, fusing the proposed algorithm with existing hybrid clustering algorithms involving fuzzy sets, intuitionistic fuzzy sets, and rough sets results in eight novel hybrid clustering algorithms with better performance in optimizing the selection of initial centroids. To validate the proposal, experimental studies were conducted on datasets from benchmark data repositories such as UCI and Kaggle. The performance and accuracy of the proposed algorithms were evaluated with the aid of seven accuracy measures. The results clearly indicate the improved accuracy and convergence rate of the proposed algorithms.
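For orientation, below is a hedged sketch of the standard firefly position update with the two parameters the text highlights (step factor alpha and attractiveness beta) made adaptive instead of constant. The simple time decay used here is an illustrative choice, not the proposed fuzzy adaptation scheme.

```python
import numpy as np

# Move firefly xi toward a brighter firefly xj; alpha shrinks over the run
# and beta decays with squared distance, instead of being fixed constants.

def firefly_move(xi, xj, t, max_t, gamma=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    alpha = 0.5 * (1.0 - t / max_t)                  # adaptive step factor
    r2 = np.sum((xi - xj) ** 2)
    beta = 1.0 * np.exp(-gamma * r2)                 # distance-based attractiveness
    return xi + beta * (xj - xi) + alpha * (rng.random(len(xi)) - 0.5)

xi, xj = np.array([2.0, 2.0]), np.array([0.5, 0.5])
print(firefly_move(xi, xj, t=10, max_t=100))
```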
In recent years, technologies based on neural networks (NNs) and deep learning have advanced different areas of science, such as wireless communications. This study demonstrates the applicability of NN-based receivers for detecting and decoding sparse code multiple access (SCMA) codewords. The simulation results reveal that the proposed receiver provides highly accurate predictions on new data. Moreover, a performance analysis of the primary optimization algorithms used in machine learning is presented.
Due to the exponential rise in the usage of the internet and smart devices, there is a demand for enhanced network efficiency and user satisfaction in cloud computing environments. The move to cloud systems focuses mainly on storage, computation, and resources, and this rapid growth brings further challenges. Among these, resource allocation in cloud computing is the main subject of study: it is essential for determining QoS and for improving performance with respect to reliability, confidentiality, trust, security, user satisfaction, profits, and so on. This paper presents a detailed review of trust-based resource allocation in the collaborative cloud. The cloud industry is assessed in terms of trust-based and other important factors to produce a road map for resource allocation. Many papers are reviewed, giving a substantial evaluation of cloud resources and of resource allocation models that use machine learning and optimization. First, this survey provides an elaborated study of the various cloud resources with respect to performance and QoS. Finally, it extends the review to trust-based approaches, with the intention of motivating researchers to focus on trust-based resource allocation in the collaborative cloud computing (CCC) environment.
Incremental software development replaces monolithic development by offering a series of releases with additive functionality. To create optimal value under existing project constraints, the question is what should be done, and when. Release planning gives the answer: it determines proper priorities and assigns features to releases. Comprehensive stakeholder involvement ensures a high degree of applicability of the results. The formal procedure of release planning is able to consider different criteria (urgency, importance) and to bring them together in a balanced way. Release planning is based on (estimates of) the implementation effort. In addition, constraints related to risk, the individual resources necessary to implement the proposed features, money, or technological dependencies can easily be adopted into the release planning approach presented in this article.
Releases are commonly understood as new versions of an evolving product. However, the idea of a release is not restricted to this; it can be applied to any type of periodic development, where a release would correspond to an annual or quarterly time period. The special case of a single release, called prioritization, is even more widely applicable wherever competing items must be selected under additional constraints.
An informal and, later, a formal description of the release planning problem is given. Ad hoc or purely experience-based planning techniques cannot accommodate the size, complexity, and high degree of uncertainty of the problem. Plans generated in this way typically result in unsatisfied customers, time and budget overruns, and a loss of market share. As a consequence of this analysis of the current state of the practice, we propose a more advanced approach based on the strengths of intelligent software engineering decision support.
Existing release planning methods and tool support are analyzed. An intelligent support tool called ReleasePlanner® is presented. The web-based tool is based on an iterative and evolutionary solution procedure and combines the computational strength of specialized optimization algorithms with the flexibility of intelligent decision support. It helps to generate and evaluate candidate solutions, and as a final result, a small number of the most promising alternative release plans are offered to the actual decision-maker. Special emphasis is placed on facilitating what-if scenarios and on supporting re-planning. Different usage scenarios and a case study project are presented, and practical experience from industrial application of ReleasePlanner® is included as well. Future directions of research are discussed.
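The optimization problem at the heart of release planning, as outlined above, can be sketched in simplified form: assign features to releases to maximize value under per-release effort capacities. The greedy value-per-effort rule below is only a stand-in for the specialized algorithms behind such tools, and the feature data are invented.

```python
# Greedy sketch of feature-to-release assignment under effort constraints.

def plan_releases(features, capacities):
    """features: list of (name, value, effort); capacities: effort per release."""
    remaining = list(capacities)
    plan = {r: [] for r in range(len(capacities))}
    # Consider the most valuable features (per unit effort) first.
    for name, value, effort in sorted(features, key=lambda f: f[1] / f[2],
                                      reverse=True):
        for r in range(len(remaining)):      # earliest feasible release wins
            if effort <= remaining[r]:
                plan[r].append(name)
                remaining[r] -= effort
                break                        # features left unassigned: postponed
    return plan

features = [("login", 9, 3), ("search", 8, 5), ("export", 4, 2), ("themes", 2, 4)]
print(plan_releases(features, capacities=[6, 6]))
```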
In this work, we focus on consistency analysis in scenario projects. We analyze the three most important problems posed to us by scenario managers and determine their complexity within the polynomial-time hierarchy. We prove that two of them are NP-complete and the third is #P-complete; we therefore cannot expect algorithms with polynomially bounded running times for these problems (unless P = NP). Nevertheless, we present algorithms that solve all three problems very quickly on instances of practical relevance. For instances occurring in practice, the solution times are usually on the order of a few seconds on a state-of-the-art notebook, thus allowing the use of our algorithms in real-time scenario management systems.