
  • article (No Access)

    COMPOUND DIVERSITY FUNCTIONS FOR ENSEMBLE SELECTION

    An effective way to improve a classification method's performance is to create ensembles of classifiers. Two elements are believed to be important in constructing an ensemble: (a) the performance of each individual classifier and (b) diversity among the classifiers. Nevertheless, most works based on diversity suggest that there is only a weak correlation between classifier performance and ensemble accuracy. We propose compound diversity functions, which combine the diversities with the performance of each individual classifier, and show that there is a strong correlation between the proposed functions and ensemble accuracy. Calculations of the correlations with different ensemble creation methods, different problems and different classification algorithms on 0.624 million ensembles suggest that most compound diversity functions are better than traditional diversity measures. A population-based genetic algorithm was used to search for the best ensembles on a handwritten numerals recognition problem, evaluating 42.24 million ensembles. The statistical results indicate that compound diversity functions perform better than traditional diversity measures and are helpful in selecting the best ensembles.
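
    The abstract does not give the functions' exact form; the sketch below only illustrates the idea, combining a standard pairwise disagreement measure with mean individual accuracy (the multiplicative combination rule is an assumption, not the paper's definition):

```python
import numpy as np

def disagreement_diversity(correct):
    """Mean pairwise disagreement rate between classifiers.
    `correct` is an (n_classifiers, n_samples) 0/1 matrix where
    entry (i, s) is 1 iff classifier i labels sample s correctly."""
    n = correct.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.mean(correct[i] != correct[j])
            pairs += 1
    return total / pairs

def compound_diversity(correct):
    """Illustrative compound function: pairwise diversity weighted by
    the mean accuracy of the individual classifiers."""
    return disagreement_diversity(correct) * correct.mean()
```

    A search procedure (e.g. a genetic algorithm over classifier subsets, as in the paper) would then rank candidate ensembles by such a score instead of by diversity alone.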

  • article (No Access)

    Enhanced Parameter-Free Diversity Discriminant Preserving Projections for Face Recognition

    Manifold-based learning methods have recently drawn more and more attention in dimension reduction. In this paper, a novel manifold-based learning method named enhanced parameter-free diversity discriminant preserving projections (EPFDDPP) is presented, which effectively avoids neighborhood parameter selection and characterizes the manifold structure well. EPFDDPP redefines the weighted matrices: the discriminating similarity matrix and the discriminating diversity matrix. The weighted matrices are computed from the cosine angle distance between two data points and take into account both the local information and the class label information, which makes them parameterless and favorable for face recognition. After characterizing the discriminating similarity scatter matrix and the discriminating diversity scatter matrix, a novel feature extraction criterion is derived based on the maximum margin criterion. Experimental results on the Wine data set and on the Olivetti Research Laboratory (ORL), AR (the face database created by Aleix Martinez and Robert Benavente), and Pose, Illumination, and Expression (PIE) face databases show the effectiveness of the proposed method.
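
    The paper's full weight definitions also use class labels; as a minimal sketch of the parameter-free ingredient alone, a cosine-similarity affinity needs no neighborhood size k and no heat-kernel width (the label weighting is deliberately omitted here):

```python
import numpy as np

def cosine_weight_matrix(X):
    """Parameter-free affinity sketch: W[i, j] is the cosine of the
    angle between samples i and j (rows of X), so no neighborhood
    size or kernel bandwidth has to be tuned."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    return Xn @ Xn.T
```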

  • article (No Access)

    A New Dominance Method Based on Expanding Dominated Area for Many-Objective Optimization

    The performance of traditional Pareto-based evolutionary algorithms degrades sharply on many-objective optimization problems; one of the main reasons is that Pareto dominance cannot provide sufficient selection pressure to make progress in a given population. To increase the selection pressure toward the global optimal solutions and better maintain the quality of selected solutions, this paper proposes a new dominance method based on expanding the dominated area. This dominance method combines the advantages of two existing popular dominance methods to further expand the dominated area while better maintaining the quality of selected solutions. Besides, by dynamically adjusting its parameter over the iterations, the proposed dominance method can adjust the selection pressure in a timely manner during evolution. To demonstrate the quality of the solutions it selects, experiments on a number of well-known benchmark problems with 5–25 objectives are conducted and compared with four state-of-the-art dominance methods based on expanding the dominated area. Experimental results show that the new dominance method not only enhances the selection pressure but also better maintains the quality of selected solutions.
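
    The paper's exact construction is not reproduced in the abstract; the sketch below only illustrates the general idea with an alpha-dominance-style check, where each objective is augmented with a fraction alpha of the remaining objectives, enlarging the dominated area (alpha = 0 recovers plain Pareto dominance; the paper additionally varies its parameter over the iterations):

```python
def alpha_dominates(a, b, alpha=0.3):
    """Expanded-dominance check (illustrative, minimization assumed):
    augment each objective with a fraction `alpha` of the others, then
    apply ordinary Pareto dominance to the augmented vectors. Pairs that
    are Pareto-incomparable can become comparable, raising selection
    pressure in many-objective problems."""
    ga = [ai + alpha * (sum(a) - ai) for ai in a]
    gb = [bi + alpha * (sum(b) - bi) for bi in b]
    return all(x <= y for x, y in zip(ga, gb)) and \
           any(x < y for x, y in zip(ga, gb))
```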

  • article (No Access)

    Evolutionary Algorithm with Diversity-Reference Adaptive Control in Dynamic Environments

    Evolutionary algorithms (EAs) can be used to find solutions in dynamic environments. In such cases, after a change in the environment, EAs can either be restarted or take advantage of previous knowledge to resume the evolutionary process. The second option tends to be faster and demands less computational effort. Preserving or growing population diversity is one of the strategies used to advance the evolutionary process after modifications to the environment. We propose a new adaptive method to control population diversity based on a reference model. The EA evolves the population while a control strategy independently handles the population diversity; thus, the adaptive EA evolves a population that follows a diversity-reference model. The proposed method, called the Diversity-Reference Adaptive Control Evolutionary Algorithm (DRAC), aims to maintain or increase population diversity, thus avoiding premature convergence and assuring exploration of the solution space during the whole evolutionary process. We also propose diversity models based on the dynamics of the heterozygosity of the population, as references to be tracked by the diversity control. DRAC showed promising results when compared with the standard genetic algorithm and six other adaptive evolutionary algorithms in 14 different experiments with three different types of environments.
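
    For a binary-encoded population, the expected heterozygosity per locus is 2p(1 - p), with p the frequency of allele 1; averaged over loci it gives a scalar diversity signal a controller could track against a reference. This measure is a standard population-genetics quantity, used here as a sketch of the kind of signal the paper's models are built on, not as the paper's exact model:

```python
def expected_heterozygosity(pop):
    """Mean expected heterozygosity of a binary-encoded population:
    average over loci of 2 p (1 - p), where p is the frequency of
    allele 1 at that locus. 0.0 for a fully converged population,
    0.5 at maximum diversity."""
    n_loci = len(pop[0])
    h = 0.0
    for locus in range(n_loci):
        p = sum(ind[locus] for ind in pop) / len(pop)
        h += 2 * p * (1 - p)
    return h / n_loci
```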

  • article (No Access)

    Selecting and Combining Classifiers Based on Centrality Measures

    Centrality measures have helped to explain the behavior of objects, given their relations, in a wide variety of problems, from sociology to chemistry. This work uses these measures to assess the importance of every classifier belonging to an ensemble, aiming to improve a Multiple Classifier System (MCS). Assessing each classifier's importance with centrality measures inspired two different approaches: one for selecting classifiers and another for fusion. The selection approach, called Centrality Based Selection (CBS), adopts a trade-off between the classifiers' accuracy and their diversity. The sub-optimal selected subset presents good results against selection methods from the literature, being superior in 67.22% of the cases. The second approach, for integration, is named Centrality Based Fusion (CBF); it is a weighted combination method, which is superior to the literature methods in 70% of the cases.
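
    The abstract does not say which centrality measure CBS uses; a minimal sketch with degree centrality on a pairwise-diversity graph, blended with accuracy through an assumed trade-off weight, looks like this:

```python
def centrality_scores(diversity, accuracy, trade_off=0.5):
    """Hypothetical centrality-based ranking: each classifier's score
    blends its accuracy with its degree centrality in a graph whose
    edge weights are pairwise diversities. `diversity` is a symmetric
    matrix with zero diagonal; `trade_off` weights accuracy vs.
    centrality (both knobs are assumptions, not the paper's values)."""
    n = len(diversity)
    degree = [sum(row) / (n - 1) for row in diversity]  # normalized degree
    return [trade_off * acc + (1 - trade_off) * deg
            for acc, deg in zip(accuracy, degree)]
```

    A selection method would keep the top-k classifiers by score; a fusion method could reuse the same scores as combination weights.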

  • article (No Access)

    An Evolutionary Approach for Time Dependent Optimization

    Many real-world problems involve objectives that must be optimized dynamically. The application of evolutionary algorithms, such as genetic algorithms, to time-dependent optimization is currently receiving growing interest, as potential applications are numerous, ranging from mobile robotics to real-time process control. Moreover, constant evaluation functions skew results relative to natural evolution, so combining effectiveness and diversity in a genetic algorithm has become a promising direction. This paper features both theoretical and empirical analysis of the behavior of genetic algorithms in such an environment. We present a comparison between the effectiveness of the traditional genetic algorithm and the dual genetic algorithm, which has proven to be a particularly adaptive tool for optimizing many diversified classes of functions. This comparison has been performed on a model of dynamic environments whose characteristics are analyzed in order to establish the basis of a testbed for further experiments. We also discuss fundamental properties that explain the effectiveness of the dual paradigm in managing dynamic environments.
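
    As usually described, the dual genetic algorithm's central mechanism prepends a meta-bit that decides whether the rest of the genotype is read as-is or complemented, so a single mutation of that bit jumps to the antipodal point of the search space; a sketch, with the representation details assumed:

```python
def express(dual_individual):
    """Phenotype of a dual-GA genotype (illustrative): the leading
    meta-bit selects plain (0) or complemented (1) interpretation
    of the remaining bits."""
    head, body = dual_individual[0], dual_individual[1:]
    return [1 - b for b in body] if head else list(body)
```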

  • article (No Access)

    An Improved Particle Swarm Optimization Algorithm with Adaptive Inertia Weights

    The particle swarm optimization (PSO) algorithm is simple to implement and converges quickly, but it easily falls into a local optimum: on the one hand, it lacks the ability to balance global exploration and local exploitation of the population, and on the other hand, the population lacks diversity. To solve these problems, this paper proposes an improved adaptive inertia weight particle swarm optimization (AIWPSO) algorithm. The AIWPSO algorithm includes two strategies: (1) an inertia weight adjustment method based on the optimal fitness value of individual particles, so that different particles have different inertia weights; this increases the diversity of inertia weights and helps balance global exploration and local exploitation; (2) a mutation threshold that determines which particles need to be mutated; this compensates for the inaccuracy of random mutation, effectively increasing the diversity of the population. To evaluate the proposed AIWPSO algorithm, benchmark functions are used for testing. The results show that AIWPSO achieves satisfactory performance compared with other PSO algorithms: it balances global exploration and local exploitation while increasing the diversity of the population, thereby significantly improving the optimization ability of the PSO algorithm.
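
    The paper's exact adjustment rule is not given in the abstract; an illustrative per-particle rule with the same intent maps each particle's fitness linearly into a weight range, so better particles exploit (small inertia) while worse ones explore (large inertia):

```python
def adaptive_inertia(fitness, w_min=0.4, w_max=0.9):
    """Illustrative per-particle inertia weights (not the paper's exact
    formula): fitness is mapped linearly into [w_min, w_max], assuming
    minimization, so the best particle gets w_min and the worst w_max.
    The 0.4-0.9 range is a conventional PSO choice, assumed here."""
    f_min, f_max = min(fitness), max(fitness)
    if f_max == f_min:                 # degenerate swarm: explore
        return [w_max] * len(fitness)
    return [w_min + (w_max - w_min) * (f - f_min) / (f_max - f_min)
            for f in fitness]
```

    Each weight then multiplies its particle's previous velocity in the standard PSO velocity update.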

  • article (No Access)

    Pruning High-Similarity Clusters to Optimize Data Diversity when Building Ensemble Classifiers

    Diversity is a key component for building a successful ensemble classifier. One approach to diversifying the base classifiers in an ensemble classifier is to diversify the data they are trained on. While sampling approaches such as bagging have been used for this task in the past, we argue that since they maintain the global distribution, they do not create diversity. Instead, we make a principled argument for the use of k-means clustering to create diversity. Expanding on previous work, we observe that when creating multiple clusterings with multiple k values, there is a risk of different clusterings discovering the same clusters, which would in turn train the same base classifiers. This would bias the ensemble voting process. We propose a new approach that uses the Jaccard Index to detect and remove similar clusters before training the base classifiers, not only saving computation time, but also reducing classification error by removing repeated votes. We empirically demonstrate the effectiveness of the proposed approach compared to the state of the art on 19 UCI benchmark datasets.
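
    The detection step described above can be sketched directly: compute the Jaccard index between clusters (as sets of sample ids) and greedily keep only clusters that are sufficiently dissimilar from all clusters already kept (the threshold value here is an assumed knob, not one from the paper):

```python
def jaccard(a, b):
    """Jaccard index of two clusters given as sets of sample ids."""
    return len(a & b) / len(a | b)

def prune_similar_clusters(clusters, threshold=0.8):
    """Greedily keep a cluster only if its Jaccard index with every
    cluster already kept is below `threshold`, so near-duplicate
    clusters (which would train near-duplicate base classifiers and
    bias the ensemble vote) are dropped."""
    kept = []
    for c in clusters:
        if all(jaccard(c, k) < threshold for k in kept):
            kept.append(c)
    return kept
```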

  • chapter (No Access)

    A two-sided matching and diversity-enhanced method for job recommendation with employer behavioral data

    As a new channel for job seeking, online recruitment platforms and their job recommender systems have become important to applicants. However, existing recommendation methods suffer from limited effectiveness because they do not consider employer feedback and behavioral information. Taking two-sided matching and diversity into account, this paper proposes a machine-learning-based job recommendation method, named Job-PI, to jointly optimize both applicant preferences and employer interests. Experiments on both simulated and real-world data show the effectiveness and superiority of Job-PI over other methods.