This paper addresses profit maximization in the location routing problem (LRP) where backordering is allowed and serving all customers is not mandatory. The problem is motivated by the practical case of distribution companies using temporary multi-user warehouses to serve their customers. We propose a novel multi-period mixed integer linear programming (MILP) model for the problem in which demand is price sensitive. The model seeks the optimal locations of warehouses over the planning horizon, the set of orders to serve, backorder, or reject, and the routing decisions that maximize profit. The commercial solver CPLEX is used to solve the model on small instances. For medium and large instances, we propose a solution approach that combines the Relax-and-Fix (R&F) and Fix-and-Optimize (F&O) procedures with local search heuristics. Computational experiments and an economic analysis show the relevance of the proposed approach and the impact of the parameters on the generated profit.
The main contribution of this paper is to present a new approach to FM screening, which we call the Local Exhaustive Search (LES) method, and to show how the computation can be accelerated using an FPGA. FM screening, as opposed to conventional AM screening, keeps the unit dot size when converting an original gray-scale image into a binary image for printing. FM screening aims to generate moiré-free binary images that reproduce the continuous tones and fine details of original photographic images. Our basic approach to FM screening is to generate a binary image whose projection onto the human eye is very close to the original image. The projected image is computed by applying a Gaussian filter to the binary image. LES performs an exhaustive search over each small square subimage of the binary image and replaces the subimage with the best binary pattern. The exhaustive search is repeated until no further improvement is possible. The experimental results show that LES produces high-quality, sharp binary images. We also implemented LES on an FPGA to accelerate the computation and achieved a speedup factor of up to 51 over the software implementation.
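As a concrete illustration of the LES loop described in this abstract, the following Python sketch is our own reconstruction, not the paper's code: for each k × k subimage it enumerates all 2^(k^2) binary patterns and keeps the one whose Gaussian-filtered projection is closest to the original gray-scale image. The function names, the squared-error measure, the window size, and the full-image filtering are simplifying assumptions; an efficient implementation would filter only the pixels the window affects.

```python
import itertools
import numpy as np
from scipy.ndimage import gaussian_filter

def les_pass(gray, binary, k=2, sigma=1.0):
    """One LES pass over all k x k subimages; returns True if anything changed."""
    improved = False
    for y in range(0, gray.shape[0] - k + 1, k):
        for x in range(0, gray.shape[1] - k + 1, k):
            before = binary[y:y+k, x:x+k].copy()
            best_err, best_pat = None, None
            for bits in itertools.product([0.0, 1.0], repeat=k * k):
                binary[y:y+k, x:x+k] = np.reshape(bits, (k, k))
                # Image projected onto human eyes: the Gaussian-filtered binary image.
                err = np.sum((gaussian_filter(binary, sigma) - gray) ** 2)
                if best_err is None or err < best_err:
                    best_err, best_pat = err, binary[y:y+k, x:x+k].copy()
            binary[y:y+k, x:x+k] = best_pat
            improved = improved or not np.array_equal(before, best_pat)
    return improved
```

Calling les_pass repeatedly until it returns False matches the stopping rule stated above: the exhaustive search is repeated until no further improvement is possible.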
This paper treats membrane algorithms whose subalgorithms are inspired by simulated annealing. Simulated annealing is inherently a kind of local search, but it moves to a worse solution with a probability determined by a "temperature", which is lowered according to a "cooling schedule". In contrast, the subalgorithm introduced here has a constant temperature determined by the region in which it resides. We call it a Brownian subalgorithm, since it incorporates the "thermal movement" of a solution in the search space but does not simulate "annealing".
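The constant-temperature rule can be made concrete with a short sketch. A Metropolis-style acceptance step is assumed here because it matches the "thermal movement" description; the paper's exact acceptance rule and interfaces may differ.

```python
import math
import random

def brownian_step(solution, cost, neighbour_fn, temperature):
    """One thermal-movement step at a fixed, region-specific temperature."""
    candidate = neighbour_fn(solution)
    delta = cost(candidate) - cost(solution)
    # Always accept improvements; accept a worse solution with probability
    # exp(-delta / T). T is constant: there is no cooling schedule.
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate
    return solution
```

Each region of the membrane algorithm would run this step with its own fixed temperature, so different regions explore the search space with different intensities.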
Computer simulations show that a membrane algorithm with three regions and a Brownian subalgorithm in each region obtains very good approximate solutions for several benchmark instances of the traveling salesman problem. However, the algorithm occasionally produces quite bad solutions (twice as long as the optimum) on one problem. A membrane algorithm with both Brownian and genetic subalgorithms never produces such a bad solution (at worst 8% above the optimum) on any of the problems examined, although on average it is not as good as the Brownian-only algorithm. This result indicates that membrane algorithms whose subalgorithms rely on different approximation mechanisms may be robust across a wide range of problems.
Screening is the task of converting a continuous-tone image into a binary image of pure black and white pixels. The main contribution of this paper is a new algorithm for cluster-dot screening using a local exhaustive search. Our new algorithm generates 2-cluster, 3-cluster, and 4-cluster binary images, in which every dot has at least 2, 3, and 4 pixels, respectively. The key idea of our screening method is to repeat a local exhaustive search that finds the best binary pattern in small windows of size k × k in the binary image. The experimental results show that the local exhaustive search produces high-quality, sharp cluster-dot binary images. We also present a hardware algorithm to accelerate the computation. Our hardware algorithm performs a round of the local exhaustive search in O(2^(k^2)) clock cycles, while the software implementation runs in O(2^(k^2) w^2) time, where (2w + 1) × (2w + 1) is the size of the Gaussian filter. Thus, from a theoretical point of view, our hardware algorithm achieves a speedup factor of O(w^2). To show that our hardware algorithm is fast in practice, we have implemented it on an FPGA, achieving a speedup factor of up to 229 over the software implementation.
The LOCAL(A, B) randomized task scheduling algorithm is proposed for fully connected multiprocessors. It combines two given task scheduling algorithms, A and B, using local neighborhood search to produce a hybrid of the two. The objective is to show that this type of hybridization can yield much better performance in terms of parallel execution times. Two task scheduling algorithms are selected, DSC (Dominant Sequence Clustering) as algorithm A and CPPS (Cluster Pair Priority Scheduling) as algorithm B, and a hybrid is created: LOCAL(DSC, CPPS), or simply the LOCAL task scheduling algorithm. The LOCAL algorithm has time complexity O(|V||E|(|V|+|E|)), where V is the set of vertices and E the set of edges in the task graph. It is compared with six other algorithms: CPPS, DCCL (Dynamic Computation Communication Load), DSC, EZ (Edge Zeroing), LC (Linear Clustering), and RDCC (Randomized Dynamic Computation Communication). The performance evaluation shows that the LOCAL algorithm achieves up to 80.47% improvement in NSL (Normalized Schedule Length) over the other algorithms.
Clustering is an effective technique for analyzing and extracting useful information from large biological networks. Popular clustering solutions often require user input for several algorithm options that can seem quite arbitrary without experimentation. These algorithms can provide good results in a reasonable time, but they are not beyond improvement. We present a local search based clustering algorithm, free of such required input, that can improve the cluster quality of a given set of clusters taken from any existing algorithm or produced by an arbitrary assignment. We implement this local search on a modern GPU for efficient runtime. The proposed algorithm shows promising results for improving cluster quality; even with high-quality input clusters, we achieve cluster rating improvements of up to 33%.
This paper deals with the probabilistic multi-vehicle pickup and delivery problem. We develop an efficient neighborhood evaluation procedure that reduces the computational complexity by two orders of magnitude with respect to a straightforward approach. The numerical experiments indicate that, when incorporated in a local search strategy, our neighborhood evaluation technique provides very good results in terms of computation time and of the equity of the workload distribution among the available vehicles.
This paper proposes a heuristic with stochastic neighborhood structures (SNS) to solve two-stage and three-stage two-dimensional guillotine bin packing and cutting stock problems. A solution is represented as a sequence of items that are packed into existing or new stacks, shelves, or bins according to previously defined criteria. SNS comprises three random neighborhood structures based on modifying the current sequence of items, called cut-and-paste, split, and swap blocks; they are applied one by one in a fixed order to try to improve the quality of the current solution. Both benchmark instances and real-world instances provided by furniture companies were used in the computational tests. All benchmark instances are bin packing instances (i.e., the demand of each item type is small), while the real-world instances split into bin packing instances and cutting stock instances (i.e., the demand of each item type is large). The computational results of the proposed method are compared with lower bounds and with the solutions obtained by a deterministic Variable Neighborhood Descent (VND) meta-heuristic. SNS provides solutions within a small percentage of the optimal values and, over the VND approach, generally achieves significant improvements on cutting stock instances and slight to moderate improvements on bin packing instances.
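For illustration, here is one plausible Python reading of the three moves on the item sequence; the exact block boundaries, the meaning of "split", and the interaction with stacks and shelves are assumptions on our part.

```python
import random

def cut_and_paste(seq):
    """Remove a random block of items and reinsert it at a random position."""
    i, j = sorted(random.sample(range(len(seq) + 1), 2))
    block, rest = seq[i:j], seq[:i] + seq[j:]
    p = random.randint(0, len(rest))
    return rest[:p] + block + rest[p:]

def split(seq):
    """Cut the sequence at a random point and exchange the two parts."""
    p = random.randint(1, len(seq) - 1)
    return seq[p:] + seq[:p]

def swap_blocks(seq):
    """Swap two disjoint random blocks of items."""
    a, b, c, d = sorted(random.sample(range(len(seq) + 1), 4))
    return seq[:a] + seq[c:d] + seq[b:c] + seq[a:b] + seq[d:]
```

Applying these one by one in a fixed order, then re-decoding the resulting sequence into stacks, shelves, and bins, gives the improvement step the abstract describes.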
In this paper, we consider the spherical k-means problem with penalties, a robust model of spherical clustering that identifies outliers during clustering to improve the quality of the solution. Each outlier incurs a specified penalty cost. One must detect the outliers and propose a k-clustering of the given data set so as to minimize the sum of the clustering and penalty costs. As our main contribution, we present a (16+8√3)-approximation via single-swap local search and an (8+2√7+ε)-approximation via multi-swap local search.
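The objective can be stated compactly; the sketch below uses a common formulation of spherical clustering cost (1 minus cosine similarity to the nearest centre), which is our assumption rather than the paper's exact definition.

```python
import numpy as np

def penalized_cost(points, centres, penalties):
    """Sum over points of min(distance to nearest centre, penalty).

    points: (n, d) array of unit vectors; centres: (k, d) unit vectors;
    penalties: length-n array of per-point penalty costs.
    """
    cos = points @ centres.T               # cosine similarities, shape (n, k)
    cluster_cost = 1.0 - cos.max(axis=1)   # spherical distance to nearest centre
    return np.minimum(cluster_cost, penalties).sum()
```

A single-swap local search would then repeatedly try replacing one centre with a non-centre point, keeping the swap whenever penalized_cost decreases; points whose penalty is cheaper than their clustering cost are the detected outliers.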
In this paper, the multi-objective flexible job shop scheduling problem (MOFJSP) is studied with the objectives of minimizing makespan, total workload, and critical workload. A variable neighborhood evolutionary algorithm (VNEA) is proposed to obtain a set of Pareto optimal solutions. First, two novel crowding operators, defined in the decision space and the objective space respectively, are proposed and used in mating selection and environmental selection. Then, two well-designed neighborhood structures are used in the local search; they exploit the problem characteristics and promote fast convergence. Finally, an extensive comparison is carried out on well-known benchmark instances against state-of-the-art methods specifically proposed for MOFJSP. The results show that the proposed VNEA is more effective than the other algorithms in solving MOFJSP.
This paper proposes an efficient approach to human face detection and exact facial feature location in head-and-shoulder images. The method searches for an eye-pair candidate as a baseline, exploiting the high intensity contrast between the iris and the sclera. To locate the other facial features, the algorithm uses geometric knowledge of the human face anchored on the obtained eye-pair candidate. The human face is finally verified using these detected facial features. Owing to the Prune-and-Search and simple filtering techniques it applies, the proposed method achieves very promising performance in face detection and facial feature location.
We propose a new computational approach to logic-based systems that must reason quickly, yet in a logically sound and complete manner, about large-scale complex critical devices and systems that can exhibit unexpected faulty behaviors. The approach is original in at least two respects. First, it makes use of local search techniques while preserving logical deductive completeness. Second, it proves experimentally efficient on very large knowledge bases, thanks to new heuristics in the use of local search techniques.
This paper presents a hybrid algorithm that combines local search and constraint programming techniques to solve a network routing problem. The problem considered is that of routing traffic demands from a set of requests over a network with limited capacity so as to minimise the cost of any unrouted demands. The hybridisation is twofold: pure local search is used to find a good cost bound for a subsequent branch-and-bound optimisation phase, and local search is applied again at the nodes of the branch-and-bound search tree. Constraint propagation occurs in the search tree to reduce the domains of the decision variables, using a set of constraints that are independent of the action of local search at the nodes. In contrast to previous constraint programming/local search hybridisations, here local search is used to satisfy the hard problem constraints, while optimisation is handled in the constraint programming framework. The resulting algorithm is incomplete, but is shown to compare favourably with a complete approach to this problem.
The propositional satisfiability (SAT) problem is fundamental to the theory of NP-completeness. Indeed, via the concept of polynomial-time reducibility, every NP-complete problem can be polynomially reduced to SAT. Thus, any new technique for satisfiability will lead to general approaches for thousands of hard combinatorial problems. In this paper, we introduce the incremental propositional satisfiability problem, which consists of maintaining the satisfiability of a propositional formula each time a conjunction of new clauses is added. More precisely, the goal is to check whether a solution to a SAT problem continues to be a solution when a new set of clauses is added and, if not, whether the solution can be modified efficiently to satisfy both the old formula and the new clauses. We study the applicability of systematic and approximation methods for solving incremental SAT problems. The systematic method is based on the branch-and-bound technique, while the approximation methods rely on stochastic local search and genetic algorithms. Experimental tests, conducted on randomly generated SAT instances, demonstrate the time efficiency of the approximation methods over the branch-and-bound algorithm, although the approximation methods do not always guarantee the completeness of the solution returned. We show that a method we propose, which combines a limited form of non-systematic search with branch and bound, offers in practice the best compromise between time and the quality of the solution returned (success ratio).
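The incremental question the abstract poses can be sketched in a few lines; this is a generic random-walk repair under our own interfaces (clauses as lists of signed integers), not the authors' implementation.

```python
import random

def satisfied(clause, assign):
    """A clause holds if at least one of its literals is true."""
    return any(assign[abs(l)] == (l > 0) for l in clause)

def repair(clauses, assign, max_flips=10000):
    """Try to extend an old solution to the old formula plus new clauses."""
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign                             # old solution still works
        v = abs(random.choice(random.choice(unsat)))  # variable in a broken clause
        assign[v] = not assign[v]                     # flip it and retry
    return None            # repair failed; fall back to a complete method
```

When repair returns None, a systematic branch-and-bound procedure (or any complete solver) can take over, which is the kind of division of labour the experiments above evaluate.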
Standard tabu search methods are based on a complete exploration of the current solution's neighborhood. However, for some problems, the neighborhood size or the complexity of fitness evaluation makes total exploration of the neighborhood impractical. The main purpose of this paper is to propose a local search method with no enumeration procedure: a single solution is examined at each iteration and retained for future iterations. The idea is to randomly select one solution from a subset of the neighborhood of the current one; this subset is determined by an adaptive exploration of the neighborhood using extension and restriction mechanisms, implemented via loop detection and a tabu list structure. The approach is applied to the K-coloring problem and evaluated on standard benchmarks such as DIMACS. The objective is to show how a generic method, without full neighborhood exploration, degradation control, or problem-oriented operators, provides very competitive results compared with the best dedicated algorithms for graph coloring published in the literature.
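A minimal sketch of such an enumeration-free step for K-coloring, under assumed data structures (an adjacency-list dict and a tabu set without tenure), might look as follows; the paper's actual loop-detection mechanism is not reproduced here.

```python
import random

def single_neighbour_step(colours, graph, k, tabu):
    """Recolour one random conflicting vertex with one random allowed colour."""
    conflicts = [v for v in graph for u in graph[v] if colours[u] == colours[v]]
    if not conflicts:
        return colours                      # proper k-colouring reached
    v = random.choice(conflicts)
    options = [c for c in range(k) if c != colours[v] and (v, c) not in tabu]
    if options:
        tabu.add((v, colours[v]))           # restriction: forbid undoing the move
        colours[v] = random.choice(options)
    return colours
```

Only one neighbour is generated and kept per iteration, which is exactly the point: no enumeration of the full neighbourhood ever takes place. A real tabu list would also expire its entries after a fixed tenure.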
When handling 2D packing problems, numerous incomplete and complete algorithms maintain a so-called bottom-left (BL) property: no rectangle placed in the container can be moved further left or further down. While it is easy to make a rectangle BL when it is added to a container, it is more expensive to keep all placed pieces BL when a rectangle is removed. This has prevented researchers from designing incremental moves for metaheuristics or efficient complete optimization algorithms. This paper investigates relaxing the BL property: instead, we propose to maintain the set of maximal holes, which allows incremental additions and removals of rectangles.
To validate our alternative approach, we have designed an incremental move, maintaining maximal holes, for the strip packing problem, a variant of the famous 2D bin-packing problem. We have also implemented a metaheuristic with no user-defined parameters that uses this move and standard greedy heuristics. Finally, we have designed two variants of this incomplete method: in the first, a better initial layout is provided by a hyperheuristic proposed by some of the authors; in the second, a fast repacking procedure recovering the BL property is occasionally called during the local search.
Experimental results show that the approach is competitive with the best known incomplete algorithms.
In this paper, we introduce Lamarckian learning theory into the Clonal Selection Algorithm and propose a Lamarckian Clonal Selection Algorithm, termed LCSA. The main aim is to effectively utilize the information carried by each individual to reinforce exploitation through Lamarckian local search. A recombination operator and a tournament selection operator are incorporated into LCSA to further enhance global exploration. We compare LCSA with the Clonal Selection Algorithm on twenty benchmark problems to evaluate its performance. The results demonstrate that the Lamarckian local search makes LCSA more effective and efficient in solving numerical optimization problems.
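The Lamarckian ingredient is simply that improvements found by local search are written back into the individual's genotype and thus inherited; a minimal sketch, with assumed maximization and interfaces, is:

```python
def lamarckian_refine(individual, fitness, local_search):
    """Refine a clone by local search and inherit the improvement."""
    refined = local_search(individual)       # exploit the neighbourhood
    if fitness(refined) > fitness(individual):
        return refined                       # acquired trait written back
    return individual
```

In a Baldwinian scheme, by contrast, only the improved fitness would be credited while the genotype stayed unchanged.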
Many real-world industrial applications involve the Traveling Salesman Problem (TSP), the problem of finding a minimum-cost Hamiltonian cycle. Examples include transportation routing, scan chain optimization, and drilling problems in integrated circuit testing and production. This paper presents a Bee Colony Optimization (BCO) algorithm for the symmetric TSP. The BCO model is constructed algorithmically from the collective intelligence exhibited in bee foraging behaviour. The algorithm is integrated with a fixed-radius near neighbour 2-opt (FRNN 2-opt) heuristic to further improve the solutions generated by the BCO model. To limit the overhead incurred by the FRNN 2-opt, a frequency-based pruning strategy is proposed that allows only a subset of the promising solutions to undergo local optimization. Experimental results comparing the proposed BCO algorithm with existing approaches on a set of benchmark problems are presented. On 84 benchmark problems, the BCO algorithm obtains an overall average solution quality within 0.31% of the known optima. The results also show that it is comparable to other algorithms such as Ant Colony Optimization and Particle Swarm Optimization.
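The FRNN 2-opt idea can be sketched as follows (our reconstruction, with a linear scan in place of the spatial index a real implementation would use): a 2-opt move through city a can only shorten the tour if the new edge (a, c) is shorter than the current edge (a, b), so candidates c outside that radius are pruned.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def frnn_2opt_pass(tour, pts):
    """One FRNN 2-opt pass; returns True if the tour was shortened."""
    n, improved = len(tour), False
    for i in range(n):
        a, b = tour[i], tour[(i + 1) % n]
        radius = dist(pts[a], pts[b])        # fixed radius = current edge length
        for j in range(n):
            c, d = tour[j], tour[(j + 1) % n]
            if len({a, b, c, d}) < 4 or dist(pts[a], pts[c]) >= radius:
                continue                     # prune: c not within the radius of a
            # Gain of replacing edges (a,b) and (c,d) with (a,c) and (b,d).
            if dist(pts[a], pts[c]) + dist(pts[b], pts[d]) < radius + dist(pts[c], pts[d]):
                lo, hi = sorted((i, j))
                tour[lo + 1:hi + 1] = reversed(tour[lo + 1:hi + 1])
                improved = True
                a, b = tour[i], tour[(i + 1) % n]
                radius = dist(pts[a], pts[b])
    return improved
```

The frequency-based pruning described above would sit one level higher, deciding which BCO solutions are promising enough to receive this pass at all.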
Boolean Satisfiability (SAT) solvers have been successfully applied to a wide range of practical applications, including hardware model checking, software model finding, equivalence checking, and planning, among many others. SAT solvers are also the building block of more sophisticated decision procedures, including Satisfiability Modulo Theories (SMT) solvers. The large number of applications of SAT yields ever more challenging problem instances and motivates the development of more efficient algorithms. Recent work has studied hybrid approaches for SAT, which integrate incomplete and complete SAT solvers. This paper proposes a number of improvements to hybrid SAT solvers. Experimental results demonstrate that the proposed optimizations are effective: the resulting algorithms perform better in general and, more importantly, are significantly more robust.
An approach to detecting communities in signed networks that combines genetic algorithms and local search is proposed. The method optimizes the concepts of modularity and frustration in order to find network divisions far from random partitions, having dense positive intra-connections and sparse negative inter-connections. A local search strategy improves the network division by moving nodes that have positive connections with nodes of other communities to neighboring communities, provided that signed modularity increases. An extensive experimental evaluation on randomly generated networks with known ground-truth divisions shows that the method is competitive with a state-of-the-art approach and capable of finding accurate solutions. Moreover, a comparison on a real-life signed network shows that our approach obtains communities that minimize the positive inter-connections and the negative intra-connections better than the competing methods.
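The local move can be sketched under stated assumptions: membership maps nodes to communities, pos_edges lists each node's positively connected neighbours, and signed_modularity is supplied by the caller; the paper's exact update order is not reproduced.

```python
def improve_division(nodes, membership, pos_edges, signed_modularity):
    """Move nodes with positive links outside their community, keeping a
    move only when it increases signed modularity."""
    current = signed_modularity(membership)
    for v in nodes:
        neighbour_comms = {membership[u] for u in pos_edges.get(v, ())
                           if membership[u] != membership[v]}
        for comm in neighbour_comms:
            old, membership[v] = membership[v], comm
            new = signed_modularity(membership)
            if new > current:
                current = new                # keep the improving move
            else:
                membership[v] = old          # revert
    return membership
```

Recomputing signed modularity from scratch on every tentative move is the simplification here; an efficient version would use an incremental delta.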