We present a novel interactive edge detection algorithm that combines A* search with low-level adaptive image processing. The algorithm models the semantically driven interpretation that we hypothesize to occur between the mind and the visual cortex in the human brain. The basic idea is that oriented Gabor sub-bands model grating cells in the mammalian visual system; these sub-bands then guide the search for a path to a marker in an image. A domain expert uses image markers to select edges of interest.
We demonstrate the system in several image domains, with examples from photo-interpretation, medical imaging, path planning and general edge finding. The A* search finds a suboptimal result, but typically executes 10 to 1,000 times faster than the dynamic programming approach currently used for this type of edge detection.
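As a rough, hedged illustration of the general idea (not the authors' implementation), the sketch below runs A* over a 2-D cost map in which low cost stands in for strong edge response, tracing a path between two user-placed markers. The cost map, the Chebyshev-distance heuristic and the toy image are placeholder assumptions; in the described system the costs would instead be derived from oriented Gabor sub-band responses.

```python
import heapq
import numpy as np

def astar_edge_trace(cost, start, goal):
    """Trace a cheap 8-connected path from start to goal over a 2-D cost map.

    cost  : 2-D array in which low values stand in for strong edge response
            (in the described system this would come from oriented Gabor
            sub-band magnitudes).
    start, goal : (row, col) pixel coordinates, i.e. the image markers.
    """
    rows, cols = cost.shape
    hmin = float(cost.min())
    def h(p):                       # admissible: cheapest step * Chebyshev distance
        return hmin * max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
    frontier = [(h(start), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:       # already expanded via a cheaper path
            continue
        came_from[node] = parent
        if node == goal:            # reconstruct the traced edge
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    ng = g + cost[nr, nc]
                    if ng < best_g.get((nr, nc), float("inf")):
                        best_g[(nr, nc)] = ng
                        heapq.heappush(frontier, (ng + h((nr, nc)), ng,
                                                  (nr, nc), node))
    return None

# Toy usage: a synthetic cost map with a cheap diagonal "edge" to follow.
cmap = np.ones((64, 64))
np.fill_diagonal(cmap, 0.05)
print(len(astar_edge_trace(cmap, (0, 0), (63, 63))))
```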
Stacking velocity is a very important parameter in seismic data processing. Until now, stacking velocity has typically been determined manually. This article proposes an automatic algorithm for picking stacking velocity that uses artificial intelligence and pattern-recognition techniques.
Industrial vision systems should be capable of recognising noisy objects, partially occluded objects, and randomly located and/or oriented objects. This paper considers the problem of recognising partially occluded planar shapes using contour segment-based features. None of the techniques suggested in the literature for this problem guarantees reliable results on problem instances that require more memory than is available. In this paper, a heuristic search-based recognition algorithm is presented that guarantees reliable recognition results even when memory is limited. The algorithm identifies the object whose contour is visible to the greatest extent in a conglomerate of objects. To increase efficiency, a two-stage recognition scheme has been designed: in the first stage, a relevant subset of the known model shapes is chosen, and in the second stage, matching between the unknown shape and elements of that subset is attempted using the above approach. The technique is general in the sense that it can be used with any kind of contour features. To evaluate the method, experiments were carried out using polygonal approximations of the object contours, and results are reported that establish the effectiveness of the approach.
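As a hypothetical sketch of such a two-stage scheme (the segment features, tolerance and global signature used here are invented for illustration and are not the paper's), stage one prunes the model library with a coarse signature and stage two uses a best-first search with an optimistic bound to find the longest visible run of matching contour segments:

```python
import heapq
import numpy as np

def coarse_filter(scene, models, keep=3):
    """Stage 1: keep the few model shapes whose crude global signature
    (mean segment-feature vector) is closest to the scene's."""
    sig = scene.mean(axis=0)
    ranked = sorted(models.items(),
                    key=lambda kv: np.linalg.norm(kv[1].mean(axis=0) - sig))
    return dict(ranked[:keep])

def visible_portion(scene, model, tol=0.25):
    """Stage 2: best-first search for the longest run of consecutive scene
    segments matching consecutive model segments within tolerance `tol`.
    The priority is an optimistic bound (matched + remaining segments), so
    hypotheses that cannot beat the incumbent are pruned."""
    n, m = len(scene), len(model)
    best = 0
    # hypotheses: (-upper_bound, matched_so_far, scene_idx, model_idx)
    heap = [(-(min(n - i, m - j)), 0, i, j)
            for i in range(n) for j in range(m)]
    heapq.heapify(heap)
    while heap:
        neg_ub, matched, i, j = heapq.heappop(heap)
        if -neg_ub <= best:                     # no hypothesis can improve
            break
        if i < n and j < m and np.linalg.norm(scene[i] - model[j]) <= tol:
            matched += 1
            best = max(best, matched)
            ub = matched + min(n - i - 1, m - j - 1)
            heapq.heappush(heap, (-ub, matched, i + 1, j + 1))
    return best

# Toy usage with made-up segment features (length, turning angle).
models = {"wrench": np.array([[1.0, 0.2], [0.8, 1.1], [1.2, 0.4], [0.5, 0.9]]),
          "plier":  np.array([[0.4, 1.5], [0.4, 1.5], [2.0, 0.1], [0.6, 0.7]])}
scene = np.array([[0.82, 1.08], [1.18, 0.42]])   # partially occluded view
shortlist = coarse_filter(scene, models, keep=2)
print({name: visible_portion(scene, feats) for name, feats in shortlist.items()})
```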
Bad smells represent imperfections in the design of a software system and trigger the urge to refactor the source code. The quality of object-oriented software has always been a major concern for the development team, and refactoring techniques help them address it by transforming the code in a way that preserves the software's behavior. Rigorous research has been done in this field to improve software quality using various techniques, but one issue remains unsettled: the overhead effort required to refactor the code so as to yield the maximum maintainability value. In this paper, a quantitative evaluation method is proposed to improve the maintainability value by identifying the optimal refactoring dependencies in advance with the help of several meta-heuristic algorithms, including A*, AO*, Hill-Climbing and Greedy approaches. A comparison is made between the maintainability values of the software studied before and after applying the proposed methodology. The results show that the Greedy algorithm is the most promising of these algorithms at determining the optimal refactoring sequence, yielding 18.56% and 9.90% improvements in the maintainability values of the jTDS and ArtOfIllusion projects, respectively. Further, this study would benefit software maintenance teams, since refactoring sequences are available beforehand, helping the team maintain the software more easily and enhance its maintainability. The proposed methodology also helps the maintenance team focus on a limited portion of the software, owing to the prioritization of classes, in turn helping them complete their work within budget and time constraints.
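A minimal sketch of the greedy variant only is given below, with a made-up gain function and dependency structure standing in for the paper's maintainability metric and refactoring dependencies:

```python
def greedy_refactoring_sequence(candidates, depends_on, gain):
    """Greedy ordering of refactorings: at each step apply the candidate
    whose prerequisites are already applied and whose estimated
    maintainability gain is largest.

    candidates : iterable of refactoring ids
    depends_on : dict id -> set of ids that must be applied first
    gain       : dict id -> estimated maintainability improvement (placeholder
                 for whatever maintainability model is actually used)
    """
    applied, sequence, total = set(), [], 0.0
    remaining = set(candidates)
    while remaining:
        ready = [r for r in remaining if depends_on.get(r, set()) <= applied]
        if not ready:                 # unsatisfiable dependencies remain
            break
        best = max(ready, key=lambda r: gain.get(r, 0.0))
        applied.add(best)
        remaining.remove(best)
        sequence.append(best)
        total += gain.get(best, 0.0)
    return sequence, total

# Toy usage with hypothetical refactorings, dependencies and gains.
deps = {"extract_method_A": set(),
        "move_method_B": {"extract_method_A"},
        "inline_temp_C": set()}
gains = {"extract_method_A": 0.7, "move_method_B": 1.2, "inline_temp_C": 0.3}
print(greedy_refactoring_sequence(deps.keys(), deps, gains))
```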
The naïve Bayes classifier is built on the assumption of conditional independence between the attributes given the class. The algorithm has been shown to be surprisingly robust to obvious violations of this condition, but it is natural to ask whether it is possible to further improve the accuracy by relaxing this assumption. We examine an approach in which naïve Bayes is augmented by the addition of correlation arcs between attributes. We explore two methods for finding the set of augmenting arcs: a greedy hill-climbing search, and a novel, more computationally efficient algorithm that we call SuperParent. We compare these methods to TAN, a state-of-the-art distribution-based approach to finding the augmenting arcs.
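A minimal sketch of the greedy hill-climbing variant is shown below, treating model evaluation as a black-box scoring callback. The attribute names and the scoring function are placeholders; a full implementation would score candidate arc sets with something like cross-validated accuracy of the augmented classifier.

```python
from itertools import permutations

def greedy_augmenting_arcs(attributes, score, max_parents=1):
    """Hill-climbing search for augmenting arcs over a naive Bayes model.

    attributes : list of attribute names
    score      : callable(arcs) -> estimated accuracy of the augmented model
                 with that arc set; treated here as a black box
    The class is implicitly a parent of every attribute; this search only
    decides which extra attribute->attribute arcs to add.
    """
    arcs, best = set(), score(set())
    while True:
        # candidate arcs that keep each attribute's extra-parent count bounded
        # (a full implementation would also reject arcs that create cycles)
        candidates = [(p, c) for p, c in permutations(attributes, 2)
                      if (p, c) not in arcs
                      and sum(1 for a, b in arcs if b == c) < max_parents]
        improved = None
        for arc in candidates:
            s = score(arcs | {arc})
            if s > best:
                best, improved = s, arc
        if improved is None:          # no single arc helps: stop climbing
            return arcs, best
        arcs.add(improved)

# Toy usage with a stand-in scoring function that rewards one known arc.
attrs = ["colour", "size", "shape"]
fake_score = lambda arcs: 0.8 + (0.05 if ("colour", "size") in arcs else 0.0)
print(greedy_augmenting_arcs(attrs, fake_score))
```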
In this study, we compare the use of genetic algorithms (GAs) and other forms of heuristic search in the cryptanalysis of short cryptograms. This paper expands on the work presented at FLAIRS-2003, which established the feasibility of a word-based genetic algorithm (GA) for analyzing short cryptograms. The following search heuristics are compared both theoretically and experimentally: hill-climbing, simulated annealing, and word-based and frequency-based genetic algorithms. Although the results reported apply to substitution ciphers in general, we focus in particular on short substitution cryptograms, such as the kind found in newspapers and puzzle books; short cryptograms present a more challenging form of the problem. The word-based approach uses a relatively small dictionary of frequent words, while the frequency-based approaches use frequency data for 2-, 3- and 4-letter sequences. The study shows that all of the optimization algorithms are successful at breaking short cryptograms, but perhaps more significantly, the most important factor in their success appears to be the choice of fitness measure employed.
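As a hedged illustration of the simplest of these heuristics, the sketch below hill-climbs over substitution keys using a dictionary-hit fitness measure. The word list and cryptogram are toy placeholders, and with such a tiny dictionary the run may recover the key only partially; a real system would use a larger dictionary or n-gram frequency statistics.

```python
import random
import string

def decrypt(ciphertext, key):
    """Apply a substitution key (cipher letter -> plain letter)."""
    return "".join(key.get(c, c) for c in ciphertext)

def fitness(plaintext, dictionary):
    """Fitness: fraction of words that appear in a small frequent-word list."""
    words = plaintext.split()
    return sum(w in dictionary for w in words) / max(len(words), 1)

def hill_climb(ciphertext, dictionary, iterations=20000, seed=0):
    """Hill-climbing over substitution keys: swap two letters in the key and
    keep the swap whenever the fitness does not decrease."""
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    key = dict(zip(letters, shuffled))
    best = fitness(decrypt(ciphertext, key), dictionary)
    for _ in range(iterations):
        a, b = rng.sample(letters, 2)
        key[a], key[b] = key[b], key[a]
        score = fitness(decrypt(ciphertext, key), dictionary)
        if score >= best:
            best = score
        else:
            key[a], key[b] = key[b], key[a]   # undo the unhelpful swap
    return decrypt(ciphertext, key), best

# Toy usage: a short cryptogram and a tiny stand-in dictionary.
dictionary = {"the", "cat", "sat", "on", "mat"}
cipher = "gsv xzg hzg lm gsv nzg"            # Atbash-encoded "the cat sat on the mat"
print(hill_climb(cipher, dictionary))
```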
Principal Component Analysis (PCA) is a classical dimensionality reduction technique that computes a low-rank representation of the data. Recent studies have shown how to compute this low-rank representation from most of the data, excluding a small amount of outlier data. We show how to convert this problem into a graph search, and describe an algorithm that solves it optimally by applying a variant of the A* algorithm to search for the outliers. The results obtained by our algorithm are optimal in terms of accuracy, and are more accurate than those obtained by the current state-of-the-art algorithms, which are shown not to be optimal. This comes at the cost of running time, which is typically slower than the current state of the art. We also describe a related variant of the A* algorithm that runs much faster than the optimal variant and produces a solution that is guaranteed to be near-optimal. This variant is shown experimentally to be more accurate than the current state of the art while having a comparable running time.
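As a hedged illustration of searching over outlier subsets, the sketch below uses a greedy best-first search; it is not the paper's optimal A* variant and does not use an admissible heuristic, as noted in the code.

```python
import heapq
import itertools
import numpy as np

def pca_residual(X, rank):
    """Sum of squared residuals of the best rank-`rank` approximation of X."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return float((s[rank:] ** 2).sum())

def search_outliers(X, rank, n_outliers):
    """Best-first search over which rows to exclude before fitting PCA.
    States are sets of excluded row indices; the priority is the residual of
    the remaining rows.  (Illustrative only: unlike the optimal A* variant
    described in the abstract, this priority is not an admissible bound, so
    no optimality guarantee is claimed here.)"""
    n = len(X)
    tie = itertools.count()
    start = frozenset()
    heap = [(pca_residual(X, rank), next(tie), start)]
    seen = {start}
    while heap:
        res, _, excluded = heapq.heappop(heap)
        if len(excluded) == n_outliers:
            return sorted(excluded), res          # first complete state popped
        for i in range(n):
            if i not in excluded:
                nxt = excluded | {i}
                if nxt not in seen:
                    seen.add(nxt)
                    keep = [j for j in range(n) if j not in nxt]
                    heapq.heappush(heap, (pca_residual(X[keep], rank),
                                          next(tie), nxt))
    return None

# Toy data: ten points exactly on the line y = 2x plus two obvious outliers.
t = np.linspace(-5.0, 5.0, 10)
X = np.vstack([np.c_[t, 2.0 * t],               # inliers (rows 0..9)
               [[3.0, -2.0], [-2.0, 4.5]]])     # outliers (rows 10 and 11)
print(search_outliers(X, rank=1, n_outliers=2))
```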
The Location-Routing Problem (LRP) is a challenging problem in logistics that combines two types of decisions: facility location and vehicle routing. In this paper, we focus on the LRP with multiple capacitated depots and one uncapacitated vehicle per depot, which has practical applications such as mail delivery and waste collection. We propose a simple iterated variable neighborhood search with an effective perturbation strategy to solve this LRP variant. Experiments show that the algorithm is efficient and computes better solutions than previous algorithms on the tested instances.
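A generic iterated variable neighborhood search skeleton is sketched below; the neighborhood moves and perturbation for the actual LRP variant (depot openings, route exchanges, and so on) are abstracted into caller-supplied functions, and the toy objective is a placeholder.

```python
import random

def iterated_vns(initial, cost, neighborhoods, perturb, iters=100, seed=0):
    """Skeleton of an iterated variable neighborhood search.

    initial       : starting solution
    cost          : callable(solution) -> objective value to minimize
    neighborhoods : list of callables(solution, rng) -> neighbor solution,
                    ordered from small to large moves
    perturb       : callable(solution, rng) -> perturbed solution used to
                    escape local optima between descents
    """
    rng = random.Random(seed)
    best = current = initial
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):            # variable neighborhood descent
            candidate = neighborhoods[k](current, rng)
            if cost(candidate) < cost(current):
                current, k = candidate, 0        # improvement: restart from N1
            else:
                k += 1                           # otherwise try a larger move
        if cost(current) < cost(best):
            best = current
        current = perturb(best, rng)             # perturbation step
    return best

# Toy usage: minimize a 1-D function with two neighborhood sizes.
f = lambda x: (x - 3.0) ** 2 + 2.0
small = lambda x, rng: x + rng.uniform(-0.1, 0.1)
large = lambda x, rng: x + rng.uniform(-1.0, 1.0)
kick = lambda x, rng: x + rng.uniform(-2.0, 2.0)
print(round(iterated_vns(10.0, f, [small, large], kick), 2))
```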
In this paper, we model search algorithms with meta-control, allowing resource constraints, approximation, and parallel processing to be incorporated easily into the search process. The basic building block of the model is a hierarchical search process (HSP) consisting of context-free and context-sensitive grammars, classified according to problem-independent and problem-dependent parts. The context-sensitive components are used mainly for evaluating decision parameters and for ordering production rules in the context-free grammar. The execution of the grammars for given initial conditions may invoke other HSPs already defined in the system. We describe ISE (Integrated Search Environment), a tool that implements hierarchical searches with meta-control. By separating the problem-dependent and problem-independent components in ISE, new search methods based on a combination of existing methods can be developed easily by coding a single master control program. Further, new applications solved by searches can be developed by coding the problem-dependent parts and reusing the problem-independent parts already developed. We describe the organization of ISE and present experiments carried out on the system.
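Purely as an illustration of that separation, and not of ISE's grammar-based hierarchical search processes, the sketch below keeps a problem-independent best-first driver and plugs in problem-dependent parts (goal test, expansion, and decision-parameter evaluation) supplied by the caller:

```python
import heapq

def master_search(start, is_goal, expand, evaluate):
    """Problem-independent master control: a generic best-first search.
    The problem-dependent parts (is_goal, expand, evaluate) are supplied
    as plug-ins, so the same driver can be reused across applications."""
    frontier = [(evaluate(start), 0, start)]
    tie = 1
    seen = {start}
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (evaluate(nxt), tie, nxt))
                tie += 1
    return None

# Problem-dependent plug-ins for a toy task: reach 25 from 1 using +3 or *2.
goal = lambda s: s == 25
expand = lambda s: [s + 3, s * 2]
evaluate = lambda s: abs(25 - s)          # decision parameter / ordering rule
print(master_search(1, goal, expand, evaluate))
```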
The constrained bipartite matching (CBM) problem is a variant of the classical bipartite matching problem, which has been well studied in the combinatorial optimization community. The input to CBM is an edge-weighted complete bipartite graph in which both sides have the same number of vertices; the vertices on one side are sequentially ordered, while the vertices on the other side are partitioned and connected into disjoint directed paths. In a feasible matching, each path must be mapped to consecutive vertices on the other side. The optimization goal is to find a maximum- or minimum-weight perfect matching. The problem has applications in scheduling and in protein Nuclear Magnetic Resonance (NMR) peak assignment, and it has been shown to be NP-hard, and MAX SNP-hard if the perfectness requirement is dropped. In this paper, further inapproximability results are presented, and IDA*, a memory-efficient variant of the well-known A* search algorithm, is used to solve the problem. Accordingly, search heuristics and a set of heuristic evaluation functions are developed to assist the search; their effectiveness is demonstrated by a simulation study using real protein NMR backbone resonance assignment instances.
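A hedged sketch of the IDA* component on a toy instance of this flavor (placing "paths" onto consecutive ordered slots without overlap) is given below; the cost table, the instance, and the relaxation-based lower bound are illustrative assumptions, not the paper's evaluation functions.

```python
def ida_star_cbm(lengths, cost, n_slots):
    """IDA* for a toy constrained-matching instance: each 'path' p with
    lengths[p] vertices must be mapped to consecutive slots, placements may
    not overlap, and the total placement cost is minimized.
    cost[p][s] is the cost of starting path p at slot s.

    The lower bound sums, over unplaced paths, each path's cheapest placement
    ignoring overlaps: a relaxation of the problem, hence admissible."""
    paths = list(lengths)
    cheapest = {p: min(cost[p]) for p in paths}

    def h(i):
        return sum(cheapest[p] for p in paths[i:])

    def dfs(i, occupied, g, bound, placement):
        f = g + h(i)
        if f > bound:
            return f, None
        if i == len(paths):
            return g, dict(placement)
        nxt = float("inf")
        p = paths[i]
        for s in range(n_slots - lengths[p] + 1):
            slots = set(range(s, s + lengths[p]))
            if slots & occupied:                 # would overlap a placed path
                continue
            placement[p] = s
            t, sol = dfs(i + 1, occupied | slots, g + cost[p][s], bound, placement)
            if sol is not None:
                return t, sol
            nxt = min(nxt, t)
            del placement[p]
        return nxt, None

    bound = h(0)
    while True:                                  # iterative deepening on the f-bound
        t, sol = dfs(0, set(), 0.0, bound, {})
        if sol is not None:
            return sol, t
        if t == float("inf"):
            return None, None
        bound = t

# Toy instance: two paths of lengths 2 and 1 on 3 ordered slots.
lengths = {"pathA": 2, "pathB": 1}
cost = {"pathA": [4.0, 2.0],          # start at slot 0 or 1
        "pathB": [1.0, 3.0, 5.0]}     # start at slot 0, 1, or 2
print(ida_star_cbm(lengths, cost, n_slots=3))
```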
Monte-Carlo Tree Search (MCTS) is a new best-first search guided by the results of Monte-Carlo simulations. In this article, we introduce two progressive strategies for MCTS, called progressive bias and progressive unpruning, which enable the use of relatively time-expensive heuristic knowledge without reducing speed. Progressive bias directs the search according to heuristic knowledge. Progressive unpruning first reduces the branching factor and then gradually increases it again. Experiments show that the two progressive strategies significantly improve the level of our Go program Mango. Moreover, the combination of both strategies performs even better on larger board sizes.
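As a hedged sketch of how the two strategies can enter child selection (the constants, schedules and heuristic values below are placeholders, not those of Mango):

```python
import math

def select_child(parent_visits, children, c_uct=0.7):
    """Child selection with progressive bias and progressive unpruning.

    children: list of dicts with keys 'visits', 'value' (mean simulation
    result) and 'heuristic' (domain knowledge for the move).
    Progressive unpruning: only the k best children by heuristic are eligible,
    where k grows slowly with the parent's visit count.
    Progressive bias: the term heuristic/(visits+1) fades as a child
    accumulates simulations, letting Monte-Carlo estimates take over."""
    k = max(1, int(math.log(parent_visits + 1, 2)))      # unpruning schedule
    eligible = sorted(children, key=lambda ch: ch["heuristic"], reverse=True)[:k]

    def score(ch):
        if ch["visits"] == 0:
            return float("inf")                          # expand unvisited first
        exploration = c_uct * math.sqrt(math.log(parent_visits) / ch["visits"])
        bias = ch["heuristic"] / (ch["visits"] + 1)      # progressive bias term
        return ch["value"] + exploration + bias

    return max(eligible, key=score)

# Toy usage: three candidate moves at a node visited 50 times.
children = [
    {"move": "A", "visits": 30, "value": 0.52, "heuristic": 0.1},
    {"move": "B", "visits": 15, "value": 0.48, "heuristic": 0.9},
    {"move": "C", "visits": 5,  "value": 0.40, "heuristic": 0.3},
]
print(select_child(50, children)["move"])
```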
In the last four decades, scheduling problems have received much attention from researchers. Recently, the Just-in-Time concept has inspired renewed interest in scheduling, especially among industry practitioners. Although a number of papers have reviewed this field, this paper presents an easy-to-use reference guide to static scheduling problems, with complexity results, for practitioners, students, and others. Every attempt has been made to be thorough; however, this survey is not exhaustive. The paper covers both regular and non-regular measures of performance, includes separate sections on dual-criteria and multicriteria problems, and discusses recent developments in heuristic search approaches to these problems.