Given a set of items, and a conflict graph defined on the item set, the problem of bin packing with conflicts asks for a partition of the items into a minimum number of independent sets so that the total size of the items in each independent set does not exceed the bin capacity. As a generalization of both classic bin packing and classic vertex coloring, the problem is hard to approximate on general graphs. We present new approximation algorithms for bipartite graphs and split graphs. The absolute approximation ratios are shown to be 5/3 and 2, respectively, both improving on existing results.
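As a point of reference for the problem just defined (and not the approximation algorithms analysed in the paper), the following sketch shows a naive first-fit heuristic that reuses a bin only when the item both fits and is non-adjacent, in the conflict graph, to everything already packed there; the data encoding is an illustrative assumption.

```python
# Minimal sketch of first-fit for bin packing with conflicts (illustrative only;
# this is not the 5/3- or 2-approximation analysed in the paper).
def first_fit_with_conflicts(sizes, conflicts, capacity):
    """sizes: dict item -> size; conflicts: set of frozenset pairs; capacity: bin size."""
    bins = []  # each bin is a list of items forming an independent set
    for item in sorted(sizes, key=sizes.get, reverse=True):  # first-fit decreasing order
        for b in bins:
            fits = sum(sizes[i] for i in b) + sizes[item] <= capacity
            independent = all(frozenset((i, item)) not in conflicts for i in b)
            if fits and independent:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin works: open a new one
    return bins

# Example: items 'a'..'d', with 'a' conflicting with 'b'.
print(first_fit_with_conflicts({'a': 0.6, 'b': 0.5, 'c': 0.3, 'd': 0.2},
                               {frozenset(('a', 'b'))}, capacity=1.0))
```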
We discuss the (almost) complete intersection property of LSS-ideals of graphs of some special forms, such as trees, unicyclic graphs and bicyclic graphs. Further, we give a sufficient condition for the complete intersection property of twisted LSS-ideals in terms of a new graph-theoretical invariant called the twisted positive matching decomposition number, denoted tpmd. We also study the regularity of powers of LSS-ideals and make observations related to the Koszul property of their quotients.
Given an undirected graph G and a real edge-weight vector, the connected subgraph problem consists of finding a maximum-weight subset of edges that induces a connected subgraph of G. In this paper, we establish a link between the complexity of the connected subgraph problem and the matching number. We study the separation problem associated with the so-called matching-partition inequalities, which were introduced by Didi Biha, Kerivin and Ng [Polyhedral study of the connected subgraph problem, Discr. Math. 338 (2015) 80–92] for the connected subgraph polytope.
In this work, we consider term rewriting from a functional point of view. A rewrite rule is a function that can be applied to a term using an explicit application function. From this starting point, we show how to build more elaborated functions, describing first rewrite derivations, then sets of derivations. These functions, that we call strategies, can themselves be defined by rewrite rules and the construction can be iterated leading to higher-order strategies. Furthermore, the application function is itself defined using rewriting in the same spirit. We present this calculus and study its properties. Its implementation in the language is used to motivate and exemplify the whole approach. The expressiveness of
is illustrated by examples of polymorphic functions and strategies.
We present a formulation and numerical solution procedure for heterogeneous atomistic-continuum representations of fluid flows. The ingredients from the atomistic and continuum perspectives are non-equilibrium molecular dynamics and the spectral element method, respectively; the matching is provided by a classical procedure, the Schwarz alternating method with overlapping subdomains. The technique is applied to microscale flow of a dense fluid (supercritical argon) in a complex two-dimensional channel.
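To make the coupling procedure concrete, here is a minimal sketch of the classical Schwarz alternating method on a toy problem: two overlapping finite-difference subdomains exchanging Dirichlet data for u'' = 0 on [0, 1]. The paper couples non-equilibrium molecular dynamics to a spectral element solver; this sketch only illustrates the alternating overlap exchange itself, with grid sizes and overlap chosen arbitrarily.

```python
import numpy as np

def laplace_solve(u_left, u_right, n):
    """Solve u'' = 0 on a uniform grid of n interior points with Dirichlet end values."""
    A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    b = np.zeros(n)
    b[0] -= u_left          # move known boundary values to the right-hand side
    b[-1] -= u_right
    return np.linalg.solve(A, b)

N = 21                                   # global grid x_0 .. x_20 on [0, 1]
x = np.linspace(0.0, 1.0, N)
u = np.zeros(N); u[-1] = 1.0             # initial guess honouring u(0)=0, u(1)=1
left, right = (0, 12), (8, 20)           # overlapping index ranges of the two subdomains

for _ in range(30):                      # alternate subdomain solves until the overlap agrees
    u[left[0] + 1:left[1]] = laplace_solve(u[left[0]], u[left[1]], left[1] - left[0] - 1)
    u[right[0] + 1:right[1]] = laplace_solve(u[right[0]], u[right[1]], right[1] - right[0] - 1)

print(np.max(np.abs(u - x)))             # converges to the exact solution u(x) = x
```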
In an online problem, the input is revealed one piece at a time. In every time step, the online algorithm has to produce a part of the output based on its partial knowledge of the input. Such decisions are irrevocable, and thus online algorithms usually produce suboptimal solutions. The impact of the partial knowledge depends strongly on the problem. If the algorithm is allowed to read binary information about the future, the number of bits it must read in order to solve the problem optimally is its so-called advice complexity. The quality of an online algorithm is measured by its competitive ratio, which compares its performance to that of an optimal offline algorithm.
In this paper, we study online bipartite matching, focusing on the particular case of regular graphs. We give tight upper and lower bounds on the competitive ratio of the deterministic online bipartite matching problem; the competitive ratio turns out to be asymptotically equal to the known randomized competitive ratio. Afterwards, we present upper and lower bounds on the advice complexity of the deterministic online bipartite matching problem.
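For context, the standard deterministic baseline is the greedy rule sketched below, which matches each arriving vertex to an arbitrary free neighbour, irrevocably, and is 1/2-competitive in general; the tight bounds for regular graphs proved in the paper concern more refined deterministic strategies and are not reproduced by this sketch.

```python
# Baseline sketch of deterministic online bipartite matching (greedy): each
# "online" vertex arrives together with its list of neighbours among the
# "offline" vertices and is matched to the first free neighbour, if any.
def greedy_online_matching(arrivals):
    """arrivals: iterable of (online_vertex, list_of_offline_neighbours)."""
    matched_offline = set()
    matching = {}
    for v, neighbours in arrivals:
        for u in neighbours:
            if u not in matched_offline:        # first free neighbour wins, irrevocably
                matched_offline.add(u)
                matching[v] = u
                break
    return matching

# 2-regular example on online vertices r1, r2, r3 and offline vertices a, b, c.
print(greedy_online_matching([("r1", ["a", "b"]), ("r2", ["b", "c"]), ("r3", ["c", "a"])]))
```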
The matching preclusion number of a graph is the minimum number of edges whose deletion results in a graph that has neither perfect matchings nor almost perfect matchings. As a generalization, Liu and Liu introduced the concept of the fractional matching preclusion number in 2017. The fractional matching preclusion number (FMP number) of a graph G is the minimum number of edges whose deletion leaves the resulting graph without a fractional perfect matching. The fractional strong matching preclusion number (FSMP number) of G is the minimum number of vertices and/or edges whose deletion leaves the resulting graph without a fractional perfect matching. In this paper, we obtain the FMP number and the FSMP number for (n, k)-star graphs. In addition, all the optimal fractional strong matching preclusion sets of these graphs are categorized.
The strong matching preclusion number of a graph is the minimum number of vertices and edges whose deletion results in a graph that has neither perfect matchings nor almost perfect matchings. Strong matching preclusion is a well-studied measure of network invulnerability in the event of edge failure. In this paper, we obtain the strong matching preclusion number for a class of arrangement graphs and categorize their strong matching preclusion sets, supplementing known results.
An edge subset F of a graph G is a fractional matching preclusion set (FMP set for short) if G−F has no fractional perfect matchings. The fractional matching preclusion number (FMP number for short) of G, denoted by fmp(G), is the minimum size of FMP sets of G. A set F of edges and vertices of G is a fractional strong matching preclusion set (FSMP set for short) if G−F has no fractional perfect matchings. The fractional strong matching preclusion number (FSMP number for short) of G, denoted by fsmp(G), is the minimum size of FSMP sets of G. Data center networks have been proposed for data centers as a server-centric interconnection network structure, which can support millions of servers with high network capacity by only using commodity switches. In this paper, we obtain the FMP number and the FSMP number for data center networks Dn,k, and show that fmp(Dn,k)=n+k−1 for n≥2, k≥0 and fsmp(Dn,k)=n+k−1 for n≥6, k≥1. In addition, all the optimal fractional strong matching preclusion sets of these graphs are categorized.
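To make the definitions concrete, the following brute-force sketch checks whether a small graph still admits a fractional perfect matching (equivalently, whether a candidate F is a preclusion set once removed) via the defining linear feasibility problem; it is only an illustration and is unrelated to the structural arguments used for Dn,k.

```python
import numpy as np
from scipy.optimize import linprog

# G - F has a fractional perfect matching iff there exists x >= 0 on the
# surviving edges with sum_{e incident to v} x_e = 1 for every surviving vertex.
def has_fractional_perfect_matching(vertices, edges):
    vertices, edges = list(vertices), list(edges)
    A_eq = np.zeros((len(vertices), len(edges)))
    for j, (u, v) in enumerate(edges):
        A_eq[vertices.index(u), j] = 1.0
        A_eq[vertices.index(v), j] = 1.0
    res = linprog(c=np.zeros(len(edges)), A_eq=A_eq, b_eq=np.ones(len(vertices)),
                  bounds=[(0, 1)] * len(edges), method="highs")
    return res.success

# A triangle has a fractional perfect matching (x_e = 1/2 on every edge) ...
print(has_fractional_perfect_matching("abc", [("a", "b"), ("b", "c"), ("c", "a")]))
# ... but deleting any single edge destroys it, so fmp(K3) = 1.
print(has_fractional_perfect_matching("abc", [("a", "b"), ("b", "c")]))
```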
In this paper, I investigate the effects of changes in demand structure caused by population aging on the Japanese economy, using a multi-sector New Keynesian model with job creation/destruction analysis. I treat upward revisions in forecasts of the speed of Japanese population aging as unexpected shocks to its demand structure. I find that the shocks caused around 0.3 percentage points of deflationary pressure on year-on-year inflation, a 0.3 to 0.4 percentage point increase in unemployment rates, and a 1.8 percentage point decrease in real GDP from the early 1990s to the 2000s in Japan. I also find that the repetition of such upward revisions made these effects appear more persistent.
This paper deals with the multiple-rule problem, which arises when several decision rules (of different classes) match ("fire" for) an input object to be classified (an unseen object). The paper focuses on formal aspects and theoretical methodology for this problem.
The general definitions of the notions of a Designer, Learner and Classifier are presented in a formal manner, including parameters that are usually attached to these concepts, such as rule consistency, completeness, quality, matching rate, etc. We thus provide minimum-requirement definitions as necessary conditions for these concepts. Any designer (decision-system builder) of a new multiple-rule system may start with these minimum requirements.
We only expect that the Classifier makes its decisions according to the decision scheme induced as a knowledge base (theory, model, concept description). Two case studies are also discussed. We conclude with a general flow chart for a decision-system builder, who can simply follow it and select the parameters of a Learner and Classifier in accordance with the minimum requirements provided.
This paper deals with the problem of matching curves and structures extracted from 2D images that are subject to translation, rotation, scaling and other geometric transformations. We present a novel approach, Connected Equi-Length Line Segments (CELLS), for curve representation and matching. In our framework, a curve is represented by a number of connected equi-length line segments, and a new matrix called the Orientation Difference Matrix (ODM) is constructed for the curve, reflecting the distribution of the remaining line segments with respect to the current one through the orientation differences between them. The representation is invariant to rotation, scaling and translation. The problem of structure matching is also considered and solved based on CELLS: structures are matched by (1) detecting tri-junctions and quad-junctions on the structures and (2) representing each arc using CELLS. A practical use of the proposed approach is demonstrated by registering a SAR image of a certain area to a map.
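A minimal sketch of the representation described above, assuming a simple arc-length resampling and angle convention (both illustrative choices): a polyline is resampled into k connected equal-length segments and the pairwise orientation differences form the ODM.

```python
import numpy as np

def equi_length_segments(points, k):
    """Resample a polyline (n x 2 array) into k+1 points equally spaced by arc length."""
    points = np.asarray(points, dtype=float)
    seg = np.diff(points, axis=0)
    cumlen = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    targets = np.linspace(0.0, cumlen[-1], k + 1)
    xs = np.interp(targets, cumlen, points[:, 0])
    ys = np.interp(targets, cumlen, points[:, 1])
    return np.column_stack([xs, ys])

def orientation_difference_matrix(points, k):
    p = equi_length_segments(points, k)
    d = np.diff(p, axis=0)
    angles = np.arctan2(d[:, 1], d[:, 0])                  # orientation of each segment
    odm = angles[:, None] - angles[None, :]                # pairwise orientation differences
    return np.mod(odm + np.pi, 2 * np.pi) - np.pi          # wrap to [-pi, pi)

# Only orientation *differences* are stored, so the ODM is unchanged by rotation
# and translation, and the equal-length resampling removes the effect of scale.
curve = [(0, 0), (1, 0), (2, 1), (3, 3)]
print(orientation_difference_matrix(curve, k=6).round(2))
```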
In stereo research, the construction of a dense disparity map is a complicated task when the scene contains many occlusions. In the neighborhood of occlusions, the images can be considered to have non-stationary behavior. In this paper, we propose a new method for computing a dense disparity map using the decomposition of the 2D wavelet transform into four quarters, which allows us to find a corresponding pixel even in the case of occlusion. Our algorithm constructs, at each pixel of the two images, four estimators corresponding to the four quarters. Matching the four wavelet coefficient estimators in the right image with the corresponding four in the left image allows us to construct a dense disparity map at each pixel of an image.
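The following loose sketch, assuming a single-level Haar transform from PyWavelets, illustrates how the four quarters (approximation and three detail subbands) can serve as per-pixel descriptors for a row-wise disparity search; the paper's four estimators and its occlusion handling are considerably more elaborate than this.

```python
import numpy as np
import pywt

def wavelet_descriptors(img):
    # Single-level 2D transform: the four quarters LL, LH, HL, HH become a
    # 4-component descriptor at each coarse (half-resolution) pixel.
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(img, dtype=float), "haar")
    return np.stack([cA, cH, cV, cD], axis=-1)             # shape (H/2, W/2, 4)

def coarse_disparity(left, right, max_disp=8):
    dl, dr = wavelet_descriptors(left), wavelet_descriptors(right)
    h, w, _ = dl.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            # compare against candidates shifted leftwards along the same row
            cands = [np.sum((dl[y, x] - dr[y, x - d]) ** 2)
                     for d in range(min(max_disp, x) + 1)]
            disp[y, x] = int(np.argmin(cands))
    return disp

# Synthetic check: the right image is the left one shifted by 4 pixels, so the
# coarse (half-resolution) disparity should be 2 for most pixels.
left = np.random.rand(32, 32)
right = np.roll(left, -4, axis=1)
print(np.median(coarse_disparity(left, right)))
```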
The identification of fingerprints and palmprints is considered a challenging research line in biometrics. Nowadays, the accuracy of these techniques highly depends on the quality of the involved impressions, especially if the matching is performed on latent cases. In the present work, a new algorithm that achieves high accuracy in both cases is presented. This proposal requires a minimum amount of manually marked information in latent impressions and deals successfully with the problems of missing and spurious minutiae. Moreover, we improve a verification algorithm based on a previously introduced feature model. The algorithm uses a strategy for finding adaptable local matches between substructures obtained from the images. The experimental results show that our proposal achieves high accuracy in all cases, despite the major differences that exist between a palmprint and a fingerprint. For latent fingerprint identification, our approach shows its robustness in retrieving 258 latents against background datasets of different sizes, achieving a rank-1 identification rate over 57% in all cases. In a similar experiment for latent palmprint identification, our approach achieved a rank-1 identification rate over 75%.
To achieve effective palmprint recognition, this paper presents an algorithm for segmenting the region of interest (ROI): it extracts the ROI of a palmprint image and studies the composing features of the palmprint. A coordinate system is constructed using characteristic points on the palm's geometric contour, the ROI extraction algorithm is improved, and a positioning method for the ROI is provided. Moreover, the wavelet transform is used to decompose the ROI, wavelet energy features are extracted, and a matching and recognition approach is given to improve the correctness and efficiency of existing mainstream recognition approaches; it is compared with those approaches in experiments. The experimental results show that the proposed approach gives better recognition performance, faster matching, and a recognition rate that is on average 2.69% higher than those of the mainstream recognition approaches.
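A minimal sketch of the wavelet-energy feature idea, with the decomposition depth, wavelet choice and nearest-neighbour rule as illustrative assumptions; the ROI extraction and the paper's actual matching scheme are not reproduced.

```python
import numpy as np
import pywt

def wavelet_energy_features(roi, wavelet="db2", level=3):
    # Decompose the ROI and use the (normalised) energy of each subband as a feature.
    coeffs = pywt.wavedec2(np.asarray(roi, dtype=float), wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    energies = np.array([np.sum(b ** 2) for b in bands])
    return energies / energies.sum()

def match(query_roi, gallery):
    """gallery: dict label -> ROI image; returns the label with the closest feature vector."""
    q = wavelet_energy_features(query_roi)
    return min(gallery, key=lambda lbl: np.linalg.norm(q - wavelet_energy_features(gallery[lbl])))

# Tiny deterministic example: a smooth ramp versus a high-frequency checker pattern.
x, y = np.meshgrid(np.arange(64), np.arange(64))
gallery = {"smooth": (x + y) / 128.0, "textured": ((x + y) % 2).astype(float)}
print(match(gallery["textured"], gallery))   # the textured query matches itself
```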
This paper discusses the detection and matching of arbitrary image features or patterns. The common characteristics of feature extraction and matching are summarized, showing that they can be considered special cases of a more general problem: signal detection. However, existing signal detection theories do not solve feature extraction and matching problems readily. Therefore, a general formulation of feature extraction and matching as a problem of signal detection is presented. This formulation unifies feature extraction and matching in a more general framework so that the two can be better integrated to form an automatic system for image matching or object recognition. Following this formulation, guidelines are derived for designing detection or matching algorithms for arbitrary image features or patterns that can be easily implemented or reconfigured for many practical applications. Sample algorithms resulting from this formulation, together with experimental results on real image data, are provided to demonstrate the performance and robustness of the methods.
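As one concrete instance of matching viewed as signal detection (a generic baseline, not the algorithms derived in the paper), a pattern can be treated as a known signal, slid over the image, and declared present wherever a normalized cross-correlation score exceeds a threshold:

```python
import numpy as np

def normalized_cross_correlation(image, template):
    # Matched-filter-style detector: correlate a z-scored template with each
    # z-scored image window; scores lie in [-1, 1].
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    ih, iw = image.shape
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            w = image[y:y + th, x:x + tw]
            wstd = w.std()
            if wstd > 0:
                scores[y, x] = np.mean(((w - w.mean()) / wstd) * t)
    return scores

def detect(image, template, threshold=0.9):
    """Return the window origins where the pattern is declared present."""
    scores = normalized_cross_correlation(image, template)
    return list(zip(*np.where(scores >= threshold)))
```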
Simple inductive learning algorithms assume that all attribute values are available. Quinlan's well-known paper [1] discusses quite a few routines for the processing of unknown attribute values in the TDIDT family and analyzes seven of them.
This paper introduces five routines for the processing of unknown attribute values that have been designed for the CN4 learning algorithm, a large extension of the well-known CN2. Both algorithms CN2 and CN4 induce lists of decision rules from examples applying the covering paradigm.
CN2 offers two ways of processing unknown attribute values. CN4's five routines differ in the way complexes are matched against examples (objects) that involve unknown attribute values. The definition of matching is discussed in detail in the paper, and the strategy of unknown-value processing is described for both the learning and classification phases of the individual routines.
The results of experiments with various percentages of unknown attribute values on real-world (mostly medical) data are presented, and the performances of all five routines are compared.
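For illustration, the sketch below shows how a rule "complex" (a conjunction of attribute tests) can be matched against an example containing unknown values under a few simple policies; these stand-in policies are assumptions for exposition and are not the five CN4 routines evaluated in the paper.

```python
UNKNOWN = None

def match_complex(complex_, example, unknown_policy="ignore"):
    """complex_: dict attribute -> required value; example: dict attribute -> value.

    unknown_policy:
      "ignore"     - an unknown value is treated as satisfying the test (optimistic)
      "mismatch"   - an unknown value fails the test (pessimistic)
      "fractional" - an unknown test contributes a partial degree of match
    """
    degree = 1.0
    for attr, required in complex_.items():
        value = example.get(attr, UNKNOWN)
        if value is UNKNOWN:
            if unknown_policy == "mismatch":
                return 0.0
            if unknown_policy == "fractional":
                degree *= 0.5   # assumed prior; a real system would use observed value frequencies
        elif value != required:
            return 0.0
    return degree

rule = {"fever": "high", "cough": "yes"}
patient = {"fever": "high", "cough": UNKNOWN}
print(match_complex(rule, patient, "ignore"),
      match_complex(rule, patient, "mismatch"),
      match_complex(rule, patient, "fractional"))
```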
Medical imaging is a powerful means of accessing the dynamic function of 3D deformable organs of the body. Due to the flow of incoming data, multiresolution methods on parallel computers are the only way to achieve complex processing in reasonable time. We present an active pyramid to model dynamic volumes. The pyramid is built on the first volume of a sequence and contains a binary model of the objects of interest; prior knowledge is introduced within this binary model. The structure of the pyramid is rigid, but its main interest is that its components are deformed to fit the data using energy constraints. A multiresolution algorithm based on self-organizing maps is then applied to deform the model through time. This algorithm matches the different levels of the pyramid in a coarse-to-fine approach, and the output of the matching process is the deformation field modeling the transformations. The pyramid is applied to real data in the results section. Its rigid structure is suitable for massively parallel architectures.
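A minimal sketch of the coarse-to-fine schedule only, assuming a rigid translation and a plain averaging pyramid (both illustrative simplifications): the estimate found at the coarsest level is propagated and refined level by level. The active pyramid's binary model, energy constraints and self-organizing-map matching are not reproduced here.

```python
import numpy as np

def downsample(img):
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2])

def pyramid(img, levels=3):
    out = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out[::-1]                                      # coarsest level first

def coarse_to_fine_shift(fixed, moving, levels=3, radius=3):
    shift = np.zeros(2, dtype=int)
    for f, m in zip(pyramid(fixed, levels), pyramid(moving, levels)):
        shift *= 2                                        # propagate the estimate to the finer level
        best = None
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = shift + (dy, dx)
                err = np.mean((f - np.roll(m, tuple(cand), axis=(0, 1))) ** 2)
                if best is None or err < best[0]:
                    best = (err, cand)
        shift = best[1]
    return shift

img = np.random.rand(64, 64)
print(coarse_to_fine_shift(img, np.roll(img, (-5, 3), axis=(0, 1))))   # should print [ 5 -3]
```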
This paper presents a new approach to the estimation of motion trajectories from image sequences. The approach is similar both to "token tracking" feature-based methods and to "block-based" region-matching approaches. Initially, a parametric motion trajectory model is formulated, and the motion trajectory estimation problem is cast into a Markov Random Field (MRF) framework that allows a priori motion constraints to be expressed. A three-stage optimization algorithm is presented, and a recursive estimation approach is developed. A method for the robust detection of areas of occlusion is presented, as are comparative results showing the relative success of this motion estimation approach. Although robustly computed motion trajectories show great promise in many motion-related areas of computer vision, this paper illustrates their use in the context of motion-compensated prediction as part of a video compression algorithm.