We deal with a very general problem: a given graph G is to be "packed" into a host graph H, and we consider natural optimization questions concerning this packing. The problem has not been investigated before in this general form. The input is a simple graph G = (V, E) with lower and upper bounds on its edges and weights on its vertices. The vertices correspond to items that have to be packed into the vertices (bins) of a host graph, such that each host vertex can accommodate a total weight of at most L, and if two items are adjacent in G, then the distance between their host vertices in H must lie between the lower and upper bounds of the edge joining the two items. Special cases include bin packing with conflicts, the chromatic number, and many more. We give some general structural results, treat some special cases, and investigate the performance guarantees of polynomial-time algorithms in both the offline and the online setting.
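As a rough illustration of these constraints (hypothetical data structures of my own, not from the paper), the sketch below checks whether a given assignment of items to host vertices respects the capacity L and the per-edge distance bounds, using networkx shortest-path distances in H.

```python
import networkx as nx

def packing_is_feasible(G, H, weight, bounds, assignment, L):
    """Check a candidate packing of item graph G into host graph H.

    weight[v]      -- weight of item v
    bounds[(u, v)] -- (lower, upper) distance bounds on edge uv of G
    assignment[v]  -- host vertex of H onto which item v is placed
    L              -- capacity of every host vertex (bin)
    """
    # Capacity: the total weight placed on each host vertex is at most L.
    load = {}
    for v in G.nodes:
        b = assignment[v]
        load[b] = load.get(b, 0) + weight[v]
    if any(w > L for w in load.values()):
        return False

    # Distance constraints: adjacent items must be hosted within the bounds.
    dist = dict(nx.all_pairs_shortest_path_length(H))
    for u, v in G.edges:
        lo, hi = bounds[(u, v)] if (u, v) in bounds else bounds[(v, u)]
        if not (lo <= dist[assignment[u]][assignment[v]] <= hi):
            return False
    return True
```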
Almost all decision questions concerning the resource requirements of a computational device are undecidable. Here we want to understand the exact boundary separating the undecidable from the decidable cases of such problems by considering the time complexity of very simple devices, including one-way and two-way NFAs, PDAs, and PDAs augmented with counters, as well as their unambiguous restrictions. We consider several variations, based on whether the time bound holds exactly or as an upper bound, and show decidability as well as undecidability results. For the decidable problems, we also attempt to determine more precisely the complexity class to which each problem belongs.
Although NP-complete problems are the most difficult decision problems, it is possible to identify polynomial (or easy) observables within them. We study the Graph Partitioning Problem and show that it admits two correlated polynomial observables. The particular behavior of one of them with respect to the connectivity of the graph suggests the presence of a phase transition in partitionability.
The fractality of complex networks is studied by estimating the correlation dimensions of the networks. Compared with previous algorithms that estimate the box dimension, our algorithm achieves a significant reduction in time complexity. Experiments on four benchmark cases, namely the Escherichia coli (E. coli) metabolic network, the Homo sapiens protein interaction network (H. sapiens PIN), the Saccharomyces cerevisiae protein interaction network (S. cerevisiae PIN), and the World Wide Web (WWW), demonstrate the validity of our algorithm.
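For context, a standard way to estimate a network's correlation dimension (a simplified sketch assuming a connected, unweighted graph; not the authors' algorithm) is to compute the correlation sum C(r), the fraction of node pairs within shortest-path distance r, and fit the slope of log C(r) against log r:

```python
import math
import networkx as nx

def correlation_dimension(G, r_values):
    """Estimate D from the scaling C(r) ~ r^D, where C(r) is the fraction
    of node pairs whose shortest-path distance is at most r."""
    n = G.number_of_nodes()
    ordered_pairs = float(n * (n - 1))
    counts = {r: 0 for r in r_values}
    for _, dists in nx.all_pairs_shortest_path_length(G):
        for d in dists.values():
            if d == 0:
                continue
            for r in r_values:
                if d <= r:
                    counts[r] += 1            # counts ordered pairs
    # Least-squares slope of log C(r) versus log r.
    xs = [math.log(r) for r in r_values]
    ys = [math.log(counts[r] / ordered_pairs) for r in r_values]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Toy usage on a synthetic graph; a real study would load one of the networks above.
print(correlation_dimension(nx.erdos_renyi_graph(300, 0.03, seed=1), [1, 2, 3]))
```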
We consider the problem of establishing gravity in cellular automata. In particular, when cellular automaton states can be partitioned into empty, particle, and wall types, with the latter enclosing rectangular areas, we seek rules that make the particles fall down and pile up on the bottom of each such area. The rules should be both simple and time-efficient. We propose a block rule and prove that it piles up the particles on a grid of height h in time at most 3h.
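As a toy illustration only (a simple bottom-up sweep, not the block rule proposed in the paper), the following step lets every particle fall while the cell below it is empty; iterating it on a height-h grid settles the particles onto the floor of the enclosed area.

```python
EMPTY, PARTICLE, WALL = '.', 'o', '#'

def gravity_step(grid):
    """One sweep: a particle falls one cell if the cell below it is empty.
    Scanning bottom-up lets a whole column settle within a few sweeps."""
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(h - 2, -1, -1):
        for c in range(w):
            if grid[r][c] == PARTICLE and new[r + 1][c] == EMPTY:
                new[r + 1][c] = PARTICLE
                new[r][c] = EMPTY
    return new

grid = [list(row) for row in ["#....#",
                              "#.o.o#",
                              "#o...#",
                              "######"]]
for _ in range(len(grid)):
    grid = gravity_step(grid)
print("\n".join("".join(row) for row in grid))
```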
Given a mathematical expression in LaTeX or MathML format, a retrieval algorithm extracts similar expressions from a database. In our previous work, we used the Longest Common Subsequence (LCS) algorithm to match two expressions of lengths m and n, which takes O(mn) time. If there are T database expressions, the total complexity is O(Tmn), and an increase in T increases this complexity further. In the present work, we propose to use a parallel LCS algorithm in our retrieval process. Parallel LCS has O(max(m,n)) time complexity with max(m,n) processors, and the total complexity can be reduced to O(T·max(m,n)). For our experiments, an OpenMP-based implementation was used on an Intel i3 processor with 4 cores. However, for smaller expressions, the parallel version takes more time, as the implementation overhead dominates the algorithmic improvement. We therefore propose to use the parallel version selectively, only on larger expressions, in our retrieval algorithm to achieve better performance. We compare the sequential and parallel versions of our ME retrieval algorithm and report performance results on a database of 829 mathematical expressions.
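The usual basis for parallelizing LCS (a sequential sketch of the wavefront scheme; the actual OpenMP implementation may organize the work differently) is that all DP cells on one anti-diagonal i + j = k depend only on the two previous anti-diagonals, so the up to max(m, n) cells of a diagonal can be filled concurrently:

```python
def lcs_length_wavefront(a, b):
    """LCS length computed anti-diagonal by anti-diagonal.

    Cells on the same anti-diagonal are mutually independent, so in a
    parallel (e.g. OpenMP) version the inner loop can be distributed
    across processors, giving the O(max(m, n)) parallel time."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for k in range(2, m + n + 1):                    # anti-diagonal i + j = k
        for i in range(max(1, k - n), min(m, k - 1) + 1):
            j = k - i
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length_wavefront("x+y^2", "x+2y^2"))       # 5
```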
This paper deals with the problem of finding the K smallest elements out of a totally ordered but non-sorted set of N elements. This problem, called K-selection, arises often in statistics, image processing and distributed computing. We propose two algorithms to solve this problem in hypercubes. Our first algorithm is asymptotically optimal when K = O((log N)^β), for any constant β. The second enlarges the range of optimality to K = N^ε, ε < 1, using a recursive strategy. These are major improvements on previous results.
An efficient Hough transform algorithm on a reconfigurable mesh is proposed in this paper. For a problem with N edge pixels and an n×n parameter space, our algorithm runs in constant time on a 4-dimensional N×log2N×n×n reconfigurable mesh. The previous best algorithm for the same problem runs in constant time on a 4-dimensional n×N×N×N reconfigurable mesh. Since n is always smaller than N in the real world (in fact, n is on the order of N^(1/2)), our algorithm drastically reduces the number of processors used while keeping the same time complexity.
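For reference, the underlying computation (a sequential sketch of the standard line Hough transform, not the reconfigurable-mesh algorithm) accumulates one vote per edge pixel and angle into an n×n (θ, ρ) parameter space:

```python
import math

def hough_lines(edge_pixels, n, max_rho):
    """Accumulate votes in an n x n (theta, rho) parameter space."""
    acc = [[0] * n for _ in range(n)]
    for x, y in edge_pixels:
        for t in range(n):
            theta = math.pi * t / n
            rho = x * math.cos(theta) + y * math.sin(theta)
            # Map rho from [-max_rho, max_rho] to a bin index in [0, n).
            r = int((rho + max_rho) * (n - 1) / (2 * max_rho))
            acc[t][r] += 1
    return acc

# Pixels on the line y = x vote most heavily near theta = 135 degrees, rho = 0.
votes = hough_lines([(i, i) for i in range(16)], n=32, max_rho=32.0)
```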
This paper is concerned with a new variant of the inverse 1-median location problem in which the aim is to modify the customer weights such that a predetermined facility location becomes a 1-median location and the total profit obtained via the weight improvements is maximized. We develop novel combinatorial approaches with linear time complexities for solving the problem on tree networks and in the plane under the rectilinear and Chebyshev norms.
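As background (a sketch of the classical optimality test for trees, given here as my own illustration rather than the paper's method), a vertex is a 1-median of a tree exactly when no branch hanging off it carries more than half of the total vertex weight; an inverse algorithm has to adjust the weights until this condition holds at the prescribed facility.

```python
import networkx as nx

def is_1_median(T, weight, v):
    """Check whether vertex v is a 1-median of tree T under vertex weights:
    every component of T - v may carry at most half of the total weight."""
    total = sum(weight.values())
    rest = T.copy()
    rest.remove_node(v)
    for u in T.neighbors(v):
        branch = nx.node_connected_component(rest, u)
        if sum(weight[x] for x in branch) > total / 2.0:
            return False
    return True
```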
This paper deals with the inverse eccentric vertex location problem on extended star networks in which the aim is to modify the edge lengths at the minimum overall cost within certain modification bounds so that a predetermined vertex becomes the farthest vertex from a given fixed vertex under the new edge lengths. Novel combinatorial algorithms with near-linear time complexities are developed for obtaining the optimal solution of the problem under different cost norms.
In this paper, a parallel decoder and a word group prediction module are proposed to speed up decoding and improve caption quality. The features of the image extracted by the encoder are linearly projected to different word groups, and a unique relaxed mask matrix is designed to improve both the decoding speed and the caption quality. First, since a caption is composed of many words, sentences can be broken down into word groups or words according to their syntactic structure, which we achieve through constituency parsing. Second, we make full use of the extracted features to predict the size of the word groups. Then, a new embedding representing word information is proposed on top of the word embedding. Finally, with the help of word groups, we design a mask matrix that modifies the decoding process so that each step of the model can produce one or more words in parallel. Experiments on public datasets demonstrate that our method reduces the time complexity while maintaining competitive performance.
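The role of the mask can be pictured roughly as follows (an illustrative group-level causal mask of my own construction, not the relaxed matrix defined in the paper): if every position is allowed to attend to all positions in strictly earlier word groups, then all positions of one group can be decoded in a single step.

```python
import numpy as np

def group_causal_mask(group_sizes):
    """Build M where M[i, j] = 1 iff position i may attend to position j.

    Positions inside one word group see the same context (all earlier
    groups), so the whole group can be produced in parallel."""
    groups = [g for g, size in enumerate(group_sizes) for _ in range(size)]
    L = len(groups)
    mask = np.zeros((L, L), dtype=int)
    for i in range(L):
        for j in range(L):
            if groups[j] < groups[i]:
                mask[i, j] = 1
    return mask

print(group_causal_mask([1, 2, 3]))   # predicted group sizes 1, 2 and 3
```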
Although dynamic programming (DP) is an optimization approach used to solve complex problems quickly, the time it requires can still be substantial and grows polynomially with the size of the input. In this contribution, we improve the computation time of DP-based algorithms by proposing a novel technique called Segmented Dynamic Programming (SDP). SDP finds the best way of splitting the compared sequences into segments and then applies the dynamic programming algorithm to each segment individually, which reduces the computation time dramatically. SDP may be applied to any DP-based algorithm to improve its computation time. As case studies, we apply the SDP technique to two different DP-based algorithms: Needleman–Wunsch (NW), the widely used program for optimal sequence alignment, and the LCS algorithm, which finds the Longest Common Subsequence between two input strings. The results show that applying the SDP technique in conjunction with the DP-based algorithms improves the computation time by up to 80% compared to the plain DP algorithms, with only a small or negligible degradation in the quality of the results. This degradation is controllable via the number of segments, which is an input parameter. We also compare our results with the well-known heuristic sequence alignment program GGSEARCH from the FASTA package, and show that our results are much closer to the optimal results than those of GGSEARCH. The results hold independently of the sequence lengths and their level of similarity. To demonstrate the functionality of our technique in hardware and to verify the results, we implemented it on a Xilinx Zynq-7000 FPGA.
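The core idea can be illustrated with a deliberately simplified variant (equal-length segments and plain LCS per segment; the actual SDP searches for the best split points): both strings are cut into k aligned segments, the DP is run on each segment pair, and the results are summed, trading a small loss in optimality for much smaller DP tables.

```python
def lcs_length(a, b):
    """Standard O(len(a) * len(b)) dynamic-programming LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def segmented_lcs(a, b, k):
    """Approximate LCS: cut both strings into k segments and sum the
    per-segment LCS lengths (a simplified stand-in for SDP's splitting)."""
    total = 0
    for s in range(k):
        sa = a[s * len(a) // k:(s + 1) * len(a) // k]
        sb = b[s * len(b) // k:(s + 1) * len(b) // k]
        total += lcs_length(sa, sb)
    return total

a, b = "GATTACAGATTACA", "GCATGCAGATTACA"
print(lcs_length(a, b), segmented_lcs(a, b, k=2))   # exact vs. segmented estimate
```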
To calculate the Minkowski-sum based similarity measure of two convex polyhedra, many relative orientations have to be considered. These relative orientations are characterized by the fact that certain faces and edges of the polyhedra are parallel. For every relative orientation of the polyhedra, the volume or mixed volume of their Minkowski sum is evaluated, and the similarity measure is calculated from the minimum of this volume. In this article two issues are addressed. First, we propose and test a method to reduce the set of relative orientations to be considered by using geometric inequalities in the slope diagrams of the polyhedra. In this way, the time complexity of O(n^6) is reduced to O(n^4.5). Second, we determine which relative orientation problems are ill-posed and may be skipped because they do not maximize the similarity measure.
We show that the Voronoi diagram of a finite sequence of points in the plane which gives the sorted order of the points with respect to two perpendicular directions can be computed in linear time. In contrast, we observe that computing the Voronoi diagram of a finite sequence of points in the plane which gives the sorted order of the points with respect to a single direction requires Ω(n log n) operations in the algebraic decision tree model. As a corollary of the first result, we show that the bounded Voronoi diagrams of simple n-vertex polygons which can be efficiently cut into so-called monotone histograms can be computed in o(n log n) time.
If u and v are two conjugate elements of a hyperbolic group, then the length of a shortest conjugating element for u and v can be bounded by a linear function of the sum of their lengths, as was proved by Lysenok in [Some algorithmic properties of hyperbolic groups, Izv. Akad. Nauk SSSR Ser. Mat. 53(4) (1989) 814–832, 912]. Bridson and Haefliger showed in [Metric Spaces of Non-Positive Curvature (Springer-Verlag, Berlin, 1999)] that in a hyperbolic group the conjugacy problem can be solved in polynomial time. We extend these results to relatively hyperbolic groups. In particular, we show that both the conjugacy problem and the conjugacy search problem can be solved in polynomial time in a relatively hyperbolic group, whenever the corresponding problem can be solved in polynomial time in each parabolic subgroup. We also prove that if u and v are two conjugate hyperbolic elements of a relatively hyperbolic group, then the length of a shortest conjugating element for u and v is linear in terms of their lengths.
Fuzzy graph theory is one of the most rapidly developing areas of research, with a variety of applications in different fields, including computer science, communication networks, biological sciences, social networks, decision making and optimization problems. A wide variety of human decision making is based on double-sided, or bipolar, judgmental thinking with a positive side and a negative side. In this research article, we present some interesting applications of bipolar fuzzy competition graphs in business marketing, social networks and wireless communication networks. We design and implement algorithms for computing the strength of competition in bipolar fuzzy graphs. We also compute the time complexity of each algorithm.
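To make the objects concrete (a minimal sketch using the usual bipolar fuzzy convention of a positive membership in [0, 1] and a negative membership in [-1, 0], with an illustrative strength formula rather than the one from the paper), a bipolar fuzzy digraph can be stored as weighted arcs, and two vertices compete whenever they share an out-neighbor:

```python
# Bipolar fuzzy digraph: arc (u, v) -> (positive degree, negative degree).
arcs = {
    ("firmA", "market1"): (0.8, -0.1),
    ("firmB", "market1"): (0.6, -0.3),
    ("firmB", "market2"): (0.9, -0.2),
}

def competitions(arcs):
    """Vertex pairs that share an out-neighbor, with an illustrative
    strength: min of the positive degrees, max of the negative degrees."""
    out = {}
    for (u, v), deg in arcs.items():
        out.setdefault(u, {})[v] = deg
    result = {}
    nodes = sorted(out)
    for i, u in enumerate(nodes):
        for w in nodes[i + 1:]:
            for v in set(out[u]) & set(out[w]):
                result[(u, w, v)] = (min(out[u][v][0], out[w][v][0]),
                                     max(out[u][v][1], out[w][v][1]))
    return result

print(competitions(arcs))   # firmA and firmB compete over market1
```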
A multi-locus likelihood of a genetic map is computed based on a mathematical model of chromatid exchange in meiosis that accounts for any type of bivalent configuration in a genetic interval, in any specified order of genetic markers. The computational problem is to calculate the likelihood (L) and maximize L by choosing an ordering of the genetic markers on the map and the recombination distances between markers. This maximum likelihood estimate (MLE) can be found either with a straightforward algorithm or with the proposed recursive linking algorithm, which implements the likelihood computation via an iterative procedure called Expectation Maximization (EM). The time complexity of the straightforward algorithm is exponential in the number of genetic markers, and implementing the model with a straightforward algorithm for more than seven genetic markers is not feasible, which motivates the critical importance of the proposed recursive linking algorithm. The recursive linking algorithm decomposes the pool of genetic markers into segments and renders the model implementable for hundreds of genetic markers. The recursive algorithm is shown to reduce the order of time complexity from exponential to linear in the number of markers. The improvement in time complexity is shown theoretically by a worst-case analysis of the algorithm and is supported by run-time results using data on linkage group II of the fungal genome of Neurospora crassa.
To reduce the time complexity of quantum morphology operations, two kinds of improved quantum dilation and erosion operations are proposed. Quantum parallelism is exploited in the design of these operations, and consequently the time complexity is greatly reduced compared with the previous quantum dilation and erosion operations. The neighborhood information of each pixel is needed when designing quantum dilation and erosion operations. To obtain this neighborhood information, a quantum position shifting transformation is utilized, which stores the neighborhood information in a quantum image set. In this image set, the neighborhood information of the pixel at location (j, i) is stored at the same location (j, i) of the other images in the set. All pixels are then processed simultaneously, which is the manifestation of quantum parallelism. The time complexity analysis shows that these quantum operations have polynomial-time complexity, which is much lower than the exponential-time complexity of the previous version.
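For reference, the classical (non-quantum) operations that these circuits accelerate are plain binary dilation and erosion over a pixel neighborhood; a minimal sketch with a 3×3 structuring element and the same (j, i) indexing follows.

```python
def dilate(img):
    """Binary dilation: each output pixel is the maximum over its 3x3 window."""
    h, w = len(img), len(img[0])
    return [[max(img[y][x]
                 for y in range(max(0, j - 1), min(h, j + 2))
                 for x in range(max(0, i - 1), min(w, i + 2)))
             for i in range(w)] for j in range(h)]

def erode(img):
    """Binary erosion: each output pixel is the minimum over its 3x3 window."""
    h, w = len(img), len(img[0])
    return [[min(img[y][x]
                 for y in range(max(0, j - 1), min(h, j + 2))
                 for x in range(max(0, i - 1), min(w, i + 2)))
             for i in range(w)] for j in range(h)]
```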
In daily life we face uncertainties in making the right decisions. In this study, we address two different decision-making problems in the medical field. The first problem is fever diagnosis and the second is mouth cancer risk analysis. In the first problem, we use fuzzy soft similarity measures and fuzzy soft matrix operations to diagnose the type of fever. We consider a hypothetical case study and apply the similarity measures to it. Our work diagnoses different patients who have similar symptoms. We also develop a small application using Java. In the second problem, we perform a risk analysis of mouth cancer. The proposed fuzzy soft expert system takes two biochemical parameters as inputs, namely serum total malondialdehyde (MDA) and serum proton donor capacity (donors_protons), and determines the risk of mouth cancer. Our study helps doctors by diagnosing mouth cancer at its earlier stages. There are four main components of our fuzzy soft expert system. The first component, fuzzification, converts crisp inputs into linguistic variables and formulates fuzzy sets. The second component transforms the fuzzy sets into their respective fuzzy soft sets. The third component determines the indispensable parameters and performs parameter reduction. The fourth component performs the risk analysis using the proposed algorithm. We use an exemplary dataset and run all components of the fuzzy soft expert system to compute the cancer risk.
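As an illustration of the first component only (hypothetical linguistic terms and breakpoints, not the clinical ranges used in the study), a crisp serum MDA reading can be fuzzified with triangular membership functions:

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_mda(value):
    """Map a crisp MDA value to degrees of hypothetical linguistic terms."""
    return {
        "low":    triangular(value, 0.0, 1.0, 2.5),
        "normal": triangular(value, 1.5, 3.0, 4.5),
        "high":   triangular(value, 3.5, 5.0, 8.0),
    }

print(fuzzify_mda(3.8))   # partly "normal", slightly "high"
```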
This paper determines the risk of cardiovascular diseases (CVDs) and the nutrition level in infants aged 0–6 months using Fuzzy Cognitive Maps (FCMs). The aim of this study is to help medical experts detect these diseases early and accurately, so that the overall death rate can be reduced. First, we introduce the concepts of FCMs and briefly review applications of these methods in medicine. Then, two intelligent decision support systems, for cardiovascular disease and for malnutrition, are developed using FCMs. The proposed cardiovascular risk assessment system takes six inputs, namely chest pain, cholesterol, heart rate, blood pressure, blood sugar, and old peak, and determines the CVD risk. The second decision support system, for malnutrition diagnosis, takes thirteen inputs, namely breastfeeding, daily income, maternal education, colostrum intake, energy intake, protein intake, vitamin A intake, iron intake, family size, height, weight, head circumference, and skin fold thickness, and diagnoses the nutrition level in infants. We explain the working of both decision support systems using case studies.
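The underlying FCM inference is a simple fixed-point iteration; the sketch below uses the common sigmoid update with hypothetical concepts and causal weights (not the weight matrices of the proposed systems):

```python
import math

def fcm_step(state, weights, lam=1.0):
    """One FCM update: A_i <- sigmoid(A_i + sum_j w[(j, i)] * A_j)."""
    new = {}
    for i in state:
        total = state[i] + sum(weights.get((j, i), 0.0) * state[j]
                               for j in state if j != i)
        new[i] = 1.0 / (1.0 + math.exp(-lam * total))
    return new

# Hypothetical concepts and weights, for illustration only.
state = {"chest_pain": 0.8, "blood_pressure": 0.6, "cvd_risk": 0.0}
weights = {("chest_pain", "cvd_risk"): 0.7, ("blood_pressure", "cvd_risk"): 0.5}
for _ in range(10):                       # iterate until activations stabilize
    state = fcm_step(state, weights)
print(round(state["cvd_risk"], 3))
```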