We consider a generalization of the min-cut partitioning problem in which a graph G=(V,E) is partitioned into two sets V1 and V2 such that |V1∩V2|≤d, d<|V|, and |{(u, v)|u∈V1−V2, v∈V2−V1}| is minimized. The problem is trivially solvable using flow techniques for any fixed d, but we show that it is NP-hard when d is part of the input. It remains NP-hard if we impose restrictions on the size of V1, i.e., |V1|=k, k∈Z+. The latter variant applies in VLSI layout and hypertext partitioning. We present polynomial-time algorithms for the special cases of solid grids and series-parallel graphs. Series-parallel graphs find applications in hypertext partitioning, whereas grid graphs model the mapping of a class of partial differential equation computations onto parallel machines.
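As an illustrative sketch (function name and example graph are my own, not from the paper), the objective above can be evaluated directly from its definition: only edges running between the exclusive parts V1−V2 and V2−V1 are charged, so edges touching the shared set are free.

```python
def overlap_cut_cost(edges, v1, v2):
    """Cost of an overlapping partition (V1, V2): count edges with one
    endpoint in V1 - V2 and the other in V2 - V1; edges incident to the
    shared set V1 & V2 are not charged."""
    only1, only2 = v1 - v2, v2 - v1
    return sum(1 for u, v in edges
               if (u in only1 and v in only2) or (u in only2 and v in only1))

# Tiny example: a path a-b-c-d. Sharing b removes the cut edge from the cost.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(overlap_cut_cost(edges, {"a", "b"}, {"c", "d"}))       # -> 1 (edge (b, c))
print(overlap_cut_cost(edges, {"a", "b"}, {"b", "c", "d"}))  # -> 0 (b is shared)
```

This makes the role of d concrete: allowing a larger overlap can only lower the optimal cost, at the price of duplicated elements.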
Reconfigurable communication networks for massively parallel multiprocessor systems make it possible to meet a number of application demands, such as special communication patterns or real-time requirements. This paper presents the design principle of a reconfigurable network that can realize any graph of maximum degree four. The architecture is based on a special multistage Clos network, constructed from a number of static routing switches of equal size. Upper bounds on the cut size of 4-regular graphs split into a number of clusters allow the number of switches and connections to be minimized while still offering the desired reconfiguration capabilities as well as large scalability and flexible multi-user access. Efficient algorithms for configuring the architecture are based on a classical result by Petersen [27] on the decomposition of regular graphs.
The concept presented here is the basis for the Parsytec SC series of reconfigurable MPP systems. The largest realization to date, with 320 processors, is presented in greater detail.
A Monte Carlo algorithm for partitioning graphs is presented. The algorithm is based on the self-organizing map, an unsupervised, competitive neural network. A mean-field analysis suggests that the complexity of the algorithm is at most of order O(n³/|E|), where n is the number of vertices of the graph and |E| the number of edges. This prediction is tested on a class of random graphs. Scaling laws that deviate from the mean-field prediction are obtained. Although the origin of these scaling laws is unclear, their consequences are discussed.
Collins, Dolinar, McEliece and Pollara recently gave a method for designing VLSI “building block” chips, based on structured subgraphs of the deBruijn graph, that can be wired together to construct arbitrarily large deBruijn graphs. In such a setting, it is clearly preferable to have as many deBruijn graph edges as possible residing on the individual chips, as opposed to being implemented as connections between chips. In this paper, we show that their method can be used to design building blocks that asymptotically require the smallest possible fraction of the deBruijn graph wires to be implemented as inter-chip connections, thus keeping the largest possible fraction of the wires internal to the chips.
Many real-life problems and application data can be represented as graphs. As technology advances at a rapid pace, applications generate vast amounts of valuable data, and the graphs that represent them grow accordingly. Extracting meaningful information from these data has become a hot research topic, and methodical algorithms are required to do so. These graphs are not arbitrary: they exhibit relationships between their basic entities, and identifying communities based on these relationships improves the understanding of the applications the graphs represent. Community detection algorithms address this by dividing a graph into small clusters whose nodes are densely connected within a cluster and sparsely connected across clusters. Over the last decade, many such algorithms have been proposed; they fall into two broad categories, non-overlapping and overlapping community detection. The goal of this paper is to offer a comparative analysis of the various community detection algorithms. We bring together the state-of-the-art community detection algorithms of both classes in a single article, together with their accessible benchmark data sets. Finally, we compare these algorithms with respect to two parameters: time efficiency and how accurately the communities are detected.
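The quality measure most community detection algorithms optimize or report is Newman modularity, Q = Σ_c [e_c/m − (d_c/2m)²], where e_c is the number of edges inside community c, d_c the total degree of c, and m the edge count. A minimal sketch (function and example graph are illustrative, not taken from any surveyed implementation):

```python
def modularity(edges, comm):
    """Newman modularity Q = sum_c [ e_c/m - (d_c / 2m)^2 ] for an
    undirected, unweighted graph; comm maps node -> community label."""
    m = len(edges)
    internal, degree = {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if comm[u] == comm[v]:
            internal[comm[u]] = internal.get(comm[u], 0) + 1
    labels = set(comm.values())
    total_deg = {c: sum(d for n, d in degree.items() if comm[n] == c) for c in labels}
    return sum(internal.get(c, 0) / m - (total_deg[c] / (2 * m)) ** 2 for c in labels)

# Two triangles joined by a single edge: a textbook two-community graph.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, comm), 3))  # -> 0.357
```

Dense intra-cluster and sparse inter-cluster connectivity, as described above, is exactly what pushes Q toward its maximum.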
We combine the techniques of almost invariant sets (using tree structured box elimination and graph partitioning algorithms) with invariant manifold and lobe dynamics techniques. The result is a new computational technique for computing key dynamical features, including almost invariant sets, resonance regions as well as transport rates and bottlenecks between regions in dynamical systems. This methodology can be applied to a variety of multibody problems, including those in molecular modeling, chemical reaction rates and dynamical astronomy. In this paper we focus on problems in dynamical astronomy to illustrate the power of the combination of these different numerical tools and their applicability. In particular, we compute transport rates between two resonance regions for the three-body system consisting of the Sun, Jupiter and a third body (such as an asteroid). These resonance regions are appropriate for certain comets and asteroids.
Much of current data mining research is focused on discovering sets of attributes that discriminate data entities into classes, such as shopping trends for a particular demographic group. In contrast, we are working to develop data mining techniques to discover patterns consisting of complex relationships between entities. Our research is particularly applicable to domains in which the data is event-driven or relationally structured. In this paper we present approaches to address two related challenges: the need to assimilate incremental data updates and the need to mine monolithic datasets. Many realistic problems are continuous in nature and therefore require a data mining approach that can evolve discovered knowledge over time. Similarly, many problems present data sets that are too large to fit into dynamic memory on conventional computer systems. We address incremental data mining by introducing a mechanism for summarizing discoveries from previous data increments so that the globally best patterns can be computed by mining only the new data increment. To address monolithic datasets, we introduce a technique by which these datasets can be partitioned and mined serially with minimal impact on the result quality. We present applications of our work in both the counter-terrorism and bioinformatics domains.
The problem of partitioning the elements of a graph G=(V, E) into two equal-size sets A and B that share at most d elements, such that the total number of edges (u, v), u∈A−B, v∈B−A, is minimized, arises in the areas of Hypermedia Organization, Network Integrity, and VLSI Layout. We formulate the problem in terms of element duplication, where each element c∈A∩B is replaced by two copies c′∈A and c″∈B. As a result, edges incident to c′ or c″ do not count toward the cost of the partition. We show that this partitioning problem is NP-hard in general, and we present a solution that utilizes an optimal polynomial-time algorithm for the special case where G is a series-parallel graph. We also discuss other special cases in which the partitioning problem, or variations of it, are polynomially solvable.
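The duplication formulation can be checked by exhaustive search on toy instances; the sketch below (names and examples are my own, and the search is exponential, unlike the paper's series-parallel algorithm) enumerates the duplicated set S = A∩B and every balanced split of the remaining elements.

```python
from itertools import combinations

def min_duplication_cut(nodes, edges, d):
    """Brute force for tiny graphs: pick the duplicated set S (|S| <= d),
    split the remaining nodes into two equal halves, and count only edges
    between A - B and B - A (edges incident to copies are free)."""
    best, n = None, len(nodes)
    for s in range(d + 1):
        if (n - s) % 2:
            continue                      # the exclusive parts must balance
        for dup in combinations(nodes, s):
            rest = [x for x in nodes if x not in dup]
            for half in combinations(rest, len(rest) // 2):
                a, b = set(half), set(rest) - set(half)
                cost = sum(1 for u, v in edges
                           if (u in a and v in b) or (u in b and v in a))
                if best is None or cost < best:
                    best = cost
    return best

path4 = [("a", "b"), ("b", "c"), ("c", "d")]
print(min_duplication_cut(["a", "b", "c", "d"], path4, 0))  # -> 1
path5 = path4 + [("d", "e")]
print(min_duplication_cut(["a", "b", "c", "d", "e"], path5, 1))  # -> 0 (duplicate c)
```

The second call shows the payoff of duplication: copying the middle vertex of a 5-path makes every edge incident to a copy, so the partition becomes free.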
Given an undirected graph, a balanced node separator is a set of nodes whose removal splits the graph into connected components of limited size. Balanced node separators are used for graph partitioning, for the construction of graph data structures, and for measuring network reliability. It is NP-hard to decide whether a graph has a balanced node separator of size at most k. Therefore, practical algorithms typically try to find small separators in a heuristic fashion. In this paper, we present a branching algorithm that for a given value k either outputs a balanced node separator of size at most k or certifies that k is a valid lower bound. Using this algorithm iteratively for growing values of k allows us to find a minimum balanced node separator. To make this algorithm scalable to real-world (road) networks of considerable size, we first describe pruning rules to reduce the graph size without affecting the minimum balanced separator size. In addition, we prove several structural properties of minimum balanced node separators which are then used to reduce the branching factor and search depth of our algorithm. We experimentally demonstrate the applicability of our algorithm to graphs with thousands of nodes and edges. We showcase the usefulness of having minimum balanced separators for judging the quality of existing heuristics, for improving preprocessing-based route planning techniques on road networks, and for lower bounding important graph parameters. Finally, we discuss how our ideas and methods can also be leveraged to accelerate the heuristic computation of balanced node separators.
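The iterate-over-growing-k scheme can be illustrated with a naive exhaustive stand-in for the paper's branching algorithm (names, the 2/3 balance bound, and the star example are my assumptions, not from the paper):

```python
from itertools import combinations

def components_after_removal(adj, removed):
    """Sizes of connected components once `removed` nodes are deleted."""
    seen, sizes = set(removed), []
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        sizes.append(size)
    return sizes

def min_balanced_separator(adj, max_frac=2 / 3):
    """Smallest node set S whose removal leaves every component with at
    most max_frac * n nodes; tries k = 0, 1, 2, ... until one exists,
    mirroring the iterative use of the branching algorithm."""
    bound = max_frac * len(adj)
    for k in range(len(adj) + 1):
        for sep in combinations(adj, k):
            if all(c <= bound for c in components_after_removal(adj, sep)):
                return set(sep)

# Star graph: the center is the unique minimum balanced separator.
star = {"c": ["1", "2", "3", "4"], "1": ["c"], "2": ["c"], "3": ["c"], "4": ["c"]}
print(min_balanced_separator(star))  # -> {'c'}
```

Because every k is exhausted before k+1 is tried, the first separator returned is minimum; the paper's pruning rules and structural properties exist precisely to tame the combinatorial blow-up this naive loop suffers.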
In this paper we investigate a new framework for graph partitioning that uses decision trees to search for sub-graphs within a graph adjacency matrix. Graph partitioning by a decision tree seeks to optimize a specified graph partitioning index, such as ratio cut, by recursively applying decision rules found within nodes of the graph. Key advantages of tree models for graph partitioning are that they provide a predictive framework for evaluating the quality of the solution, determining the number of sub-graphs, and assessing overall variable importance. We evaluate the performance of tree-based graph partitioning on a benchmark dataset for multiclass classification of tumor diagnosis based on gene expression. Three graph cut indices (ratio cut, normalized cut, and network modularity) are compared and assessed in terms of their classification accuracy, their power to estimate the optimal number of sub-graphs, and their ability to extract known important variables within the dataset.
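For reference, the first two indices have standard closed forms: RatioCut(A,B) = cut(A,B)/|A| + cut(A,B)/|B| and Ncut(A,B) = cut(A,B)/vol(A) + cut(A,B)/vol(B). A minimal sketch computing them from an adjacency matrix (function name and example are mine, not from the paper):

```python
def cut_indices(A, part):
    """Ratio cut and normalized cut of a 2-way split of an undirected
    adjacency matrix A; `part` is the set of row indices on one side."""
    n = len(A)
    other = set(range(n)) - part
    cut = sum(A[i][j] for i in part for j in other)
    deg = [sum(row) for row in A]
    vol_p, vol_o = sum(deg[i] for i in part), sum(deg[i] for i in other)
    ratio = cut / len(part) + cut / len(other)
    ncut = cut / vol_p + cut / vol_o
    return ratio, ncut

# 4-cycle 0-1-2-3-0, split into {0, 1} vs {2, 3}: cut = 2.
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
print(cut_indices(A, {0, 1}))  # -> (2.0, 1.0)
```

Both indices penalize unbalanced splits (the denominators), which is why a tree that greedily minimizes them tends to produce comparably sized sub-graphs.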
A class of multilevel algorithms for partitioning a sparse matrix prior to the parallel solution of a system of linear equations is described. This matrix partitioning problem can be cast as a graph partitioning problem, which is known to be NP-hard, so several heuristics for its solution have been proposed in past decades. For this purpose we use the multilevel algorithm proposed by B. Hendrickson and R. Leland [2] and further developed by G. Karypis and V. Kumar [3]. This algorithm is very efficient and tends to produce high-quality partitionings for a wide range of matrices arising in many practical applications.
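The heart of such multilevel schemes is the coarsening phase, typically heavy-edge matching followed by contraction. A hedged sketch of one coarsening level (a simplified rendition of the general technique, not the exact Hendrickson-Leland or Karypis-Kumar code; names are my own):

```python
def coarsen(edges, weights=None):
    """One level of heavy-edge-matching coarsening: match each unmatched
    vertex to an unmatched neighbor along its heaviest edge, then contract
    matched pairs, summing weights of parallel coarse edges.
    Returns (mapping node -> supernode, coarse edge weights)."""
    w = weights or {e: 1 for e in edges}
    adj = {}
    for (u, v), wt in w.items():
        adj.setdefault(u, []).append((wt, v))
        adj.setdefault(v, []).append((wt, u))
    matched, mate = set(), {}
    for u in adj:
        if u in matched:
            continue
        cand = [(wt, v) for wt, v in adj[u] if v not in matched and v != u]
        if cand:
            wt, v = max(cand)             # heaviest incident edge wins
            mate[u] = mate[v] = min(u, v)  # supernode named by smaller endpoint
            matched.update((u, v))
    for u in adj:
        mate.setdefault(u, u)             # unmatched vertices survive alone
    coarse = {}
    for (u, v), wt in w.items():
        a, b = mate[u], mate[v]
        if a != b:                        # contracted edges disappear
            key = (min(a, b), max(a, b))
            coarse[key] = coarse.get(key, 0) + wt
    return mate, coarse

# Path 0-1-2-3 contracts to two supernodes joined by one coarse edge.
mate, coarse = coarsen([(0, 1), (1, 2), (2, 3)])
print(mate, coarse)  # -> {0: 0, 1: 0, 2: 2, 3: 2} {(0, 2): 1}
```

Repeating this until the graph is small, partitioning the coarsest graph, and then projecting the partition back with local refinement is the full multilevel cycle the abstract refers to.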
Software-Defined Networking (SDN) decouples the control plane from the data plane, yielding vast flexibility for networks. This paper focuses on the optimal placement of heterogeneous controllers. First, SDN networks are modeled using graph theory, so that the controller placement problem is transformed into a specific graph partitioning problem. Then, an optimal controller placement scheme is proposed based on multi-level graph partitioning. Finally, simulation experiments are conducted on real network topologies. The results indicate that our scheme achieves nearly optimal controller load balance and is of practical value for wide-area SDN deployments.