We prove that recognizer P systems with active membranes using polynomial space characterize the complexity class PSPACE. This result holds for both confluent and nonconfluent systems, and it is independent of whether membrane division rules are used.
We shall show that (i) there exists a binary NP-complete language L such that its unary coded version un(L) is in ASPACE(log log n), and (ii) if P ≠ NP, there exists a binary language L such that its unary version un(L) is in ASPACE(log log n), while the language L itself is not in ASPACE(log n). As a consequence, under the assumption that P ≠ NP, the standard space translation between unary and binary languages does not work for alternating machines with small space; the equivalence L ∈ ASPACE(s(n)) ⟺ un(L) ∈ ASPACE(s(log n)) is valid only if s(n) ∈ Ω(n). This is quite different from the situation for deterministic and nondeterministic machines, for which the corresponding equivalence holds for each s(n) ∈ Ω(log n), and hence for each s(log n) ∈ Ω(log log n). Under the assumption that NP ≠ co-NP, we also show that the binary versions of unary languages in ASPACE(log log n) form a complexity class not contained in NP.
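For concreteness, the unary coding and the translation equivalence at issue can be written as follows (a standard formulation; the paper's own notation may differ):

```latex
% Unary coding of a binary language L over {0,1}:
\[
  \mathrm{un}(L) \;=\; \{\, 1^{n} \;:\; \text{the binary representation of } n \text{ is in } L \,\}.
\]
% The space-translation equivalence discussed above:
\[
  L \in \mathrm{ASPACE}(s(n))
  \;\Longleftrightarrow\;
  \mathrm{un}(L) \in \mathrm{ASPACE}(s(\log n)),
\]
% which, for alternating machines with small space, holds only when s(n) \in \Omega(n).
```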
We consider a lattice-based model in multiattribute decision making, where preferences are represented by global utility functions that evaluate alternatives in a lattice structure (which can account for situations of indifference as well as of incomparability). Essentially, this evaluation is obtained by first encoding each of the attributes (nominal, qualitative, numeric, etc.) of each alternative into a distributive lattice, and then aggregating such values by lattice functions. We formulate version spaces within this model (global preferences consistent with empirical data) as solutions of an interpolation problem and present their complete descriptions accordingly. Moreover, we consider the computational complexity of this interpolation problem and show that it is solvable in polynomial time for up to 3 attributes, whereas it is NP-complete for more than 3 attributes. Our results are then illustrated with a concrete example.
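As a toy illustration of the ingredients (not the paper's formal model), the sketch below encodes attribute values into the powerset lattice of a small set, which is distributive, and aggregates them with a lattice polynomial built from meet and join; the encoding and the aggregation function are invented for the example:

```python
# A toy sketch, not the paper's formalism: attribute values live in the
# powerset lattice of {0, 1, 2} (a distributive lattice ordered by set
# inclusion) and are aggregated by a lattice polynomial built from meet and
# join. Elements such as {0} and {1} are incomparable, so incomparable
# global utilities can arise.

from functools import reduce

def meet(a, b):  # lattice meet = set intersection in the powerset lattice
    return a & b

def join(a, b):  # lattice join = set union
    return a | b

def utility(encoded_attrs):
    """Aggregate encoded attribute values by one fixed lattice polynomial:
    the join of all pairwise meets (one of many possible choices)."""
    pairs = [(x, y) for i, x in enumerate(encoded_attrs)
                    for y in encoded_attrs[i + 1:]]
    return reduce(join, (meet(x, y) for x, y in pairs), frozenset())

# Two alternatives, each with three attributes already encoded as lattice values:
alt_a = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0})]
alt_b = [frozenset({0}), frozenset({2}), frozenset({1})]

ua, ub = utility(alt_a), utility(alt_b)
print(ua, ub)                 # the global lattice utilities
print(ua >= ub or ua <= ub)   # False would mean the alternatives are incomparable
```

Because the lattice order is only partial, two alternatives may receive incomparable utilities, which is how indifference and incomparability are told apart in such a model.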
Seinosuke Toda introduced the class Mid P of functions that yield the middle element in the set of output values over all paths of nondeterministic polynomial-time Turing machines. We define two related classes: Med P consists of those functions that yield the middle element in the ordered sequence of output values of nondeterministic polynomial-time Turing machines (i.e., we take into account that elements may occur with multiplicities greater than one); the second class consists of those functions that yield the middle element of all accepting paths (in some reasonable encoding) of nondeterministic polynomial-time Turing machines.
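The distinction between the two notions of "middle element" can be made concrete with a toy example (an illustration only, not the formal machine-based definitions):

```python
# Toy illustration of the two notions of "middle element". Suppose the output
# values over all paths of a nondeterministic machine on some input are:
outputs = [3, 3, 3, 7, 9]

# Mid P-style: middle element of the *set* of distinct output values.
distinct = sorted(set(outputs))                # [3, 7, 9]
mid_of_set = distinct[len(distinct) // 2]      # 7

# Med P-style: middle element of the *sorted sequence*, counting multiplicities.
ordered = sorted(outputs)                      # [3, 3, 3, 7, 9]
mid_of_sequence = ordered[len(ordered) // 2]   # 3

print(mid_of_set, mid_of_sequence)             # 7 3: the two notions differ
```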
We exhibit similarities and differences between these classes and completely determine the inclusion structure between them and some other well-known classes of functions, such as Valiant's #P and Köbler, Schöning, and Torán's span-P, under generally accepted complexity-theoretic assumptions, such as that the counting hierarchy does not collapse.
Our results help clarify the status of Toda's very important class Mid P by showing that it is closely related to the class PP^NP.
In any airline’s schedule development process, aircraft rotations must be planned for individual fleet types after fleet assignment. The aircraft rotation plans must conform to stringent maintenance requirements and this problem can be formulated as a periodic routing problem on an Eulerian graph. We analyze the computational complexity of developing maintenance rotations when some overnighting aircraft may not have sufficient time on the ground to complete extended maintenance (referred to as a maintenance infeasibility). The paper also provides a theoretical analysis of heuristics for the aircraft maintenance rotation problem with maintenance infeasibilities.
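As a minimal sketch of the routing structure underlying such rotations (ignoring all maintenance constraints treated in the paper), Hierholzer's algorithm finds a closed tour using every directed edge, i.e. every flight leg, exactly once in an Eulerian digraph; the station codes below are made up:

```python
# Hierholzer's algorithm on an Eulerian digraph of flight legs. This is only
# the bare routing skeleton; the paper's maintenance requirements are ignored.

from collections import defaultdict

def euler_tour(edges, start):
    """edges: list of (origin, destination) legs; assumes an Eulerian digraph."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    stack, tour = [start], []
    while stack:
        u = stack[-1]
        if adj[u]:
            stack.append(adj[u].pop())   # follow an unused leg out of u
        else:
            tour.append(stack.pop())     # backtrack: u has no unused legs left
    return tour[::-1]                    # station sequence of the rotation

legs = [("ORD", "JFK"), ("JFK", "ORD"), ("ORD", "LAX"), ("LAX", "ORD")]
print(euler_tour(legs, "ORD"))  # e.g. ['ORD', 'LAX', 'ORD', 'JFK', 'ORD']
```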
Carbon has a unique position among elements in the periodic table. It produces an allotrope, graphene, a mechanically robust two-dimensional semimetal. The multifarious properties that graphene exhibits have few parallels among elemental metals. From simplicity, namely carbon atoms connected by pure sp2 bonds, a wealth of novel quantum properties emerges. In classical complex systems such as a spin glass or a financial market, several competing agents or elements are responsible for unanticipated and difficult-to-predict emergent properties. The complex (sic) structure of quantum mechanics is responsible for an unanticipated set of emergent properties in graphene. We call this quantum complexity. In fact, most quantum systems, phenomena and modern quantum field theory could be viewed as examples of quantum complexity. After giving a brief introduction to quantum complexity, we focus on our own work, which indicates the breadth of the quantum phenomena that graphene could support. We review our theoretical suggestions of (i) a spin-1 collective mode in neutral graphene, (ii) relativistic-type phenomena in crossed electric and magnetic fields, (iii) room-temperature superconductivity in doped graphene, (iv) a composite Fermi sea in neutral graphene in a uniform magnetic field, and (v) a two-channel Kondo effect. Except for the relativistic-type phenomena, all of these depend in a fundamental way on a weak electron correlation that exists in the broad two-dimensional band of graphene.
This paper considers an unanchored n-link robot arm confined to a square whose side length is at least as long as the longest arm link. The main result is a necessary and sufficient condition for a point in the square region to be reachable by a specified end of the arm; the condition can be checked in O(n) time. This result continues previous work published by Hopcroft, Joseph, and Whitesides, by Kantabutra and Kosaraju, and by Kantabutra.
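For contrast, and without the square confinement that is the subject of the paper, the classical reachability condition for an arm anchored at a point in the unconstrained plane (due to Hopcroft, Joseph, and Whitesides) is an annulus test evaluable in O(n) time; a minimal sketch:

```python
# Contrast case, not the paper's square-confinement condition: for an arm
# anchored at the origin in the unconstrained plane, the reachable set of the
# free end is an annulus, so reachability is a two-comparison test after an
# O(n) scan of the link lengths.

def reachable_unconfined(lengths, d):
    """Is a point at distance d from the anchor reachable by the free end?
    lengths: the n link lengths; no obstacles, no confining square."""
    total = sum(lengths)                        # outer radius of the annulus
    inner = max(0.0, 2 * max(lengths) - total)  # inner radius of the annulus
    return inner <= d <= total

print(reachable_unconfined([3.0, 1.0, 1.0], 0.5))  # False: inner radius is 1
print(reachable_unconfined([3.0, 1.0, 1.0], 4.0))  # True: 1 <= 4 <= 5
```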
We refine the known result that the generalized word problem for finitely generated subgroups of free groups is complete for P via logspace reductions, and we show that by restricting the lengths of the words in any instance and by stipulating that all words must be conjugates, we obtain complete problems for the complexity classes NSYMLOG, NL, and P. The proofs of our results vary greatly in nature: some are complexity-theoretic (for example, proving completeness by reducing from another known complete problem), some are combinatorial, and one involves the characterization of complexity classes as problems describable in some logic.
In the present paper the phenomenon of anomalous positron production is reconsidered. An explanation of the phenomenon via the complex nature of the symplectic vacuum is attempted. This is a vacuum akin to Dirac's, but with an additional transfinite correction and a fractal fine structure. It is conjectured that not only one, but a host of highly unstable exotic particles and weakly interacting neutral bosons may be produced under certain extreme conditions, for instance a very strong electromagnetic field. It is further shown that these particles are intimately linked to the production of mini black holes via the instanton method within the VAK of vacuum fluctuation. One of the principal conclusions of the present work is that while a symmetry breaking is accompanied by the appearance of a corresponding (new) particle, a breaking of the "average" or complex "quasi" symmetry of the VAK will logically be associated with a host of (new) "quasi" particles. This may be the explanation for the experimental observations related to the said anomalous positron production.
Social entrepreneurship (SE) is increasingly popular in academia and practice, but a unified theoretical explanation of the performance of social entrepreneurship firms (SEFs) is missing (Santos, 2012). This deficiency motivates us to theorize about SE from a complexity science perspective. We draw from complexity science to analyze and explain how SEFs emerge, achieve performance, and grow. We link complexity science with SE so as to add explanatory value and to offer guidelines for better SEF performance toward achieving social objectives while avoiding the chasm of chaos. Our theoretical framework offers complexity insights for building effective networks and accountability, as well as for improving trust, legitimacy, and sound governance. Drawing on complexity theory to better explain the key elements necessary for improving SEFs' performance and growth enhances the probability of meeting the challenge of the so-called 'double bottom line': achieving continuous positive social impacts while attaining financial health.
Coalitional power in multistage processes is modeled using effectivity frames, which link an effectivity function to every possible state of the world. Effectivity frames are general enough to capture, e.g., what groups of agents can bring about in extensive games of perfect and almost perfect information. Coalition Logic is used to describe effectivity frames, and the question of generating an extensive game satisfying a given specification is formulated as a satisfiability problem in Coalition Logic. Using this logical reformulation, we show that the complexity of this implementation problem depends on two parameters: for coalitional specifications, the problem is PSPACE-complete; for individual specifications, on the other hand, i.e., specifications that refer only to the powers of individual agents, generating an implementation with perfect information is PSPACE-complete, whereas generating an implementation with almost perfect information is NP-complete.
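For readers unfamiliar with Coalition Logic, its coalitional modality has the following standard effectivity semantics (the paper's notation may differ):

```latex
% E(s)(C) is the effectivity function at state s: the family of outcome sets
% that coalition C has the power to force.
\[
  s \models [C]\varphi
  \quad\Longleftrightarrow\quad
  \{\, t : t \models \varphi \,\} \in E(s)(C).
\]
% The implementation problem then asks for an extensive game whose induced
% effectivity frame satisfies a given formula of this logic.
```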
This paper explores the compressibility of complex systems by considering the simplification of Boolean networks. A method, similar to that reported by Bastolla and Parisi [4, 5], is considered that is based on the removal of frozen nodes, or stable variables, and network "leaves," i.e. those nodes with outdegree = 0. The method uses a random sampling approach to identify the minimum set of frozen nodes, i.e. the nodes that are frozen in all attractor schemes. Although the method can overestimate the size of the minimum set of frozen nodes, it is found that the chances of finding this minimum set are considerably greater than those of finding the full attractor set at the same sampling rate. Given that, for any given sampling rate, the number of attractors not found for a particular Boolean network increases with the network size, such a method provides an opportunity either to fully enumerate the attractor set for a particular network or to improve the accuracy of the random sampling approach. Indeed, the paper also shows that when it comes to counting attractors in an ensemble of Boolean networks, enhancing the random sampling method with the simplification method presented here results in a significant improvement in accuracy.
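A minimal sketch of the sampling idea (not the authors' exact procedure): iterate a small synchronous Boolean network from random initial states, collect the attractor states reached, and flag as frozen any node whose value is identical across every sampled attractor state. The three-node network below is invented for the example:

```python
import random

def step(state, functions, inputs):
    """Synchronous update: node i reads its input nodes and applies its rule."""
    return tuple(functions[i](tuple(state[j] for j in inputs[i]))
                 for i in range(len(state)))

def attractor_states(state, functions, inputs):
    """Iterate until a state repeats; return only the states on the cycle."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state, functions, inputs)
    return trajectory[seen[state]:]           # the attractor cycle

# Invented 3-node example: node 0 is self-fixing (hence frozen),
# node 1 is the XOR of nodes 1 and 0, node 2 copies node 1.
inputs = [(0,), (1, 0), (1,)]
functions = [lambda x: True,
             lambda x: x[0] ^ x[1],
             lambda x: x[0]]

samples = set()
for _ in range(20):                           # random sampling of initial states
    init = tuple(random.choice((False, True)) for _ in range(3))
    samples.update(attractor_states(init, functions, inputs))

frozen = [i for i in range(3) if len({s[i] for s in samples}) == 1]
print("frozen nodes:", frozen)                # node 0 is always frozen (True)
```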
Industrial networks offer a prime example of the tension between system stability and change. Change is necessary as a response to environmental variation, whereas stability provides the underpinning for long-term investment and the exploitation of efficiencies. Whilst one of the key themes in industrial network research has been the dynamics of change, relatively little work, empirical or theoretical, has been devoted to the dynamics of stability. This paper presents a new approach to this problem by using Boolean networks, which were originally devised by Stuart Kauffman as an abstract model of genetic networks. The elements in the model are connected by rules of Boolean logic, and a novel aspect of this research is that the elements represent the industrial network exchanges rather than stock entities (the organizations). The model structure consists of interactions between the exchanges, and the results represent the pattern of exchange episodes. A total of 42 networks were modeled and the dynamics analyzed in detail, and five of these cases are presented in this paper. The models produced realistic behavior and provided some insights into the reasons for stability in industrial networks.
This paper presents a leadership development model designed to utilise the self-organising, self-managing, and self-regulating functions found in teams and small groups. This theoretical paper presents the Team Emergence Leadership Development and Evaluation Model, a dynamic leadership development model designed to function in complex and non-predictive environments. Complexity theory, complexity leadership theory, and emergence are utilised to connect this model to leadership development, team cognition and learning, and knowledge management. The model provides a new way of viewing leadership development by incorporating naturally occurring team processes as a means of replicating the characteristics traditionally viewed as related to leadership development. Emergent events occur through distributed leadership among various agents and are defined by levels of meaning, providing new knowledge to the agents and allowing the collective to move on to the next step towards goal attainment. Connecting leadership development competencies with environmental factors is critical for successful leadership development programs. The methods and procedures within the evaluation plan and protocols should move beyond a reliance on competency development as confirmation of leadership development. Complexity theory can help shed light on the formation of these connections while aiding other agents to become potential emerging leaders themselves.
We consider the power of various quantum complexity classes with the restriction that states and operators are defined over a real, rather than complex, Hilbert space. It is well known that a quantum circuit over the complex numbers can be transformed into a quantum circuit over the real numbers with the addition of a single qubit. This implies that BQP retains its power when restricted to using states and operations over the reals. We show that the same is true for QMA(k), QIP(k), QMIP and QSZK.
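The "single additional qubit" construction referred to above can be stated as follows (a common formulation of the standard trick; presentations vary in details such as qubit ordering):

```latex
% A complex state is encoded as a real state on one extra qubit, with the
% real part tagged |0> and the imaginary part tagged |1>:
\[
  \sum_x (a_x + i\,b_x)\,\lvert x\rangle
  \;\longmapsto\;
  \sum_x \bigl( a_x\,\lvert x\rangle\lvert 0\rangle
              + b_x\,\lvert x\rangle\lvert 1\rangle \bigr).
\]
% Each gate U = A + iB (A, B real matrices) is replaced by the real unitary
\[
  \widetilde{U} \;=\; A \otimes I + B \otimes J,
  \qquad
  J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},
\]
% where J^2 = -I plays the role of multiplication by i, so that
% \widetilde{U} acts on the encoded state exactly as U acts on the original.
```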
Capturing and understanding innovation dynamics is a continuous challenge, owing to the difficulty of collecting process data and because innovation often involves multiple levels and units of analysis. Interest has increasingly focused on applying complexity theory to explain the temporal and nonlinear characteristics of innovation, because it provides a systematic paradigm for examining change in complex systems. Until recently, however, there has been relatively little empirical support. This paper addresses this gap by examining longitudinal data on Nylon innovation. Results reveal an attractor-shifting pattern accompanied by four adaptive cycles in the Nylon innovation process.
This paper discusses the social media paradox in the context of innovation. Innovation is defined as a knowledge-intensive process of seeing and doing things differently, whereas social media refers to new ways of being connected. Social media has revolutionised the ways in which knowledge is produced, shared and accumulated through social interactions within the organisation and across the organisation's boundaries. From an organisational perspective, this raises the question of how social media influences — enabling or inhibiting — its ability to see and do things differently. Social media offers tempting opportunities but also poses new threats: it is a paradox involving contradictory forces. Despite growing interest among academics, there is a lack of understanding of the possibilities of social media in the specific context of innovation. This paper fills that research gap by arguing that complexity concepts offer a new type of language for understanding social media. Seeing interaction as intrinsic to innovation activity, complexity thinking opens up the paradox of being in charge but not in control.
Algorithmic economics helps us stipulate, formulate, and resolve economic problems more precisely than mainstream mathematical economics. It does so by aligning theorizing about an economic problem with both the data generated by the real world and the computers used to manipulate those data. Theoretically coherent, numerically meaningful, and therefore policy-relevant answers to economic problems can be derived more readily using algorithmic economics than present-day mathematical economics. An algorithmic economics would allow mathematical economics to prove theorems relating to economic problems, such as the existence of equilibria defined on some metric space, with embedded mechanisms for getting to the equilibria of these problems. A blueprint for such an economics is given and discussed with an example.
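An illustration of the contrast being drawn (invented for this note, not taken from the paper): Banach's fixed-point theorem both guarantees an equilibrium of a contraction mapping and supplies the mechanism, namely iteration, that reaches it:

```python
# For a contraction mapping f on a metric space, Banach's theorem guarantees
# a unique fixed point (an equilibrium) and the iteration below converges to
# it, so the existence proof comes with an embedded computational mechanism.

def fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x <- f(x); for a contraction this converges to the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# Toy price-adjustment map f(p) = 0.5*p + 1, a contraction with fixed point 2:
print(fixed_point(lambda p: 0.5 * p + 1, x0=0.0))   # ~2.0
```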
The inverse min-max spanning r-arborescence problem under the weighted sum-type Hamming distance on graphs is to modify the edge cost vector, subject to given modification bounds, so that a given spanning r-arborescence becomes a min-max spanning r-arborescence, while the total modification cost under the sum-type Hamming distance over all edges is minimized. It is shown that the problem is solvable in strongly polynomial time.
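A generic formulation of this inverse problem (the notation is illustrative; the paper's formal model may differ):

```latex
% Data: edge costs c, per-edge modification bounds u, per-edge penalty
% weights w, and a prescribed spanning r-arborescence T_0. Find modified
% costs d minimizing the weighted sum-type Hamming distance:
\[
  \min_{d} \;\sum_{e \in E} w_e \, H(c_e, d_e),
  \qquad
  H(c_e, d_e) =
  \begin{cases}
    0, & d_e = c_e,\\
    1, & d_e \neq c_e,
  \end{cases}
\]
% subject to |d_e - c_e| \le u_e for every edge e, and to T_0 becoming
% min-max (bottleneck) optimal under the modified costs:
\[
  \max_{e \in T_0} d_e \;\le\; \max_{e \in T} d_e
  \quad\text{for every spanning } r\text{-arborescence } T.
\]
```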