This book is based on columns and tutorials published in the Bulletin of the European Association for Theoretical Computer Science (EATCS) during the period 2000–2003. It presents many of the most active current research lines in theoretical computer science. The material appears in two volumes, “Algorithms and Complexity” and “Formal Models and Semantics”, reflecting the traditional division of the field.
The list of contributors includes many of the well-known researchers in theoretical computer science. Most of the articles are reader-friendly and do not presuppose much knowledge of the area in question. Therefore, the book constitutes very suitable supplementary reading material for various courses and seminars in computer science.
https://doi.org/10.1142/9789812562494_fmatter
The following sections are included:
https://doi.org/10.1142/9789812562494_0001
This chapter contains the seven papers from the Algorithmics Column, which have appeared in the Bulletin of the EATCS starting from number 75. During the two years when I acted as the editor of this column, plenty of exciting results were obtained and open problems stated in the field of algorithmics. Some of the columns were devoted to presenting a few of these new results; the remaining columns were surveys of specific research topics in algorithmics. All of the columns posed plenty of open problems. I will try to give an update on some of them…
https://doi.org/10.1142/9789812562494_0002
The following sections are included:
https://doi.org/10.1142/9789812562494_0003
We discuss several open problems in the area of machine scheduling that concern the computational complexity, the off-line approximability, and the on-line approximability of machine scheduling problems. We summarize what is known about these problems, we discuss related results, and we provide many pointers to the literature.
https://doi.org/10.1142/9789812562494_0004
This is the first installment of the ALGORITHMICS COLUMN dedicated to Analysis of Algorithms (AofA), an area that sometimes goes under the name Average-Case Analysis of Algorithms or Mathematical Analysis of Algorithms. The area of analysis of algorithms (at least, the way we understand it here) was born on July 27, 1963, when D.E. Knuth wrote his “Notes on Open Addressing”. Since 1963 the field has undergone substantial changes, and we report here how it has evolved. For a long time this area of research did not have a real “home”, but in 1993 the first seminar entirely devoted to the analysis of algorithms took place in Dagstuhl, Germany. Seven seminars have been organized since then. In this column we briefly summarize the first three meetings, held in Schloss Dagstuhl (hence the “Dagstuhl Period”), and discuss the various scientific activities that took place, describing some research problems, solutions, and open problems discussed during these meetings. In addition, we describe three special issues dedicated to these meetings.
https://doi.org/10.1142/9789812562494_0005
This article is a continuation of our previous Algorithmics Column [54] (EATCS, 77, 2002) dedicated to the activities of the Analysis of Algorithms group during the “Dagstuhl Period” (1993–1997), when the first three meetings took place in Schloss Dagstuhl, Germany. The next three AofA meetings were held in Princeton (1998), Barcelona (1999), and Krynica Morska (near Gdańsk, 2000). We present here some research problems that were the highlights of these three meetings. Three special issues [42,31,43] were also published after these meetings, and we briefly summarize them.
https://doi.org/10.1142/9789812562494_0006
Algorithm Engineering is concerned with the design, analysis, implementation, tuning, debugging and experimental evaluation of computer programs for solving algorithmic problems. It provides methodologies and tools for developing and engineering efficient algorithmic codes and aims at integrating and reinforcing traditional theoretical approaches for the design and analysis of algorithms and data structures.
https://doi.org/10.1142/9789812562494_0007
In this column I would like to present, and provide a historical perspective for, the new breakthrough result due to Manindra Agrawal, Neeraj Kayal and Nitin Saxena, mentioned in the title above. The paper by Agrawal, Kayal, and Saxena has not been published yet; the report can be found at http://www.cse.iitk.ac.in/users/manindra/index.html. The result even earned a column in the New York Times of August 8th, 2002. Notice that placing the PRIMES and COMPOSITE problems in the class P does not imply that an efficient way to factorize a composite integer has been found, and thus the AKS-algorithm should not affect the security of RSA or PGP. (Although in the TCS community the acronym AKS stands for the Ajtai–Komlós–Szemerédi sorting network, in the present context I will use the term AKS-algorithm to refer to the Agrawal–Kayal–Saxena result.)
https://doi.org/10.1142/9789812562494_0008
The following sections are included:
https://doi.org/10.1142/9789812562494_0009
I am excited to present six articles from the BEATCS Computational Complexity column in this book. These articles cover a wide variety of topics in complexity, from the basic question of division to the power of quantum computation. All of the articles have been revised and updated by their original authors…
https://doi.org/10.1142/9789812562494_0010
This article defines and proves basic properties of the standard quantum circuit model of computation. The model is developed abstractly in close analogy with conventional Boolean and probabilistic circuits, without recourse to any physical concepts or principles. Various complexity classes based on the quantum model are defined, and two standard quantum algorithms (Deutsch-Jozsa and Grover) are presented and analyzed. This article is intended as a primer for theoretical computer scientists who do not know—and perhaps do not care to know—any physics.
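To give a concrete feel for the circuit model, here is a minimal simulation of Deutsch's algorithm (the one-qubit case of Deutsch-Jozsa) in plain linear algebra; this sketch is ours, not the article's, and the oracles at the end are illustrative examples.

    # Deutsch's algorithm: decide whether f: {0,1} -> {0,1} is constant
    # or balanced with a single oracle query. A sketch only.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    def deutsch(f):
        # Oracle U_f |x>|y> = |x>|y XOR f(x)> as a 4x4 permutation matrix.
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        state = np.kron(H, H) @ np.array([0.0, 1.0, 0.0, 0.0])  # H⊗H on |0>|1>
        state = np.kron(H, np.eye(2)) @ (U @ state)             # query, then H on qubit 1
        p_zero = state[0] ** 2 + state[1] ** 2                  # Pr[first qubit = 0]
        return "constant" if p_zero > 0.5 else "balanced"

    print(deutsch(lambda x: 0))  # constant function -> "constant"
    print(deutsch(lambda x: x))  # balanced function -> "balanced"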
https://doi.org/10.1142/9789812562494_0011
The following sections are included:
https://doi.org/10.1142/9789812562494_0012
This survey focuses on the recent (1998–2003) developments in the area of derandomization, with the emphasis on the derandomization of time-bounded randomized complexity classes.
https://doi.org/10.1142/9789812562494_0013
Randomness extractors are functions that “extract” (almost uniformly distributed) random bits from arbitrary distributions that “contain” sufficient randomness. Explicit constructions of randomness extractors have many applications in complexity theory and combinatorics.
This manuscript is a survey of recent developments in the area and focuses on explicit constructions that followed Trevisan's breakthrough result.
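For the reader's convenience, the standard definition (our transcription, not quoted from the survey): a function $\mathrm{Ext}\colon \{0,1\}^{n} \times \{0,1\}^{d} \to \{0,1\}^{m}$ is a $(k,\varepsilon)$-extractor if for every distribution $X$ on $\{0,1\}^{n}$ with min-entropy $H_{\infty}(X) \ge k$,

    \[ \Delta\bigl(\mathrm{Ext}(X, U_{d}),\, U_{m}\bigr) \le \varepsilon, \]

where $U_{d}$ and $U_{m}$ are uniform distributions and $\Delta$ denotes statistical distance.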
https://doi.org/10.1142/9789812562494_0014
Property testing is a new field in the theory of computation that deals with the information that can be deduced from an input when the number of allowable queries (reads from the input) is significantly smaller than the input size. This survey provides an introduction and reference to this exciting field.
https://doi.org/10.1142/9789812562494_0015
We survey the recent lower bounds on the running time of general-purpose random-access machines that solve satisfiability in a small amount of work space, and related lower bounds for satisfiability in nonuniform models. The same bounds apply to almost all known natural NP-complete problems.
https://doi.org/10.1142/9789812562494_0016
The primary goal of this chapter is to highlight some recent trends in the research carried out nowadays within the field of Distributed Computing Theory. A particular subgoal is to expose the community of Theoretical Computer Science at large to fundamental problems and challenges, often related to recent technological advances, that are of direct relevance and interest to research on the principles of Distributed Computing. We would like to provide a glimpse of state-of-the-art research in significant areas of Distributed Computing, in order to acquaint the theoretical community with related issues and problems; hopefully, this acquaintance will spawn a corresponding interest in pursuing research in such areas…
https://doi.org/10.1142/9789812562494_0017
Balancing networks are highly distributed data structures that are used for providing efficient solutions to multiprocessor synchronization problems. Traditionally, balancing networks have been designed to be accessed by tokens, which correspond to increment operations. The distribution of tokens on the network's output specifies the correctness property of the network. However, tokens alone may be inadequate for synchronization problems that require decrement operations, such as semaphores and critical regions. For such problems, antitokens have been introduced to implement the decrement operation [21].
It has been shown that several kinds of networks that satisfy the step property, the smoothing property and the threshold property for tokens alone preserve their properties when antitokens are introduced [2,5,21]. Thus, such networks are able to solve synchronization problems that require decrements. A fundamental question that has been left open is to formally characterize all properties of balancing networks that are preserved under the introduction of antitokens.
In this work, we provide a simple, combinatorial characterization for all properties warranted by balancing networks which are preserved when antitokens are introduced. This characterization serves as a theoretical tool for identifying the properties that are preserved by antitokens.
https://doi.org/10.1142/9789812562494_0018
The ability to cooperate on common tasks in a distributed setting is key to solving a broad range of computation problems, ranging from distributed search such as SETI to distributed simulation and multi-agent collaboration. In such settings there exists a trade-off between computation and communication: both resources must be managed to decrease redundant computation and to ensure efficient computational progress. This survey deals with scheduling issues for distributed collaboration. Specifically, we examine the extreme situation of collaboration without communication, that is, the extent to which efficient collaboration is possible if all resources are directed to computation at the expense of communication. Of course, there are also cases where such an extreme situation is not a matter of choice: the network may fail, the mobile nodes may have intermittent connectivity, and when communication is unavailable it may take a long time to (re)establish connectivity. The results summarized here precisely characterize the ability of distributed agents to collaborate on a known collection of independent tasks by means of local scheduling decisions that require no communication and that achieve low redundancy in task executions. Such scheduling solutions exhibit an interesting connection between the distributed collaboration problem and mathematical design theory. The lower bounds presented here, along with the randomized and deterministic schedule constructions, show the limitations on such low-redundancy cooperation and show that schedules with near-optimal redundancy can be efficiently constructed by processors working in isolation. We also show that when processors start working in isolation and are subjected to an arbitrary pattern of network reconfigurations, e.g., fragmentations and merges, randomized scheduling is competitive compared to an optimal algorithm that is aware of the pattern of reconfigurations.
https://doi.org/10.1142/9789812562494_0019
An ad-hoc mobile network is a collection of mobile hosts, with wireless communication capabilities, forming a temporary network without the aid of any established fixed infrastructure.
In such networks, topological connectivity is subject to frequent, unpredictable change. Our work focuses on networks with a high rate of such changes to connectivity. For such dynamically changing networks we propose protocols which exploit the motion, coordinated by the protocol, of a small part of the network. We show that such protocols can be designed to work correctly and efficiently even in the case of arbitrary (but not malicious) movements of the hosts not affected by the protocol.
We also propose a methodology for the analysis of the expected behavior of protocols for such networks, based on the assumption that mobile hosts (those whose motion is not guided by the protocol) conduct concurrent random walks in their motion space.
In particular, our work examines the fundamental problem of communication and proposes distributed algorithms for it. We provide rigorous proofs of their correctness, and also give performance analyses by combinatorial tools. Finally, we have evaluated these protocols by experimental means.
https://doi.org/10.1142/9789812562494_0020
We study the problem of n users selfishly routing traffics through a shared network. Each user routes its traffic by choosing a path from the source to the destination of the traffic, with the aim of minimizing its private latency. In such an environment, Nash equilibria represent the stable states of the system: no user can improve its private latency by unilaterally changing its strategy.
In the first model the network consists only of a single source and a single destination which are connected by m parallel links. Traffics are unsplittable. Users may route their traffics according to a probability distribution over the links. The social optimum minimizes the maximum load of a link. In the second model the network is arbitrary, but traffics are splittable among several paths leading from their source to their destination. The goal is to minimize the sum of the edge latencies.
Many interesting problems arise in such environments. The first is the problem of analyzing the loss of efficiency due to the lack of central regulation, expressed in terms of the coordination ratio. A second problem is the Nashification problem, i.e., the problem of converting any given non-equilibrium routing into a Nash equilibrium without increasing the social cost. The Fully Mixed Nash Equilibrium Conjecture (FMNE Conjecture) states that a Nash equilibrium in which every user routes along every possible edge with probability greater than zero is a worst Nash equilibrium with respect to social cost. A third problem is to specify exactly the sub-models in which the FMNE Conjecture is valid. The well-known Braess's Paradox shows that there exist networks whose strict sub-networks perform better when users are selfish. A natural question is then the following network design problem: given a network, which edges should be removed to obtain the best possible Nash equilibrium?
We present complexity results for various problems in this setting, upper and lower bounds for the coordination ratio, and algorithms solving the problem of Nashification. We survey results on the validity of the FMNE Conjecture in the model of unsplittable flows, and for the model of splittable flows we survey results for the network design problem.
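As a small illustration of the first model, the following sketch (ours; identical parallel links, unsplittable traffics) computes a pure Nash equilibrium by best-response dynamics, which is a finite improvement process in this sub-model:

    # Best-response dynamics on m identical parallel links: repeatedly move a
    # traffic to a link that strictly lowers its latency. Each move strictly
    # decreases the sorted load vector, so the loop reaches a pure Nash
    # equilibrium. The social cost is the maximum link load.
    def nash_by_best_response(weights, m):
        link = [i % m for i in range(len(weights))]   # arbitrary initial routing
        load = [0.0] * m
        for i, w in enumerate(weights):
            load[link[i]] += w
        changed = True
        while changed:
            changed = False
            for i, w in enumerate(weights):
                cur = link[i]
                best = min(range(m), key=lambda l: load[l] + w * (l != cur))
                if load[best] + w < load[cur]:        # strict improvement
                    load[cur] -= w
                    load[best] += w
                    link[i] = best
                    changed = True
        return link, max(load)

    # Returns an equilibrium with social cost 7, while the optimum is 6,
    # illustrating a coordination ratio of 7/6 on this instance.
    print(nash_by_best_response([3, 3, 2, 2, 2], 2))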
https://doi.org/10.1142/9789812562494_0021
Distributed Algorithmic Mechanism Design (DAMD) combines theoretical computer science's traditional focus on computational tractability with its more recent interest in incentive compatibility and distributed computing. The Internet's decentralized nature, in which distributed computation and autonomous agents prevail, makes DAMD a very natural approach for many Internet problems. This paper first outlines the basics of DAMD and then reviews previous DAMD results on multicast cost sharing and interdomain routing. The remainder of the paper describes several promising research directions and poses some specific open problems.
https://doi.org/10.1142/9789812562494_0022
We provide a state-of-the-art survey of results concerning stability issues for routing in communication networks, and corresponding protocols. Roughly speaking, a distributed routing protocol is stable on some particular network if the protocol maintains a bounded number of packets in the network at all times, under suitable assumptions on the way packets are injected into the network. Fundamental questions that arise in such a setting include:
• What is the effect of the injection pattern on stability?
• Which protocols are stable on which networks?
• What can be said about an algorithmic characterization of stability?
In this survey, we summarize some of the known answers to these questions, and some questions for which no answers are yet known, as a collection of current challenges for the distributed computing community.
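One standard way to make the injection assumptions precise (our addition, following the adversarial queueing model of Borodin et al.) is to require, for a rate $r \le 1$ and burstiness $b \ge 0$,

    \[ N_{e}(I) \;\le\; r \cdot |I| + b \qquad \text{for every edge } e \text{ and every time interval } I, \]

where $N_{e}(I)$ is the number of packets injected during $I$ whose prescribed paths use edge $e$.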
https://doi.org/10.1142/9789812562494_0023
Natural Computing is concerned with computation taking place in nature and with human-designed computing inspired by nature. Viewing phenomena going on in nature as computational processes enhances our understanding of these phenomena and of the essence of computation. In this way one gains valuable insights into both natural sciences and computer science…
https://doi.org/10.1142/9789812562494_0024
There are many introductions to quantum theory and quantum computation that rely on hand-waving and false intuition. There are also numerous documents which teach those subjects in a mathematically sound manner. To my knowledge this paper is the shortest of the latter category. The aim is to deliver a short yet rigorous and self-contained introduction to Quantum Computation, whilst assuming the reader has no prior knowledge of anything but the fundamental operations on real numbers. I successively introduce complex matrices, the postulates of quantum theory, and the simplest quantum algorithm. The document originates from a fifty-minute talk addressed to a non-specialist audience, in which I sought to take the shortest mathematical path that proves a quantum algorithm right.
https://doi.org/10.1142/9789812562494_0025
We briefly discuss the notions related to universal computation, especially in the context of quantum computing. The reader is assumed to have some previous knowledge of quantum computing.
https://doi.org/10.1142/9789812562494_0026
Some computational problems, such as integer factorization and discrete logarithm, are nowadays believed to be intractable for classical computers, while fast algorithms on quantum computers are known for these problems. The theory of quantum computing is a source of interesting problems. In this article, we first introduce the basic notions of quantum computing and then discuss some open problems related to quantum computing.
https://doi.org/10.1142/9789812562494_0027
An aqueous computation begins with a fluid memory consisting of a vast number of identical molecules in solution. The molecular variety chosen for use in the solution must possess a set of identifiable locations at each of which a specific detail can be altered (i.e., a bit can be ‘written’) in such a way that the alteration can be detected later (i.e., ‘read’). An aqueous computation consists of a sequence of writing operations performed on the molecules in solution followed by the reading of results from these molecules. The abstract (i.e., implementation independent) concept of aqueous computing is introduced, motivated, and illustrated with a specific program. The literature is surveyed, emphasizing the wet-lab realizations that have been carried out and alternate technologies not yet initiated. The problems that are anticipated in developing practical implementations are discussed.
https://doi.org/10.1142/9789812562494_0028
Biomolecular computing aims at harnessing individual biomolecules at nanoscales for computational and bio-engineering purposes. Improving the reliability, efficiency and scalability of biomolecules for information processing will have a major impact not only on biomolecular computing, but also on related areas such as bioinformatics and biology. Ideas and results of biomolecular computing research performed in silico towards this purpose are reviewed. A major outcome of this research is the idea that building these devices in electronics using biomolecular analogs may bring to biomolecular computers the customary efficiency, reliability and programmability now standard in electronic computing, hitherto only dreamed of in wet tube computations. Research questions posed by this approach are discussed, as well as foreseeable architectures for a solid-state based biomolecular computer.
https://doi.org/10.1142/9789812562494_0029
The following sections are included:
https://doi.org/10.1142/9789812562494_0030
This is the second part of a review of the research on formal modelling of gene assembly in ciliates. In the first part, we provided the basic biological background and introduced molecular operations in the process of gene assembly. In this part, we shall represent these operations within three formal frameworks: MDS descriptors, legal strings, and overlap graphs, corresponding to different abstraction levels which however turn out to be equivalent as far as the operational ability of gene assembly is concerned.
https://doi.org/10.1142/9789812562494_0031
Biological systems can be modeled beneficially as reactive systems, using languages and tools developed for the construction of man-made systems. Our long-term aim is to model a full multi-cellular animal as a reactive system – specifically, the C. elegans nematode worm, which is complex, but very well-defined in terms of anatomy and genetics. The challenge is to construct a full, true-to-all-known-facts, 4-dimensional, fully animated model of the development and behavior of this worm (or of a comparable multi-cellular animal), which is easily extendable as new biological facts are discovered.
https://doi.org/10.1142/9789812562494_0032
In this paper, an introduction is given to the field of evolutionary computation, a subfield of natural computing dealing with search and optimization algorithms gleaned from the model of organic evolution. As two major instances of evolutionary computation, genetic algorithms and evolution strategies are presented from an algorithmic point of view, and some of the existing theoretical results about these algorithms are summarized. Furthermore, to illustrate the practical application of evolutionary algorithms to industrial problems, three economically important problems and their solution by evolutionary algorithms are presented. The variety of topics discussed and results presented in this paper is intended to make clear that a solid algorithmic and theoretical basis for understanding the power of evolutionary algorithms has been built over the past thirty years, and that these methods are mature and powerful tools for solving challenging optimization problems.
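As a minimal algorithmic illustration (ours, not the paper's; the fixed step size and toy objective are simplifying assumptions), here is a (1+1) evolution strategy minimizing the sphere function:

    # (1+1) evolution strategy: mutate the parent with Gaussian noise and
    # keep the child only if it is at least as good. Step-size adaptation
    # (e.g., the 1/5 success rule) is omitted for brevity.
    import random

    def sphere(x):
        return sum(v * v for v in x)

    def one_plus_one_es(dim=10, sigma=0.1, iters=5000, seed=0):
        rng = random.Random(seed)
        parent = [rng.uniform(-5, 5) for _ in range(dim)]
        f_parent = sphere(parent)
        for _ in range(iters):
            child = [v + rng.gauss(0, sigma) for v in parent]
            f_child = sphere(child)
            if f_child <= f_parent:      # (1+1) selection: keep the better
                parent, f_parent = child, f_child
        return parent, f_parent

    print(one_plus_one_es()[1])          # fitness close to 0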
https://doi.org/10.1142/9789812562494_0033
Artificial chemistries are presented. An informal definition is first introduced, followed by a presentation of the various kinds of artificial chemistries found in the literature. We then explain the difference between a qualitative model and a quantitative one, and why an artificial chemistry is better seen as a qualitative model than as a quantitative one. The common measures used to follow the behaviour of an artificial chemistry experiment are then introduced. Artificial chemistries generate particular algebraic structures, called organisations; we define organisations and present some of their basic algebraic properties. The general behaviour of an experiment is then presented. The article ends with an example of a particular artificial chemistry.
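For concreteness, a toy chemistry in the spirit of the number-division chemistries from the literature (our sketch, not the article's closing example): molecules are integers, and a collision a + b -> (a / b) + b fires when b properly divides a, driving the soup towards prime numbers.

    # A minimal artificial chemistry: random collisions in a well-stirred
    # "soup" of integers; the reaction replaces a by a // b when b divides a.
    import random

    def react(a, b):
        if b not in (0, 1) and a != b and a % b == 0:
            return a // b, b             # reaction fires
        return a, b                      # elastic collision

    def run(pop_size=100, collisions=100_000, seed=0):
        rng = random.Random(seed)
        soup = [rng.randint(2, 10_000) for _ in range(pop_size)]
        for _ in range(collisions):
            i, j = rng.sample(range(pop_size), 2)
            soup[i], soup[j] = react(soup[i], soup[j])
        return soup                      # mostly primes after many collisions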
https://doi.org/10.1142/9789812562494_0034
The following sections are included:
https://doi.org/10.1142/9789812562494_0035
Specification and modelling techniques for software systems are highly important in the software development process. In this chapter we consider formal specification techniques, which means that they have a well-defined mathematical syntax and semantics…
https://doi.org/10.1142/9789812562494_0036
The aim of this note is to give a short overview of the role of mathematics and formal specification techniques in software system development. In order to address not primarily experts in formal specification techniques, but rather a general theoretical computer science and software engineering audience, we first discuss the role of formal specification techniques in software system development and give an overview of the corresponding formal specification techniques on a non-technical level. In the second part we summarize the main areas of mathematics which are important in order to formulate these techniques and to build up a corresponding mathematical theory.
https://doi.org/10.1142/9789812562494_0037
The integration of several different modelling techniques into a single formal method has turned out to be advantageous in the formal design of software systems. Giving a semantics to an integrated formal method is currently a very active area of research. In this paper we discuss the advantages of a failure-divergence semantics for data and process integrating formal methods, in particular for those with a concept of inheritance. The discussion proceeds along the lines of the formal method CSP-OZ, a combination of CSP and Object-Z, developed in Oldenburg.
https://doi.org/10.1142/9789812562494_0038
The following sections are included:
https://doi.org/10.1142/9789812562494_0039
The following contribution is not so much a technical paper in the usual sense as an experience report by a young scientist in a changing environment. In fact, she grew up and studied computer science in Albania, a country which had been isolated until 1990. In the following she reports on her experience with computer science, especially Web applications, in Tirana on the one hand, and her new experience with formal methods, especially graph transformation, in Berlin on the other.
https://doi.org/10.1142/9789812562494_0040
Complex (physical and software) systems have components with different nature that may have to be described using different formalisms. For the simulation or analysis of properties of the whole system, our approach is to transform each component into a common formalism with appropriate analysis or simulation methods. This approach is realized by building meta-models of the different formalisms and expressing their translation using attributed graph transformation. Other model manipulations such as simulation and optimization can also be expressed with graph transformation.
https://doi.org/10.1142/9789812562494_0041
Petri net transformations are powerful techniques for the manipulation of system models based on Petri nets. They allow arbitrary modification of a given net. Two kinds of transformations are distinguished, namely net model transformations and net class transformations. Both kinds of transformations are formally defined, and many important results are available, e.g., compatibility of transformations with each other, compatibility with horizontal structuring, and preservation of structural and system properties. Net transformations provide a framework for the modelling of Petri net based systems for several classes of Petri nets. This paper presents the main ideas of net transformations as developed at the Technical University of Berlin, covering their motivation, role, informal description, formal background, and examples.
https://doi.org/10.1142/9789812562494_0042
The following sections are included:
https://doi.org/10.1142/9789812562494_0043
Mathematical logic emerged in the early part of the 20th century as an effective tool for dealing with foundational problems in mathematics. I studied logic in the 1960s. By that time, the foundational problems of mathematics had been more or less settled. This is not to say that logic had become boring. Set theory was undergoing rapid changes as a result of the recent discovery of the forcing method. In general, logic was growing into a respected mathematical discipline. Still, I was envious of those who lived in the right time to work on the foundational problems…
https://doi.org/10.1142/9789812562494_0044
One of the previous articles in this column was devoted to the zero-one laws for a number of logics playing a prominent role in finite model theory: first-order logic FO, the extension FO+LFP of first-order logic with the least fixed-point operator, and the finite-variable infinitary logic $L^{\omega}_{\infty\omega}$. Recently Shelah proved a new, powerful, and surprising zero-one law. His proof uses so-called strong extension axioms. Here we formulate Shelah's zero-one law and prove a few facts about these axioms. In the process we give a simple proof of a “large deviation” inequality à la Chernoff.
https://doi.org/10.1142/9789812562494_0045
The following sections are included:
https://doi.org/10.1142/9789812562494_0046
The following sections are included:
https://doi.org/10.1142/9789812562494_0047
The following sections are included:
https://doi.org/10.1142/9789812562494_0048
Yiannis Moschovakis argues that some recursive algorithms, and in particular the mergesort algorithm, cannot be adequately described in terms of state machines. His claim contradicts the ASM thesis according to which every algorithm can be adequately represented as an abstract state machine. Here we show how to represent the mergesort algorithm, on its natural level of abstraction, as a distributed abstract state machine. The same can be achieved by more modest means: a standard ASM with submachines. However, there are recursive algorithms easily representable as distributed ASMs that do not lend themselves to a natural submachine representation.
https://doi.org/10.1142/9789812562494_0049
We discuss the following problem, which arises in software testing. Given some independent parameters (of a program to be tested), each having a certain finite set of possible values, we intend to test the program by running it several times. For each test, we give the parameters some (intelligently chosen) values. We want to ensure that for each pair of distinct parameters, every pair of possible values is used in at least one of the tests. And we want to do this with as few tests as possible.
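A minimal greedy sketch of one common heuristic for this problem (ours; not necessarily the construction studied in the article): while some pair of parameter values is uncovered, seed a new test with an uncovered pair and fill in the remaining parameters so as to cover as many new pairs as possible.

    from itertools import combinations

    def pairwise_tests(domains):
        # domains[p] is the list of possible values of parameter p
        n = len(domains)
        def pkey(p, v, q, w):            # canonical key for a value pair
            return ((p, v), (q, w)) if p < q else ((q, w), (p, v))
        uncovered = {pkey(p, v, q, w)
                     for p, q in combinations(range(n), 2)
                     for v in domains[p] for w in domains[q]}
        tests = []
        while uncovered:
            (p, v), (q, w) = next(iter(uncovered))   # seed with an uncovered pair
            test = {p: v, q: w}
            for r in range(n):
                if r not in test:        # pick the value covering most new pairs
                    test[r] = max(domains[r], key=lambda x: sum(
                        pkey(r, x, s, y) in uncovered for s, y in test.items()))
            uncovered -= {pkey(a, test[a], b, test[b])
                          for a, b in combinations(range(n), 2)}
            tests.append([test[i] for i in range(n)])
        return tests

    # Four binary parameters: a handful of tests instead of the 16 exhaustive ones.
    print(pairwise_tests([[0, 1], [0, 1], [0, 1], [0, 1]]))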
https://doi.org/10.1142/9789812562494_0050
Newman's Lemma states that a graph is confluent if it is locally confluent and has no infinite paths. Newman's Lemma requires essentially higher-order logic. We show that the induction step in Huet's inductive proof of Newman's Lemma is completely first-order and can be automated easily with the help of a resolution theorem prover. The resolution proof uses classical logic. For the automation of a constructive proof we consider Newman's Lemma in geometric logic and sketch an algorithm leading to a constructive proof. We formalize the proof procedure and prove its completeness for geometric logic by a direct topological argument.
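For reference, the statement in symbols (standard formulation, our transcription): local confluence says

    \[ \forall a, b, c:\; (b \leftarrow a \rightarrow c) \;\Rightarrow\; \exists d:\; b \rightarrow^{*} d \;\wedge\; c \rightarrow^{*} d, \]

and Newman's Lemma asserts that if $\rightarrow$ has no infinite paths and is locally confluent, then it is confluent:

    \[ \forall a, b, c:\; (b \,{}^{*}\!\leftarrow a \rightarrow^{*} c) \;\Rightarrow\; \exists d:\; b \rightarrow^{*} d \;\wedge\; c \rightarrow^{*} d. \]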
https://doi.org/10.1142/9789812562494_0051
What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the Church-Turing thesis and contrast Church's and Turing's approaches, and we finish with some recent investigations.
https://doi.org/10.1142/9789812562494_0052
The area of Concurrency aims at a formal understanding of the behaviour of distributed interacting systems as the basis for their description, analysis, implementation and verification…
https://doi.org/10.1142/9789812562494_0053
The following sections are included:
https://doi.org/10.1142/9789812562494_0054
This paper provides a comprehensive summary of equivalence checking results for infinite-state systems. References to the relevant papers will be updated continuously according to developments in the area. The most recent version of this document is available from the web-page http://www.brics.dk/~srba/roadmap.
https://doi.org/10.1142/9789812562494_0055
Over the last two decades formal methods have been extended towards performance and reliability evaluation. This paper tries to provide a rather intuitive explanation of the basic concepts and features in this area. Instead of striving for mathematical rigour, the intention is to give an illustrative introduction to the basics of stochastic models, to stochastic modelling using process algebra, and to model checking as a technique to analyse stochastic models.
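As a minimal numerical illustration of such a stochastic model (ours, not the paper's), consider a two-state continuous-time Markov chain of a component that fails at rate lam and is repaired at rate mu; its steady-state distribution solves pi Q = 0 with sum(pi) = 1:

    import numpy as np

    lam, mu = 0.01, 0.5                  # failure and repair rates (example values)
    Q = np.array([[-lam,  lam],          # generator matrix: state 0 = up
                  [  mu,  -mu]])         #                   state 1 = down
    A = np.vstack([Q.T, np.ones(2)])     # pi Q = 0 together with normalization
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)                            # availability pi[0] = mu/(lam+mu) ~ 0.98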
https://doi.org/10.1142/9789812562494_0056
The following sections are included:
https://doi.org/10.1142/9789812562494_0057
This survey retraces, collects, and summarises the contributions of the author — both individually and in collaboration with others — on the theme of algebraic, compositional approaches to the semantics of Petri nets.
https://doi.org/10.1142/9789812562494_0058
The theory of formal languages and automata, as well as combinatorics on words, is among the oldest and most fundamental areas of theoretical computer science. Indeed, research in some problem areas is already close to one hundred years old. The origins of the theory, as we know it today, come from different parts of human knowledge, ranging from logic and mathematics to linguistics, biology and genetics. Apart from offering beautiful mathematical problems, the theory is remarkably versatile in modeling various phenomena in many fields of study. This has been the key to its applicability…
https://doi.org/10.1142/9789812562494_0059
The following sections are included:
https://doi.org/10.1142/9789812562494_0060
We survey the known results on two old open problems on the commutation of languages. The first problem, raised by Conway in 1971, asks whether the centralizer of a rational language must be rational as well – the centralizer of a language is the largest set of words commuting with that language. The second problem, proposed by Ratoandromanana in 1989, asks for a characterization of those languages commuting with a given code – the conjecture is that commutation with codes can be characterized as in free monoids. We present here simple proofs of the known results on these two problems.
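In symbols (our transcription of the standard definition): the centralizer of a language $L \subseteq \Sigma^{*}$ is

    \[ \mathcal{C}(L) \;=\; \bigcup \,\{\, X \subseteq \Sigma^{*} \;:\; XL = LX \,\}, \]

which itself commutes with $L$ since commutation is preserved under arbitrary unions; Conway's question asks whether $\mathcal{C}(L)$ is rational whenever $L$ is.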
https://doi.org/10.1142/9789812562494_0061
The following sections are included:
https://doi.org/10.1142/9789812562494_0062
The following sections are included:
https://doi.org/10.1142/9789812562494_0063
We give a new proof of the decidability of the DFOL language equivalence problem. As a byproduct we get a new solution of the DOL language equivalence problem.
https://doi.org/10.1142/9789812562494_0064
Conjunctive grammars were introduced in 2000 as a generalization of context-free grammars that allows the use of an explicit intersection operation in rules. Several theoretical results on their properties have been obtained since then, and a number of efficient parsing algorithms that justify the practical value of the concept have been developed. This article reviews these results and proposes numerous open problems.
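To fix ideas, here is a standard example from the literature (not reproduced from the article): a conjunctive grammar for the non-context-free language { aⁿbⁿcⁿ : n ≥ 0 }:

    S → AB & DC
    A → aA | ε      D → aDb | ε
    B → bBc | ε     C → cC | ε

The first conjunct AB generates a*bⁿcⁿ and the second conjunct DC generates aᵐbᵐc*, so their intersection, as required by the rule for S, is exactly { aⁿbⁿcⁿ : n ≥ 0 }.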
https://doi.org/10.1142/9789812562494_0065
The following sections are included:
https://doi.org/10.1142/9789812562494_0066
The following sections are included:
https://doi.org/10.1142/9789812562494_0067
The following sections are included:
https://doi.org/10.1142/9789812562494_0068
We survey results concerning the generative power of membrane systems (P systems) with the objects described by strings which are processed by rewriting, splicing, rewriting with replication, as well as by genome specific operations such as splitting, point mutation (actually, rewriting), and crossover. The strings are sometimes organized in usual sets (languages), in other cases they form multisets (sets with multiplicities associated with their elements). In most cases, characterizations of recursively enumerable languages are obtained. Several open problems and research topics are also mentioned.
https://doi.org/10.1142/9789812562494_0069
We briefly discuss some of the recent results in membrane computing, mainly reported during the Workshop on Membrane Computing, Curtea de Argeş, Romania, August 19–23, 2002 (referred to below as WMC 2002). Some of these results answer problems which were open in this area, but they also suggest new open problems and research topics.
https://doi.org/10.1142/9789812562494_bmatter
The following sections are included: