This paper deals with two topics concerning two-dimensional automata operating in parallel. We first investigate a relationship between the accepting powers of two-dimensional alternating finite automata (2-AFAs) and nondeterministic bottom-up pyramid cellular acceptors (NUPCAs), and show that Ω(diameter × log diameter) time is necessary for NUPCAs to simulate 2-AFAs. We then investigate the space complexity of two-dimensional alternating Turing machines (2-ATMs) operating in small space, and show that if L(n) is a two-dimensionally space-constructible function such that lim_{n→∞} L(n)/log log n > 1 and L(n) ≤ log n, and L′(n) is a function satisfying L′(n) = o(L(n)), then there exists a set accepted by some strongly L(n) space-bounded two-dimensional deterministic Turing machine, but not accepted by any weakly L′(n) space-bounded 2-ATM; thus there exists a rich space hierarchy for weakly S(n) space-bounded 2-ATMs with log log n ≤ S(n) ≤ log n.
This paper introduces a cooperating system of three-way, two-dimensional finite automata (CS-TR2-FA) which is a restricted version of a cooperating system of four-way, two-dimensional finite automata (CS-2-FA), and mainly investigates several fundamental properties of this system as a two-dimensional language acceptor whose input tapes are restricted to square ones. We show that
(1) CS-TR2-FA’s are equivalent in accepting power to three-way, two-dimensional simple multihead finite automata,
(2) CS-2-FA’s are more powerful than CS-TR2-FA’s,
(3) ℒ[CS-TR2-DFA(k)S] ⊊ ℒ[CS-TR2-NFA(k)S],
(4) ⋃_{1≤k<∞} ℒ[CS-TR2-DFA(k)S] ⊊ ⋃_{1≤k<∞} ℒ[CS-TR2-NFA(k)S], and
(5) ℒ[CS-TR2-DFA(k)S] (ℒ[CS-TR2-NFA(k)S]) ⊊ ℒ[CS-TR2-DFA(k+1)S] (ℒ[CS-TR2-NFA(k+1)S]),
where ℒ[CS-TR2-DFA(k)S] (ℒ[CS-TR2-NFA(k)S]) denotes the class of sets of square input tapes accepted by CS-TR2-FA’s which consist of k deterministic (nondeterministic) finite automata.
This paper presents a data estimation scheme for wideband multichannel charge-sampling filter bank receivers, together with a complete system calibration algorithm based on the least mean squares (LMS) algorithm. A unified model is defined for the receiver that contains all first-order mismatches, offsets, and imperfections, and the LMS algorithm is employed to track these errors. The performance of this technique under noisy channel conditions has been verified. Moreover, a detailed complexity analysis of the calibration algorithm is provided, which shows that sinc filter banks have much lower complexity than traditional continuous-time filter banks.
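As context for the LMS-based calibration described above, the following is a minimal sketch of a generic LMS update loop in Python; the receiver's actual multichannel mismatch model is not given in the abstract, so the scalar-channel setting and the names `num_taps` and `step_size` are illustrative assumptions.

```python
import numpy as np

def lms_calibrate(x, d, num_taps=8, step_size=0.01):
    """Minimal LMS sketch: adapt weights w so that the filtered input tracks d.

    x : observed (mismatched) samples, d : reference/desired samples.
    Returns the adapted weights and the error history.
    """
    x, d = np.asarray(x, dtype=float), np.asarray(d, dtype=float)
    w = np.zeros(num_taps)
    errors = np.zeros(len(x) - num_taps)
    for n in range(num_taps, len(x)):
        window = x[n - num_taps:n][::-1]      # most recent samples first
        y = np.dot(w, window)                 # filter output
        e = d[n] - y                          # estimation error
        w += 2 * step_size * e * window       # LMS weight update
        errors[n - num_taps] = e
    return w, errors
```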
This paper describes a new standard-cell-based approach to comparator design for flash ADCs. In a conventional flash ADC, the comparator stage consumes up to 60% of the power due to the resistive ladder network and analog comparators. Threshold inverter quantized (TIQ) comparators reported earlier have improved speed and provide low-power, low-voltage operation, but they require feature-size variation and have non-linearity issues. Here, a new standard-cell comparator is proposed which retains all the advantages of the TIQ comparator and provides improved linearity with reduced hardware complexity. A 4-bit ADC designed using the proposed comparator requires 206 minimum-sized transistors and provides large area savings compared to previously proposed designs. The thermometer code is partitioned using the algebraic division theorem; this conversion is used for mathematical modeling and complexity reduction of the decoder circuit using a semi-parallel organization of comparators. The circuit is designed in 90 nm technology and exhibits satisfactory performance even under process variation.
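For orientation, here is a minimal sketch of the thermometer-to-binary step that any flash ADC decoder ultimately performs; it is a plain ones-count decoder and does not reproduce the paper's algebraic-division partitioning or semi-parallel comparator organization.

```python
def thermometer_to_binary(therm_bits):
    """Count asserted comparator outputs and return the 4-bit binary code.

    therm_bits: list of 0/1 comparator outputs for a 4-bit flash ADC
    (15 comparators), lowest threshold first, e.g. [1, 1, 1, 0, ...].
    """
    level = sum(therm_bits)          # number of comparators that tripped
    return format(level, '04b')      # 4-bit binary output

# Example: input between the 6th and 7th reference levels
print(thermometer_to_binary([1] * 6 + [0] * 9))   # -> '0110'
```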
A four-dimensional chaotic system with complex dynamical properties is constructed. The complexity of the system is evaluated through its equilibrium points, Lyapunov exponent spectrum and bifurcation diagrams. Coexisting Lyapunov exponent spectra and bifurcation diagrams confirm the coexistence of attractors. The C0 and SE complexity algorithms are used to compare and analyze the corresponding complexity characteristics of the system, and the most complex integer-order system is obtained. In addition, a novel circuit is designed to switch between different chaotic attractors. In particular, more complex parameters are selected for the fractional-order chaotic system through the analysis of parameter complexity, and the rich dynamics of the system are analyzed. Experimental results based on a Field-Programmable Gate Array (FPGA) platform verify the feasibility of the system. Finally, the most complex integer-order system is compared with its corresponding fractional-order system in image encryption, showing that the fractional-order system has better application prospects.
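As an illustration of one of the measures mentioned above, here is a minimal sketch of a spectral entropy (SE) complexity estimate as commonly defined in the chaos literature; the exact SE and C0 parameter choices used in the paper are not reproduced, and the normalization shown is an assumption.

```python
import numpy as np

def spectral_entropy(x):
    """SE complexity sketch: normalized Shannon entropy of the power spectrum.

    A common definition in the chaos literature; the paper's exact settings
    are not reproduced here.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    p = power / power.sum()                  # normalized power distribution
    nonzero = p[p > 0]
    return -np.sum(nonzero * np.log(nonzero)) / np.log(len(p))  # scaled to [0, 1]
```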
Detecting intrusions in real time within cloud networks presents a multifaceted challenge involving intricate processes such as feature representation, intrusion type classification and post-processing procedures. Distributed Intrusion Detection Systems (DIDSs) constitute a complex landscape characterized by diverse contextual nuances, distinct functional advantages and limitations specific to deployment scenarios. Despite these challenges, DIDSs offer immense potential for addressing evolving intrusion detection challenges through tailored contextual adaptations and unique functional advantages. Moreover, exploring the limitations associated with different deployment contexts facilitates targeted improvements and refinements, unlocking new avenues for innovation in intrusion detection technologies. Notably, deep learning (DL) integrated with blockchain technology emerges as a superior approach in terms of security, while bioinspired models excel in Quality of Service (QoS). These models demonstrate higher accuracy across various network scenarios, underscoring their efficacy in intrusion detection. Additionally, edge-based models exhibit high accuracy and scalability with reduced delay, complexity and cost in real-time network environments. The fusion of these models holds promise for enhancing classification performance across diverse attack types, offering avenues for future research exploration. This work conducts a comprehensive comparison of performance metrics, including accuracy, response delay, computational complexity, scalability and deployment costs. The proposed Novel DIDS Rank (NDR) streamlines model selection by considering these metrics, enabling users to make well-informed decisions based on multiple performance aspects simultaneously. This unified ranking approach facilitates the identification of DIDSs that achieve high accuracy and scalability while minimizing response delay, cost and complexity across varied deployment scenarios.
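The abstract does not spell out the NDR formula, so the sketch below only illustrates the general idea of a unified multi-metric ranking: metrics are min-max normalized and combined with signed weights (positive for accuracy and scalability, negative for delay, cost and complexity). All model names, metric values and weights are hypothetical placeholders.

```python
def rank_dids(models, weights):
    """Illustrative multi-metric ranking (not the paper's NDR formula).

    models : dict of model name -> dict of metric -> value
    weights: dict of metric -> signed weight (negative penalizes, e.g. delay).
    """
    metrics = weights.keys()
    lo_hi = {m: (min(v[m] for v in models.values()),
                 max(v[m] for v in models.values())) for m in metrics}

    def score(vals):
        s = 0.0
        for m, w in weights.items():
            lo, hi = lo_hi[m]
            norm = 0.0 if hi == lo else (vals[m] - lo) / (hi - lo)
            s += w * norm
        return s

    return sorted(models, key=lambda name: score(models[name]), reverse=True)

# Purely illustrative placeholder values, not measured results.
models = {
    "DL+blockchain": {"accuracy": 0.97, "delay": 30, "cost": 8},
    "bioinspired":   {"accuracy": 0.95, "delay": 22, "cost": 6},
    "edge-based":    {"accuracy": 0.96, "delay": 12, "cost": 4},
}
weights = {"accuracy": 1.0, "delay": -0.5, "cost": -0.5}
print(rank_dids(models, weights))
```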
Hard discrete optimization problems solved with randomized methods such as genetic algorithms (GAs) require large numbers of samples from the solution space. Each candidate sample/solution must be evaluated using the target fitness/energy function being optimized. Such fitness computations are a bottleneck in sampling methods such as genetic algorithms. We observe that caching partial results from these fitness computations can reduce this bottleneck. We provide a rigorous analysis of the run-times of GAs with and without caching. By representing fitness functions as classic divide-and-conquer algorithms, we provide a formal model to predict the efficiency of caching GAs vs. non-caching GAs. Finally, we explore the domain of protein folding with GAs and demonstrate, through both theoretical and extensive empirical analyses, that caching can significantly reduce expected run-times.
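To make the caching idea concrete, here is a minimal sketch of memoizing partial fitness values for a divide-and-conquer fitness function; the toy matching fitness and the `TARGET` string are illustrative stand-ins, not the paper's protein-folding energy function.

```python
from functools import lru_cache

TARGET = "HPHPPHHPHH"   # hypothetical target; individuals have the same length


@lru_cache(maxsize=None)
def partial_fitness(segment, offset):
    """Fitness of a candidate segment starting at `offset`.

    Results are memoized, so identical segments shared by many GA individuals
    across generations are evaluated only once.
    """
    if len(segment) == 1:
        return 1 if segment == TARGET[offset] else 0
    mid = len(segment) // 2
    return (partial_fitness(segment[:mid], offset) +
            partial_fitness(segment[mid:], offset + mid))


def fitness(individual):
    """Full fitness of an individual, assembled from cached partial results."""
    return partial_fitness(individual, 0)
```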
This paper presents a system based on new operators for handling sets of propositional clauses compactly represented by means of ZBDDs. The high compression power of such data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used for performing multiresolution on clause sets. Cut eliminations between sets of clauses of exponential size may then be performed using polynomial-size data structures. The ZRES system, a new implementation of the original 1960 Davis-Putnam procedure, solves two hard problems for resolution that are currently out of the scope of the best SAT provers.
Possibilistic graphical models are powerful modeling and reasoning tools for uncertain information based on possibility theory. In this paper, we provide an analysis of the computational complexity of MAP and MPE queries for possibilistic networks. MAP queries stand for maximum a posteriori explanation, while MPE ones stand for most plausible explanation. We show that the decision problems of answering MAP and MPE queries in both min-based and product-based possibilistic networks are NP-complete. Such results represent an advantage of possibilistic graphical models over probabilistic ones, since MAP queries are NP^PP-complete in Bayesian networks. Our proofs for querying min-based possibilistic networks make use of reductions from the 3SAT problem to the decision problem of querying possibilistic networks. Moreover, the provided reductions may be useful for the implementation of MAP and MPE inference engines based on satisfiability problem solvers. As for product-based networks, the provided proofs are incremental and make use of reductions from SAT and its weighted variant WMAXSAT.
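To illustrate the semantics underlying the MPE decision problem in the min-based case, the sketch below computes an MPE by exhaustive enumeration: the joint possibility of a configuration is the minimum of the local possibility degrees, and MPE maximizes it subject to the evidence. The data-structure layout (`cpts`, `domains`) is an assumption made for the example, and brute force is of course not the reduction-based approach studied in the paper.

```python
from itertools import product

def mpe_min_based(variables, domains, cpts, evidence):
    """Brute-force MPE sketch for a min-based possibilistic network.

    cpts: dict var -> (parents, table), where table maps
          (value, tuple_of_parent_values) -> possibility degree in [0, 1].
    Returns the most plausible complete configuration and its joint possibility.
    """
    best, best_pi = None, -1.0
    free = [v for v in variables if v not in evidence]
    for combo in product(*(domains[v] for v in free)):
        config = {**dict(zip(free, combo)), **evidence}
        pi = 1.0
        for v in variables:
            parents, table = cpts[v]
            pi = min(pi, table[(config[v], tuple(config[p] for p in parents))])
        if pi > best_pi:
            best, best_pi = config, pi
    return best, best_pi
```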
Computation and decision problems related to argumentation frameworks with higher-order attacks have not received much attention so far. This paper is a step towards addressing these issues. First, it provides a labelling counterpart for the structure semantics of Recursive Argumentation Frameworks (RAF). Second, it investigates the complexity of decision problems associated with RAF. This investigation shows that, despite the higher expressiveness offered by these enriched systems, the complexity is the same as for classical argumentation frameworks. As a side contribution, a new semantics for RAF, the semi-stable semantics, and a new process for translating RAF into Argumentation Frameworks without higher-order attacks (AF) are introduced. Finally, new notions which are the counterparts of notions already existing for AF (among them, the Strongly Connected Components — SCC) are defined and investigated in order to involve them in the future development of algorithms for computing RAF labelling semantics.
Purpose: The main aim of this paper is to study the influence of temperature on multiscale entropy (MSE) and multiscale time irreversibility (MTI) through the use of short-term measurements. Methods: A total of 12 physically active, healthy, nonsmoking individuals (25.6±3.9 years old; 174.2±7.5 cm in height; 68.6±11.1 kg of body mass) voluntarily participated in this study. Two beat-to-beat recordings of 15 min each were performed on every participant, one under hot conditions (35 °C) and the other under cool conditions (19 °C). The order of these two assessments was randomly assigned. Multiscale sample entropy and MTI were assessed for every measurement across 10 scales. Results: Entropy was significantly higher under hot conditions (p<0.05) from the fifth scale onwards compared to cool conditions. In contrast, MTI values were significantly lower under hot conditions (p<0.05). Conclusions: The study of MSE and time irreversibility of short RR measurements yields consistent and reliable data. Moreover, exposure to hot conditions provokes an increase in interbeat complexity at larger scales and a decrease in MTI in a healthy population.
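For reference, a minimal sketch of the coarse-graining plus sample-entropy computation behind MSE is given below; the tolerance r = 0.2 × SD and embedding dimension m = 2 are conventional defaults, not necessarily the settings used in this study.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r); r defaults to 0.2 * std, a common choice."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()

    def count_similar(length):
        n_templates = n - m                    # same template count for m and m+1
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        total = 0
        for i in range(n_templates):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += np.sum(dist <= r) - 1     # exclude the self-match
        return total

    b, a = count_similar(m), count_similar(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(rr, max_scale=10, m=2):
    """Coarse-grain the RR series at scales 1..max_scale and compute SampEn."""
    rr = np.asarray(rr, dtype=float)
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(rr) // tau
        coarse = rr[:n * tau].reshape(n, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m))
    return mse
```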
This work aims to analyze the complexity of surface electromyography (sEMG) signals under muscle fatigue conditions using Hjorth parameters and bubble entropy (BE). Signals are recorded from the biceps brachii muscle of 25 healthy males during dynamic and isometric contraction exercises. These signals are filtered and segmented into 10 equal parts. The first and tenth segments are considered as nonfatigue and fatigue conditions, respectively. Activity, mobility, complexity, and BE features are extracted from both segments and classified using support vector machine (SVM), Naïve Bayes (NB), k-nearest neighbor (kNN), and random forest (RF) classifiers. The results indicate a reduction in signal complexity during fatigue. The parameter activity is found to increase under fatigue for both dynamic and isometric contractions, with mean values of 0.35 and 0.22, respectively. Mobility, complexity, and BE are lowest during fatigue for both contraction types. A maximum accuracy of 95.00% is achieved with kNN and the Hjorth parameters for dynamic signals. The reduction of signal complexity during fatigue is also found to be more significant in dynamic contractions. This study confirms that the extracted features are suitable for analyzing the complex nature of sEMG signals. Hence, the proposed approach can be used for analyzing the complex characteristics of sEMG signals under various myoneural conditions.
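As a reference for the features named above, here is a minimal sketch of the standard Hjorth activity, mobility and complexity definitions; the filtering, segmentation, bubble entropy computation and classification steps are not reproduced.

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility and complexity of a 1-D signal segment."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                       # first derivative (discrete)
    ddx = np.diff(dx)                     # second derivative (discrete)
    var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
    activity = var_x                                   # signal power
    mobility = np.sqrt(var_dx / var_x)                 # mean frequency proxy
    complexity = np.sqrt(var_ddx / var_dx) / mobility  # deviation from a sine
    return activity, mobility, complexity
```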
Using high-level tasks typical of managerial decisions, this experimental study examined the influence of computer assistance on solving ill-structured problems. Of specific interest were the influences of complex and simple assistance on decision performance and decision-maker attitudes. Our findings suggest that performance and user attitudes can be enhanced by technology that provides clear and simple instruction in good problem-solving practices. However, when that technology adds a complex array of technical options and features, the assistance may fail to improve, and may even diminish, performance and damage user attitudes. Familiarization with such complex technology may not improve these outcomes. The findings regarding the relationship between assistance complexity and decision performance are consistent with those of studies suggesting that complexity has a curvilinear influence on performance. When considering the application of technological assistance to ill-structured decision tasks, the complexity of the assistance should not be ignored.
We revisit the DOUBLE DIGEST problem, which occurs in sequencing of large DNA strings and consists of reconstructing the relative positions of cut sites from two different enzymes. We first show that DOUBLE DIGEST is strongly NP-complete, improving upon previous results that only showed weak NP-completeness. Even the (experimentally more meaningful) variation in which we disallow coincident cut sites turns out to be strongly NP-complete. In the second part, we model errors in data as they occur in real-life experiments: we propose several optimization variations of DOUBLE DIGEST that model partial cleavage errors. We then show that most of these variations are hard to approximate. In the third part, we investigate variations with the additional restriction that coincident cut sites are disallowed, and we show that it is NP-hard to even find feasible solutions in this case, thus making it impossible to guarantee any approximation ratio at all.
The problem of resolving genotypes into haplotypes, under the perfect phylogeny model, has been under intensive study recently. All studies so far have handled missing data entries in a heuristic manner. We prove that the perfect phylogeny haplotype problem is NP-complete when some of the data entries are missing, even when the phylogeny is rooted. We define a biologically motivated probabilistic model for genotype generation and for the way missing data occur. Under this model, we provide an algorithm that runs in expected polynomial time. In tests on simulated data, our algorithm quickly resolves the genotypes under high rates of missing entries.
Identifying regions of DNA with extreme statistical characteristics is an important aspect of the structural analysis of complete genomes. Linguistic methods, mainly based on estimating word frequencies, can be used for this purpose as they allow for the delineation of regions of low complexity. Low complexity may be due to biased nucleotide composition, tandem or dispersed repeats, palindrome-hairpin structures, or a combination of these features. We developed software tools in which various numerical measures of text complexity are implemented, including combinatorial and linguistic ones. We also added a Hurst exponent estimate to the software to measure dependencies in DNA sequences. By applying these tools to various functional genomic regions, we demonstrate that the complexity of introns and regulatory regions is lower than that of coding regions, whilst their Hurst exponent is larger. Further analysis of promoter sequences revealed that the lower complexity of these regions is associated with long-range correlations caused by transcription factor binding sites.
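As one concrete example of a word-frequency-based measure of the kind implemented in such tools, the sketch below computes a linguistic-complexity score as the ratio of observed to maximally possible distinct words over all word lengths; the exact set of measures used in the paper's software is not reproduced here.

```python
def linguistic_complexity(seq, max_k=None):
    """Ratio of observed distinct k-mers to the maximum possible, summed over k.

    For DNA, at most min(4**k, len(seq) - k + 1) distinct k-mers can occur.
    Values near 1 indicate high (random-like) complexity; repeats score low.
    """
    seq = seq.upper()
    n = len(seq)
    if max_k is None:
        max_k = n
    observed = possible = 0
    for k in range(1, max_k + 1):
        kmers = {seq[i:i + k] for i in range(n - k + 1)}
        observed += len(kmers)
        possible += min(4 ** k, n - k + 1)
    return observed / possible

print(linguistic_complexity("ATATATATATAT"))    # low for a tandem repeat
print(linguistic_complexity("ATGCGTACGTTAGC"))  # higher for a diverse sequence
```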
Of all the issues discussed at Alife VII: Looking Forward, Looking Backward, the issue of whether it is possible to create an artificial life system that exhibits open-ended evolution of novelty is by far the biggest. Of the 14 open problems settled on as a result of debate at the conference, some 6 are directly or indirectly related to this issue.
Most people equate open-ended evolution with complexity growth, although a priori these seem to be different things. In this paper I report on experiments to measure the complexity of Tierran organisms, and show the results for a size-neutral run of Tierra. In this run, no increase in organismal complexity was observed, although organism size did increase through the run. This result is discussed, offering some signposts on the path to solving the issue of open-ended evolution.
Using laser speckle contrast imaging and wavelet-based analyses, we investigate a latent ("hidden") stage of the development of intracranial hemorrhages (ICHs) in newborn rats. We apply two measures based on the continuous wavelet transform of blood flow velocity in the sagittal sinus, namely, the spectral energy in distinct frequency ranges and a multiscality degree characterizing the complexity of the experimental data. We show that the wavelet-based multifractal formalism reveals changes in cerebrovascular blood flow during the development of ICH.
Based on laser speckle contrast imaging (LSCI) and multiscale entropy (MSE), we study in this work the blood flow dynamics at the level of cerebral veins and the surrounding network of microcerebral vessels. We discuss how phenylephrine-related acute peripheral hypertension is reflected in the cerebral circulation and show that the observed changes are scale-dependent and significantly more pronounced in microcerebral vessels, while the macrocerebral dynamics does not demonstrate clear inter-group distinctions. We also consider the permeability of the blood–brain barrier (BBB) and study its opening caused by sound exposure. We show that alterations associated with BBB opening can be revealed by the analysis of blood flow at the level of macrocerebral vessels.
All complex life on Earth is composed of ‘eukaryotic’ cells. Eukaryotes arose just once in 4 billion years, via an endosymbiosis — bacteria entered a simple host cell, evolving into mitochondria, the ‘powerhouses’ of complex cells. Mitochondria lost most of their genes, retaining only those needed for respiration, giving eukaryotes ‘multi-bacterial’ power without the costs of maintaining thousands of complete bacterial genomes. These energy savings supported a substantial expansion in nuclear genome size, and far more protein synthesis from each gene.