A secret sharing scheme is a method that allows a dealer to share a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret. The collection of subsets of participants that can reconstruct the secret in this way is called the access structure. The rank of an access structure is the maximum cardinality of a minimal qualified subset. A secret sharing scheme is perfect if unqualified subsets of participants obtain no information about the secret. The dealer's randomness is the number of random bits required by the dealer to set up a secret sharing scheme, and its efficiency is the ratio between the amount of the dealer's randomness and the length of the secret. Because random bits are a natural computational resource, it is important to reduce the amount of randomness the dealer uses to set up a secret sharing scheme. In this paper, we propose decomposition constructions for perfect secret sharing schemes with access structures of constant rank. Compared with the best previous results, our constructions give improved upper bounds on the dealer's randomness and on its efficiency.
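The abstract does not spell out a scheme, but Shamir's classical (t, n) threshold scheme is the standard illustration of the dealer's randomness: to share one element of GF(p), the dealer draws t − 1 uniformly random field elements, so the randomness-to-secret-length ratio is t − 1. A minimal sketch (field size and function names are illustrative choices, not the paper's constructions):

```python
# Shamir (t, n) threshold sharing over GF(P): a sketch of where the dealer's
# randomness goes. Sharing one field element consumes t - 1 random elements.
import random

P = 2**127 - 1  # a Mersenne prime, so GF(P) arithmetic is plain modular arithmetic

def share(secret, t, n):
    """Dealer: embed the secret as f(0) and hand out the points f(1), ..., f(n)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]  # the dealer's randomness
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Any t shares recover f(0) by Lagrange interpolation at x = 0."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P  # pow(·, P-2, P) inverts mod P
    return total

shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
```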
We address the problem of estimating the computation time necessary to approximate a function on a real computer. Our approach makes it possible to estimate the minimal time needed to compute a function up to a specified level of error. The following example illustrates the idea. Consider the space of functions defined on [0,1] whose absolute value and first derivative are bounded by C. In a certain sense, for almost every such function, approximating it on its domain using an Intel x86 computer, with an error not greater than ε, takes at least k(C, ε) seconds. We show how to find k(C, ε).
In this paper, we present general methods that can be used to explore the information processing potential of a medium composed of oscillating (self-exciting) droplets. Networks of Belousov–Zhabotinsky (BZ) droplets seem especially interesting as chemical reaction-diffusion computers because their time evolution is qualitatively similar to neural network activity. Moreover, such networks can be self-generated in microfluidic reactors. However, it is hard to track and understand the function performed by a medium composed of droplets because of its complex dynamics. As in recurrent neural networks, the flow of excitations in a network of droplets is not limited to a single direction and spreads throughout the whole medium. In this work, we analyze the operation performed by droplet systems by monitoring the information flow. This is achieved by measuring the mutual information and time-delayed mutual information of the discretized time evolution of individual droplets. To link the model with reality, we use experimental results to estimate the parameters of droplet interactions. As an example, we investigate an evolutionarily generated droplet structure that operates as a NOR gate. The presented methods can be applied to networks composed of hundreds of droplets or more.
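The core measurements named above are easy to state concretely. A sketch of mutual information and time-delayed mutual information for two discretized droplet traces, with synthetic stand-in data (the binarized series here are illustrative, not experimental):

```python
# Mutual information I(X;Y) and time-delayed mutual information I(X(t); Y(t+tau))
# between two discretized time series, estimated from joint histograms.
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in bits from the joint histogram of two discrete series."""
    joint = np.histogram2d(x, y, bins=(np.unique(x).size, np.unique(y).size))[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def delayed_mi(x, y, tau):
    """Positive tau probes information flowing from x to y with delay tau."""
    return mutual_information(x[:-tau], y[tau:]) if tau > 0 else mutual_information(x, y)

rng = np.random.default_rng(0)
x = (rng.random(5000) < 0.3).astype(int)  # stand-in for a binarized droplet trace
y = np.roll(x, 3)                         # a second droplet echoing x three steps later
print(delayed_mi(x, y, 3))                # peaks at the true coupling delay
```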
Whether premotor/motor neurons encode information in terms of spiking frequency or by their relative time of firing, which may display synchronization, is still undetermined. To address this issue, we used an information theory approach to analyze neuronal responses recorded in the premotor (area F5) and primary motor (area F1) cortices of macaque monkeys under four different conditions of visual feedback during hand grasping. To evaluate the sensitivity of spike-timing correlations between single neurons, we investigated stimulus-dependent synchronization in our population of neuron pairs. We first measured, for each condition, the degree of correlation of trial-to-trial fluctuations in response strength between neighboring neurons, and then estimated stimulus-dependent synchronization by means of an information-theoretic approach. We compared the information conveyed by pairs of simultaneously recorded neurons with the sum of the information provided by the respective individual cells. Information transmission across pairs of cells in the primary motor cortex appears largely independent, whereas information transmission across pairs of premotor neurons sums superlinearly. The brain could take advantage of both the accuracy provided by the independence of F1 and the synergy allowed by superlinear information population coding in F5, thus distinguishing the generalizing role of F5.
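Schematically, the pair-versus-sum comparison reduces to a synergy measure: I(pair; condition) − I(cell 1; condition) − I(cell 2; condition), positive for superlinear (F5-like) coding and near zero for independent (F1-like) coding. A toy sketch with simulated spike counts (the plug-in estimator below omits the sampling-bias corrections such analyses normally require):

```python
# Synergy of a neuron pair: information carried jointly minus the sum of the
# single-cell informations. Spike counts and tuning here are simulated.
import numpy as np

def info(responses, stimuli):
    """I(response; stimulus) in bits from a plug-in joint histogram."""
    r_ids = np.unique(responses, axis=0, return_inverse=True)[1]
    joint = np.zeros((r_ids.max() + 1, stimuli.max() + 1))
    np.add.at(joint, (r_ids, stimuli), 1)
    p = joint / joint.sum()
    pr, ps = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pr @ ps)[nz])).sum())

rng = np.random.default_rng(1)
s = rng.integers(0, 4, 2000)            # four visual-feedback conditions
n1 = rng.poisson(2 + s)                 # cell 1: rate depends on condition
n2 = rng.poisson(2 + s) + (n1 > 3)      # cell 2: also coupled to cell 1
pair = np.stack([n1, n2], axis=1)
synergy = info(pair, s) - info(n1[:, None], s) - info(n2[:, None], s)
print(synergy)                          # > 0: superlinear; ~ 0: independent coding
```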
The calculation of the Shannon entropy of the Hartree–Fock electron density is reported for the ground states of the K–Xe atoms, which span the fourth and fifth rows of the periodic table. Within the local plasma approximation, the calculated Shannon entropy is used directly to estimate mean excitation energies. To our knowledge, the mean excitation energies of the K–Xe atoms are presented here for the first time.
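For reference, the two ingredients the abstract links can be written compactly. This is the textbook form of the Shannon entropy of the density and of the Lindhard–Scharff local plasma approximation; the paper's precise normalization may differ:

```latex
S = -\int \rho(\mathbf{r}) \ln \rho(\mathbf{r}) \, d^3r ,
\qquad
\ln I_0 = \frac{1}{Z} \int \rho(\mathbf{r}) \ln\!\bigl(\hbar\,\omega_p(\mathbf{r})\bigr) \, d^3r ,
\qquad
\omega_p(\mathbf{r}) = \sqrt{\frac{4\pi e^2 \rho(\mathbf{r})}{m_e}} ,
```

where ρ is the electron density, Z the atomic number, I₀ the mean excitation energy, and ω_p the local plasma frequency.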
A theoretical measure of bond ionicity based on different mathematical concepts is presented in this research. Considering the distribution of valence electrons on each atom in a bond, we assume that the chemical properties of the atom can be expressed in terms of a probability. Using this probability, different probabilistic models, namely classical, fuzzy, and information-theoretic models, are employed to introduce new descriptors of bond ionicity. The ionicities were calculated for 12 heterodiatomic molecules, and the bonds were classified as covalent or ionic. We found that the proposed ionicity descriptors correlate well with the partial ionic character of the bonds.
We explore the properties of a feed-forward neural network whose couplings are chosen in such a way as to maximize the input-output mutual information, in the case in which the input-output channel is affected by noise.
We approach the theoretical problem of compressing a signal dominated by Gaussian noise. We present expressions for the compression ratio that can be reached, in light of Shannon's noiseless coding theorem, for a linearly quantized stochastic Gaussian signal (noise). The compression ratio decreases logarithmically with the amplitude of the frequency spectrum P(f) of the noise. Entropy values and compression rates are shown to depend on the shape of this power spectrum under different normalizations. The cases of white noise (w.n.), 1/fⁿ power-law noise (including 1/f noise), (w.n. + 1/f) noise, and piecewise (w.n. + 1/f | w.n. + 1/f²) noise are discussed, and quantitative behaviors and useful approximations are provided.
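The headline relation can be sketched in a few lines. Under Shannon's noiseless coding theorem, a b-bit linearly quantized Gaussian of rms amplitude σ has entropy per sample H ≈ log₂(σ√(2πe)/Δ) for a fine quantization step Δ, giving a compression ratio R = b/H that falls off logarithmically with the noise amplitude. The 16-bit full scale below is an illustrative choice:

```python
# Compression ratio of a b-bit uniformly quantized Gaussian signal, from the
# differential-entropy approximation (valid when Delta << sigma).
import math

def entropy_per_sample(sigma, delta):
    """H of the quantized Gaussian in bits per sample."""
    return math.log2(sigma * math.sqrt(2 * math.pi * math.e) / delta)

def compression_ratio(sigma, bits=16, full_scale=1.0):
    delta = full_scale / 2**bits          # quantization step of a b-bit ADC
    return bits / entropy_per_sample(sigma, delta)

for sigma in (1e-3, 1e-2, 1e-1):
    print(f"sigma={sigma:g}  R={compression_ratio(sigma):.2f}")  # R shrinks log-slowly
```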
In an attempt to predict a peak in the US economy using a classical statistical decision methodology and a Bayesian methodology, together with the 1996 revised composite leading economic indicators (CLI), we found that the Bayesian models generally outperformed the classical statistical ones and that, among the Bayesian models, the two using two and three consecutive CLI growth rates were superior in reliability and accuracy. These two models, however, failed to correctly predict the 2001 recession.
In investigating the reasons behind their failure, we learned that: (1) if concurrent data for the economic structure of 1983–1999 are used for the prediction, the models would also have predicted the 2001 recession correctly, although their overall reliability is not as strong as before; (2) given the overwhelming weight of monetary policy tools in the CLI-1996 design and the combination of economic and political events in the year 2000, the less than expected effectiveness of monetary policy since 2001 contributed to this failure; and (3) a possible structural change in the US economy since 2000 also contributed to this prediction failure.
We introduce a concept of distance between physical theories described by an action. The definition of the distance is based on the relative entropy. We briefly discuss potential physical applications.
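The abstract leaves the definition implicit; the natural ingredients are the relative entropy (Kullback–Leibler divergence) between the normalized Euclidean path-integral weights of the two actions, symmetrized to obtain a distance. A hedged sketch of those ingredients, in our notation rather than necessarily the paper's:

```latex
p_i[\phi] = \frac{e^{-S_i[\phi]}}{Z_i}, \qquad Z_i = \int \mathcal{D}\phi \; e^{-S_i[\phi]},
\qquad
D(S_1 \,\|\, S_2) = \int \mathcal{D}\phi \; p_1[\phi] \, \ln \frac{p_1[\phi]}{p_2[\phi]} ,
```

with, for example, the symmetrized combination D(S₁‖S₂) + D(S₂‖S₁) restoring the symmetry a distance requires.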
Nonlinear corrections are proposed to the discrete equispaced area spectrum of quantum black holes obtained previously in some quantisation schemes. It is speculated that such a modified spectrum might be related to the fine structure found using the loop quantum gravity approach.
A central physical question is the extent to which infrared (IR) observations are sufficient to reconstruct a candidate ultraviolet (UV) completion. We recast this question as a problem of communication, with messages encoded in field configurations of the UV being transmitted to the IR degrees of freedom via a noisy channel specified by renormalization group (RG) flow, with noise generated by coarse graining/decimation. We present an explicit formulation of these considerations in terms of lattice field theory, where we show that the “misanthropic entropy” — the mutual information obtained from decimating neighbors — encodes the extent to which information is lost in marginalizing over/tracing out UV degrees of freedom. Our considerations apply both to statistical field theories and to density matrix renormalization of quantum systems, where in the quantum case, the statistical field theory analysis amounts to a leading-order approximation. As a concrete example, we focus on the 2D Ising model, where we show that the misanthropic entropy detects the onset of the phase transition at the critical point.
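A toy version of the Ising check is easy to set up: estimate the mutual information between a spin and a nearest neighbor (a crude stand-in for the decimation-based misanthropic entropy) from Metropolis samples and scan the temperature across Tc = 2/ln(1 + √2) ≈ 2.269. Lattice size, sweep counts, and the nearest-neighbor proxy are all simplifications of the paper's construction:

```python
# Nearest-neighbor mutual information in the 2D Ising model, via Metropolis
# sampling; MI changes sharply as T crosses the critical point Tc ~ 2.269.
import numpy as np

def sweep(s, beta, rng):
    L = s.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, 2)
        dE = 2 * s[i, j] * (s[(i+1)%L, j] + s[(i-1)%L, j] + s[i, (j+1)%L] + s[i, (j-1)%L])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

def neighbor_mi(T, L=16, sweeps=2000, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    for _ in range(200):                      # burn-in
        sweep(s, 1/T, rng)
    joint = np.zeros((2, 2))
    for _ in range(sweeps):                   # accumulate spin/neighbor joint counts
        sweep(s, 1/T, rng)
        a = (s.ravel() > 0).astype(int)
        b = (np.roll(s, 1, axis=0).ravel() > 0).astype(int)
        np.add.at(joint, (a, b), 1)
    p = joint / joint.sum()
    pa, pb = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pa @ pb)[nz])).sum())

for T in (1.5, 2.269, 3.5):
    print(T, neighbor_mi(T))
```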
We highlight that, despite the variety of approaches to quantum gravity, there are universal, approach-independent features of quantum gravity. The geometry of space–time becomes an emergent structure, arising from purely quantum gravitational degrees of freedom. We argue that these quantum gravitational degrees of freedom can be best understood using quantum information theory. Various approaches to quantum gravity suggest that quantum gravity could be a third quantized theory, and such a theory would be defined not in space–time but in an abstract configuration space of fields. This supports the view that space–time geometry is not fundamental, effectively ending the space–time description of nature.
We evaluate the mutual information between the input and the output of a two-layer network in the case of a noisy, nonlinear analogue channel. When the nonlinearity is small with respect to the variability of the noise, we derive an exact expression for the contribution to the mutual information given by the nonlinear term at first order in perturbation theory. Finally, we show how the calculation can be simplified by means of a diagrammatic expansion. Our results suggest that perturbation theories applied to neural systems might give insight into the contribution of nonlinearities to information transmission and, more generally, to neuronal dynamics.
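Schematically (in our notation, which need not match the paper's), the channel and the expansion look as follows: the zeroth order is the classical linear Gaussian result, and the first-order term is the nonlinear contribution computed in the abstract:

```latex
x_i = \sum_j J_{ij}\,\xi_j + \epsilon\, g\!\Bigl(\sum_j J_{ij}\,\xi_j\Bigr) + \eta_i ,
\qquad
I = \underbrace{\tfrac{1}{2}\ln\det\!\Bigl(\mathbb{1} + \tfrac{1}{\sigma^2}\, J C J^{\mathsf T}\Bigr)}_{I_0 \text{ (linear channel)}} + \epsilon\, I_1 + O(\epsilon^2),
```

where C is the input covariance, η is Gaussian output noise of variance σ², and ε sets the strength of the weak nonlinearity g.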
In a recent work we introduced a novel approach to studying the effect of weak nonlinearity in the transfer function on the information transmitted by an analogue channel, by means of a perturbative diagrammatic expansion. Here we extend the analysis to all orders in perturbation theory, which allows us to relax any constraint on the magnitude of the expansion parameter and to establish rules for easily calculating the contribution at any order. As an example, we explicitly compute the information up to second order in the nonlinearity, in the presence of random Gaussian connectivity and in the limit where the output noise is not small. We analyze the first- and second-order contributions to the mutual information as a function of the nonlinearity and of the number of output units. We believe that an extensive application of our method, via the analysis of the different contributions at distinct orders, might fill a gap between the well-known analytical results for linear channels and the nontrivial treatments required for highly nonlinear channels.
This paper investigates the design of information-theoretic fitness functions for embedded evolutionary robotics (ER). Such a fitness function relies on the assumption that interesting behaviors produce high sensorimotor (individual) diversity. Simple entropy as a diversity metric considers only the differences among individuals and ignores their spatial relationships. The sensorimotor stream can instead be analyzed to construct a simple directed graph with unique entry and exit nodes. This paper proposes a hierarchic entropy as a diversity metric that combines the simple entropy with a graph entropy capturing these spatial relationships. Maximizing the hierarchic entropy, achieved by an on-board evolutionary algorithm, thus defines a self-driven fitness function that drives the controller to visit diverse sensorimotor states. The proposed algorithm achieves better performance than the published results of other entropy-based methods relying on simple entropy alone, without requiring additional computational resources.
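A hedged sketch of such a fitness: combine the simple entropy of visited sensorimotor states with a graph entropy over the directed graph of state transitions. The combination rule and weighting below are illustrative choices; the paper's exact hierarchic formula may differ:

```python
# Illustrative hierarchic-entropy fitness: simple entropy of visited states
# plus an entropy over the state-transition digraph ("spatial relationship").
import numpy as np
from collections import Counter

def simple_entropy(states):
    """Entropy (bits) of the empirical distribution of visited states."""
    p = np.array(list(Counter(states).values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def graph_entropy(states):
    """Entropy (bits) over edges of the state-transition digraph."""
    edges = Counter(zip(states, states[1:]))
    p = np.array(list(edges.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def hierarchic_entropy_fitness(states, alpha=0.5):
    # alpha is an assumed mixing weight, not taken from the paper
    return alpha * simple_entropy(states) + (1 - alpha) * graph_entropy(states)

# fitness of one controller evaluation: a discretized sensorimotor trace
trace = [0, 1, 2, 1, 3, 0, 2, 2, 1, 0]
print(hierarchic_entropy_fitness(trace))
```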
In many real-world image-based pattern recognition tasks, the extraction and use of task-relevant features is the most crucial part of the diagnosis. In the standard approach, the features are either given by common sense, like edges or corners in image analysis, or determined directly by expertise. They mostly remain task-specific, although humans learn throughout their lifetime and reuse features across tasks, since the same features often help in recognition. This suggests that a universal feature set exists, but it has not yet been systematically identified. In our contribution, we try to find such a universal image feature set that is valuable for most image-related tasks. We trained a shallow neural network to recognize natural and non-natural object images against different backgrounds, including pure textures and handwritten digits, using a Shannon information-based algorithm and learning constraints. The goal was to extract the features that give the most valuable information for classifying the visual objects, handwritten digits, and texture datasets with a one-layer network, and then to classify them with a second layer. This provides a good starting point and good performance for other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract common features that are valid in all three kinds of tasks. They give classification results about as good as, and sometimes better than, those reported in the corresponding literature for specialized systems.
We report on progress in understanding how to build machines which adaptively acquire the language for their task. The generic mechanism in our research has been an information-theoretic connectionist network embedded in a feedback control system. In this paper, we investigate the capability of such a network to learn associations between messages and meaningful responses to them as a task increases in size and complexity. Specifically, we consider how one might reflect task structure in a network architecture in order to provide improved generalization capability in language acquisition. We propose a method for constructing networks from component subnetworks, namely a product network, which provides improved generalization by factoring the associations between words and actions through an intermediate layer of semantic primitives. A two-dimensional product network was evaluated in a 1000-action data retrieval system, the object of which is to answer questions about 20 attributes of the 50 states of the USA. The system was tested by 13 subjects over a two-week period, during which over 1000 natural language dialogues were recorded. The experiment was conducted using typed input with unconstrained vocabulary and syntax. During the course of performing its task, the system acquired over 500 words and retained 92% of what it learned. We provide a description of the system and details on the experimental results.
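The factorization behind the product network is concrete in this task: the 1000 actions are the product of 20 attributes and 50 states, so a two-dimensional product network can learn p(attribute | words) and p(state | words) with separate subnetworks and multiply them, which is why each subnetwork sees far more examples per output class. A sketch under that reading (sizes and names are illustrative; the original system's semantic primitives and learning rule are not reproduced):

```python
# Two-dimensional product network: factor a 1000-way action decision into
# a 20-way attribute decision times a 50-way state decision.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

V, A, S = 5000, 20, 50        # vocabulary, attributes, states (illustrative sizes)
Wa = np.zeros((A, V))         # attribute subnetwork weights (to be learned from dialogue)
Ws = np.zeros((S, V))         # state subnetwork weights

def action_distribution(bow):
    """bow: bag-of-words vector of the question. Returns a (20, 50) matrix whose
    (a, s) entry is the probability of the action 'answer attribute a of state s'."""
    pa, ps = softmax(Wa @ bow), softmax(Ws @ bow)
    return np.outer(pa, ps)   # the product structure that drives generalization
```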
A recently suggested method for evaluating computer performance is applied to real processors. The method is based on the notion of computer capacity, which is determined only by the computer architecture and can be computed theoretically at the design stage. The computer capacity of different Intel and AMD processors is calculated, and the results are compared with those of widely recognized benchmarks. The results show that computer capacity is a reasonable estimate of computer performance.
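In the same spirit, the simplest version of such a capacity treats the processor as a noiseless channel whose symbols are instructions with unequal durations: the capacity is the C solving Σᵢ 2^(−C·τᵢ) = 1, Shannon's classical formula for symbols of unequal duration. Real processors require the full transition-graph treatment; the instruction mix below is invented for illustration:

```python
# Capacity of a "processor as noiseless channel" toy model: C (bits per clock
# cycle) is the root of sum_i 2**(-C * tau_i) = 1, where tau_i is the duration
# in cycles of instruction i. The instruction timings here are hypothetical.

def capacity(durations):
    """Bisection for the C in [0, 64] solving sum_i 2**(-C * tau_i) = 1."""
    lo, hi = 0.0, 64.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if sum(2.0 ** (-mid * t) for t in durations) > 1.0:
            lo = mid          # sum too large: capacity must be bigger
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. 40 one-cycle, 20 three-cycle, and 4 ten-cycle instructions (hypothetical)
print(capacity([1] * 40 + [3] * 20 + [10] * 4), "bits per cycle")
```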
In this paper, we extend an information-theoretic approach to computer performance evaluation to supercomputers. The approach is based on the notion of computer capacity, which can be estimated relying solely on a description of the computer architecture. We describe the method of calculating computer capacity for supercomputers, including the influence of the architecture of the communication network. The suggested approach is applied to estimate the performance of three of the top 10 supercomputers (according to the TOP500 June 2016 list) that are based on Haswell processors. For greater objectivity, we compared the results with values for another supercomputer based on Ivy Bridge processors (a microarchitecture that differs from Haswell). The obtained results are compared with the TOP500 LINPACK benchmark values and the theoretical peak, and we draw conclusions about the applicability of the presented theoretical (nonexperimental) approach to the performance evaluation of real supercomputers. In particular, estimates of computer capacity can be used at the design stage of supercomputer development.