Monotonicity under coarse-graining is a crucial property of the quantum relative entropy. The aim of this paper is to investigate the condition for equality in the monotonicity theorem and in its consequences, such as the strong subadditivity of von Neumann entropy, the Golden–Thompson trace inequality, and the monotonicity of the Holevo quantity. The relation to quantum Markov states is briefly indicated.
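As a concrete numerical illustration of the monotonicity property (a sketch of our own, not from the paper; numpy only, with helper names chosen here), the snippet below checks that the Umegaki relative entropy S(ρ‖σ) = Tr ρ(log ρ − log σ) does not increase under the partial trace, a special case of coarse-graining:

```python
import numpy as np

def rand_density(d, rng):
    # Random full-rank density matrix via A A† normalization
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def quantum_relative_entropy(rho, sigma):
    # S(rho || sigma) = Tr rho (log rho - log sigma), natural log
    def logm_h(m):
        w, v = np.linalg.eigh(m)
        return (v * np.log(w)) @ v.conj().T
    return np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real

def partial_trace_B(m, dA, dB):
    # Trace out the second tensor factor of C^{dA} (x) C^{dB}
    return m.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)

rng = np.random.default_rng(0)
dA, dB = 2, 3
rho, sigma = rand_density(dA * dB, rng), rand_density(dA * dB, rng)
full = quantum_relative_entropy(rho, sigma)
reduced = quantum_relative_entropy(partial_trace_B(rho, dA, dB),
                                   partial_trace_B(sigma, dA, dB))
assert reduced <= full + 1e-10   # monotonicity under the partial trace
```

The partial trace here plays the role of the coarse-graining map; the inequality holds for any completely positive trace-preserving map, of which this is the simplest instance to test numerically.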
The present paper studies continuity of generalized entropy functions and relative entropies defined using the notion of a deformed logarithmic function. In particular, two distinct definitions of relative entropy are discussed. As an application, all considered entropies are shown to satisfy Lesche's stability condition. The entropies of Tsallis' non-extensive thermostatistics are taken as examples.
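For concreteness, a minimal sketch (our own, assuming the standard q-deformed logarithm ln_q(x) = (x^{1−q} − 1)/(1 − q)) showing how the Tsallis entropy arises from the deformed logarithm and reduces to the Shannon entropy as q → 1:

```python
import numpy as np

def ln_q(x, q):
    # Deformed logarithm; recovers ln(x) in the limit q -> 1
    if abs(q - 1.0) < 1e-12:
        return np.log(x)
    return (x**(1.0 - q) - 1.0) / (1.0 - q)

def tsallis_entropy(p, q):
    # S_q(p) = sum_i p_i ln_q(1/p_i); equals the Shannon entropy at q = 1
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * ln_q(1.0 / p, q)))

p = np.array([0.5, 0.25, 0.25])
shannon = float(-np.sum(p * np.log(p)))
assert abs(tsallis_entropy(p, 1.0) - shannon) < 1e-12
assert abs(tsallis_entropy(p, 1.0 + 1e-8) - shannon) < 1e-4  # continuity in q
```

The continuity of S_q in both the distribution p and the deformation parameter q is exactly the kind of behavior the stability analysis in the paper makes precise.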
We consider a generalization of relative entropy derived from the Wigner–Yanase–Dyson entropy and give a simple, self-contained proof that it is convex. Moreover, special cases yield the joint convexity of relative entropy, and for Tr K* A^p K B^{1−p} Lieb's joint concavity in (A, B) for 0 < p < 1 and Ando's joint convexity for 1 < p ≤ 2. This approach allows us to obtain conditions for equality in these cases, as well as conditions for equality in a number of inequalities which follow from them. These include monotonicity under partial traces and some Minkowski-type matrix inequalities proved by Carlen and Lieb. In all cases, the equality conditions are independent of p; for extensions to three spaces they are identical to the conditions for equality in the strong subadditivity of relative entropy.
Quantum f-divergences are a quantum generalization of the classical notion of f-divergences, and are a special case of Petz' quasi-entropies. Many well-known distinguishability measures of quantum states are given by, or derived from, f-divergences. Special examples include the quantum relative entropy, the Rényi relative entropies, and the Chernoff and Hoeffding measures. Here we show that the quantum f-divergences are monotonic under substochastic maps whenever the defining function is operator convex. This extends and unifies all previously known monotonicity results for this class of distinguishability measures. We also analyze the case where the monotonicity inequality holds with equality, and extend Petz' reversibility theorem to a large class of f-divergences and other distinguishability measures. We apply our findings to the problem of quantum error correction, and show that if a stochastic map preserves the pairwise distinguishability on a set of states, as measured by a suitable f-divergence, then its action can be reversed on that set by another stochastic map that can be constructed from the original one in a canonical way. We also provide an integral representation for operator convex functions on the positive half-line, which is the main ingredient in extending previously known results on the monotonicity inequality and the case of equality. We also consider some special cases where the convexity of f is sufficient for the monotonicity, and obtain the inverse Hölder inequality for operators as an application. The presentation is completely self-contained and requires only standard knowledge of matrix analysis.
We consider a quantum quasi-relative entropy S_K^f for an operator K and an operator convex function f. We show how to obtain error bounds for the monotonicity and joint convexity inequalities from recent results for the f-divergences (i.e. K = I). We also provide an error term for a class of operator inequalities that generalizes the operator strong subadditivity inequality. We apply these results to obtain explicit bounds for the logarithmic function, which leads to the quantum relative entropy, and for the power function, which gives, in particular, the Wigner–Yanase–Dyson skew information. In particular, we provide remainder terms for the strong subadditivity inequality, the operator strong subadditivity inequality, WYD-type inequalities, and the Cauchy–Schwarz inequality.
This article proposes a new two-parameter generalized entropy, which reduces to the Tsallis and Shannon entropies for specific values of its parameters. We develop a number of information-theoretic properties of this generalized entropy and its associated divergence, for instance subadditivity, strong subadditivity, joint convexity, and information monotonicity. The article presents a detailed investigation of the information-theoretic and information-geometric characteristics of the new generalized entropy and compares them with the properties of the Tsallis and Shannon entropies.
The main result in this paper shows that the quantum f-divergence of two states is equal to the classical f-divergence of the corresponding Nussbaum–Szkoła distributions. This provides a general framework for studying certain properties of quantum entropic quantities using the corresponding classical entities. The usefulness of the main result is illustrated by obtaining several quantum f-divergence inequalities from their classical counterparts. All results presented here are valid in both finite and infinite dimensions and hence can be applied to continuous-variable systems as well. A comprehensive review of the instances in the literature where Nussbaum–Szkoła distributions are used is also provided.
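In finite dimensions, with eigendecompositions ρ = Σ_i λ_i |x_i⟩⟨x_i| and σ = Σ_j μ_j |y_j⟩⟨y_j|, the Nussbaum–Szkoła distributions are P(i,j) = λ_i |⟨x_i|y_j⟩|² and Q(i,j) = μ_j |⟨x_i|y_j⟩|². The sketch below (our own, numpy only) numerically checks the main identity for the case f(x) = x log x, where the classical f-divergence is the KL divergence and the quantum one is the Umegaki relative entropy:

```python
import numpy as np

def nussbaum_szkola(rho, sigma):
    # P(i,j) = lam_i |<x_i|y_j>|^2,  Q(i,j) = mu_j |<x_i|y_j>|^2
    lam, X = np.linalg.eigh(rho)
    mu, Y = np.linalg.eigh(sigma)
    overlap = np.abs(X.conj().T @ Y) ** 2
    return lam[:, None] * overlap, mu[None, :] * overlap

def kl(P, Q):
    # Classical relative entropy, ignoring zero-probability cells
    mask = P > 1e-15
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

def umegaki(rho, sigma):
    # Quantum relative entropy Tr rho (log rho - log sigma)
    def logm_h(m):
        w, v = np.linalg.eigh(m)
        return (v * np.log(w)) @ v.conj().T
    return np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real

rng = np.random.default_rng(1)
def rand_density(d):
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    m = a @ a.conj().T
    return m / np.trace(m).real

rho, sigma = rand_density(3), rand_density(3)
P, Q = nussbaum_szkola(rho, sigma)
assert abs(kl(P, Q) - umegaki(rho, sigma)) < 1e-8
```

Since P and Q are ordinary probability distributions on the index pairs (i, j), any classical f-divergence inequality applied to them transfers immediately to the quantum side, which is the mechanism the paper exploits.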
We revisit the connection between index and relative entropy for an inclusion of finite von Neumann algebras. We observe that the Pimsner–Popa index is connected to the sandwiched p-Rényi relative entropy for all 1/2 ≤ p ≤ ∞, including Umegaki's relative entropy at p = 1. Based on this, we introduce a new notion of relative entropy with respect to a subalgebra which generalizes the subfactor index. This relative entropy has applications in estimating the decoherence time of quantum Markov semigroups.
Starting from a weak solution, we prove the uniqueness and long-time behavior of the weak solution to the non-cutoff spatially homogeneous Boltzmann equation with moderately soft potentials. The key ingredients of the proof are the development of localized techniques in the phase and frequency spaces and entropy methods.
In this paper, we define a new equivalence relation '∼' on the set of all Hadamard inequivalent complex Hadamard matrices of order 4 and show that pairs (u, v) of equivalent matrices u ∼ v produce an infinite family of potentially new subfactors of the hyperfinite type II_1 factor R. All these subfactors are irreducible with Jones index 4n, n ≥ 2, including all possibilities. We also show that this family contains infinitely many infinite-depth subfactors. As an application, we compute the Connes–Størmer relative entropy and the angle between the pair (R_u, R_v ⊂ R) of spin model subfactors arising from the pair (u, v) of equivalent matrices. On the other hand, pairs (u, v) of inequivalent matrices u ≁ v lead to subalgebras of R with infinite Pimsner–Popa index.
The solutions to many problems on complex networks depend on calculating the similarity between nodes. Existing methods either lack rich hierarchical information or have large computational requirements. In order to flexibly analyze node similarity on an optional multi-order scale as needed, we propose a novel similarity method based on the relative entropy of k-order edge capacity. The distribution of edges affects network heterogeneity, information propagation, node centrality, and so on. The entropy of k-order edge capacity represents the edge distribution feature within the k-order neighborhood of a node; it increases as k increases and converges at the eccentricity of the node. The relative entropy of k-order edge capacity can then be used to compare the similarity of edge distributions between nodes within k orders. As the order k increases, the upper bound of the relative entropy may increase, and the relative entropy attains its maximum when a node is compared with an isolated node. By quantifying the difference in the effect of the most similar nodes on network structure and information propagation, we compared the relative entropy of k-order edge capacity with several major similarity methods in experiments, combined with visual analysis. The results show the rationality and effectiveness of the proposed method.
Klimontovich's S-theorem serves as a measure of order relative to a reference state for open systems, thereby providing the correct ordering of entropy values with respect to their distance from the equilibrium state. It can also be considered a generalization of Gibbs' theorem when one of the distributions is associated with the equilibrium state. Here, a nonadditive generalization of the S-theorem is obtained by employing the Tsallis entropy. This generalized form is then illustrated by applying it to the Van der Pol oscillator. Interestingly, the generalization procedure favors the use of the ordinary probability distribution instead of the escort distribution.
We address the possibility of distinguishing quantum channels in terms of relative entropy. In particular, depolarizing channels and bosonic Gaussian channels are considered. To some extent, the relative entropy can be treated as a measure for discriminating quantum channels.
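As a toy illustration of the idea (our own sketch, not the paper's computation): two qubit depolarizing channels can be compared by the relative entropy of their outputs on a fixed pure input, and the divergence grows with the gap between the channel parameters:

```python
import numpy as np

def depolarize(rho, p):
    # Qubit depolarizing channel: rho -> (1 - p) rho + p I/2
    return (1 - p) * rho + p * np.eye(2) / 2

def rel_ent(rho, sigma):
    # Umegaki relative entropy Tr rho (log rho - log sigma)
    def logm_h(m):
        w, v = np.linalg.eigh(m)
        return (v * np.log(w)) @ v.conj().T
    return np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real

psi = np.array([[1, 0], [0, 0]], dtype=complex)   # pure input |0><0|
d_far  = rel_ent(depolarize(psi, 0.1), depolarize(psi, 0.5))
d_near = rel_ent(depolarize(psi, 0.1), depolarize(psi, 0.2))
assert d_far > d_near > 0   # larger parameter gap, larger divergence
```

Optimizing such output divergences over input states is one natural way to turn a state distinguishability measure into a channel discrimination measure.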
A new method for selecting features from protein sequences is proposed in this paper. First, the protein sequences are converted into fixed-dimensional feature vectors. Then, a subset of features is selected using a relative entropy method and used as the input to a Support Vector Machine (SVM). Finally, the trained SVM classifier is used to classify protein sequences into known protein families. Experimental results on proteins obtained from the PIR database and on GPCRs show that the proposed approach is effective and efficient in selecting features from protein sequences.
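A hedged sketch of this kind of relative entropy feature selection (our simplification, not necessarily the paper's exact criterion): score each feature by the symmetrized KL divergence between its class-conditional histograms, then keep the top-scoring features as classifier inputs:

```python
import numpy as np

def kl_feature_scores(X, y, bins=8, eps=1e-9):
    # Score each feature by the symmetrized KL divergence between its
    # histograms in the two classes; eps-smoothing avoids empty bins.
    scores = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=bins)
        p, _ = np.histogram(X[y == 0, j], bins=edges)
        q, _ = np.histogram(X[y == 1, j], bins=edges)
        p = (p + eps) / (p + eps).sum()
        q = (q + eps) / (q + eps).sum()
        scores.append(float(np.sum(p * np.log(p / q)) +
                            np.sum(q * np.log(q / p))))
    return np.array(scores)

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, 3))
X[:, 0] += 3.0 * y            # feature 0 is the discriminative one
scores = kl_feature_scores(X, y)
assert scores.argmax() == 0   # the top-ranked feature would feed the SVM
```

Only the selected features are passed to the SVM, which is the role the relative entropy step plays in the pipeline described above.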
This paper proposes a combination weighting algorithm using relative entropy for document clustering. Combination weighting is widely used in multiple attribute decision making (MADM) problems. However, two difficulties hinder its application to document clustering. First, combination weighting is based on integrating subjective and objective weighting, but documents have so many attributes that subjective weights relying on manual annotation by experts are impracticable. Second, a document may contain hundreds or even thousands of features, making the calculation of combination weights extremely time-consuming. To address these issues, we simplify combination weighting by not distinguishing subjective from objective weights, and we choose the relative entropy method to reduce running time. In our algorithm, we obtain a combination weight set with 14 combination forms. Experiments on real document data show that, both on the AC/PR/RE measures and on the mutual information (MI) measure, the proposed CWRE-sIB algorithm is superior to the original sequential information bottleneck (sIB) algorithm and to a series of weighting-sIB algorithms built by applying a single weighting scheme to the original sIB algorithm.
Arguably, the most difficult task in text classification is choosing an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation on sentiment polarity analysis and authorship attribution problems reveals that the proposed methods approach, and sometimes outperform, previous state-of-the-art techniques, despite being much simpler in the sense that they require no text pre-processing or feature engineering.
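One well-known universal dissimilarity measure of this kind is the normalized compression distance; the sketch below (our own, assuming zlib as the compressor and hypothetical prototype texts) shows a dissimilarity-based representation of the general kind described, with no feature engineering:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance, an information-theoretic
    # dissimilarity measure requiring no feature design.
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def embed(text: str, prototypes):
    # Dissimilarity-based representation: coordinates are distances to
    # a fixed set of prototype texts; any vector-space classifier
    # (nearest neighbor, SVM) can then be applied to these vectors.
    t = text.encode()
    return [ncd(t, p.encode()) for p in prototypes]

protos = ["the cat sat on the mat " * 20, "stock prices fell sharply " * 20]
v = embed("the cat chased the mat " * 20, protos)
assert v[0] < v[1]   # closer to the thematically similar prototype
```

A nearest-neighbor rule on these dissimilarity vectors already yields a working classifier, which is the sense in which the approach dispenses with pre-processing.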
This paper proposes an information hiding algorithm using matrix embedding with Hamming codes and histogram preservation, in order to keep the histogram of the image unchanged before and after hiding information in digital media. First, the algorithm uses matrix embedding with Hamming codes to determine the bits of the original image to rewrite, flips them, and thereby embeds the secret information. Then, following the idea of a break-even point, a balanced pixel-frequency adaptive algorithm is proposed in which each embedded bit of secret information is detected and compensated by the adjacent bit of the histogram data, so that the change in the image histogram before and after information hiding is minimized. At present, histogram distortion values after steganography are generally over 1000 or even higher; by contrast, the method proposed in this paper keeps the histogram distortion below 1000. The feasibility and effectiveness of the algorithm are verified by relative entropy analysis as well. The experimental results also show that the algorithm performs well under steganalysis of images.
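A minimal sketch of matrix embedding with the [7,4] Hamming code (the standard construction; the paper's variant may differ in detail): 3 message bits are embedded into 7 cover bits by flipping at most one bit, with the syndrome directly naming the position to flip:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# representation of j+1, so a nonzero syndrome encodes the flip position.
H = np.array([[(j >> k) & 1 for j in range(1, 8)] for k in range(3)])

def embed(cover, msg):
    # Change at most one of 7 cover bits so that H @ stego == msg (mod 2)
    syn = (H @ cover - msg) % 2
    stego = cover.copy()
    pos = int(syn @ [1, 2, 4])       # 0 means the message already matches
    if pos:
        stego[pos - 1] ^= 1
    return stego

def extract(stego):
    # Receiver recovers the message as the syndrome of the stego bits
    return (H @ stego) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([0, 1, 1])
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
assert np.sum(stego != cover) <= 1   # at most one cover bit flipped
```

Minimizing the number of flipped bits per embedded message is exactly what keeps the pixel statistics, and hence the histogram, close to those of the cover image; the histogram compensation step then corrects the small residual change.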
The early multi-point leakage source signals of urban gas pipelines are weak and easily affected by environmental noise and by interference between adjacent sources, which causes large leakage positioning errors. In this paper, an integrated signal processing method combining VMD, BSS, and relative entropy for multi-point pipeline leakage signals and source positioning is presented. First, VMD and relative entropy are employed to obtain effective IMF mode components and their features during the decomposition of the pipeline leakage signal; relative entropy is used to improve the signal-to-noise ratio and extract the features of the leakage signal. Then, BSS is used to decompose the multi-point mixed leakage signals into independent signal components. Finally, the time difference and wave velocity are obtained by calculating the time-domain distribution of the independent signal components and the main modal guided wave, realizing precise positioning of the pipeline leakage. The results show that the combined method can not only select and extract the leakage signal adaptively but also separate single independent signals from the multi-point mixed signal, which helps to locate multi-point pipeline leakage sources more accurately.
We consider the relative entropy and mean Li–Yorke chaos for G-systems, where G is a countable discrete infinite biorderable amenable group. We prove that positive relative topological entropy implies a multivariant version of mean Li–Yorke chaos on fibers for a G-system.
We consider the cell division equation, which describes the continuous growth of cells and their division into two pieces. Growth conserves the total number of cells, while division conserves the total mass of the system but increases the number of cells. We give general assumptions on the coefficients under which we can prove the existence of a solution (λ, N, ϕ) to the related eigenproblem. We also prove that the solution can be obtained as the sum of an explicit series. Our motivation, besides applications to biology and fragmentation, is that the eigenelements allow one to prove a priori estimates and long-time asymptotics through the General Relative Entropy.