This book provides a comprehensive account of a new method of proving the central limit theorem through the use of apparently unrelated results from information theory. It gives a basic introduction to the concepts of entropy and Fisher information, and collects standard results concerning their behaviour. It brings together results from a number of research papers as well as unpublished material, showing how these techniques give a unified view of limit theorems.
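For orientation, the central quantities can be stated in a few lines. The definitions below follow the standard conventions for relative entropy and Fisher information; they are not quoted from the book, and its notation may differ.

```latex
% Standard definitions (assumed conventions, not the book's own notation).
% Relative entropy (Kullback-Leibler divergence) of a density f from g:
\[
  D(f \,\|\, g) = \int f(x) \log \frac{f(x)}{g(x)} \, dx .
\]
% Fisher information of a differentiable density f:
\[
  I(f) = \int \frac{f'(x)^2}{f(x)} \, dx .
\]
% Entropic form of the central limit theorem: if f_n is the density of
% the normalised sum (X_1 + ... + X_n)/\sqrt{n} of i.i.d. variables with
% zero mean and unit variance, and \phi is the standard normal density,
% then under suitable finiteness conditions
\[
  D(f_n \,\|\, \phi) \longrightarrow 0 \quad (n \to \infty).
\]
```

Convergence in relative entropy is stronger than the classical weak convergence statement of the CLT, which is what makes the information-theoretic approach of interest.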
Contents:
- Introduction to Information Theory
- Convergence in Relative Entropy
- Non-Identical Variables and Random Vectors
- Dependent Random Variables
- Convergence to Stable Laws
- Convergence on Compact Groups
- Convergence to Poisson Distribution
- Free Random Variables
Readership: Graduate students, academics and researchers in probability and statistics.
“This book provides a well-written and motivating introduction to information theory and a detailed description of the current research regarding the connections between central limit theorems and information theory. It is an important reference for many graduate students and researchers in this domain.”
Mathematical Reviews