Dr Gregory Chaitin, one of the world's leading mathematicians, is best known for his discovery of the remarkable Ω number, a concrete example of irreducible complexity in pure mathematics, which shows that mathematics is infinitely complex. In this volume, Chaitin discusses the evolution of these ideas, tracing them back to Leibniz and Borel as well as Gödel and Turing.
This book contains 23 non-technical papers by Chaitin, his favorite tutorial and survey pieces, including his three Scientific American articles. These essays summarize a lifetime effort to use the notion of program-size complexity, or algorithmic information content, to shed further light on the fundamental work of Gödel and Turing on the limits of mathematical methods, both in logic and in computation. Chaitin argues here that his information-theoretic approach to metamathematics suggests a quasi-empirical view of mathematics that emphasizes the similarities rather than the differences between mathematics and physics. He also develops his own brand of digital philosophy, which views the entire universe as a giant computation, and speculates that perhaps everything is discrete software, everything is 0's and 1's.
Chaitin's fundamental mathematical work will be of interest to philosophers concerned with the limits of knowledge and to physicists interested in the nature of complexity.
Sample Chapter(s)
Chapter 1: On the Difficulty of Computations (157 KB)
https://doi.org/10.1142/9789812708977_fmatter
https://doi.org/10.1142/9789812708977_0001
How should this book be read? Well, the articles in it are independent, self-contained pieces, and I prefer to let readers wander through, having their own thoughts, exploring on their own, rather than offer a guided tour. In other words, I will let the individual essays stand on their own, unintroduced. And there is no need to read this book from cover to cover. Just read whatever strikes your fancy, enjoy whatever catches your eye…
https://doi.org/10.1142/9789812708977_0002
Two practical considerations concerning the use of computing machinery are the amount of information that must be given to the machine for it to perform a given task and the time it takes the machine to perform it. The size of programs and their running time are studied for mathematical models of computing machines. The study of the amount of information (i.e., number of bits) in a computer program needed for it to put out a given finite binary sequence leads to a definition of a random sequence; the random sequences of a given length are those that require the longest programs. The study of the running time of programs for computing infinite sets of natural numbers leads to an arithmetic of computers, which is a distributive lattice. [This paper was presented at the Pan-American Symposium of Applied Mathematics, Buenos Aires, Argentina, August 1968.]
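A brief aside on why maximally complex ("random") strings of every length must exist: the counting argument below is standard in algorithmic information theory and is supplied here as an editor's illustration, not quoted from the abstract; K(x) denotes the program-size complexity of the string x.

```latex
% Editor's illustration (standard counting argument, not from the abstract):
% there are fewer than 2^n programs shorter than n bits, but exactly 2^n
% binary strings of length n, so some n-bit string x needs a program of
% at least n bits, i.e. K(x) >= n -- it is "random" in the sense above.
\#\{\,p : |p| < n\,\} \;\le\; \sum_{k=0}^{n-1} 2^{k} \;=\; 2^{n}-1 \;<\; 2^{n} \;=\; \#\{0,1\}^{n}
\quad\Longrightarrow\quad
\exists\, x \in \{0,1\}^{n} :\; K(x) \ge n .
```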
https://doi.org/10.1142/9789812708977_0003
This field's fundamental concept is the complexity of a binary string, that is, a string of bits, of zeros and ones. The complexity of a binary string is the minimum quantity of information needed to define the string. For example, the string of length n consisting entirely of ones is of complexity approximately log₂ n, because only log₂ n bits of information are required to specify n in binary notation…
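To make the log₂ n example concrete, here is a minimal Python sketch (an editor's illustration, not code from the book): a fixed decoder reconstructs the all-ones string of length n from the binary numeral of n, so the only part of the description that grows with n is that numeral, roughly log₂ n bits.

```python
# Editor's illustration (not from the book): the all-ones string of length n
# can be described by a fixed decoder plus the binary numeral of n,
# i.e. about log2(n) bits of "real" information.
import math

def describe_all_ones(n: int) -> str:
    """Describe 1^n by the binary numeral of n (~log2 n bits)."""
    return format(n, "b")

def decode_all_ones(description: str) -> str:
    """Fixed decoder: rebuild 1^n from the binary numeral of n."""
    n = int(description, 2)
    return "1" * n

if __name__ == "__main__":
    n = 1_000_000
    desc = describe_all_ones(n)
    assert decode_all_ones(desc) == "1" * n
    print(f"string length:      {n} bits")
    print(f"description length: {len(desc)} bits (log2 n ~ {math.log2(n):.1f})")
```

A string whose bits are produced by tossing a coin admits no such shortcut; its shortest description is essentially the string itself, which is exactly the sense in which it is called random here.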
https://doi.org/10.1142/9789812708977_0004
https://doi.org/10.1142/9789812708977_0005
https://doi.org/10.1142/9789812708977_0006
What could be more certain than the fact that 2 plus 2 equals 4? Since the time of the ancient Greeks mathematicians have believed there is little—if anything—as unequivocal as a proved theorem. In fact, mathematical statements that can be proved true have often been regarded as a more solid foundation for a system of thought than any maxim about morals or even physical objects. The 17th-century German mathematician and philosopher Gottfried Wilhelm Leibniz even envisioned a “calculus” of reasoning such that all disputes could one day be settled with the words “Gentlemen, let us compute!” By the beginning of this century symbolic logic had progressed to such an extent that the German mathematician David Hilbert declared that all mathematical questions are in principle decidable, and he confidently set out to codify once and for all the methods of mathematical reasoning…
https://doi.org/10.1142/9789812708977_0007
https://doi.org/10.1142/9789812708977_0008
https://doi.org/10.1142/9789812708977_0009
We're in a state of euphoria now in the computer business because things are going so well: the web, e-commerce. It's all paying for our salaries, and it's a nice moment to be around, when things are going so well. But I'd like to make the outrageous claim, that has a little bit of truth, that actually all of this that's happening now with the computer taking over the world, the digitalization of our society, of information in human society, you could say in a way is the result of a philosophical question that was raised by David Hilbert at the beginning of the century…
https://doi.org/10.1142/9789812708977_0010
https://doi.org/10.1142/9789812708977_0011
When I was a small child I was fascinated by magic stories, because they postulate a hidden reality behind the world of everyday appearances. Later I switched to relativity, quantum mechanics, astronomy and cosmology, which also seemed quite magical and transcend everyday life. And I learned that physics says that the ultimate nature of reality is mathematical, that math is more real than the world of everyday appearances. But then I was surprised to learn of an amazing, mysterious piece of work by Kurt Gödel that pulled the rug out from under mathematical reality! How could this be?! How could Gödel show that math has limitations? How could Gödel use mathematical reasoning to show that mathematical reasoning is in trouble?!…
https://doi.org/10.1142/9789812708977_0012
https://doi.org/10.1142/9789812708977_0013
https://doi.org/10.1142/9789812708977_0014
https://doi.org/10.1142/9789812708977_0015
Turing's remarkable 1936 paper, “On computable numbers, with an application to the Entscheidungsproblem”, marks a dramatic turning point in modern mathematics. On the one hand, the computer enters center stage as a major mathematical concept. On the other hand, Turing establishes a link between mathematics and physics by talking about what a machine can accomplish. It is amazing how far these ideas have come in a comparatively short amount of time; a small stream has turned into a major river.
https://doi.org/10.1142/9789812708977_0016
I am a mathematician and my field is algorithmic information theory (AIT). AIT deals with program-size complexity or algorithmic information content, which I regard more or less as the complexity of ideas. I think that this has much greater philosophical significance than the much more popular complexity concepts based on time or other measures of computational effort or work…
https://doi.org/10.1142/9789812708977_0017
https://doi.org/10.1142/9789812708977_0018
https://doi.org/10.1142/9789812708977_0019
https://doi.org/10.1142/9789812708977_0020
In 1931 Kurt Gödel astonished the mathematical world by showing that no finite set of axioms can suffice to capture all of mathematical truth. He did this by constructing an assertion G_F about the whole numbers that manages to assert that it itself is unprovable (from a given finite set F of axioms using formal logic)…
https://doi.org/10.1142/9789812708977_0021
https://doi.org/10.1142/9789812708977_0022
https://doi.org/10.1142/9789812708977_0023
https://doi.org/10.1142/9789812708977_0024
The number Ω is the probability that a self-contained computer program chosen at random, a program whose bits are picked one by one by tossing a coin, will eventually stop, rather than continue calculating forever:
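The colon presumably introduced a displayed formula in the original article. For reference, the standard way to write the halting probability (supplied here by the editor, not quoted from the excerpt) is as a sum over all halting self-delimiting programs p, each weighted by 2 to the minus |p|, where |p| is the length of p in bits:

```latex
% Standard definition of Chaitin's halting probability Omega
% (supplied for reference; p ranges over the self-delimiting programs
%  of a fixed universal machine that eventually halt, |p| = length in bits).
\Omega \;=\; \sum_{p\ \text{halts}} 2^{-|p|},
\qquad 0 < \Omega < 1 .
```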
https://doi.org/10.1142/9789812708977_bmatter