The three-word definition. The physicist Brian Greene, in his book The Hidden Reality (Knopf, 2011), gives the best definition of information I’ve ever encountered:
So, you start to ponder. What actually is information, and what does it do? Your response is simple and direct. Information answers questions. (252)
The entropy is in the details. It’s wonderful when something is homed in on so elegantly, but what exactly does “information answers questions” really mean and imply? Greene continues:
Years of research by mathematicians, physicists, and computer scientists have made this precise. Their investigations have established that the most useful measure of information content is the number of distinct yes-no questions the information can answer. The coins’ information answers 1,000 such questions: Is the first dollar heads? Yes. Is the second dollar heads? Yes. Is the third dollar heads? No. Is the fourth dollar heads? No. And so on. A datum that can answer a single yes-no question is called a bit–a familiar computer-age term that is short for binary digit, meaning a 0 or 1, which you can think of as a numerical representation of yes or no. The heads-tails arrangement of the 1,000 coins thus contains 1,000 bits’ worth of information. […]
Notice that the value of the entropy and the amount of hidden information are equal. That’s no accident. The number of possible heads-tails rearrangements is the number of possible answers to the 1,000 questions–(yes, yes, no, no, yes, …) or (yes, no, yes, yes, no, …) or (no, yes, no, no, no, …), and so on–namely, 2 [to the power of 1000]. With entropy defined as the logarithm of the number of such rearrangements–1,000 in this case–entropy is the number of yes-no questions any one such sequence answers. (252-253)
So the implication here is startling: when we’re talking about information, we’re also talking about entropy. Information and entropy are one. In other words, the number of logically possible arrangements of 1,000 bits of information (such as the sequence of heads and tails of a coin flipped 1,000 times) is a huge number: 2 to the power of 1,000. That very, very large number represents all the logically possible ways to sequence heads and tails in 1,000 coin tosses, and the logarithm of that number–1,000 bits–is the thousand-bit system’s maximal entropy/hidden information content. You can’t get any more chaotic than to just keep flipping coins until you get every logically possible sequence that the system allows. Thus, if you want to know what maximum chaos is, enter a system where you have no information; where what you’re looking for could be in any logically possible place within the system; where all the information is hidden. Here’s Greene again:
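To make the arithmetic concrete, here is a minimal sketch in Python (my illustration, not Greene’s; the coin count is simply the figure from the passage above): the count of possible heads-tails configurations is 2 to the power of the number of coins, while the entropy, taken as the base-2 logarithm of that count, is just the number of coins, i.e., the number of yes-no questions.

```python
# Minimal sketch of Greene's coin example: entropy (in bits) is the
# base-2 logarithm of the number of possible heads-tails configurations.

n_coins = 1000  # the figure used in the quoted passage

# Each coin is either heads or tails, so the number of distinct
# configurations (microstates) is 2 ** n_coins.
n_configurations = 2 ** n_coins

# Base-2 log of the configuration count; using the exact integer's
# bit length sidesteps floating-point overflow on a number this large.
entropy_in_bits = n_configurations.bit_length() - 1

print(n_configurations)  # a 302-digit number: every way the coins could be
print(entropy_in_bits)   # 1000 -- one bit, one yes-no question, per coin
```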
[A] system’s entropy is the number of yes-no questions that its microscopic details have the capacity to answer, and so the entropy is a measure of the system’s hidden information content. (253)
In other words, the relationship between entropy and the information you actually possess is inverse: the more entropy you are presented with, the less you can definitively say at that moment about the system; the less you can map; the less you can control. There are lots of logically possible ways a system can be–that’s its hidden information content–but there’s only one way that a system is in reality–that’s its actual configuration of answers to your yes-no questions. Your mission, should you accept it, is to find out the way the world is by asking it questions.
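As a toy illustration of that mission (a hedged sketch of my own, not anything from Greene’s book; the names and the small coin count are illustrative), the snippet below hides one actual configuration among all the logically possible ones and recovers it by asking one yes-no question per coin, so the number of questions asked equals the system’s hidden information content in bits.

```python
import random

# Toy model: the "world" is one actual heads-tails configuration, drawn
# from all logically possible ones; we learn it only by asking yes-no
# questions of the form "is coin i heads?".

n_coins = 16  # kept small for readability; Greene's example uses 1,000

# The actual (hidden) configuration: one of 2 ** n_coins possibilities.
actual = [random.choice([True, False]) for _ in range(n_coins)]

def ask(i):
    """Answer the yes-no question: 'Is coin number i heads?'"""
    return actual[i]

# Recover the hidden configuration, one question at a time.
recovered = [ask(i) for i in range(n_coins)]
assert recovered == actual

print(f"Logically possible configurations: {2 ** n_coins}")
print(f"Yes-no questions asked:            {n_coins}")  # the entropy in bits
```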
Fog and ice. So when you know little, you are in the fog of a highly entropic/hidden information system. But once you acquire definite information about a system, and get some control over it–such as in a physical system when you turn fog into ice (a much less entropic form of water because it takes on a definite shape)–the entropy comes down, at least for you locally. You get definite answers to your yes-no questions. The intellectual fog turns to definite ice crystals; definite bits of information that can congeal with other bits of information. The data you have access to and the connections you make out of it are your life’s metaphorical snowflakes.
Cold and hot. The snowflake as a metaphor for information organized and no longer hidden is apt because, interestingly, another marker of a physical system’s entropy is how hot it is. If it’s hot, its microscopic details are changing rapidly; the answers to your yes-no questions about the system are in flux. But if things are cooled down, the answers you’re getting to your yes-no questions are stable; they’re not like hot and shifting sand under your feet.
Apollo and Dionysus. Put in Nietzschean terms, information organized and no longer hidden is Apollonian. Conversely, a high-entropy system is Dionysian. One is sculpture (it is cool certainty, definiteness, like a block of ice made into Mount Rushmore); the other is energetic and amorphous (like music or fog).
What this means for God. Let’s bring this understanding of information to the problem of God’s hiddenness–for God, like so much other information, is hidden. Indeed, God is the ultimate piece of hidden information. It’s a serious existential problem, and academic books have been written on the issue. One is titled Divine Hiddenness: New Essays (edited by Daniel Howard-Snyder and Paul K. Moser, Cambridge University Press, 2002).
So how does one even get started in one’s life direction if the answer to the most basic yes-no question–does God exist?–is not known? And if God exists, what sort of god is God? There are lots of logically possible ways that the cosmos could be–whether godless or created by a god of a particular sort–but there is only one way that the cosmos actually is. It then becomes a problem of inferring from the information we have to information we don’t have (induction). And this makes for difficulties.
For instance, if one really wants to believe in God, one might become an apologist, making excuses for why this or that piece of data can still make room for the existence of God. Example: “I know the Holocaust looks bad for the thesis that God exists, but if heaven also exists, then maybe those who died in the Holocaust are enjoying a bliss right now that far outweighs the horror of their earthly experience. The problem of extreme and senseless human suffering as an argument against God’s existence is weakened if we also posit that heaven exists.” Heaven’s existence is logically possible–there are a gazillion things that are logically possible–but it may not be true. It can be argued that the apologist, in this instance, is “ad hoc-ing”–adding premises to a dicey thesis to save it from dismissal. This premise–that heaven exists–may be in accord with the way things actually are, but even if it is, there is no evidence that it is, and to treat it as knowledge in the absence of evidence risks building error upon error.
The problem of information error. As the above apologetic example illustrates, human beings are in a situation where information can be easily corrupted; where what we take to be information (right answers to our yes-no questions) can, in fact, be wrong. And in those instances, we are in danger of building elaborate intellectual houses on sand. Indeed, it is troubling to think that each one of us–every single one of us–has to live by acting on an ever-shifting mix of right and wrong answers to our yes-no questions. And sometimes our wrong answers in one context ironically lead us to right answers in other contexts–answers we might not have arrived at had we not started with wrong premises in the first place. False moves inadvertently bring us to true ones, and vice versa. That’s part of the absurdity of existence; of being embedded in the very system that we are trying to comprehend. We move around in it, never wholly sure that our next step is firmly grounded, always forced to act on information that is incomplete and often wrong, in a situation where the ultimate answers to our deepest questions–including whether God exists–are hidden from us.
Who are we? Where are we? So the best we can do, it seems, is to reason carefully about our existential situation, to seek evidence, make experiments, and ask questions of the cosmos. Gandhi, for example, titled his autobiography The Story of My Experiments with Truth. And Hillary Clinton is said to be fond of saying, “You don’t know how far a frog will jump till you poke it.” Thoreau in Walden quotes Confucius as saying the following: “To know that we know what we know, and that we do not know what we do not know, that is true knowledge.” That’s also true information. David Hume, skeptical of a priori reasoning–armchair reasoning–of any sort, put it this way (in his Enquiry Concerning Human Understanding):
The existence […] of any being can only be proved by arguments from its cause or its effect; and these arguments are founded entirely on experience. If we reason a priori, anything may appear able to produce anything. The falling of a pebble may, for aught we know, extinguish the sun; or the wish of a man control the planets in their orbits. It is only experience, which teaches us the nature and bounds of cause and effect, and enables us to infer the existence of one object from that of another.
In other words, when asking a question of the cosmos, lots of things may be logically possible, but only one thing, ultimately, is true. Don’t presume to know what that thing is before you really do; before you have a basis for induction from experience.
Put yet another way, prod the cosmos with questions, and stay for the answers. Information answers questions.