Information and Entropy for Beginners

Information. Physicist Brian Greene, in his book The Hidden Reality (Knopf 2011), gives a three-word definition of information: “Information answers questions” (252). Curiously, in physics you can give a three-word definition of entropy as well: entropy measures questions. (That is, entropy tells you how many logically possible questions the system being attended to can actually answer, and whether those questions have been answered.) Entropy is a measure of disorder in a system. If disorder is high, entropy is high. If disorder is low, entropy is low. So when you ask, “Where’s my pen?”, and you know exactly where it is, the entropy (disorder) in your life, at least surrounding pens, is probably low. You’ve got a system around your home that is ordered in such a way that you can answer your question. Entropy measures questions. Information answers questions.

But what exactly do “entropy measures questions” and “information answers questions” mean? Greene writes: “[T]he most useful measure of information content is the number of distinct yes-no questions the information can answer. […] A datum that can answer a single yes-no question is called a bit–a familiar computer-age term that is short for binary digit, meaning a 0 or 1, which you can think of as a numerical representation of yes or no.”

What Greene is describing here can be illustrated by flipping a coin twice. Each flip answers one yes-no question (heads or tails?), so the two-flip system carries two bits of information. If you ask, “What’s the result of the two flips?”, and I say, “Heads, then heads,” your knowledge of what we might call the “two-coin-flip system” is 100%: you know both bits, the outcome of the first flip and the outcome of the second. But if I say, “Heads on the first coin, but I don’t know on the second,” then your knowledge of the system drops to 50%. You know one of the two bits of information. Like losing a pen, the system is getting chaotic for you. You want to know your relation to where and how things are.
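To make the counting concrete, here is a minimal Python sketch (the function name and the numbers are mine, not Greene’s) that treats information the way the quote above does: as the number of yes-no questions needed to single out the actual configuration from among the equally likely possibilities.

from math import log2

def bits_of_hidden_information(possible_configurations):
    # With equally likely configurations, the number of yes-no questions
    # needed to pin down the actual one is log2 of how many there are.
    return log2(possible_configurations)

# Two coin flips: HH, HT, TH, TT -- four equally likely configurations.
print(bits_of_hidden_information(4))  # 2.0 bits hidden before you know anything

# Learn the first flip (one yes-no question answered): two configurations remain.
print(bits_of_hidden_information(2))  # 1.0 bit still hidden -- 50% known

# Learn both flips: only one configuration remains.
print(bits_of_hidden_information(1))  # 0.0 bits hidden -- 100% known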

The implication here is startling. When we’re talking about information, we’re talking about entropy. Information and entropy are one. If you want to know what maximum chaos is, enter a system where you have no information; where what you’re looking for could be in any logically possible place within the system; where all the information is hidden. Here’s Greene again: “[A] system’s entropy is the number of yes-no questions that its microscopic details have the capacity to answer, and so the entropy is a measure of the system’s hidden information content” (253).

Entropy. The relationship between entropy and information is therefore inverse: the more entropy (chaos) you are presented with, the less you can definitively say at that moment about the system; the less you can map; the less you can control. There are lots of logically possible ways a system can be—that constitutes its hidden information content—but there’s only one way that a system is in reality. That’s its actual configuration of answers to your yes-no questions. Your mission, should you accept it, is to find out the way the world is by asking it questions. (Where’s your pen, again, exactly?)
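The pen hunt can be played out the same way. In this sketch (the list of hiding places and the search routine are my own invention, just to illustrate the point), every answered yes-no question cuts the logically possible hiding places in half, and the hidden information in the “pen system” falls by one bit per answer until only the actual configuration is left.

from math import log2

places = ["desk", "couch", "kitchen", "car", "coat", "bag", "shelf", "floor"]

print(f"Before asking anything: {log2(len(places))} bits hidden")  # 3.0 bits

# Each answered yes-no question ("Is it in the first half of the list?")
# rules out half of the remaining logically possible places.
remaining = places
while len(remaining) > 1:
    half = len(remaining) // 2
    answer_is_yes = "desk" in remaining[:half]  # suppose the pen is on the desk
    remaining = remaining[:half] if answer_is_yes else remaining[half:]
    print(f"{log2(len(remaining))} bits still hidden, {len(remaining)} places left")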

Think of fog and ice. When you know little, you are in the fog of a highly entropic information system. But once you acquire definite information, and get some control over it, such as in a physical system when fog turns into ice (a much less entropic form of water because it takes on a definite shape), entropy comes down, at least for you locally. You get definite answers to your yes-no questions (that molecule of ice belongs to that snowflake, it’s not just anywhere in a fog, etc.). Your intellectual fog thus congeals into something more certain, akin to ice crystals. There are now definite bits of information that you can link up with your other bits of information. (The philosopher David Hume would call your discovery of the ice “experience” and your interpretation of it “inference.” The data you have access to, and the connections you make out of them, constitute your interpretation.)

Thoreau and Hume. Henry David Thoreau in Walden quotes Confucius as saying the following: “To know that we know what we know, and that we do not know what we do not know, that is true knowledge.” That’s also true information. Hume, skeptical of a priori reasoning (armchair reasoning absent investigation), put it this way in his Enquiry concerning Human Understanding:

The existence […] of any being can only be proved by arguments…founded entirely on experience. If we reason a priori, anything may appear able to produce anything. The falling of a pebble may, for aught we know, extinguish the sun; or the wish of a man control the planets in their orbits.

In other words, when asking a question of the cosmos, trying to derive a bit of information from it (“Can a man’s wish affect the orbit of a planet?”), lots of things may be logically possible, but only one thing is true. Don’t presume to know what that thing is before you really do; before you have a basis for induction (inference) from experience; before you bring down the entropy.
