Probabilistic Underpinnings

This is the last column written in advance of my vacation, so if everything has gone well, I will be able to incorporate readers’ responses next week. If everything has not gone well, then that could be the basis of an interesting future column. These kinds of thoughts set me off on some speculations. Warning: what follows is designed to incite thinking. Reader beware!

The topics considered in this series go far beyond simple decision theory, statistics, and probability. The title “Sherman’s Thinkers” reflects that broadening of content (BTW, I did not make up that title, but I like it). However, the thinking part is often mixed up with some hardcore statistics. Today let’s just let our minds roam free, speculate without justification, and see where that goes.

First off, consider the future. The future has changed in the past. At first it was controlled by the whims of various deities and chance. Many people considered the universe and its parts to be living things that could be capricious. With the onset of the industrial revolution, people began to think of the universe and everything in it (including people) as deterministic machines. Several philosophers took the position that if an omniscient being knew the exact condition of the universe at any given moment, it could predict the exact state of the universe at any time in the future – including all actions to be taken by you! Obviously this shift in perspective has ramifications for how one approaches the concepts of predestination, guilt for crimes, and infinite reward for a good life. As in: “Hey, don’t bug me! It was predestined that I would kill them. What’s your problem?”

But then we developed quantum mechanics with its inherent probabilistic underpinnings. About the same time, communication theory started with its emphasis on noise in any transmission, resulting in the various noise-correcting systems used on my computer as I write this, on the Internet over which it was transmitted, and on your computer as you read it. All thiese noisse-korrecting assures that that twansmissions will be acurate.

I think: therefore I erro.
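Joking aside, the principle behind those noise-correcting systems is easy to see in miniature. Here is a sketch of the simplest error-correcting scheme, a triple-repetition code (a toy stand-in for the far more efficient codes real hardware uses; the function names are my own):

```python
# Toy 3x repetition code: send each bit three times, then take a
# majority vote at the receiving end. A single flipped bit in any
# group of three is outvoted by its two unflipped copies.

def encode(bits):
    """Repeat each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority-vote each group of three back into one bit."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                      # noise flips one transmitted bit
assert decode(sent) == message    # the flip is voted away
```

The cost of this toy scheme is tripling the transmission length; practical codes (Hamming, Reed–Solomon, and friends) get the same protection far more cheaply, but the idea is the same: redundancy lets the receiver outvote the noise.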

The result of all this is that the future of the universe is well determined in the large, but the smaller the details and the further out we try to predict, the more errors we will make in our predictions. Actually, we might not make any error at all, but the prediction will simply not accord with the actuality. That simple observation again changes the average person’s idea of the future. At the same time, it stimulates all sorts of nonsense that is not at all justified by the study of quantum mechanics.
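You don’t even need quantum mechanics to see predictions degrade with horizon; classical chaos gives the same flavor. Here is a toy illustration using the logistic map (my choice of example, not anything from quantum theory): two starting states that differ by a measurement error of one part in a billion agree for a few steps, then wander apart.

```python
# Two almost-identical starting states, evolved by the chaotic
# logistic map x -> 4x(1-x), disagree more and more the further
# ahead we look -- short-range prediction is easy, long-range hopeless.

def logistic(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = 0.4
b = 0.4 + 1e-9    # "the same" state, up to a tiny measurement error

short = abs(logistic(a, 5) - logistic(b, 5))
long_ = abs(logistic(a, 50) - logistic(b, 50))
assert short < long_    # the error grows with the prediction horizon
```

Quantum mechanics adds the stronger claim that the tiny initial uncertainty cannot be eliminated even in principle, but the growth of error with horizon is already visible in this purely classical sketch.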

Now here’s a puzzle: the further out in the future we try to predict, the worse our predictions become. But what about the past? All the equations of atomic motion are time-reversible. Leaving aside for a moment questions of entropy, reconstructing the past is a process like predicting the future. If we know the exact state of the universe at this moment, and if we have an omniscient being doing the computing, we still cannot reconstruct the past more accurately than the limits placed by quantum mechanics. Intuitively we think there was an exact past and that, by whatever process you accept, we got to where we are now; but an infinite variety of self-consistent pasts could have resulted in your reading this article at this moment – and we have no way to select the “correct” one. So how do we choose one and call it “the true past”? What are the religious implications of a partially indeterminate future or a partially indeterminate past? Did a single past exist?

Perhaps one of the problems is in the assumption of the existence of an omniscient being. We blindly assumed a being who knows everything. If such a being were to exist, it would necessarily not be able to learn anything by the definition of omniscience. Let’s distinguish between ignorance (lack of knowledge) and stupidity (inability to learn). The omniscient being by these definitions is highly knowledgeable (i.e. not ignorant), but very stupid (unable to learn). What does this analysis imply for our ability to expand knowledge? That is, if there is a finite amount of knowledge available, smart people learn more and more until they learn half of what is available. After that, each new thing they learn makes them less smart until they finally know everything and therefore are unable to learn anything more. At that point they know everything and are completely stupid.
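The “half of what is available” claim deserves a moment of unpacking. One way to read it (my reading, not stated in the column) is that the rate at which you can learn depends both on what you already know (hooks on which to hang new facts) and on what remains unknown (room left to grow). That rate peaks exactly at the halfway point and falls to zero at omniscience:

```python
# Toy model (my assumption, not the author's formula): with N total
# learnable facts, suppose learning rate is proportional to
# known * (N - known) -- you need existing knowledge to absorb more,
# and unknown material to absorb.
N = 100

def rate(known):
    return known * (N - known)

rates = [rate(k) for k in range(N + 1)]
assert max(rates) == rate(N // 2)   # sharpest learning at the halfway point
assert rate(N) == 0                 # knows everything, hence unable to learn
```

Under that toy model, the omniscient being really is maximally knowledgeable and maximally stupid at once, exactly as the paragraph above suggests.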

Before you write me nasty letters, consider how the preceding paragraph would change if we allowed that in principle there is no limit to knowledge. If the set of things that can be learned by an intelligent being is finite, we get one answer, but if that set is infinite, the results change. In particular, what is a good definition of an omniscient computer if knowledge represents an infinite set? Does it have knowledge of itself in either the finite or infinite case?

In an earlier column, we discussed resolving what happens when an irresistible force meets an immovable object by realizing that careful definition shows that we cannot have both types of objects in the same universe at the same time. Could some of these issues of the behavior and properties of a hypothetical omniscient being be solved by careful, self-consistent definitions?

In response to the interest my original tutorial generated, I have completely rewritten and expanded it. Check out the tutorial’s availability through Lockergnome. The new version is over 100 pages long, with chapters that alternate between discussion of the theoretical aspects and puzzles just for the fun of it. Puzzle lovers will be glad to know that I included an answers section with discussions of why each answer is correct and how it was obtained. Most of the material has appeared in these columns, but some is new, and most of the discussions are expanded compared to the original column format.

[tags]sherman e deforest, decision theory, quantum theory, statistics, puzzle[/tags]