Prediction ≠ Causation

A concept obvious to me but evidently foreign to many.

I recently read The Outer Limits of Reason by Noson Yanofsky, a gift from my friend Ben Cope. The book gives a nice overview of impossibility proofs in various fields and also summarizes some of the questions that philosophers of science tangle with. Despite containing some truly surprising falsehoods (for example, Yanofsky opens the chapter on Cantor diagonalization and the uncountability of ℵ₁ by stating that “all of calculus is based on the modern notions of infinity mentioned in this chapter.” The falseness of this assertion may be seen in many ways: calculus is taught without ever mentioning Cantor’s ideas; calculus uses just one ∞, not an infinite hierarchy of ℵᵢ; in calculus ∞ is a non-value approached only by limit, with no properties or identity of its own; and calculus was developed about 200 years before uncountability was introduced), most of the summaries of impossibility proofs are more-or-less accurate. The summaries of philosophies may also be accurate (I do not know the field well enough to assert their truth personally), but they rubbed me the wrong way because so many of them implicitly assume a widely-held opinion that I have never liked.

Prediction ≠ Causation

Consider the game played by placing noughts and crosses around an octothorpe, known in the US as tick-tack-toe. Suppose I am playing against someone I know well (call this player X) and I want to predict X’s moves. So I express the following rules:

• If there exists a single move that will cause X to win, X will take that move.

• Otherwise, if there exists a single move that would cause me to win, X will block that move.

• Otherwise, if there exists a single move that will cause X to have two different 2-xs-1-blank lines, X will take that move.

• Otherwise, if there exists a single move that will cause X to have one 2-xs-1-blank line, X will take that move.

• Otherwise, if there exists a single move that will cause me to have two different 2-os-1-blank lines, X will block that move.

• Otherwise, if there exists a single move that will cause me to have one 2-os-1-blank line, X will block that move.

• Otherwise, X will take a corner cell if available, or the center if no corner is available, or an edge if all else fails.
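
As an illustration (mine, not from the original post; the board encoding and function names are my own assumptions), the rule list above can be sketched as an ordered series of checks:

```python
# A sketch of the rule list as ordered checks (not from the original post).
# Board: a tuple of 9 cells, each "X", "O", or None; X is the player modeled.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winning_move(board, mark):
    """Return a cell that completes a line for `mark`, or None."""
    for line in LINES:
        cells = [board[i] for i in line]
        if cells.count(mark) == 2 and cells.count(None) == 1:
            return line[cells.index(None)]
    return None

def line_making_moves(board, mark, want):
    """Cells that would give `mark` exactly `want` two-marks-one-blank lines."""
    found = []
    for i in range(9):
        if board[i] is None:
            after = board[:i] + (mark,) + board[i + 1:]
            twos = sum(1 for line in LINES
                       if [after[j] for j in line].count(mark) == 2
                       and [after[j] for j in line].count(None) == 1)
            if twos == want:
                found.append(i)
    return found

def predict_x(board):
    """Apply the rules in order; the first rule that fires decides the move."""
    for mark in ("X", "O"):          # win now, else block my winning move
        move = winning_move(board, mark)
        if move is not None:
            return move
    # Make (or block, by taking the same cell) double then single threats.
    for mark, want in (("X", 2), ("X", 1), ("O", 2), ("O", 1)):
        moves = line_making_moves(board, mark, want)
        if moves:
            return moves[0]
    for group in ((0, 2, 6, 8), (4,), (1, 3, 5, 7)):  # corner, center, edge
        for cell in group:
            if board[cell] is None:
                return cell
```

The sketch makes the ordering explicit: each rule is consulted only when every rule above it fails to fire.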

These rules may completely describe X’s behavior. But so might a decision tree like this one. Or the single statement “X mentally plays out every possible combination of moves the two of us could make and picks one where X can win no matter what play I make (if such a move exists), else one where X can draw no matter what play I make.” Three perfectly legitimate descriptions of X’s moves. All may be 100% accurate, predicting exactly what X does every time we play.
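That last description can also be sketched in code (again my own illustration, with a board represented as a tuple of 9 cells; this is plain minimax over the full game tree):

```python
# A sketch (mine, not from the original post) of "plays out every possible
# combination of moves": exhaustive minimax over the full game tree.
# Board: a tuple of 9 cells, each "X", "O", or None; X is the player modeled.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return "X" or "O" if that mark has a completed line, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, mover):
    """Game value from X's point of view under perfect play: 1, 0, or -1."""
    w = winner(board)
    if w is not None:
        return 1 if w == "X" else -1
    if None not in board:
        return 0  # draw
    other = "O" if mover == "X" else "X"
    results = [value(board[:i] + (mover,) + board[i + 1:], other)
               for i in range(9) if board[i] is None]
    return max(results) if mover == "X" else min(results)

def predict_x(board):
    """Pick the empty cell with the best guaranteed outcome for X."""
    return max((i for i in range(9) if board[i] is None),
               key=lambda i: value(board[:i] + ("X",) + board[i + 1:], "O"))
```

On any given board this and the rule list may predict the very same move, despite sharing no internal machinery—which is exactly the point.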

All well and good. Now comes the habit, common in my experience, that always bothers me. Given three equally serviceable models that predict some outcome, people ask “so which one is right?” If, by this, they mean “which one works?” the answer is “all of them.” If, on the other hand, they mean “which one is the cause of X’s behavior?” the answer is “none of them.” X is a person. The cause of X’s behavior is a complicated interplay of neurons, hormones, circumstances around X, mind, soul, emotion, spirit—who knows what all, but most definitely not our model, no matter which model we pick. Predictions are not causes of the events they predict.

Omniscience does not preclude agency

The first place I noticed the tendency to treat predictions as causes was while conversing with a missionary (of my own faith) on the subject of the omniscience of God. This missionary asserted that since our faith holds that souls have agency (the ability to make choices), God must not be fully omniscient. In some way, the notion that I could be free to choose even if someone else knew my choice in advance bothered him.

Suppose, for a moment, that individual choice is a causative force in the universe. So I might ask “why are planetary orbits elliptical” and answer “gravity”, or “why does nuclear fission produce so much energy” and answer “strong force”, or “why am I writing this post” and answer “choice.” Now, does the fact that simple geometry can tell me where a planet will be mean that gravity is not operative? Does the existence of functions to predict the outcome of fission mean the strong force must not exist? Clearly not: to predict the impact of a force is to assert most strongly the force’s existence. So also with choice: the existence of one who can predict its impact in no way lessens its existence and efficacy.

I know, from past experience, that some (maybe most) will not accept my statement above. There seems to be a deep-set emotional resistance to the idea that one’s choices could be predictable. No, I overstate the case: it is being fully predictable that bothers people. No one I know seems annoyed or surprised that their friends can predict large portions of their actions in some detail, but if you tell them that someone or something might know them a bit better, and predict the few percent that their friends cannot, they become quite upset, as if you had robbed them of identity and freedom.

I do not have that reaction inside me. I believe in choice as a real causative power in the universe (I suspect it is not a “fundamental” force, being composed of other, more primitive components, but I have no substantive hypothesis about the structure of those pieces). I do not know if God is omniscient to the degree of knowing the outcome of our every choice, but I cannot find any logical objection to Him being so.

In my youth I fancied myself a future philosopher. I left the field because God told me to, but once out I was happy to be free of it. Philosophy is fun, but philosophers too often look beyond the mark. I never got very deep into the field, but each time I encounter another piece of the philosophy of science I am bothered anew.

Consider Occam’s Razor: given two working theories, accept the simpler one. This is good, practical advice. Simpler theories are easier to remember, easier to use, and experience shows they are less liable to overfitting and ambiguity than more complicated theories are. But for some reason I often hear Occam’s Razor presented as “the simpler theory ‘is true’”, as if “truth” were ascribable to only one of any set of equally accurate theories.

A prediction is just a prediction. A definable, repeatable process that creates good predictions is just a tool. It is not reality. Quadratic equations describe the motion of bodies falling in a vacuum, and so do numerical integration methods. Neither is “true” or “false”; they are just models that do a better or worse job of predicting motion.
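As a toy illustration of that claim (mine, not from the original post; the numbers and function names are assumptions), both models can be written down and compared directly:

```python
# Two different models predicting the same thing: the height of a body
# dropped from h0 meters, after t seconds, in a vacuum.

G = 9.81  # m/s^2, standard gravity

def height_quadratic(h0, t):
    """Closed-form model: a quadratic in t."""
    return h0 - 0.5 * G * t * t

def height_stepped(h0, t, steps=10000):
    """Numerical model: march velocity and position forward in small steps."""
    h, v, dt = h0, 0.0, t / steps
    for _ in range(steps):
        v += G * dt
        h -= v * dt
    return h

# The two functions agree to within numerical error, yet share no machinery;
# neither the quadratic nor the stepping loop is the falling itself.
```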

So also with other theories. They are theories, models, rules of thumb. The fact that there is a good predictive model of subatomic particle motion that uses complex-valued functions does not mean that the universe “has” complex numbers, any more than my having a chart that describes X’s tick-tack-toe moves means that X possesses my chart. That special relativity posits space contraction does not mean that anything “is” squashed. And so on.

It seems to me that much of the philosophy of science is based on a desire to anchor one’s beliefs outside of oneself. Since an outside being is unsatisfying to many (why trust their beliefs over your own?), science is often pressed into service. If the popular working theory that predicts the behavior of topic Y contains as part of its modeling a logical construct Z, many people will assert “‍Z really exists; we’ve proven it.‍” It is comfortable so.

But of course it is not valid to argue that a working prediction implies reality. People discuss more effective predictive models not as “Y is more effective than Z” but instead as “I know you’ve heard Z, but Y is the real truth.” And since they wish for just one truth, they shave off all the others with Occam’s razor. A diversity of working models fails to provide the wanted escape from the responsibility to select one’s own belief. After all, if we wanted conflicting versions of what is real, we could look at tradition or religion or personal experience or just about anything else.

Ergo what?

When a model succeeds at predicting more than other models but runs against beliefs or “‍feels wrong,‍” there are at least three possibilities:

• Our beliefs were based on misinformation and should be changed

• The model is incorrect and a new one in line with our beliefs should be sought

• The model, though functional, is not reality, just a model

To my mind, the third option is the obvious one. The first two may or may not also be true, but the third is always true. If you show me a chart that fully describes someone’s actions, I do not conclude that they carry that chart in their head. Nor is there anything special about that example: prediction ≠ causation, and predictive model ≠ reality.