The relative merit of sound results and plausible explanations.
I wear several hats professionally. One of them is as a trainer for educators wishing to more effectively teach diverse audiences. Roughly, this job involves monitoring the output of researchers for practices that educators can and should adopt and then presenting those practices to educators in ways they are likely to understand, remember, and adopt.
In these trainings I often find myself needing to present practices and recommendations that educational and psychological research has validated as effective, but for which there is no clear, understandable, validated explanation of why they work.
For example, consider the observation that asking students to write a paragraph about a virtue they personally value at the beginning of a course will reduce racial and gender gaps in performance throughout the course. This observation can easily be verified by having some students write such essays and others not, then measuring both groups’ performance. But to validate why it works is much harder: you’d need to somehow isolate each cognitive process involved and come up with some means of measuring it in students who have and have not written these essays. Almost any competent researcher can validate in just a few months that values affirmation interventions work (some competence is required to perform the intervention and measurement in a way where researcher expectations and external forces do not pollute the data, but those competences can be readily taught); a study of why they work would be much more challenging to design, more complicated to administer, and would likely require many researchers working together for multiple years.
There are many such strategies that we know work because they’ve been tried and measured multiple times, but that we don’t really understand. Teaching students to do well on spatial reasoning tests will improve their performance in introductory CS classes. Convincing students that cognitive ability is a result of work, not native talent, will improve all students’ performance in all topics, but has much larger impact for students in fields where “people like them” are under-represented. Having students spend 10% of class time working in groups of three on brain-teaser logic puzzles improves their ability to program more than does spending that time teaching them to program. And on, and on, and on.
As a trainer of educators, though, I find that explaining what we know has little impact. Saying “this works, do it” does not result in their doing it. There seems to be slightly more uptake if I show pictures of the papers in which findings were published and results graphs from those papers, but even with that, teaching what we know does not have the behavioral impact we’d want.
A strategy for training that seems to create more uptake of ideas is presenting a plausible “why” explanation. Saying “these studies have shown that spatial reasoning skills have this impact on CS students” conveys what we know, but saying “list three CS topics that are neither taught using nor named after a spatial analog” convinces people that it is true. I know of no evidence that words like “stack”, “tree”, “pointer”, and “address” are the reason spatial skills training helps introductory CS students, but I know from experience that sharing that plausible-but-unverified explanation has far more impact in changing teachers’ beliefs about the importance of spatial skills than does sharing what we actually know.
“But why?” Coming full circle, I don’t know… but I can give a plausible-but-unverified explanation which will help you believe and remember, even if it doesn’t help you know anything. When I say “spatial reasoning matters” and provide evidence in support of that fact, the fact enters your mind but doesn’t connect to much. The only times you’ll be triggered to reconsider it are when you’re reviewing your notes from our conversation or when you see spatial reasoning discussed in some other context. But when I give a plausible reason why, your brain integrates it into your overall worldview: now every time you see a spatial term you’ll be reminded of this principle; every time you’re pondering why some student is struggling, this will be in your pool of possible explanations to try on; and so on. Additionally, the explanation gives your brain the ability to perform informal validations, looking for little cues that align with the explanation, such as some students being more confused when you draw a picture than when you type code. These subconscious validations are almost never scientifically sound, but they do serve to reinforce your remembrance of, and belief in, the phenomenon.
That plausible ideas integrate into our beliefs much more easily than bits of knowledge do also has a significant downside: it makes certain falsehoods easier to believe than their corresponding truths. No matter how many studies show that learning styles are preferences, not aptitudes (i.e., that “being a visual learner” is like “liking fruit”: it changes the value of visual instruction no more than liking fruit changes the nutritional value of fruit), the successful past dissemination of plausible-but-false explanations of learning based on learning styles is almost impossible to fully extricate from our beliefs. (We all routinely and subconsciously act on beliefs that we cognitively reject due to facts we’ve acquired since we acquired the belief; a well-studied example of this is the phenomenon of unconscious bias.)
Advancing knowledge is the work of academic researchers, and, while not easy, is something we basically know how to do. Advancing beliefs… that is different work altogether.