Cognitive load theory, my first postulate, and education.
In my second post on this blog I postulated that the number of algorithms that humans can never understand far exceeds the number that are understandable. More than three years have passed since that post, and I have since learned a great deal about learning and understanding. With that in mind, I wish to revisit that earlier postulate.
Cognitive load theory is much larger than the simple summary that I am about to provide, but that summary is enough for the present post:
Your working memory can only hold a limited number of ideas at one time.
You cannot understand anything that cannot fit in your working memory.
Teaching is also a many-faceted thing, but at its core it is an effort to help people understand what they cannot yet understand. Two basic principles help guide education:
Abstraction helps you understand small parts of an idea in isolation.
The more you work with an idea, the less working memory it requires.
Those are not the only principles, but they are important ones. Abstraction is why we teach basics first; cognitive shrinking is a major goal behind homework and assignments. To get to the big idea we want students to understand (say, investment) we first teach a simple piece of that idea (say, numbers) and we work with it until it becomes small enough that we can fit it and the next level of abstraction (say, arithmetic) into a student’s head simultaneously; we then work with that next level until it too becomes small enough that we can add a third level, and so on.
Consider a maze. No matter how large the maze, I can give just about anyone any sequence of “left”, “right”, and “straight” turns I want and they can tell me with very high accuracy if that sequence gets them to where they want to go or not. And with that ability, and enough time, they can answer a lot of other questions about the maze, such as “are there cyclic paths through the maze?” or “are there parts of the maze you cannot reach?” But that capability does not suggest an understanding of the maze. The maze, as an entity, does not fit in their head. They don’t know it, they only know how to use it.
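The maze-following capability can be sketched in a few lines of code. Everything here is invented for illustration, the toy grid, the turn alphabet, and the `follow` function alike; the point is that the function answers the question mechanically, with no model of the maze as a whole:

```python
# A toy grid maze: '#' is a wall, '.' is open, 'S' is start, 'E' is exit.
MAZE = [
    "#####",
    "#S..#",
    "#.#.#",
    "#..E#",
    "#####",
]

# Headings as (row, col) deltas; turning left or right rotates the heading.
HEADINGS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left

def follow(turns, start=(1, 1), heading=1):
    """Mechanically follow a sequence of 'left'/'right'/'straight'
    instructions, moving one cell per instruction, and report whether
    we end on the exit. No understanding of the maze is required."""
    r, c = start
    for turn in turns:
        if turn == "left":
            heading = (heading - 1) % 4
        elif turn == "right":
            heading = (heading + 1) % 4
        dr, dc = HEADINGS[heading]
        r, c = r + dr, c + dc
        if MAZE[r][c] == "#":
            return False  # walked into a wall: this sequence fails
    return MAZE[r][c] == "E"
```

Anyone can evaluate `follow` for any sequence of turns, yet doing so tells them nothing about the maze as a whole.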
Give me an hour and I can teach just about anyone with patience and a solid grasp of arithmetic to take a piece of machine code and slowly but correctly tell me exactly what it does for any input I care to provide. There is no algorithm or program in existence, and with overwhelming probability there never will be one, that the average middle school student is incapable of working through in this way. But that capability does not suggest understanding.
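To make the tracing concrete, here is a sketch of the kind of mechanical stepping I mean, over a three-instruction toy machine I made up for this post (real machine code is messier, but no different in kind). The tracer updates registers one step at a time without any notion of what the program computes:

```python
def trace(program, registers):
    """Step through the program exactly as a patient student would:
    one instruction at a time, updating registers, until HALT."""
    pc = 0
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return registers
        elif op == "ADD":       # ADD dst, src: registers[dst] += registers[src]
            dst, src = args
            registers[dst] += registers[src]
            pc += 1
        elif op == "JNZ":       # JNZ reg, target: jump if registers[reg] != 0
            reg, target = args
            pc = target if registers[reg] != 0 else pc + 1

# A program that (opaquely, to the tracer) computes r0 * r1 by
# repeated addition: r2 accumulates, r1 counts down via r3 = -1.
program = [
    ("ADD", "r2", "r0"),   # r2 += r0
    ("ADD", "r1", "r3"),   # r1 += -1
    ("JNZ", "r1", 0),      # loop while r1 != 0
    ("HALT",),
]
result = trace(program, {"r0": 4, "r1": 3, "r2": 0, "r3": -1})
```

The tracer arrives at the right answer every time, yet nothing in it "knows" that the program multiplies.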
So what do I mean by understanding? I have puzzled over this many times over the past several years. A few ideas include:
Invertibility: if you understand the workings of a process then, given a desired output, you can identify inputs that could produce it. By this definition, encryption and hashing algorithms are designed to be hard to understand, but that isn’t really what I mean by understanding.
Modifiability: If you understand the workings of a process then, given a different desired behavior, you can identify how to adjust the process to have it give the new behavior. Unfortunately, this definition reduces to “you can design processes” which doesn’t get us anywhere.
Abstractability: You can describe a process you understand in cognitively simpler pieces. In other words, you can teach it. But I can teach anyone to trace any piece of machine code; that doesn’t mean I understand every piece of machine code ever produced.
Teleology: You can understand the purpose in the design. But this doesn’t work well when thinking about things that lacked purpose or design.
Proofs or Shortcuts: You can answer questions about a process you understand without going through “every step”. You have a theory that describes what it does in the abstract. I have yet to demonstrate to myself that this is not what I mean, but I also have yet to define what I mean by “every step”.
Equivalence Classes: You can divide the family of possible inputs into classes and can make true assertions about each class. Again, I have not yet demonstrated to myself this is not what I mean, but it feels too simplistic…
Piecewise Functionality: You can understand and explain the significance of each piece of a system that you understand. That is, you can take any piece of the system and state something concrete and true of the form “If we were to remove or alter this piece, the new system would ….” I have not yet demonstrated to myself this is not what I mean.
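The equivalence-classes idea above can be sketched concretely; the process and the classes here are both invented for illustration. For the process n ↦ n² mod 4, the inputs split into two classes, and one true assertion can be made about each, so you can speak about every input without walking through every input:

```python
def process(n):
    """The process under study: squaring modulo 4."""
    return (n * n) % 4

# Claim: every even input maps to 0, every odd input maps to 1.
# (Even n = 2k gives 4k^2 mod 4 = 0; odd n = 2k+1 gives 4k^2+4k+1 mod 4 = 1.)
classes = {
    "even": (lambda n: n % 2 == 0, 0),
    "odd":  (lambda n: n % 2 == 1, 1),
}

def check(sample):
    """Verify the per-class assertions against a sample of inputs."""
    for n in sample:
        for name, (member, expected) in classes.items():
            if member(n):
                assert process(n) == expected, (n, name)
    return True
```

Whether being able to name such classes and their assertions constitutes understanding is exactly the open question.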
I continue to hunt for a good definition of understanding. The main outcomes of this search personally have been an increasing uncertainty that it can be defined and an increasing suspicion that I might not understand understanding itself.
Cognitive Load Theory tells us that with enough use some thoughts become so small they take up no working memory at all. When these purely subconscious thoughts are controlling processes, we call them “habits”. Many are the outcomes of this simple idea, including an understanding of how stress and distraction can cause a person to revert to habits instead of consciously selected behaviors.
Less supported than habits, but still suggested by the theory, is the explanation of instinct as purely subconscious patterns of sensory interpretation. One who digs many holes, for example, gains an instinctive feel for the size of each stone the shovel hits. This isn’t something taught, or even teachable: repeated experience combines with the mind’s unparalleled flexibility to short-circuit cognition and directly map from the feel on the shovel into the idea of the size of the stone that caused that sensation.
Significant parts of the body’s sensory system are hard-wired to skip the conscious mind. I have yet to meet someone who can identify which photoreceptor in their eye picked up a particular photon, or which nerve ending in their hand told them they were being touched. What is not clear (at least to me, from what I have read) is to what degree this internalization without prior cognitive understanding can be applied to the sorts of things we generally understand cognitively.
Let us suppose it is transferable. Let us imagine that you are faced with a tangled, nonsensical monstrosity of a computer program that you piece through for years without ever developing a cognitive understanding of how it operates. Let us suppose further that your mind starts providing intuition: you get to the point where you can skip large portions of the tedium of walking through every single operation because your instincts start telling you what you can skip. Of that I ask this simple question: does that instinct mean you are starting to understand the program?
I continue to ponder the ideas of understanding, both as it relates to computers and as it relates to teaching. Can you understand how a system does what it does if you cannot define what it is doing? Can there be systems whose outcomes are well understood but whose process of achieving those outcomes is beyond comprehension? How do you know how small to make an idea in a student’s head before attaching it to other ideas? Are instinct and habit more often helpful or misleading? Can you trust instinct? Can you verify whether instinct is trustworthy?
Understanding is my business. I am employed to help others understand. I am given grants to push the boundaries of human understanding. I study and read and ponder and sketch to broaden my understanding. I write blog posts hoping to promote your understanding. Even in entertainment, table-top RPGs are based on building a common understanding between the players. Understanding is the center of most of what I do.
One day I hope to understand understanding.