fMRI Gave Us the Map
In the nearly 30 years since its development, Functional Magnetic Resonance Imaging (fMRI) has become the tool of choice for people who study the brain. But while fMRI is a powerful tool, the way we are using it is also problematic. In 2016, a paper published in the Proceedings of the National Academy of Sciences suggested that most fMRI studies were at risk of generating false positives, prompting new techniques to mitigate the issue going forward. There is an even bigger issue with our current infatuation with fMRI results, however, and it will be much harder to address. It is related to the logical principle often stated as "The Map is Not the Territory."
Functional Magnetic Resonance Imaging (fMRI) traces back to work at Bell Labs in 1990, where researchers realized that the relative activity of groups of neurons could be inferred from the amount of oxygen they were using. Oxygen use then became a proxy for brain activity, an assumption that now underlies much of the research using fMRI. Like pointing a telescope at the sky for the first time, using fMRI to study the brain changed our understanding of the world inside our heads.
Through fMRI and other imaging techniques, we've made great strides in efforts to treat brain diseases like Alzheimer's, identified biological and genetic links to psychological disorders like schizophrenia, and begun to tease apart the sub-processes involved in decision-making, forming relationships, and learning. We've even used our deeper understanding of the brain's neural networks to advance the development of artificial intelligence and machine learning. We've mapped where specific words and images are processed in the brain, and watched as multiple regions are engaged in cognitive tasks. We even have an online map, the Human Connectome, that lets you virtually travel through the brain and examine its structures and connections in fine detail.
The Map is Not the Territory
But underlying all this progress is a dangerous logical error and, like most fallacies, it arises from how our brains have evolved. The brain is a great survival machine, keeping us alive behind the scenes by making millions of tiny choices that are optimal for survival, based on a complex set of algorithms our "meat computer" uses to identify patterns, move toward opportunities, and avoid danger. It works so well most of the time that many things we think we are choosing consciously are predetermined unconsciously by this survival programming. For example, why do you instantly like some people and distrust others? Why do you enjoy being outdoors? Why do you get your best ideas when you go for a walk? These and many other things we think of as conscious choices are really our sub-programming, keeping us alive, functioning, and content.
And that is where danger lies. Our remarkable ability to extract patterns from the world allows us to create maps and models that help us explain past events and predict future ones. Without models, we would simply keep bumping into reality like the silver ball in a pinball game – bouncing off one obstacle only to land on another.
With our models, we think we've made sense of the world. A model is like a map: it's only as good as its underlying assumptions. If information is missing, out of date, or out of scale, we don't get where we wanted to go, because we weren't where we thought we were in the first place.
With every great discovery about the brain, we add to our map, but each addition takes us another step away from the object we're studying. Our understanding of the brain necessarily becomes more and more abstract as we add incremental facts to our mental model. Writing and thinking about cognitive neuroscience is a bit like a fractal image or a funhouse mirror: we are conscious organisms thinking about our own brains, drawing conclusions about patterns we can only observe by using the very object we're studying.
Eventually, we may be able to build a model of consciousness that allows an artificial intelligence to behave as if it were conscious, and that will satisfy the "Turing Test" requirement. It might even be truly conscious, depending on how we define that state of being. But models can be wrong. The resulting AI may be conscious in a different way than we humans are. Our model may hold up extremely well for many day-to-day tasks, until the day when we're in uncharted territory like, say, making an ethical decision. Then, much like Newton's laws, we may find our current understanding of intelligence falls apart and our nice, predictable model is inadequate to respond to new challenges. Much as an octopus is likely to have a very different experience of consciousness because its neural network is built so differently from ours, an AI is likely to have a very different experience that we can only partially relate to, given our own neural networks.
We Could Have It All Wrong
As I see it, the ultimate truth we're seeking to understand through neuroscience is consciousness. As learning professionals, we can apply discoveries from neuroscience to make our training more effective, increase learner retention, and possibly even increase learner motivation and attention. And it is possible to do all that, and do it effectively, with a model that is wrong, or at least incomplete, where consciousness is concerned.
4 Strategies for Learning Professionals
How do you make an incomplete model work for you? Here are a few tips for the learning professional:
- Apply the model within the narrow areas where it has been proven to work.
If you are applying research that focused on increasing math skills, for example, don't assume it will also make people more creative. (Although it is certainly worth a try.)
- Keep validating your model against reality.
Remember, the map is not the territory, so go out and visit the "real world" of human behavior now and then, and ground your assumptions in your own observations.
- Avoid falling in love with your hypothesis.
Remember that today's hot new insight might become tomorrow's "learning styles" when new information comes to light.
- Try not to take anything too seriously.
Believing you have it all figured out – that’s where the dragons be.