Schemas, language and a sweet old lady
A few years ago, I was treating a sweet old granny, one of those grannies who would make you cakes and hug you tight until you couldn’t breathe. She was lying on the table, looked at me with sweet innocent eyes and said, “I have a vibrator at home, should I use it?”… I looked at her and said, “If you feel the need to use one, then use it”. Of course, I knew she was talking about a handheld massage device to help with her pain. I realised we had two different frameworks and understandings of language, due to our ages, our demographics and the fact that my mind was in the gutter, and it was a deep gutter. I could have used language to better synchronise the two models, but the filter in my head said this was not OK, so I just left it (yes, for those who know me, I do have a filter, and I occasionally use it).
Over the past few months, I have been doing a course, The Problem Of Pain, with Mick Thacker and Laura Rathbone. This course has challenged me on every level, and I am constantly reconceptualising what I think I know. When I started the course, I would pick up a paper on predictive processing and struggle to read it, because I didn’t have the schema, the understanding of the language, or a sense of how it all fit together. Fast forward a few months and I can read and understand these papers better, as my models and schemas are updated as I go. But I felt I needed to strip it back further and get a deeper understanding of the language, so I started a glossary, picking out the key terminology and processes and writing definitions that make sense to me. Without a deep understanding of the schema and the language, I would always struggle to apply these concepts in the clinic and in life.
My starting point was going back to the laws of thermodynamics. I wish I had taken more notice during year 10 physics; instead I was thinking of cricket and girls (actually, the girls didn’t want a bar of me, so just cricket). I also didn’t see the importance of learning it back then. Once I got my head around the laws of thermodynamics, it was time to dig a bit deeper into the language around predictive processing, the free energy principle, enactivism and embodied cognition. This glossary is still a work in progress, and I will continue to add to it and expand it. The heavy lifting here is not the finished product but the work I am putting into writing it and understanding it. This may seem like a lot of hard work, but I love learning and translating this knowledge into the clinic to help others. If you are currently doing The Problem of Pain course and finding it hard, keep GOING!!! When we struggle is when we grow the most.
Let’s finish with a quote from Mick
“If we are to accept the immense privilege of helping people understand their pain and how they can recover from it, then we are absolutely obliged to know what it is we are talking about and if that requires some serious work, then so be it” (Thacker)
(Work in progress)
Glossary – Predictive Processing, Embodied Cognition, Free Energy Principle, Active Inference
1st law of thermodynamics – energy can neither be created nor destroyed; it can only be transformed from one form into another.
2nd law of thermodynamics – entropy in the universe will increase: randomness or chaos increases, and the number of possible states increases (for example, adding heat increases the velocity of molecules, which increases the number of states to be predicted). In minimising free energy, we are working against this tendency, trying to minimise entropy and chaos and limit the number of states we occupy.
The Free Energy Principle – any self-organising system that is at equilibrium with its environment must minimise its free energy. The principle is a mathematical formulation of how adaptive systems resist the natural tendency to disorder. The defining characteristic of biological systems is that they maintain their states and form in the face of a constantly changing environment. The environment includes both the external and internal milieu (Friston 2010).
Thermodynamic free energy is a measure of the energy available to do useful work. The free energy used here is informational: it emerges as the difference between the way the world is represented as being and the way it actually is. The better the fit, the lower the free energy (Friston 2010).
The Bayesian Brain Hypothesis – the brain has a model of the world that it tries to optimise using sensory inputs. The brain is an inference machine that actively predicts and explains its sampling of the environment and sensation (Friston 2010). This is achieved using a hierarchical generative model that aims to minimise prediction error within a bidirectional cascade of cortical processing (Clark 2013).
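To make the Bayesian idea concrete, here is a minimal sketch of Bayesian updating in Python. The numbers are made up for illustration; the point is only the mechanism: a prior belief is combined with how well each hypothesis predicts the sensory input, yielding an updated (posterior) belief.

```python
def bayes_posterior(prior, likelihood):
    """Return posterior probabilities over hypotheses via Bayes' theorem."""
    unnormalised = [p * l for p, l in zip(prior, likelihood)]
    evidence = sum(unnormalised)  # how plausible the input is overall
    return [u / evidence for u in unnormalised]

# Two competing hypotheses about what caused a sensation:
prior = [0.8, 0.2]        # the brain's model before the input arrives
likelihood = [0.1, 0.9]   # how well each hypothesis predicts the input
posterior = bayes_posterior(prior, likelihood)
print(posterior)  # belief shifts toward hypothesis 2, which predicted the input better
```

Even though hypothesis 1 started as the strong favourite, an input it predicts poorly shifts the model toward hypothesis 2: the model is optimised by the sensory evidence.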
Prediction Error – reports the surprise induced by a mismatch between the sensory signals encountered and those predicted (Clark 2013). Prediction error is one form that free energy takes.
Surprisal – the implausibility of some sensory state given a model of the world.
Entropy – a measure of disorder or chaos. Entropy “is the long-term average of surprisal and reducing free energy amounts to improving world model so as to reduce prediction errors, hence reducing surprisal” (Clark 2013). So lower entropy means we are lowering the long-term average of surprise and minimising free energy.
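The link between surprisal and entropy can be shown with a few lines of Python (illustrative numbers only): surprisal measures how implausible one outcome is under the model, and entropy is the average surprisal across all outcomes.

```python
import math

def surprisal(p):
    """Surprisal in bits: implausibility of an outcome with probability p."""
    return -math.log2(p)

def entropy(dist):
    """Entropy: the long-term average of surprisal over a distribution."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# A confident model (one expected state) vs. a chaotic one (many equally likely states):
confident = [0.97, 0.01, 0.01, 0.01]
chaotic = [0.25, 0.25, 0.25, 0.25]
print(entropy(confident))  # low: the model mostly gets what it expects
print(entropy(chaotic))    # 2.0 bits: maximal disorder over four states
```

This is the glossary point in miniature: the fewer states a system ranges over, and the better its model predicts them, the lower its average surprise.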
Posterior Probability – the updated probability of a hypothesis after the evidence has been taken into account, via Bayes’ theorem (still reconceptualising this one).
Inverse variance – also called precision: a measure of how reliable a signal is, so that prediction errors can be weighted by how much they should be trusted (still trying to get my head around this one).
Bidirectional Hierarchical structure – allows the system to infer its own priors as it goes along. It does this by using its best current model at one level as the source of priors for the level below. This allows priors and models to co-evolve across multiple linked layers of processing so as to account for the sensory data. It induces empirical priors in the form of constraints that one level places on the level below, and these constraints are progressively tuned by the sensory input itself (Clark 2013).
So long as the higher level successfully predicts the lower-level activity, all is well and no further action needs to ensue. But where there is a mismatch, prediction error occurs, and the ensuing activity is propagated to the higher level. This automatically adjusts probabilistic representations at higher levels so that top-down predictions cancel prediction errors at the lower level (yielding rapid perceptual inference). At the same time, prediction error is used to adjust the structure of the model to reduce any discrepancy next time around.
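That error-cancelling loop can be sketched in a few lines of Python. This is a toy, single-level illustration with invented numbers, not the hierarchical machinery itself: a prediction is repeatedly nudged by the prediction error until the mismatch is nearly cancelled.

```python
def minimise_prediction_error(prediction, sensory_input,
                              learning_rate=0.3, steps=20):
    """Toy loop: update the prediction until it cancels the prediction error."""
    for _ in range(steps):
        error = sensory_input - prediction     # mismatch reported upward
        prediction += learning_rate * error    # top-down model adjustment
    return prediction

print(minimise_prediction_error(prediction=0.0, sensory_input=5.0))
```

The prediction converges on the sensory input, at which point the error (the stand-in for free energy here) is close to zero and no further activity needs to propagate.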
Markov blanket – defines the boundaries of a system in a statistical sense. We need boundaries to separate two things, or a thing wouldn’t be a thing. Any living system is a Markov blanketed system, and there are Markov blankets of Markov blankets – all the way down to an individual cell, all the way up to you and me, and all the way out to include elements of the local environment (Kirchhoff 2018).
Kirchhoff, Parr, Palacios, Friston & Kiverstein (2018). The Markov blankets of life: autonomy, active inference and the free energy principle. Journal of the Royal Society Interface.
Friston (2009). The free-energy principle: a rough guide to the brain? Trends in Cognitive Sciences.
Clark (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences.