I hate to do this, but to “get” this or for what I’m about to say to even make sense, you have to do some homework first.

I picked this up over at Eric Raymond’s blog – The Brain is a Peirce Engine.

Now, I don’t feel too bad because Eric, not usually at a loss for words himself, posts a link that he wants you to read first as well, over at Scott Alexander’s Slate Star Codex. Specifically, a review of Surfing Uncertainty.

And it is not a short review.

The very, very short version is that we appear to constantly run a predictive model of the world and compare it against our sense data to see if they match up.
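If you want the loop in a nutshell, here's a toy sketch of my own (not anything from the book, and deliberately oversimplified): keep a running prediction, compare it against each incoming sense datum, and let the size of the mismatch decide both how much the model gets corrected and whether the signal is "surprising" enough to notice. The names and numbers are all made up for illustration.

```python
def update(prediction, sense_datum, learning_rate=0.3, surprise_threshold=1.0):
    """Nudge the prediction toward the datum; flag large prediction errors."""
    error = sense_datum - prediction             # prediction error (mismatch)
    surprised = abs(error) > surprise_threshold  # big mismatch grabs attention
    new_prediction = prediction + learning_rate * error  # correct the model
    return new_prediction, surprised

# A steady signal is quickly absorbed into the model and stops being
# surprising; a sudden jump produces a large error and demands attention.
prediction = 0.0
for datum in [0.0, 0.1, 0.0, 5.0, 5.1, 5.0]:
    prediction, surprised = update(prediction, datum)
```

That "ignore the small errors, attend to the big ones" behavior is the part that maps onto why we can tune out the myriad details until something violates the model.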

Issues with how this works – or what happens when certain parts break down – conceptually model well why schizophrenics can tickle themselves, among other things. It explains why we ignore what we ignore – much like Jordan Peterson discussing (in his first podcast) our assumptions of the world that allow us to ignore myriad details so we can focus on what's important until something surprises us.

It also dovetails well with the Frame Problem, which Peterson also discusses.

What struck me, though, was that I immediately recognized in it a statement I would make myself – that I was constantly running models, and that, in fact, this was what I was specifically good at.

Frankly, I’m mediocre at memorizing details, but patterns, modeled relationships (weirdly, I’m extremely observant of interactions between other people, but have difficulty figuring out that same behavior when it’s directed at me…), systems – that’s all my bread and butter. I’ve done it so well that I’ve been accused of memorizing large paragraphs of technical data during final watch qualification interviews in the nuke program, or (jokingly, but yes, I was told this) of having the answers tattooed under my eyelids, because on several occasions I stopped, closed my eyes for a second, then spat out the answer sounding as if it were verbatim from the tech manual.

Even then, I maintained that I understood the concepts and spat them out in the pattern of “Navy technical.” I was literally meshing, on the fly, a model I’d developed of how a system operated and the principles governing it with a particular writing pattern. As an instructor, I’d explain it in a different way. Same model in my head.

But then, I also got through physics exams in college by deriving the formulas from their respective descriptions rather than simply memorizing them. Keeping those models in my head was an absolute necessity for predicting plant response to changes, or for troubleshooting issues with computers.

In fairness, it’s only in the last few years that I was self-aware enough of the process to call it running a model in my head, which is damn near exactly what predictive processing says is going on at the top level.

My biggest difficulty? Shutting it off. Even 10-15 years back I had a glimmer of running multiple models and filters in parallel: I was standing on a pier with a friend looking at the sunset on the waves, and we discussed all the different ways we were considering it – the simple beauty; how I’d paint or illustrate it; the pattern of how the speckles looked; how the pilings and underlying sand and rocks caused different meta-patterns; how the small wavelets of each major wave were formed; how I’d make a bump map of it (I was doing 3D modelling as a hobby at the time); and some thought on where I’d dig up the relevant equations for actually modelling the behavior, though that was admittedly brief.

I think I have another book to add to my reading list.