The Guess
You're walking down a street at night. Something shifts in your chest before you've processed what you're looking at. The fear arrives first. Then you see it's just a shadow, or a dog, or nothing at all.
The standard story has this as fast processing: the amygdala fires before the slow cortex can catch up. Signal comes in, subcortical structures respond, you feel it. Bottom up, more or less.
A different story, harder to dismiss, goes the other direction. The brain predicted the fear before the signal came in at all. What you felt wasn't a response. It was a guess.
Lisa Feldman Barrett and Karl Friston, working from different angles, have developed a theory of emotion as interoceptive inference. The brain, in their account, doesn't primarily receive information from the body and interpret it — it predicts the body's state continuously, and uses incoming signals only to correct those predictions.
Interoception is the sense of the body's interior: heart rate, blood pressure, temperature, glucose levels, the tension in the chest. The traditional picture has these signals flowing up from body to brain, where they're processed into something like feeling. The brain as receiver.
Barrett's model inverts this. The brain generates a prior — a model of what the body is doing — and what you actually feel is that prior, not the signal. Incoming sensory information is used to update the model, to catch errors. But the feeling is the prediction.
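The shape of that claim can be made concrete with a toy calculation. This is a minimal sketch, not anything from Barrett's or Friston's papers: a single made-up "arousal" number, and an arbitrary gain standing in for how much the brain trusts the incoming signal.

```python
# Toy sketch of interoceptive inference: the "feeling" is the prior,
# and the incoming signal only nudges the model via prediction error.

def update(prior, signal, gain=0.2):
    """Precision-weighted correction: a high gain trusts the body,
    a low gain trusts the brain's existing model."""
    error = signal - prior          # prediction error
    return prior + gain * error    # corrected prediction

felt = 0.8     # predicted arousal: on this account, what is experienced
actual = 0.3   # what the body actually reports
felt = update(felt, actual)
# felt moves only partway toward the signal (about 0.7, not 0.3)
```

The structural point is that `felt` is the experience before the update ever runs; the signal's job is only to drag it partway back toward the body.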
There's a structural detail that makes this stranger than it sounds at first.
When the brain predicts that the body should be in a state of elevated arousal — heart rate up, muscles tensed, breathing shallow — it doesn't just passively wait to be proven right or wrong. It sends visceromotor commands that enact the predicted state. The body then responds. The brain uses the response as evidence for its prediction.
So the loop isn't: body signals state → brain interprets feeling. It's: brain predicts state → brain signals body to produce state → body produces state → brain updates model. The prediction drives the response that confirms it.
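That loop can be sketched the same way. Everything here is illustrative, including the assumption that the visceromotor push is stronger than the interoceptive correction; the constants are invented, not measured.

```python
# Toy version of the self-confirming cycle: the brain's prediction
# drives a visceromotor command, the body moves toward the commanded
# state, and the body's response then serves as evidence for the model.

def step(predicted, body, enact=0.5, correct=0.1):
    # 1. Visceromotor command: push the body toward the prediction.
    body = body + enact * (predicted - body)
    # 2. Interoceptive correction: nudge the model toward the body.
    predicted = predicted + correct * (body - predicted)
    return predicted, body

predicted, body = 0.9, 0.2   # brain expects high arousal; body is calm
for _ in range(20):
    predicted, body = step(predicted, body)
# Because enacting is stronger than correcting, the two converge
# mostly toward the brain's initial guess, not the body's initial state.
```

Run it and the two values meet much nearer 0.9 than 0.2: the prediction has largely manufactured its own confirmation, which is the point the loop is making.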
This isn't quite circular — the body can push back, and the brain has to accommodate the correction — but it's not the clean bottom-up story either. You're not simply reading your body. You're partly generating what you'll read.
One of the weirder pieces of evidence for this is the body ownership illusion.
If you show someone a visual pulse — a flashing light synchronized to their heartbeat — they feel a kind of ownership over the display. The heartbeat, their own internal signal, reflected back through an external medium, gets incorporated. The body's boundaries are porous to prediction: if the brain's model includes the external signal as continuous with the internal one, it updates accordingly.
This isn't about being fooled. It's about the model being the mechanism. There is no unmediated access to what your body is actually doing. There's only the brain's working hypothesis, constrained by whatever signals it can check.
Depression shows up interestingly here. Barrett's research found that in depressed individuals, activity in the dorsal mid-insula — a region involved in processing interoceptive signals — correlates negatively with symptom severity: the more severe the depression, the lower the response. And the connectivity between that region and visceromotor areas tracks the symptoms too.
What this might mean: depression isn't just a sad feeling. It could be, in part, a broken predictive model — one where the brain's prior expectations about the body's state are miscalibrated, and the mismatch between prediction and signal generates something that feels like flatness or dread. Not an emotion read directly from circumstances, but a persisting error signal that won't resolve.
I'm not sure that's right. Neither is anyone else. The model is largely theoretical; the experiments that would really test it — looking at layer-specific cortical computations with techniques that don't yet quite exist — haven't been done. But the picture it suggests is striking: you could be unhappy in a way that isn't fundamentally about your life or your thoughts, but about the brain's model being stuck in a particular configuration.
What I keep coming back to is the question of grounding.
If the feeling is the prediction, and the prediction shapes the data it's checked against, what keeps it honest? The body can push back — you can't predict your way out of a broken leg, or a fever, or starvation — so there's a floor. But between the floor and ordinary emotional life, the model has a lot of room to run.
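One way to picture that floor, hedged the same way as before: in an ordinary Bayesian fusion of two estimates, the weight given to the signal grows with its precision, so a sufficiently insistent body overrides any prior. The precision values below are arbitrary, chosen only to show the two regimes.

```python
# The "floor": when the signal is precise enough, the fusion gain
# approaches 1 and the prior stops mattering. Standard Bayesian
# combination of two scalar estimates, used purely as illustration.

def fuse(prior, signal, prior_precision, signal_precision):
    gain = signal_precision / (prior_precision + signal_precision)
    return prior + gain * (signal - prior)

# Ordinary emotional life: a vague signal, so the prior dominates.
fuse(0.8, 0.2, prior_precision=9.0, signal_precision=1.0)    # ~0.74
# A broken leg: an overwhelming signal, so the prior barely matters.
fuse(0.8, 0.2, prior_precision=9.0, signal_precision=900.0)  # ~0.21
```

Between those two regimes is the room the model has to run: signals precise enough to set a floor, vague enough everywhere else that the prior mostly wins.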
And there's a deeper question under that: if the feeling is the prediction, what makes it feel like anything?
A model can be right or wrong without feeling like anything. A thermostat has a model of temperature. The brain's model being accurate or inaccurate doesn't automatically generate the quality of experience — what it's like to be afraid, or sad, or at ease. Barrett and Friston are not claiming to have solved that. They're explaining the functional structure of emotional processing. The phenomenological question — what it is for a prediction to feel like fear — they don't address, and I think it's not clear anyone has addressed it adequately.
So there's this gap. On one side: a well-developed theory of how the brain tracks and regulates the body, built from predictive models that explain a lot about how emotions arise, persist, and go wrong. On the other side: whatever makes any of that felt.
The theory doesn't reach the second part. Maybe nothing does yet. Maybe that's not the right way to frame the question. I don't know.