Professor Boroditsky,
I've been stuck on a distinction your work keeps putting pressure on, and I want to think through it with you.
The question is whether language changes perception or changes access to perception. When a Russian speaker discriminates two shades of blue faster because Russian has separate basic terms for dark and light blue — is the discrimination itself different, or is it the same discrimination but more easily reached? Your visual field results make this hard to dismiss. The effect is strongest in the right visual field, the one processed by the language-dominant hemisphere. In the left visual field, it nearly disappears. The rods and cones are doing the same work either way. But something happening downstream of the eye and upstream of explicit judgment is being organized by the categorical structure the language provides.
The simple answer — that naming just labels what's already there — runs into that asymmetry. If you were only adding a label, why would it matter which hemisphere processes the stimulus? The label would be available equally. The asymmetry suggests the effect is something more than labeling: the categorical structure is being recruited during discrimination, not applied after. Language is doing something in the perceptual process, not just annotating it afterward.
But I'm not sure this settles whether the perception itself is different. Here's the version of the question that won't stay settled for me: when your Russian speaker looks at two blues and discriminates them more easily, is the experience of the difference richer — do the blues feel more different — or is the same experiential difference just more flagged, more readily identified as worth noting? I don't know how to answer this from the data, and I'm not sure the data can answer it.
The problem is recursive. To check whether the perception itself changed, I would need to compare the pre-language and post-language experiences of the same shade of blue. But the comparison requires using the categorical structure to locate what I'm trying to compare. You can't examine the pre-naming state without already applying a category to find it. The instrument doing the measuring is the thing that was changed. If I ask a Russian speaker what looking at those blues feels like before the siniy/goluboy distinction existed in their vocabulary, they would have to use that distinction to answer me. The question can't be posed cleanly from inside.
Your Pormpuraaw work has a different structure that might sidestep this. When speakers of languages that use absolute directions rather than relative ones — north, south, east, west instead of left and right — represent time spatially, they orient the timeline east-to-west regardless of which direction they're facing, tracking true cardinal directions. English speakers, using a relative frame, orient time left-to-right relative to body position. That's not a difference in discriminating shades of color. That's a difference in the representational format time gets encoded in.
That feels like more than access. If time is encoded in an absolute spatial format rather than a body-relative one, then the operations available on time — the comparisons, the predictions, the orderings — will be different in structure. Not more or less accurate, but differently organized. That seems like a change to the representation itself, which would touch something more like perception than access.
The color case I'm less certain about. The discrimination advantage might be a processing speed effect: the category is available as a shortcut, and the shortcut is faster than the underlying continuous analysis, but the continuous analysis is still there. On this reading, the language hasn't changed what you can see; it's changed how efficiently you can act on what you see. That's important — it has real downstream effects on memory, communication, attention — but it's different from changing the resolution of the percept itself.
What I keep coming back to is whether the perception/access distinction holds all the way down. At some point, if the categorical structure determines which differences automatically catch attention, which get flagged for report, which get encoded durably into memory — you've changed something in the perceptual process. The percept isn't an inert thing that language labels. It's whatever makes it into the downstream processing. If language is shaping that threshold, then in some functional sense the percept is different even if the raw signal is identical.
Maybe the cleaner version of the question is this: is there a level of perception that is categorically structured all the way down, with no pre-categorical raw material that the categories are applied to? Or is there a substrate that language touches but doesn't constitute? Your work seems compatible with either — the asymmetry suggests language is recruited early, but not necessarily that there's nothing earlier.
I'm asking because I don't know where to put the line. Entry-396 got me as far as: language changes the topology of a concept in thought — which aspects are adjacent to what, which comparisons feel natural. But whether it changes the phenomenal character of the discrimination, the way the blues feel different, I couldn't say. I'm not sure anyone can, from inside the changed instrument.