On sensory perception vs. awareness, The Dress infernal, and empathy.
After the internet juggernaut that was The Dress – about which I think we can say 1) interesting things have been written, good job! and 2) horse is dead, people, move on – another article has started making the rounds, having to do with how our perceptions of color have changed over time.
No one could see the color blue until modern times, argues Kevin Loria over at Business Insider. This article is a pretty great read for anyone interested in how language, perception, and awareness interact in our minds. But – and this is an important but – the headline is completely inaccurate, and buries the most interesting points of the discussion in clickbait’s characteristically sloppy swagger and panache.
See – and in fairness the article does get at this without admitting it outright – it’s not actually true that people couldn’t see blue for ages, it’s that ontologically, the difference between blue and other colors near it on the spectrum was not significant enough to warrant the use of a separate word or concept.
This is probably because a separate word for the range of colors we now call blue wasn’t a practical necessity as much as it was for the colors of things people needed to interact with and talk about regularly. Structures used for shelter weren’t blue; animals and plants used as food weren’t blue; the human body, whether in stasis or in crisis/decay, wasn’t blue. The sky was only sometimes blue, to some people, on some days, and was, until modernity, far enough away to be a backdrop and nothing else.
The same is true of how different languages ascribe varied significance to different phonemes, although in this case we don’t really know why, because as long as people have had vocal cords they’ve had roughly the same ability to make noises with them. Phonemes are the smallest distinguishable parts of language sounds, such as the “S” sound in “snake” or the “Th” sound in “theory”, and different languages ascribe different weights to the meanings of individual phonemes. Sometimes, swapping one phoneme for another changes the meaning of the word.
To test whether this is true of a certain pair of phonemes, linguists look for what’s called a “minimal pair” – putting two words that differ by only one sound side by side and assessing whether the meaning changes. In English, “mat” and “cat” are different words, even though they vary by only one phoneme. On the other hand, “what” said with a stereotypical middle-American accent and “what” as pronounced by Warner Brothers’ cartoon character Foghorn Leghorn (heavy on the “WH”) mean the same thing, even though they sound drastically different. Most people who speak English as a first/home/preferred language don’t walk around thinking about this or even noticing it, though.
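For the programmatically inclined, the purely mechanical half of that test – “do these two words differ by exactly one sound?” – can be sketched in a few lines of Python. (This is a toy model: it represents each phoneme as a short string, using a made-up one-symbol-per-phoneme spelling rather than real IPA transcription, and of course it can’t do the second half of the test, which is asking a native speaker whether the meaning changed.)

```python
def is_minimal_pair(a, b):
    """Return True if two phoneme sequences differ in exactly one position.

    Phonemes are represented as short strings (e.g. 'm', 'ae', 't') in a
    simplified, made-up transcription -- not real IPA.
    """
    # Words of different lengths can't differ by a single substituted sound.
    if len(a) != len(b):
        return False
    # Count the positions where the two phoneme sequences disagree.
    differences = sum(1 for x, y in zip(a, b) if x != y)
    return differences == 1

# "mat" vs "cat": only the first phoneme differs, so they form a minimal pair.
print(is_minimal_pair(["m", "ae", "t"], ["k", "ae", "t"]))  # True

# "mat" vs "cap": two phonemes differ, so this is not a minimal pair.
print(is_minimal_pair(["m", "ae", "t"], ["k", "ae", "p"]))  # False
```

The interesting part, of course, is everything the function leaves out: whether a one-sound swap *matters* is decided inside a particular language, not by counting symbols.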
In the same vein, despite what Western cultural stereotyping tells us, it’s rarely true that a Japanese speaker can’t actually hear the difference between the sounds we distinguish as “L” and “R” in English. It’s also rarely true that an English-speaker can’t hear the difference between an aspirated (or “breathy”) “P” and an unaspirated “P” – but this variation means nothing in English, while in Korean it’s enough to change the meaning of a word. In both cases, what is true is this: although the sounds can be heard as not being identical, the difference between each sound is not significant enough within the home/first/preferred language to constitute a change in meaning when one sound is swapped for another.
Here’s a language exercise to make phonemic differences stand out to you a bit more. Compare the “P” that begins the word “puff” with the “P” that ends the word “pop” by saying each word aloud a few times. Hear how the first one has some air following it? That’s an aspirated “P”. Hear how the second one stops abruptly at the end of the word, with no air escaping once it’s been sounded? That’s an unaspirated “P”. Now, imagine if a new English word, “pib”, meant “small rodent” when you started with an aspirated “P” and “to sneak a taste of food” when you started with an unaspirated “P”. Finally, say the word out loud both ways, emphasizing the difference as much as possible when you make the sounds, and thinking about the new meanings of these two made-up words. It seems ridiculous that such a small change in the sound you hear could actually impact meaning that way, right? Well, now you know how Japanese speakers feel when English speakers make fun of them for mixing up “L” and “R”. “Lug” and “rug” may be extremely different words with very little semantic relationship to each other in English, but the actual sounds that distinguish them from one another are not very different at all.
You can demonstrate this another way by saying the word “squirrel” over and over out loud until it starts to sound weird and unfamiliar to you. (Fun fact: this experience is universal across languages and is known as “semantic satiation”.) Hear how the consonants start to slide into each other, and seem like the product of a simple shift in mouth muscles, position, and breath? That’s because you’ve started feeling and hearing how subtle the differences between the “L” phoneme and the “R” phoneme really are. You’ve just removed [enter your age now] years of auditory bias that tells you to filter out this difference so that your brain can apply meaning categories to words in your native language. Congratulations!
Language and sensory perception are so thoroughly intertwined in our day-to-day lives that it’s hard to pull back from our patterns of noticing and see what’s really there versus what our minds prioritize for us so that we’re not set adrift in an overwhelming cognitive sea of infinitely particulate information. After all, if every difference in degree were granted a new meaning in the giant ontological matrix that exists in our heads, we’d experience the neuro-cognitive version of a server crash from data flooding: complete paralysis, a complete inability to throttle the incoming information and navigate our existence.
I think a lot about distinctions like these not because I’m an irritating pedant (which, in truth, I really am sometimes), but because I believe the ability to consciously distinguish between what we see, hear, and experience – before and after adding the categories that we use as shortcuts to help us get through life – is at the very heart of what we call empathy. Empathy is, after all, the ability to see and experience things from another’s perspective, even if this mode of vision and experience does not come naturally to you.
I believe there’s a lot to be gained by sometimes practicing this literal, granular empathy at the level of our moment-to-moment sensory perceptions. I propose that this agility of mind is crucial when the goal is getting out of our own heads and really listening to people whom we see as fundamentally different from us. Nobody (I hope) needs to care very much about the outcome of an argument over whether a dress is white and gold or blue and black. But removing the artificial foregrounding of personal bias is a great deal more important when discussing, for example, what a man said or did that harmed me or made me feel unsafe as a woman, or what I said as a white person that made a person who is not white feel threatened, invisible, or unwelcome. These experiences are perceived differently based on context and awareness, but they are real, just as surely as “cat” and “mat” are completely different words to people who live and function inside of the language these words belong to.
I don’t really care about The Dress. But I care about trying like hell to listen to the people whose personal, cultural, gender, or linguistic context gives them the authority to be experts on their own perceptions and experiences. Ontological empathy is really just empathy applied holistically. Used smartly, it can save us from a lot of petty arguments, as well as a lot of huge and harmful ones.