
Language doesn’t perfectly describe consciousness. Can math?

From humans to machine learning models, information loss is a common problem. When it comes to our own consciousness, language has an indescribability problem.


The idea that language is a clumsy, imperfect tool for capturing the depth and richness of our experiences is ancient. For centuries, a steady stream of poets, philosophers, and spiritual practitioners has pointed to this indescribability, the difficult fact that our experiences are ineffable, or larger than what words can communicate.

But as a frenzy of developments across AI, neuroscience, animal studies, psychedelics, and meditation breathes new life into consciousness research, scientists are devising new ways of, maybe, pushing our descriptions of experience beyond the limits of language. Part of the hope behind what mathematician and physicist Johannes Kleiner recently termed the “structural turn in consciousness science” is that where words fall short, math may prevail.

“My view is that mathematical language is a way for us to climb out of the boundaries that evolution has set for our cognitive systems,” Kleiner told Vox. “Hopefully, [mathematical] structure is like a little hack to get around some of the private nature of consciousness.”

For example, words could offer you a poem about the feeling of standing on a sidewalk when a car driving by splashes a puddle from last night’s rain directly into your face. A mathematical structure, on the other hand, could create an interactive 3D model of that experience, showing how all the different sensations — the smell of wet concrete, the maddening sound of the car fleeing the scene of its crime, the viscous drip of dirty water down your face — relate to one another.

Structural approaches could provide new and more testable predictions about consciousness. That, in turn, could make a whole new range of experimental questions about consciousness tractable, like predicting the level of consciousness in coma patients, which structural ideas like Integrated Information Theory (IIT) are already doing.

But for my money, there will always be a gap between even the best structural models of consciousness and the what-it’s-like-ness of the experiences we have. Mica Xu Ji, a former postdoctoral fellow at the Mila AI Institute and lead author of a new paper that takes a structural approach to making sense of this longstanding fact of ineffability, thinks ineffability isn’t a bug, it’s a feature that evolution baked into consciousness.

From humans to machine learning models, information loss is a common problem. But another way to look at losing information is to see it as gaining simplicity, and simplicity, she explained, helps to make consciousness generalizable and more useful for navigating situations we haven’t experienced before. So maybe ineffability isn’t just a problem that locks away the full feeling of our experiences, but is also an evolutionary feature that helped us survive through the ages.

The math of ineffability

In theory, working out the precise ineffability of an experience is pretty straightforward.

Ji and her colleagues began with the idea that the richness of conscious experience depends on the amount of information it contains. We can already take real-world reads of richness by measuring the entropy, or unpredictability, of electrical activity in the brain.

Her paper argues that to gauge ineffability, all you’d need are two variables: the original state and the output state. The original state could be a direct measure of brain activity, including all the neural processing that goes on beneath conscious awareness. The output state could be the words you produce to describe it, or even the narrativized thoughts in your head about it (unless you have no inner monologue).

Comparing the entropy of those two states would then give you an approximation of the ineffability. The more information lost in the conversion of experience to language, as measured by comparing the relative entropy of the original and output variables, the greater the magnitude of the ineffable, or the information that language left behind. “Ineffability is about how information gets lost as stuff goes downstream in brain processing,” said Kleiner.
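To see how that bookkeeping could work, here is a minimal sketch in Python. Everything in it is invented for illustration: the eight “experience states,” the uniform probabilities, and the crude good/bad report mapping. It also scores the loss as a simple entropy difference rather than with the paper’s full formalism.

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy, in bits, of a distribution given as {state: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical toy numbers: eight equally likely fine-grained experience states.
experience = {f"state_{i}": 1 / 8 for i in range(8)}

# A many-to-one "verbal report" mapping: language lumps the states into two words.
def report(state):
    return "good" if int(state.split("_")[1]) < 4 else "bad"

# The distribution over reports that the mapping induces.
reports = defaultdict(float)
for state, p in experience.items():
    reports[report(state)] += p

h_experience = entropy(experience)   # 3.0 bits of richness
h_report = entropy(dict(reports))    # 1.0 bit survives the translation
ineffable = h_experience - h_report  # 2.0 bits left behind

print(f"richness of experience: {h_experience:.1f} bits")
print(f"richness of the report: {h_report:.1f} bits")
print(f"information language left behind: {ineffable:.1f} bits")
```

Run it and two of the experience’s three bits never make it into words. Real brains and real sentences are not uniform eight-state systems, but the direction of the arithmetic is the point: a many-to-one mapping can only lose information.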

Now, think of the long arc of human evolution. Ineffability means that consciousness can produce simpler representations of the overwhelming richness of pure experience, what the American philosopher William James famously called the “blooming, buzzing confusion” of consciousness. That means encountering a pissed-off tiger can be generalized to the broader idea that all large cats with massive teeth may pose a threat, rather than constraining that lesson to only that specific tiger in that specific context.

In machine learning models, as in humans, simplicity supports generalization, which makes the models useful beyond what they encounter in their training data sets. “Language has been optimized to be simple so that we can transmit information fast,” Ji said. “Ineffability, or information loss, is a fundamental feature of learning systems that generalize.”
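A toy example, with invented animals and features, shows why a lossy code can transfer a lesson that an exact memory cannot:

```python
# A sketch (invented categories and features) of how lossy, coarse
# representations generalize where exact memories do not.

def coarse_code(animal):
    """Throw away identifying detail; keep only the features that mattered."""
    return (animal["size"] == "large", animal["teeth"] == "big")

remembered_tiger = {"species": "tiger", "size": "large", "teeth": "big", "stripes": True}
novel_leopard = {"species": "leopard", "size": "large", "teeth": "big", "stripes": False}

# The survival lesson is stored in compressed form.
threats = {coarse_code(remembered_tiger)}

# An exact-match memory fails on the never-seen animal...
print(novel_leopard == remembered_tiger)      # False
# ...but the lossy code transfers the lesson.
print(coarse_code(novel_leopard) in threats)  # True
```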

Pain, health, and the mundane potential of cracking ineffability

Ineffability is often associated with mysticism, poetry, or heady conversations about the nature of the mind. Or now, the math of information theory and the evolutionary purpose of consciousness.

But ineffability also plays a role in more immediate and mundane concerns. Take chronic pain. One of the most common approaches to understanding what someone with chronic pain is experiencing is to have them self-report the intensity of their pain on a scale from 0 to 10. Another, the Visual Analog Scale, asks them to mark their pain intensity along a 10-centimeter line, 0 being no pain and 10 representing the worst possible pain.

These are ... not the most detailed of measures. They also smuggle in assumptions that can distort how we understand the pain of others. For example, a linear scale with equal spaces between each possible number suggests that alleviating someone’s reported pain from a four to a three is roughly similar to dropping someone else’s from a nine to an eight. But the experiential distance between an eight and a nine could be orders of magnitude greater than between smaller numbers on the scale, leading us to drastically underestimate how much suffering people at the high end of the spectrum are enduring.
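A back-of-the-envelope calculation makes the distortion vivid. Suppose, purely as a made-up assumption, that felt intensity doubles with each point on the 0 to 10 scale:

```python
# Hypothetical: felt intensity doubles with each point on the 0-10 scale.
# The doubling rule is an invented assumption, used only to show how a
# linear chart can hide nonlinear differences in suffering.
def intensity(score):
    return 2 ** score

relief_4_to_3 = intensity(4) - intensity(3)  # 16 - 8 = 8 units
relief_9_to_8 = intensity(9) - intensity(8)  # 512 - 256 = 256 units

print(relief_9_to_8 / relief_4_to_3)  # 32.0
```

Under that assumption, a one-point drop at the top of the scale delivers 32 times the relief of a one-point drop near the middle, even though both look identical on the chart.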

Kleiner explained that structural approaches to representing pain can have the same effect as moving from a 2D image to 3D. “Structural research can distinguish not only location, but different qualities of the pain. Like whether it’s throbbing, for example. We could easily have 20 dimensions.” And each dimension adds more richness to our understanding of the pain. That could lend motivation — and funding — to treating some of the world’s most debilitating conditions that lack effective treatments, such as cluster headaches.
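Here is what such a higher-dimensional pain report might look like, with the dimensions and the numbers invented for illustration; a real structural model would choose its dimensions empirically:

```python
import math

# Invented dimensions and scores for two pain states, in contrast to a
# single 0-10 number.
cluster_headache = {"intensity": 9.5, "throbbing": 8.0, "burning": 7.5,
                    "spread": 1.0, "constancy": 9.0}
tension_headache = {"intensity": 4.0, "throbbing": 2.0, "burning": 1.0,
                    "spread": 6.0, "constancy": 5.0}

def distance(a, b):
    """Euclidean distance between two pain states in this made-up space."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

# Two pains that a single number might flatten together can sit far
# apart once each quality gets its own axis.
print(f"{distance(cluster_headache, tension_headache):.1f}")
```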

The same principle applies to mental health. Many mental health indicators rely on self-reporting the richness of our internal experience on linear scales. But if structural approaches to consciousness can render 3D representations of experience, maybe they can add some richness to how we measure, and therefore manage, mental health at large.

For example, neuroscientist Selen Atasoy has been developing the idea of “brain harmonics,” which measures electrical activity in the brain and delivers actual 3D representations of moments of experience. With those representations, it’s possible that we could learn more about their nature, like the amount of pleasure or pain, by running mathematical analyses based on the harmonic frequencies they contain rather than asking the person to report how they feel via language.
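Atasoy’s actual method builds its harmonics from the brain’s wiring (eigenmodes of the connectome), not from plain sinusoids, so treat the following Python sketch as a loose analogy only: it decomposes a made-up signal into ordinary frequencies and asks which ones carry the energy.

```python
import numpy as np

# A fake one-second "brain signal": an invented mixture of a 10 Hz and a
# 40 Hz component. Real connectome harmonics are brain-network eigenmodes,
# not sinusoids; the analogy here is only in the decomposition step.
rate = 256                     # samples per second
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Decompose into harmonic components and measure the energy per frequency.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)
energy = np.abs(spectrum) ** 2

# The "fingerprint" of the moment: where the energy lives.
for i in np.argsort(energy)[::-1][:2]:
    print(f"{freqs[i]:.0f} Hz carries {energy[i]:.0f} units of energy")
```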

Structural approaches and math surely have their limits. Galileo kicked off the scientific method by assuming that the universe is “written in the language of mathematics,” which demotes the ineffable depths of human experience to, apparently, something irrelevant to science. Reviving that idea with more advanced math would be a mistake.

But language, maybe by design, will never capture the full richness of consciousness. That might be to our benefit, helping us generalize our experience in an ever-uncertain world. Meanwhile, more precise mathematical structures to describe conscious experience could also bring welcome benefits, from grasping just how intense pain can be to conveying the most blissful of pleasures.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
