The last company you want reading your mind


GNN Web Desk | Published Feb 20, 2025, 4:00 pm
Elon Musk has arguably been the boldest broligarch when it comes to brain-machine interfaces. But Mark Zuckerberg is hot on his heels. Shortly after Musk co-founded Neuralink — the company that’s put chips in three human brains, and counting — in 2016, Meta (then Facebook) also ventured into neurotechnology research, announcing plans to build tech that would let people type with their brains and hear language through their skin.

Since then, Meta-funded researchers have figured out how to decode speech from activity recorded by surgically implanted electrodes inside people’s brains. While brain surgery could feel worth it for a paralyzed person who wants to regain the ability to communicate, invasive devices like these are a hard sell for someone who just wants to type faster. Commercial devices regular people might actually want need to be wearable and removable, rather than permanent.

Meta tabled its efforts to build consumer brain-computer interfaces a few years ago: Brain-reading headbands weren’t ready for prime time. Instead of developing new gadgets directly, the company has been investing in slower-burning neuroscience research. Their hope is that studying the brain will help them build AI that’s better at stuff humans are good at, like processing language. Some of this research still focused on mind-reading: specifically, decoding how the brain produces sentences.

This month, though, Meta made a breakthrough. In collaboration with the Basque Center on Cognition, Brain and Language, researchers at Meta’s Fundamental Artificial Intelligence Research (FAIR) lab were able to accurately decode unspoken sentences from brain signals recorded outside the skull — no surgery required. This was just in a lab, of course. But these findings mark a major step toward the wearable mind-reading devices Zuckerberg promised eight years ago.

And as brain-to-text devices inch toward commercial viability in the not-so-distant future, we’ll need to grapple with what it means for Meta to be their gatekeeper. In the lab, mind-reading technology promises to reveal previously unknowable information about how our brains construct thoughts, make decisions, and guide our actions. But out in the world, tech companies may misuse our brain data unless we establish and enforce regulations to stop them.

Meta can decode unspoken sentences from your brain’s magnetic fields

Until a couple of years ago, researchers couldn’t decode unspoken language without implanting electrodes inside the brain, which requires surgery. In 2023, scientists at the University of Texas used fMRI, coupled with a version of the AI models that power ChatGPT, to decode the gist of unspoken sentences from brain activity. But fMRI machines cost millions of dollars and can outweigh a fully grown elephant, limiting their usefulness outside the lab.

Because neuroscientists are generally unwilling to stick recording devices inside a human’s brain, most studies of the human brain involve measuring some proxy for neural activity itself. fMRI scanners measure how much blood flows to brain cells while they work, which entails a bit of a lag. Another method, called magnetoencephalography (MEG), measures the magnetic fields brain cells create when they send electrical signals.
While neither of these techniques can track what individual cells are doing, they both provide a rough snapshot of the brain’s activity patterns while someone is doing a task, like reading or typing. The cool thing is that unlike fMRI, MEG can record the brain in near-real time.

So, Meta researchers recruited 35 volunteers to type sentences on a keyboard while sitting in an MEG scanner, which looks like a salon hair-drying chair from outer space. Some also had EEG (electroencephalography) electrodes gelled to their faces and scalps to record electrical signals radiating from brain cells through their skulls.

Each person’s brain activity helped train an AI model to guess what they typed. Essentially, part of the model learned to match patterns of brain activity to the letters someone was typing at the time. Researchers fed another part of the model a bunch of Wikipedia articles to teach it how sentences work, and what words often appear next to each other in different contexts. With this information, if someone meant to type “I love you,” but their brain signals read “I lovr yoi” — possibly because their brain actually led them to make a typo — the model could effectively autocorrect that prediction, because it knows how letters and words should work in context.
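To make that two-part design concrete, here is a minimal, hypothetical sketch in Python. It is not Meta’s actual model: the real decoder was trained on MEG and EEG recordings, and its language knowledge came from Wikipedia rather than a toy corpus. Every function name and number below is invented for illustration, but the idea is the same — combine a noisy per-letter “decoder” with simple statistics about which letters tend to follow which, so that “i lovr yoi” comes out as “i love you.”

```python
import math

# Letters the toy decoder can output.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def train_bigram_lm(corpus: str) -> dict:
    """Estimate P(next letter | previous letter) from a stand-in corpus.
    (In the study, the language knowledge came from Wikipedia articles.)"""
    counts = {prev: {nxt: 1.0 for nxt in ALPHABET} for prev in ALPHABET}  # add-one smoothing
    for prev, nxt in zip(corpus, corpus[1:]):
        if prev in counts and nxt in ALPHABET:
            counts[prev][nxt] += 1.0
    return {prev: {nxt: c / sum(row.values()) for nxt, c in row.items()}
            for prev, row in counts.items()}

def fake_decoder_output(intended: str, decoded: str = "") -> dict:
    """Stand-in for the brain-signal half of the model: a probability for each
    letter, mostly confident, but sometimes favoring a wrong letter."""
    dist = {ch: 0.45 / (len(ALPHABET) - 1) for ch in ALPHABET}
    dist[decoded or intended] = 0.55   # the decoder's top guess
    if decoded:
        dist[intended] = 0.35          # the right letter stays a close runner-up
    total = sum(dist.values())
    return {ch: p / total for ch, p in dist.items()}

def autocorrect(decoder_outputs: list, lm: dict, lm_weight: float = 2.0) -> str:
    """Greedily pick each letter by combining the decoder's noisy probabilities
    with what the language model expects to follow the previous letter."""
    text = ""
    for dist in decoder_outputs:
        def score(ch: str) -> float:
            emission = math.log(dist[ch])
            transition = math.log(lm[text[-1]][ch]) if text else 0.0
            return emission + lm_weight * transition
        text += max(ALPHABET, key=score)
    return text

# Toy corpus standing in for Wikipedia.
lm = train_bigram_lm("i love you " * 50 + "we love our people and you love them too ")

# On its own, the decoder would read "i lovr yoi"; the language model pulls it back.
brain_signal = ([fake_decoder_output(c) for c in "i lov"]
                + [fake_decoder_output("e", decoded="r")]
                + [fake_decoder_output(c) for c in " yo"]
                + [fake_decoder_output("u", decoded="i")])
print(autocorrect(brain_signal, lm))  # expected output: i love you
```

The real system works on continuous brain recordings rather than tidy letter-by-letter probabilities, and its language model is far richer than character bigrams, but the division of labor is the same: one component reads the brain, the other knows the language.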
Using EEG, which is much more portable than an fMRI or MEG scanner, Meta researchers were able to use AI to decode the exact letters someone was typing about a third of the time. That doesn’t sound particularly impressive, until you consider that EEG records brain cells via electrodes outside the skull, many layers of separation away from the brain itself. It’s like trying to eavesdrop on a conversation at a crowded bar by standing outside and holding a glass against the wall: Given all the noise, catching even a third of that conversation is already an achievement.

MEG captures brain activity more precisely than EEG, because magnetic signals from brain cells don’t get as distorted by the skull as electrical signals do. By feeding MEG data to their AI model, Meta researchers accurately decoded between 70 and 80 percent of what people typed, blowing previous models out of the water. So, if Meta ever wants to build mind-reading headbands, recording magnetic fields might be their best bet.

Like fMRI, the MEG device used in this study was huge and expensive. But wearable helmet-like MEG scanners, which weigh only a few pounds, already exist and are even more sensitive than non-portable scanners. These portable MEG devices are just a couple of pounds heavier than the Quest 3, Meta’s latest VR headset, and about as silly-looking. While these MEG devices still don’t work outside of special magnetically shielded rooms (nor are they available to the public yet), it’s not hard to imagine a future where they could.

Tech companies won’t protect brain data unless we make them

Meta isn’t the only tech giant investing heavily in neuroscience research. Google and Microsoft both have teams dedicated to studying the brain, and NVIDIA and IBM both collaborate with neuroscience research institutions. The fields of AI and neuroscience have a long history of cross-pollination. The brain has a lot of functions that tech developers want to replicate in computers, like energy efficiency and learning without massive sets of training data. And tech companies build tools that neuroscientists want to use. (The idea of using non-invasive brain scans to diagnose mental illness has been trendy in neuroscience for decades. After all, it would be incredibly convenient for medical practitioners if diagnosing depression were as easy as running a quick EEG scan.)

Here, Meta used the brain data they collected while people typed to study how the brain transforms abstract ideas into words, syllables, and letters, with the long-term goal of figuring out how to help AI chatbots do the same. The data support a long-held hypothesis among neuroscientists and linguists that we produce speech from the top down. As I prepare to say something, my brain pictures the whole thing first (“I’m going to lunch soon”), then zooms in on one word (“going”), then one syllable (“go-”). As I type, my brain focuses on each specific letter (“g,” “o,”...) as it tells my fingers what to do. Meta saw that these representations — context, words, syllables, letters — all overlap during language production, peaking and fading in strength at different times.

Understanding language production will also, in theory, help Meta achieve their stated goal of “restor[ing] communication for those who have lost the ability to speak.” And there are indeed millions of people recovering from traumatic brain injury, stroke, or another neurological disorder that makes it hard to talk. A wearable device that makes communication easy again could be a hugely positive force in someone’s life.

But we know that’s not their only motivation. For Silicon Valley, the brain also represents the final barrier between humans and their devices. A quick sanity check: Meta’s goal was never to merge humans with computers (that’s Musk’s thing), but to sell a portable, removable headset that someone could use to type or play video games with their mind.

To manifest a device like this, Meta needs to cross two huge technological hurdles, and one even bigger ethical one. First, they need to decode unspoken thoughts from outside the skull. Check. Second, they need to do that with a device that someone could reasonably afford, keep in their house, and wear on their head. For now, this is pretty far off. Most importantly, once these devices exist, we’ll need robust protections for people’s cognitive liberty — our fundamental right to control our own consciousness. The time for those safeguards isn’t after the devices hit stores. It’s now.

“Facebook is already great at peering into your brain without any need for electrodes or fMRI or anything. They know much of your cognitive profile just from how you use the internet,” Roland Nadler, a neuroethicist at the University of British Columbia, told my colleague Sigal Samuel back in 2019. Meta already uses AI to extrapolate your mental health from your digital footprint. They use AI to flag, and sometimes delete, posts about self-harm and suicide, and can trigger nonconsensual “wellness checks” when they detect concerning messages on Messenger or WhatsApp.

Given how much convenience we gain by giving away personal data — food deliveries, remote work, connecting with friends online — lots of people give up on digital privacy altogether. Even though many people feel uncomfortable with the amount of personal information companies take from us, they also believe they have no control over their privacy.

Last year, neuroscientists, lawyers, and lawmakers began pushing legislation to explicitly include neural data in state privacy laws. Some smaller neurotech companies are already gathering brain data from consumer products — stronger protections need to be put in place before massive companies like Meta can do the same.
Zuckerberg has spent the past two months racing to Trump-ify Meta. His company is unlikely to handle our most private data with care, at least not unprompted. But in a world where Meta-branded brain-to-text headbands are as normal as keyboards are now, sharing brain data might feel like a prerequisite for participating in normal life.

Imagine a workplace where, instead of giving you a monitor and a keyboard at the office, they give you a text-decoding helmet and tell you to strap in. If mind-typing becomes the default for computer systems, then avoiding brain-to-text devices will feel like avoiding smartphones: possible, sure. But certainly not the path of least resistance.

As our mental security becomes less guaranteed, we’ll need to decide whether the convenience of controlling stuff with our minds is worth letting tech companies colonize our last truly private space.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!