On the Other Side of the Mirror: How Language Models Align with the Human Brain
Exploring the Evolution of Linguistic Competence and Brain Alignment in Large Language Models

If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart. — Nelson Mandela
Deciphering how the brain works, and how it processes language, is among the main goals of neuroscience. Human language processing is supported by the language network (LN), a set of left-lateralized frontotemporal regions. The LN responds robustly and selectively to language input, and researchers have increasingly studied it using large language models (LLMs). LLMs are trained to predict the next token in a sequence, and they appear to capture some aspects of the human response to language.
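To make the training objective concrete, here is a minimal sketch of next-token prediction using a toy bigram count model. This is an illustration of the objective only, not how a real LLM is implemented; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count how often each token follows each context token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, context):
    """Return the most frequent next token after `context`, or None."""
    if context not in counts:
        return None
    return counts[context].most_common(1)[0][0]

# Toy corpus, invented for illustration.
corpus = "the brain processes language and the brain predicts words".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "brain" follows "the" twice in this corpus
```

An LLM does the same thing in spirit (assign probabilities to the next token given the context), but with a neural network over long contexts rather than raw counts.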
Given the intriguing similarities, some open questions remain:
- What drives brain alignment in untrained models?
- Is model-brain alignment related to formal linguistic competence (syntax, compositionality) or functional competence (world knowledge, reasoning)?
- What explains this alignment: model size or the type of training?
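Model-brain alignment in this literature is commonly quantified by fitting a linear map from model activations to brain responses and scoring held-out predictions. The sketch below illustrates that general recipe on synthetic data (all dimensions, the ridge penalty, and the data itself are invented stand-ins, not any specific study's setup).

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 50, 10

# Synthetic stand-ins: model activations (X) and brain responses (Y).
X = rng.normal(size=(n_stimuli, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
Y = X @ true_map + 0.5 * rng.normal(size=(n_stimuli, n_voxels))

# Train/test split, ridge regression in closed form, then score each
# "voxel" by the correlation between predicted and held-out responses.
X_tr, X_te, Y_tr, Y_te = X[:150], X[150:], Y[:150], Y[150:]
lam = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_features), X_tr.T @ Y_tr)
pred = X_te @ W
corr = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corr):.2f}")
```

Comparing such held-out scores across untrained models, model sizes, and training regimes is how the questions above are typically operationalized.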
This article discusses some recent articles that try to answer these questions.