
By Amit Malewar 13 May, 2025
Collected at: https://www.techexplorist.com/key-units-large-ai-models-specific-jobs/99362/
Large language models (LLMs) can handle language tasks, logic, and social reasoning. In the human brain, a core language system supports language processing.
EPFL researchers found key units in AI models that work similarly to the brain’s language system. When these units were turned off, the AI struggled with language tasks, suggesting they play a crucial role in how AI processes language.
Researchers are still uncovering how large language models (LLMs) function internally, especially how different units handle specific tasks. Inspired by brain networks like the Language Network and Theory of Mind Network, scientists at the NeuroAI Laboratory and Natural Language Processing Laboratory studied whether LLMs have specialized modules for distinct functions.
At the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics in Albuquerque, they presented findings from their analysis of 18 popular LLMs. Their research suggests that specific units form a core network dedicated to language processing, mirroring structures found in the human brain.
Inspired by neuroscience, researchers studied how large language models (LLMs) process sentences. They identified “language-selective units”—specific components that responded more actively to real sentences than random word lists, similar to how the human brain’s Language Network operates.
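To make that localizer procedure concrete, here is a minimal sketch in Python, assuming a Hugging Face causal language model (GPT-2 as a stand-in) and toy stimuli. The example sentences, the scrambled word lists, and the simple difference-of-means selectivity score are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch of a "language localizer" for an LLM: compare per-unit
# activations on real sentences vs. scrambled word lists.
# Assumptions: GPT-2 as a stand-in model, toy stimuli, difference-of-means
# as the selectivity score.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM could stand in here
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

sentences = ["The cat sat quietly on the warm mat.",
             "She opened the window to let in fresh air."]
word_lists = ["mat warm the quietly cat on sat the",
              "air fresh in let to window the opened she"]

def mean_unit_activations(texts):
    """Average hidden-state activation per unit (layer x hidden dim), over tokens and texts."""
    acts = []
    with torch.no_grad():
        for t in texts:
            ids = tok(t, return_tensors="pt")
            hs = model(**ids).hidden_states          # tuple: (layers + 1) x [1, seq, hidden]
            stacked = torch.stack(hs[1:])            # drop embeddings -> [layers, 1, seq, hidden]
            acts.append(stacked.mean(dim=(1, 2)))    # average over tokens -> [layers, hidden]
    return torch.stack(acts)                          # [n_texts, layers, hidden]

sent_acts = mean_unit_activations(sentences)
list_acts = mean_unit_activations(word_lists)

# "Language selectivity": how much more a unit responds to sentences than to word lists.
selectivity = sent_acts.mean(0) - list_acts.mean(0)   # [layers, hidden]
k = int(0.01 * selectivity.numel())                   # keep the top ~1% of units, per the article
top_vals, top_idx = selectivity.flatten().topk(k)
print(f"Selected {k} candidate language-selective units out of {selectivity.numel()}")
```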
To test their importance, they removed these units and compared the effects to removing random units. The result? Without language-selective units, the models could no longer generate coherent text or perform well on language tasks, confirming their crucial role in AI’s ability to process language.
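A hedged sketch of that ablation comparison, continuing from the code above: the selected units are zeroed during generation, and the effect is compared against zeroing an equally sized random set. The forward-hook zeroing and the GPT-2-specific module path (`model.transformer.h`) are assumed implementation details, not the study's exact method.

```python
# Ablation sketch: zero the language-selective units during generation and
# compare with zeroing an equally sized random set of units.
# Reuses `model`, `tok`, `top_idx`, and `selectivity` from the previous sketch.
import torch

hidden_size = model.config.hidden_size
n_layers = model.config.num_hidden_layers

def ablate(unit_indices):
    """Register hooks that zero the chosen (layer, unit) pairs; return handles for cleanup."""
    per_layer = {l: [] for l in range(n_layers)}
    for idx in unit_indices.tolist():
        per_layer[idx // hidden_size].append(idx % hidden_size)
    handles = []
    for layer, units in per_layer.items():
        if not units:
            continue
        def hook(module, inputs, output, units=tuple(units)):
            output[0][..., list(units)] = 0.0   # zero the selected hidden dims in place
        handles.append(model.transformer.h[layer].register_forward_hook(hook))
    return handles

def sample(prompt="The scientists discovered"):
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                         pad_token_id=tok.eos_token_id)
    return tok.decode(out[0], skip_special_tokens=True)

print(f"[no ablation] {sample()}")
for label, idx in [("language-selective", top_idx),
                   ("random", torch.randperm(selectivity.numel())[:len(top_idx)])]:
    handles = ablate(idx)
    print(f"[{label} ablated] {sample()}")
    for h in handles:
        h.remove()
```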
Researchers found that fewer than 100 neurons, about 1% of a large language model's (LLM) units, are critical for language processing. When these were removed, the model lost its ability to generate and understand language, confirming their importance.
While past machine learning studies had identified language-related networks, doing so required complex training procedures. This experiment instead borrowed localizer methods from human neuroscience, making the discovery surprisingly straightforward.
This raised a bigger question: Could localizers used to study brain networks, like the Theory of Mind, also identify specialized AI units? Researchers tested this and found that some models have units for reasoning and social thinking, while others don’t, opening new possibilities for understanding AI’s internal structure.
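In principle, the same contrast could be rerun with other localizer stimuli. Below is a toy illustration reusing the helper above with made-up Theory of Mind items (a false-belief story versus a matched control story); the stimuli are placeholders, not the materials used in the study.

```python
# Toy reuse of the localizer contrast with Theory-of-Mind-style stimuli.
# The stories below are invented placeholders for illustration only.
tom_stories = ["Anna put her keys in the drawer, then Ben moved them to the shelf while she was away."]
control_stories = ["The keys were in the drawer in the morning and on the shelf in the afternoon."]

tom_selectivity = (mean_unit_activations(tom_stories).mean(0)
                   - mean_unit_activations(control_stories).mean(0))
print("Most ToM-selective unit (layer, dim):",
      divmod(int(tom_selectivity.flatten().argmax()), model.config.hidden_size))
```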
Some AI models have specialized units for reasoning and social thinking, while others don't, raising questions about how training methods and data influence this specialization. Researchers are also exploring whether such specialized units improve performance.
Future studies will examine multimodal AI models trained on text, images, video, and sound to see how they process information across different formats. Scientists are curious if a model receiving visual language input would show the same deficits observed when disrupting language units in text-based models.
These findings deepen our understanding of how AI processes information, drawing parallels to human brain networks. Scientists hope this research will advance neuroscience, improve disease diagnosis, and help us better understand cognition.