Researchers have reported a significant advance in understanding the human language network using large language models. A study published in Nature Human Behaviour on 3 January 2024, titled ‘Driving and suppressing the human language network using large language models’, shows that transformer models such as GPT not only generate human-like language but can also predict human brain responses to language.
The study used brain responses, measured with functional MRI (fMRI), to 1,000 diverse sentences, and demonstrated that a GPT-based encoding model can predict the magnitude of the brain response evoked by each sentence. The researchers then used the model to identify new sentences predicted to drive or suppress responses in the human language network, and found that these model-selected sentences indeed strongly drove and suppressed the activity of human language areas in new individuals.
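In outline, an encoding model of this kind works as follows: embed each sentence with a pretrained language model, fit a regularized regression from those embeddings to the measured fMRI response magnitudes, and then rank unseen candidate sentences by their predicted response. The sketch below illustrates that logic in Python with scikit-learn; the function names, the use of cross-validated ridge regression, and the candidate-pool setup are illustrative assumptions, not the authors' exact pipeline.

```python
# A minimal sketch of the encoding-model approach, assuming sentence
# embeddings have already been extracted from a pretrained LM.
# All names and modeling choices here are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import RidgeCV

def fit_encoding_model(embeddings, responses):
    """Map sentence embeddings (n_sentences x n_dims) to scalar fMRI
    response magnitudes (n_sentences,) with cross-validated ridge."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    model.fit(embeddings, responses)
    return model

def select_extreme_sentences(model, candidate_embeddings, candidate_texts, k=10):
    """Return the k sentences predicted to most strongly drive and
    suppress the language network's response."""
    preds = model.predict(candidate_embeddings)
    order = np.argsort(preds)
    suppress = [candidate_texts[i] for i in order[:k]]      # lowest predicted response
    drive = [candidate_texts[i] for i in order[-k:][::-1]]  # highest predicted response
    return drive, suppress
```

In the study's closed-loop design, sentences selected this way were then shown to new participants to test whether the predicted drive and suppress effects held up, which they did.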
A systematic analysis of the model-selected sentences revealed that the surprisal and well-formedness of linguistic input are key determinants of response strength in the language network. These results establish that neural network models can not only mimic human language but also be used to non-invasively control neural activity in higher-level cortical areas such as the language network.
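For concreteness, surprisal here refers to the negative log-probability a language model assigns to a sentence. The sketch below computes it with GPT-2 via the Hugging Face transformers library; the choice of model, the nats unit, and the total (rather than per-token) normalization are assumptions for illustration, not necessarily the paper's exact measure.

```python
# A hedged sketch of computing sentence surprisal with a pretrained LM.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_surprisal(sentence: str) -> float:
    """Total surprisal in nats: -log p(sentence) under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the predicted tokens; multiplying by the number of scored
        # tokens recovers the total negative log-probability.
        loss = model(ids, labels=ids).loss
    return loss.item() * (ids.shape[1] - 1)

print(sentence_surprisal("The dog chased the ball."))        # lower surprisal
print(sentence_surprisal("Colorless green ideas sleep."))    # higher surprisal
```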
This research opens new possibilities in the study of human language processing and has the potential to impact fields such as neuroscience, cognitive science, and artificial intelligence. The ability to predictably drive or suppress the human language network could eventually inform work on language-related disorders and cognitive enhancement.