THE FUNCTIONAL ARCHITECTURE OF LANGUAGE
Click bolded words below for cool videos and papers!
Language is unique to the human species and universal across cultures. It plays a fundamental psychological role: it generates structures that carry meanings and are efficiently designed to transmit those meanings from the mind of one individual to the mind of another, thereby influencing people's behavior.
Thus, the structures we find in language should reflect the structures of thought that they transmit, and characteristics of language use should reflect characteristics of human interaction. In other words, language provides a window into both cognitive and social psychology. Formulating a theory of language, then, is a critical step towards understanding the human experience.
But haven't we already solved language? We have Siri, Alexa, and Google Translate. Well, even state-of-the-art, "intelligent" machines remain rigid and fragile: they can be fooled, tricked, or broken by sentences that humans have no problem with. How do our minds achieve fast, robust, and flexible comprehension? What sophisticated cognitive mechanisms make us experts at language, so much so that comprehension usually feels effortless?
The mission of my research is to carve the phenomenon of “comprehension” into its constituent components: to determine (1) the distinct mechanisms involved in language processing; (2) the division of “mental labor” across them during comprehension; and (3) their place within the broader architecture of the human mind. This mission addresses questions at three levels:
First, which components of comprehension get their own, dedicated cognitive machinery? Which are inseparable from one another and are supported by a joint mechanism? And which rely on general cognitive mechanisms that serve many domains beyond language?
Second, when we “know the meaning” of an utterance, what kind of knowledge do we have? What is the format of the mental structures that our minds construct during comprehension? Which distinctions in meaning do these structures make more—or less—salient? And what algorithms are used to manipulate these structures?
Third, how is comprehension implemented in the neural circuits of the brain? What constraints do the characteristics of the biological tissue inside our skulls place on the processes of comprehension? How does the brain compensate for injuries that affect language processing, and what determines whether it succeeds?
The game plan
To understand how comprehension unfolds in our minds, I study how it engages our brains. Using functional MRI (mostly) and tools from network neuroscience, I characterize functional networks—sets of regions showing coordinated activity—that are recruited when adult native speakers understand language. What is their internal organization? Which networks are distinct vs. overlapping? How does each contribute to comprehension? And how is information integrated across them?
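The core idea behind "coordinated activity" can be sketched in a few lines. This is a hypothetical, minimal illustration (with synthetic data standing in for preprocessed fMRI): regional BOLD time series are correlated pairwise, and strongly co-fluctuating regions are grouped into a functional network.

```python
import numpy as np

# Synthetic stand-in for preprocessed fMRI: BOLD signal for 5 brain
# regions over 200 time points. Regions 0-2 share a common driving
# signal, so they show coordinated activity.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
bold = rng.standard_normal((5, 200)) * 0.5
bold[:3] += shared

# Functional connectivity: pairwise correlation of regional time series
fc = np.corrcoef(bold)

# A simple (illustrative) network definition: region pairs whose time
# series correlate above a threshold belong to the same functional network
threshold = 0.5
network_edges = np.argwhere(np.triu(fc > threshold, k=1))
print(network_edges)
```

Real analyses involve many more regions, careful preprocessing, and more principled methods for delineating networks than a single correlation threshold; this only illustrates the basic logic.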
My research combines two approaches: first, I use task-based, hypothesis-driven paradigms to identify functional networks in individual brains, based on well-established signature responses; then, I use task-free, data-driven paradigms to examine how these networks behave during situations that approximate natural comprehension and, more generally, everyday thought.
In more recent work, I use computational methods to evaluate representations of meaning that are generated by algorithms trained on natural texts. I examine what knowledge—about words, their combinations, and the underlying concepts—is implicitly captured by these representations, and compare it against benchmarks based on behavioral data. I test which features of the linguistic input are minimally required for machines to extract this knowledge.
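One common way to compare learned representations against behavioral benchmarks is to correlate model-derived similarities with human similarity judgments. The sketch below is purely illustrative: the word vectors and ratings are made up, whereas real evaluations use embeddings trained on large corpora and published datasets of human judgments.

```python
import numpy as np

# Made-up word embeddings (real ones come from models trained on text)
embeddings = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.2]),
    "truck": np.array([0.0, 0.8, 0.3]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Word pairs with made-up human similarity ratings on a 0-1 scale
pairs = [("cat", "dog", 0.9), ("car", "truck", 0.85), ("cat", "truck", 0.1)]

model_sims = [cosine(embeddings[a], embeddings[b]) for a, b, _ in pairs]
human_sims = [rating for _, _, rating in pairs]

# Agreement between model and human judgments (Pearson correlation here;
# published benchmarks typically report Spearman rank correlation)
r = np.corrcoef(model_sims, human_sims)[0, 1]
print(round(r, 2))
```

The higher the correlation, the more the model's representational space mirrors the distinctions humans make; the same logic scales up to benchmarks with thousands of word pairs.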