MIA Talks

Primer: Machines read, humans read: parallels between computer and human representations of meaning (Note: 11am start)

September 16, 2020
University of Alberta

Computational linguists build models of language meaning by processing huge bodies of text, often scraped from the internet. These learned models typically represent the meaning of a word as a point in a high-dimensional space. When people read, their brains produce a representation of meaning that can also be thought of as a point in a high-dimensional space, one defined by neuronal firing patterns. Using brain imaging, we can record these representations (albeit in a very lossy way) and compare human representations to those learned by a computer. Here I will describe the framework we use to make these comparisons (often called decoding), which allows us to search for neural patterns correlated with the dimensions of word meaning. Using an example case study, I will demonstrate the utility of this framework and present evidence that these neural patterns recur during a phrase-reading paradigm.
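The decoding framework mentioned above can be sketched in miniature: learn a linear map from (simulated) neural activity back into word-embedding space, then identify the stimulus word whose embedding lies closest to the prediction. Everything here is a hypothetical toy, not the speaker's actual pipeline: the embeddings, the fixed "encoding" matrix standing in for the brain, and the word list are all invented for illustration.

```python
# Toy decoding sketch: voxel pattern -> predicted embedding -> nearest word.
# All data below are hypothetical; real work uses fMRI/MEG recordings and
# text-derived embeddings with many more dimensions.

# Hypothetical 3-d "meaning" vectors for three words.
EMBEDDINGS = {
    "dog": [1.0, 0.0, 0.0],
    "cat": [0.0, 1.0, 0.0],
    "car": [0.0, 0.0, 1.0],
}

# Fixed mixing matrix standing in for the brain's (unknown) encoding:
# each simulated 5-voxel pattern is a linear combination of embedding dims.
MIX = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [0.0, 1.0, 1.0],
]

def neural_pattern(emb):
    """Simulated voxel activity for one word (MIX applied to the embedding)."""
    return [sum(m * e for m, e in zip(row, emb)) for row in MIX]

def fit_decoder(xs, ys, lr=0.1, steps=500):
    """Least-squares linear decoder (voxels -> embedding), trained by
    stochastic gradient descent on squared error."""
    n_out, n_in = len(ys[0]), len(xs[0])
    B = [[0.0] * n_in for _ in range(n_out)]
    for _ in range(steps):
        for x, y in zip(xs, ys):
            pred = [sum(b * xi for b, xi in zip(row, x)) for row in B]
            for i in range(n_out):
                err = pred[i] - y[i]
                for j in range(n_in):
                    B[i][j] -= lr * err * x[j]
    return B

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

words = list(EMBEDDINGS)
xs = [neural_pattern(EMBEDDINGS[w]) for w in words]
ys = [EMBEDDINGS[w] for w in words]
B = fit_decoder(xs, ys)

def decode(pattern):
    """Map a voxel pattern into embedding space, return the nearest word."""
    pred = [sum(b * p for b, p in zip(row, pattern)) for row in B]
    return max(words, key=lambda w: cosine(pred, EMBEDDINGS[w]))

for w in words:
    print(w, "->", decode(neural_pattern(EMBEDDINGS[w])))
```

In real studies the same comparison is typically done with held-out words and a ranked or 2-vs-2 accuracy measure, since evaluating on the training words, as this toy does, would trivially succeed.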