Depts. of Computing Science and Psychology, University of Alberta; Canadian Institute for Advanced Research

Computational models of language meaning have been used to explore how the human brain represents word meaning during speech perception and reading. Though these language models are trained only on large collections of text and know nothing about the brain, they appear to represent information in a way that mirrors the language-perceiving brain. In this talk I will describe our work, which used a decoding approach to detect the meaning of a word or phrase not during perception, but in preparation for language production (speaking). We found that word meaning can be decoded pre-utterance, and discovered interesting connections between the representations of words in isolation and words in a phrase.
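The abstract does not specify the decoding method, but a common approach in this line of work is to learn a regularized linear map from brain activity to word-embedding space and read out the decoded word by nearest-neighbor matching. The following is a minimal sketch of that generic approach under stated assumptions; all dimensions and data here are synthetic placeholders, not the authors' actual pipeline.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Hypothetical dimensions: n trials of pre-utterance brain activity
# (sensor/voxel features) paired with d-dimensional word embeddings
# from a text-trained language model.
n_trials, n_features, d = 200, 500, 300
X = rng.standard_normal((n_trials, n_features))       # brain activity per trial
embeddings = rng.standard_normal((n_trials, d))       # embedding of the word to be produced

# Fit a cross-validated ridge regression from brain activity to embedding space.
decoder = RidgeCV(alphas=np.logspace(-2, 4, 13))
decoder.fit(X[:150], embeddings[:150])

# Decode held-out trials: predict an embedding, then pick the closest
# candidate word by cosine similarity (a standard decoding readout).
predicted = decoder.predict(X[150:])
sims = cosine_similarity(predicted, embeddings[150:])
decoded = sims.argmax(axis=1)  # best-matching candidate index per trial
accuracy = (decoded == np.arange(sims.shape[0])).mean()
print(f"nearest-neighbor decoding accuracy: {accuracy:.2f}")

With real recordings, the held-out similarity matrix would be computed against a fixed candidate vocabulary, and decoding accuracy would be assessed against chance with a permutation test.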
