Ellie Pavlick, an assistant professor at Brown University, will give a virtual seminar on Wednesday, March 24 at 12:15 p.m. ET. This event is open to all Georgia Tech students, faculty, staff, and interested members of the public.
You can lead a horse to water...: Representing vs. Using Features in Neural NLP
A wave of recent work has sought to understand how pre-trained language models work. Such analyses have produced two seemingly contradictory sets of results. On one hand, work based on "probing classifiers" generally suggests that state-of-the-art language models contain rich information about linguistic structure (e.g., parts of speech, syntax, semantic roles). On the other hand, work that measures performance on linguistic "challenge sets" shows that models consistently fail to use this information when making predictions. In this talk, I will present a series of results that attempt to bridge this gap. Our recent experiments suggest that the disconnect is not due to catastrophic forgetting, nor is it (entirely) explained by insufficient training data. Rather, it is best explained in terms of how "accessible" features are to the model following pretraining, where "accessibility" can be quantified using an information-theoretic interpretation of probing classifiers.
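For readers unfamiliar with the probing methodology the abstract refers to, the sketch below illustrates the basic idea: a small classifier is trained on a model's frozen representations to predict a linguistic label, and its performance is taken as evidence that the feature is encoded. This is a generic, minimal illustration, not the speaker's specific method; the synthetic features and the part-of-speech labels are assumptions standing in for real language-model activations and annotations.

```python
# Minimal sketch of a probing classifier: train a linear probe on frozen
# representations to predict a linguistic label (e.g., part of speech).
# Synthetic features stand in for a pretrained model's hidden states.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical frozen representations: 2,000 tokens x 768-dim vectors,
# with 5 part-of-speech classes weakly encoded in the features.
n_tokens, dim, n_classes = 2000, 768, 5
labels = rng.integers(0, n_classes, size=n_tokens)
class_means = rng.normal(scale=0.5, size=(n_classes, dim))
features = class_means[labels] + rng.normal(size=(n_tokens, dim))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# The probe: a linear classifier over the frozen features. High probe
# accuracy suggests the feature is *represented*; whether the model
# actually *uses* it for its own predictions is a separate question.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

The information-theoretic interpretation mentioned in the abstract goes a step further than raw probe accuracy, asking how easily (e.g., with how little description length or training data) a probe can extract the feature, which is one way to make "accessibility" precise.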
Ellie Pavlick is an assistant professor of computer science at Brown University, where she leads the Language Understanding and Representation (LUNAR) Lab. She received her Ph.D. from the University of Pennsylvania. Her current work focuses on building more cognitively plausible models of natural language semantics, with an emphasis on grounded language learning and on the sample efficiency and generalization of neural language models.