Sathvik Nair

I'm a PhD student in Linguistics at the University of Maryland, advised by Profs. Philip Resnik and Colin Phillips. I'm interested in the interaction between language-specific information and domain-general cognitive processes during language comprehension, with a particular focus on prediction. To address these questions, I combine technological advances from NLP (mostly language models) with data and insights from psycholinguistics. My research is supported by the NSF GRFP.

Originally from the Bay Area, I graduated from UC Berkeley with bachelor's degrees in Cognitive Science and Computer Science. There, I closely collaborated with Dr. Stephan Meylan on projects in Profs. Mahesh Srinivasan and Tom Griffiths' groups. Afterwards, I worked as a software engineer at Amazon Web Services in Boston and decided to stay on the East Coast for grad school. I generally accept he/him pronouns.

Email  |  Twitter  |  Github  |  LinkedIn  |  Semantic Scholar  |  Google Scholar  |  CV

News & Highlights

September 2024: Papers accepted at EMNLP Findings (on semantic roles) and CoNLL (syntactic generalization)!

June 2024: Gave my first conference talk on tokenization at SciL at UC Irvine!

May 2024: Presented my work on tokenization at HSP at UMich!

April 2024: Awarded the NSF GRFP!

February 2024: Gave a Language Science Lunch Talk on my background and research goals at the Maryland Language Science Center!

January 2024: Facilitated an interactive workshop on LLMs at the Maryland Language Science Center's Winter Storm series.

December 2023: Presented my first paper of grad school on tokenization and modeling reading times at EMNLP in Singapore!

March 2023: Presented on the relationship between words and context at HSP 2023 in Pittsburgh!

April 2022: Awarded an NSF GRFP Honorable Mention!

December 2021: Paper published in Cognition!

December 2020: Presented a paper based on my thesis at the CogALex workshop at COLING 2020!

May 2020: Awarded Highest Honors and the Glushko Prize for Outstanding Undergraduate Research in Cognitive Sciences for my thesis on polysemy!

Research

My work has two major focuses. The first investigates how different measures from language models can operationalize psycholinguistic hypotheses, evaluating how well the models generalize across morphosyntactic, semantic, and world knowledge. The second seeks to use language models in service of cognitively realistic models of sentence comprehension, both by incorporating linguistic information into their subword representations and by integrating them with more mechanistic theories of prediction during language processing.

Publications

Katherine Howitt, Sathvik Nair, Allison Dods, & Robert Hopkins (2024). Generalizations across filler-gap dependencies in neural language models. Accepted to CoNLL.

Eun-Kyoung Rosa Lee, Sathvik Nair, & Naomi Feldman (2024). A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles. Accepted to EMNLP Findings.

Sathvik Nair & Philip Resnik (2023). Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship? EMNLP Findings [link] [pdf]

Stephan Meylan, Sathvik Nair, & Tom Griffiths (2021). Evaluating Models of Robust Word Recognition with Serial Reproduction. Cognition [link] [pdf]

Sathvik Nair, Mahesh Srinivasan, & Stephan Meylan (2020). Contextualized Word Embeddings Encode Aspects of Human-Like Word Sense Knowledge. Proceedings of the Workshop on the Cognitive Aspects of the Lexicon (CogALex) at COLING 2020 [link] [pdf]

Peer-Reviewed Conference Presentations

Sathvik Nair, Katherine Howitt, Allison Dods, & Robert Hopkins. LMs are not good proxies for human language learners. Accepted as a talk at BUCLD 2024

Sathvik Nair & Philip Resnik. Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship? Talk at SciL 2024 [abstract]

Sathvik Nair, Colin Phillips, & Philip Resnik. Words, Subwords, and Morphemes: Surprisal Theory and Units of Prediction. Poster at HSP 2024 [abstract]

Katherine Howitt, Sathvik Nair, Allison Dods, & Robert Hopkins (2024). Acquiring generalizations across unbounded dependencies: How language models can provide insight into first language acquisition. Poster at MASC-SLL 2024

Sathvik Nair, Konstantine Kahadze, & Philip Resnik. The Impacts of Subword Tokenization on Psycholinguistic Modeling. Poster at MASC-SLL 2024

Sathvik Nair, Shohini Bhattasali, Philip Resnik, & Colin Phillips. How far does probability take us when measuring psycholinguistic fit? Evidence from Substitution Illusions and Speeded Cloze Data. Poster at HSP 2023 [abstract]

Collaborators, Mentors, Friends, and other Co-Conspirators

Research is never done in a vacuum, and publications don't reflect everyone who's intellectually influenced me. Here are some of those people. Many are connected with UMD's CLIP Lab and Language Science Center, which bring together researchers approaching computation and language (more broadly) from all sorts of perspectives.

Teaching

At UMD:

At UC Berkeley:

Miscellaneous

Other projects (not just academic) and information.

Website Template