Sathvik Nair

I'm a PhD student in Linguistics at the University of Maryland, advised by Profs. Philip Resnik and Colin Phillips. I work on computational approaches to understanding how people process language and on applying linguistic knowledge to evaluate language technologies. My work draws on insights from NLP, psycholinguistics, and cognitive science more broadly. My research is supported by the NSF GRFP.

Originally from the Bay Area, I graduated from UC Berkeley with bachelor's degrees in Cognitive Science and Computer Science. There, I closely collaborated with Dr. Stephan Meylan on projects in Profs. Mahesh Srinivasan and Tom Griffiths' groups. Afterwards, I worked as a software engineer at Amazon Web Services in Boston and decided to stay on the East Coast for grad school. I generally accept he/him pronouns.

Email  |  Twitter  |  Github  |  LinkedIn  |  Semantic Scholar  |  Google Scholar  |  CV

News & Highlights

November 2024: Attended EMNLP, where I presented on semantic roles with Rosa Lee at the main conference and on syntactic generalization at CoNLL!

June 2024: Gave my first conference talk on tokenization at SciL at UC Irvine!

May 2024: Presented my work on tokenization at HSP at UMich!

April 2024: Awarded the NSF GRFP!

February 2024: Gave a Language Science Lunch Talk on my background and research goals at the Maryland Language Science Center!

January 2024: Facilitated an interactive workshop on LLMs at the Maryland Language Science Center's Winter Storm series.

December 2023: Presented my first paper of grad school on tokenization and modeling reading times at EMNLP in Singapore!

March 2023: Presented on the relationship between words and context at HSP 2023 in Pittsburgh!

April 2022: Awarded an NSF GRFP Honorable Mention!

December 2021: Paper published in Cognition!

December 2020: Presented a paper based on my thesis at the CogALex workshop at COLING 2020!

May 2020: Awarded Highest Honors and the Glushko Prize for Outstanding Undergraduate Research in Cognitive Sciences for my thesis on polysemy!

Research

My work has two major focuses. The first investigates the extent to which the linguistic generalizations made by language models are human-like. The second seeks to develop cognitively realistic models of the representations and processes behind human language use. I'm also interested in applying insights from these lines of work to improve the interpretability of NLP systems.

Publications

Katherine Howitt, Sathvik Nair, Allison Dods, & Robert Hopkins. Generalizations across filler-gap dependencies in neural language models. CoNLL 2024 [link] [pdf]

Eun-Kyoung Rosa Lee, Sathvik Nair, & Naomi Feldman. A Psycholinguistic Evaluation of Language Models' Sensitivity to Argument Roles. EMNLP Findings 2024 [link] [pdf]

Sathvik Nair & Philip Resnik. Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship? EMNLP Findings 2023 [link] [pdf]

Stephan Meylan, Sathvik Nair, & Tom Griffiths. Evaluating Models of Robust Word Recognition with Serial Reproduction. Cognition, 2021 [link] [pdf]

Sathvik Nair, Mahesh Srinivasan, & Stephan Meylan. Contextualized Word Embeddings Encode Aspects of Human-Like Word Sense Knowledge. CogALex @ COLING 2020 [link] [pdf]

Peer-Reviewed Conference Presentations

Sathvik Nair, Katherine Howitt, Allison Dods, & Robert Hopkins. LMs are not good proxies for human language learners. Talk at BUCLD 2024

Sathvik Nair & Philip Resnik. Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship? Talk at SciL 2024 [abstract]

Sathvik Nair, Colin Phillips, & Philip Resnik. Words, Subwords, and Morphemes: Surprisal Theory and Units of Prediction. Poster at HSP 2024 [abstract]

Katherine Howitt, Sathvik Nair, Allison Dods, & Robert Hopkins. Acquiring generalizations across unbounded dependencies: How language models can provide insight into first language acquisition. Poster at MASC-SLL 2024

Sathvik Nair, Konstantine Kahadze, & Philip Resnik. The Impacts of Subword Tokenization on Psycholinguistic Modeling. Poster at MASC-SLL 2024

Sathvik Nair, Shohini Bhattasali, Philip Resnik, & Colin Phillips. How far does probability take us when measuring psycholinguistic fit? Evidence from Substitution Illusions and Speeded Cloze Data. Poster at HSP 2023 [abstract]

Collaborators, Mentors, Friends, and other Co-Conspirators

Research is never done in a vacuum, and publications don't reflect everyone who has intellectually influenced me. Here are some of those people. Many are connected with UMD's CLIP Lab and Language Science Center, which bring together researchers who approach computation and language from a wide range of perspectives.

Teaching

At UMD:

At UC Berkeley:

Miscellaneous

Other projects and information, academic and otherwise.

Website Template