NIST AI Safety Institute Consortium

Katherine Elkins and Jon Chun are Principal Investigators in the NIST AI Safety Institute Consortium, representing the 25,000-member Modern Language Association. Their work focuses on LLM evaluation with emphasis on linguistic edge cases: how models process negation, syntactic framing, and persuasion in ways that create safety vulnerabilities invisible to purely technical evaluation methods.

The premise is straightforward. Language models are systems for processing and generating language, yet most evaluation frameworks treat language as a transparent medium rather than a complex cultural artifact with rhetorical, contextual, and pragmatic dimensions. Elkins and Chun bring the analytical tools of literary and linguistic scholarship to questions of AI alignment, safety, and trustworthiness. The argument is not that this expertise supplements technical AI safety work; it is that it is essential to it.

Schmidt Sciences HAVI

As Principal Investigator for a $330,000 Schmidt Sciences Humanities and AI Virtual Institute grant, Elkins leads one of 23 teams selected worldwide to explore how AI can serve humanistic scholarship. The "Archival Intelligence" project builds AI tools to rescue endangered cultural archives in New Orleans — using machine learning to process, transcribe, and make accessible historical documents from communities whose records face permanent loss. The project's methodology centers on community-governed data sovereignty, ensuring that the people whose heritage is being preserved maintain control over access, representation, and use.

UNESCO

Elkins has served as an expert consultant for UNESCO's MONDIACULT initiative on AI and cultural heritage, advising on frameworks for multilingual cultural preservation and digital heritage in an era of rapid technological change. Her work with UNESCO addresses how AI systems can be designed to support cultural diversity rather than flatten it — a concern with direct implications for how language models are trained, deployed, and governed across linguistic and cultural contexts.

Meta Open Innovation AI Research Community

Elkins is a member of Meta's Open Innovation AI Research Community, contributing to discussions on open-source AI development, transparency, and responsible deployment. This engagement connects to her broader research on the governance of open-source AI, including "If Open Source Is to Win, It Must Go Public," a policy paper co-authored with International Public AI that analyzes near- to mid-term risks and opportunities in open-source generative AI and argues for public participation in AI development decisions.

OpenAI Higher Education Forum

Elkins was selected as a speaker for OpenAI's Education Guild Higher Education Forum in San Francisco (October 2025), presenting on computational humanities research and the human-centered AI curriculum model. The forum convened researchers and educators exploring how AI is transforming higher education — a domain where Elkins' nearly decade-long track record of integrating AI into humanities and social science education provides a distinctive, evidence-based perspective.

Bloomberg & Industry Engagement

Elkins created the AI Strategy course for Bloomberg's professional education platform, designed to help professionals understand and integrate AI into organizational workflows and strategic decision-making. This industry engagement reflects her conviction that AI literacy requires not just technical knowledge but the contextual, ethical, and strategic judgment that humanities and social science training cultivates. She has also served as co-PI for the Notre Dame-IBM Tech Ethics Lab, investigating how well generative AI can predict human behavior in high-stakes decision-making contexts.