AI Safety · Research · Author

Katherine Elkins

Co-Founder & PI, Human-Centered AI Lab
PI, NIST AI Safety Institute Consortium
PI, Schmidt Sciences HAVI

Katherine Elkins is an AI safety researcher whose work focuses on the places where language models break down. Her research reveals how AI systems mishandle negation, respond to emotional manipulation, and fail at exactly the kinds of nuanced prohibition and persuasion that humans navigate every day. Understanding why these failures happen matters enormously for safety, which is why her work draws on deep expertise in how language actually works.

Elkins is Co-Founder and Principal Investigator of the Human-Centered AI Lab, an interdisciplinary research organization. She is a PI in the NIST AI Safety Institute Consortium, representing the 25,000-member Modern Language Association, and is PI of a Schmidt Sciences HAVI grant, leading one of 23 teams selected worldwide to build AI tools that rescue endangered cultural archives in New Orleans.

She has been at this longer than most people realize. In 2016 she co-created what is believed to be the first human-centered AI curriculum, and her students' 300+ research projects have since been downloaded 95,000+ times by readers at institutions in 198 countries. She is the author of The Shapes of Stories: Sentiment Analysis for Narrative (Cambridge University Press, 2022) and the AI industry expert for Bloomberg's AI Strategy course. Her research spans AI safety, computational social science, sentiment analysis, and AI governance. She is a professor at Kenyon College, where she directs the Integrated Program in Humane Studies. Ph.D., UC Berkeley. B.A., Yale.

Katherine Elkins — AI safety researcher
95,336
Downloads of mentored student research from 4,760 institutions in 198 countries
2016
Co-created the world's first human-centered AI curriculum
17+
Peer-reviewed publications across AI, humanities, and governance
61%
Women enrolled in AI courses · 13% Black · 11% Latinx

Selected Highlights

Feb 2026

Quoted in Christian Science Monitor

Commentary on AI safety research and the role of humanities in AI governance.

Feb 2026

NPR: Could AI Save Endangered Archives?

WOSU coverage of the Schmidt Sciences archival intelligence project in New Orleans.

Nov 2025

Forbes: Where AI Meets the Humanities

Feature on the human-centered AI program and curriculum at Kenyon College.

Oct 2025

OpenAI Higher Education Forum

Selected speaker at OpenAI's Education Guild, presenting computational humanities research in San Francisco.

2025

Schmidt Sciences HAVI Award

$330K grant — one of 23 teams worldwide — for AI-powered archival intelligence preserving endangered New Orleans heritage.

2025

RALLY Innovation Conference

Spoke on human algorithms and the intersection of narrative, emotion, and AI systems.

Areas of focus

Elkins' research bridges computational methods with humanistic and social-scientific inquiry, spanning AI safety, computational social science, narrative analysis, cultural heritage, and governance. View full research →

AI Safety

LLM Evaluation & Red-Teaming

How language models process negation, prohibition, and persuasion. Evaluation frameworks for the NIST AI Safety Institute Consortium.

Computational Social Science

Multi-Agent Behavioral Simulation

Benchmarking 90+ model/reasoning combinations for judicial, economic, and political decision-making. 300+ student research projects across every discipline.

Cultural Heritage

Archival Intelligence

Schmidt Sciences HAVI project rescuing endangered New Orleans heritage archives using AI. Community-governed data sovereignty.