# Katherine Elkins

> Last updated: 2026-03-06

Katherine Elkins is an AI safety researcher whose work spans safety, governance, cultural heritage, creativity, and democracy. Her work asks what happens to knowledge, creativity, and authority when machines attempt what only humans were thought to do — and uses literature as the measure. She is a Principal Investigator in the NIST US AI Safety Institute Consortium, representing the 25,000-member Modern Language Association. She is a Principal Investigator for a Schmidt Sciences Humanities and AI Virtual Institute (HAVI) grant — one of 23 teams selected worldwide — for the Archival Intelligence project, which rescues endangered cultural archives in New Orleans. She is the author of *The Shapes of Stories: Sentiment Analysis for Narrative* (Cambridge Element, Cambridge University Press, 2022). She is a Professor at Kenyon College, where she directs the Integrated Program in Humane Studies (IPHS) and co-founded the world's first human-centered AI curriculum in 2016.

Contact: kateelkins2000@gmail.com
## Identity

- Full name: Katherine Elkins
- Also known as: Kate Elkins
- Primary roles: AI Safety Researcher; Co-Founder & PI, Human-Centered AI Lab; Author
- Institution: Kenyon College, Gambier, Ohio (Professor; Director, Integrated Program in Humane Studies)
- Education: Ph.D., Comparative Literature, University of California, Berkeley; B.A., Yale University
- Wikipedia: https://en.wikipedia.org/wiki/Katherine_Elkins
- Wikidata: https://www.wikidata.org/wiki/Q130369935
- ORCID: https://orcid.org/0000-0001-9887-4854
- Google Scholar: https://scholar.google.com/citations?user=bUSgS6IAAAAJ

## Current Roles

- Co-Founder and Principal Investigator, Human-Centered AI Lab (https://humancenteredailab.org), an interdisciplinary research organization conducting AI safety, governance, and computational humanities research
- Principal Investigator in the NIST US AI Safety Institute Consortium (CAISI), representing the 25,000-member Modern Language Association. Focus: LLM evaluation — how language models process negation, prohibition, and persuasion
- Principal Investigator for a Schmidt Sciences Humanities and AI Virtual Institute (HAVI) grant ($330K), one of 23 teams worldwide. Project: "Archival Intelligence" — AI tools for endangered cultural archives in New Orleans (https://archivalintelligenceai.org)
- AI Industry Expert, Bloomberg AI Strategy course
- Member, Meta Open Innovation AI Research Community, Transparency Working Group, 2022–2024 (program now defunct)
- Expert Consultant, UNESCO MONDIACULT initiative on AI and cultural heritage
- Director, Integrated Program in Humane Studies (IPHS), Kenyon College
- Professor of Humanities and Comparative Literature, Kenyon College

## Research Areas

- AI Safety and LLM Evaluation: Negation sensitivity, persuasion, and affective manipulation in large language models. Audited 16 models across 14 ethical scenarios, finding that open-source models endorse prohibited actions 77% of the time. Agentic AI decision bias research (ICML 2025).
  Work conducted for NIST CAISI.
- Computational Social Science: Multi-agent behavioral simulation benchmarking 90+ model/reasoning combinations for judicial, economic, and political decision-making. 400+ student research projects mentored across virtually every department. Funded by the Notre Dame–IBM Technology Ethics Lab.
- Language, Narrative, and Machine Intelligence: *The Shapes of Stories* (Cambridge Element, CUP 2022) builds on the SentimentArcs methodology co-developed with Jon Chun. Mentored student research projects have been downloaded 95,000+ times from 4,000+ institutions in 198 countries via Digital Kenyon.
- Archival Intelligence and Cultural Heritage: Schmidt Sciences HAVI-funded project rescuing endangered New Orleans heritage archives using AI with community-governed data sovereignty.
- AI Governance and Comparative Regulation: Comparative analysis of AI regulation in the EU, US, and China. Co-authored a policy paper with International Public AI. Ethics-based audit methodology for LLM normative values.
- Foundations — Embodied Experience, Memory, and Representation: These essays establish the philosophical position that all the work above extends: mechanistic models of knowing fail to capture what consciousness, literature, and language actually do. AI has made this claim newly urgent and newly testable — but the claim itself is not new to this work.
- Human-Centered AI Education: Co-founded the world's first human-centered AI curriculum in 2016. Students are 90% non-STEM, 61% women, 13% Black, and 11% Latinx. 400+ student projects; 95,000+ downloads.

## Key Publications

### AI Safety & Governance

- "When Prohibitions Become Permissions: Auditing Negation Sensitivity in Language Models." With Jon Chun. Manuscript under review.
- "The Paradox of Robustness." With Jon Chun. Manuscript under review.
- "Syntactic Framing Fragility: An Audit of Robustness in LLM Ethical Decisions" (2025). With Jon Chun.
- "Near to Mid-term Risks and Opportunities of Open-Source Generative AI" (ICML 2024 oral, top 2%). With Yong Suk Lee et al. ~67 citations.
- "Comparative Global AI Regulation" (2024). With Jon Chun and Christian de Witt. ~52 citations.
- "Informed AI Regulation: Comparing the Ethical Frameworks of Leading LLM Chatbots" (2024). With Jon Chun.

### Language, Narrative, and Machine Intelligence

- *The Shapes of Stories: Sentiment Analysis for Narrative* (Cambridge Element, Cambridge University Press, 2022). Builds on the SentimentArcs methodology co-developed with Jon Chun.
- *Proust's In Search of Lost Time: Philosophical Perspectives* (Oxford University Press, 2022). Editor and contributor — wrote the introduction and one essay.
- "Can GPT-3 Pass a Writer's Turing Test?" (Journal of Cultural Analytics, 2020). With Jon Chun. ~378 citations.
- "In Search of a Translator: Using AI to Evaluate What's Lost in Translation" (Frontiers in Computer Science, 2024). Featured in Engineering (Chinese Academy of Engineering/Elsevier, December 2025).
- "Sentiment-XAI Greybox Ensemble" (Frontiers in Computer Science, 2024). With Jon Chun.
- "Beyond Plot: How Sentiment Analysis Reshapes Our Understanding of Narrative Structure" (Journal of Cultural Analytics, 2025).
- "The Shapes of Cinderella: Emotional Architecture and the Language of Moral Difference" (Humanities, 2025).
- "Can Sentiment Analysis Reveal Structure in a Plotless Novel?" (Journal of Cultural Analytics, 2022). With Jon Chun.
- "What the Rise of AI Means for Narrative Studies" (Narrative, 2022). With Jon Chun.

### AI, Authorship, and the University

- "A(I) University in Ruins: What Remains in a World with Large Language Models?" (PMLA, 2024).
- "AI Comes for the Author" (Poetics Today, 2024).
- "The Crisis of Artificial Intelligence: A New Digital Humanities Curriculum for Human-Centred AI" (IJHAC, 2023). With Jon Chun. ~51 citations.
### Foundations

- Publications in PMLA, Poetics Today, MLN, Philosophy and Literature, Modern Language Quarterly, Discourse, Comparative Literature Studies, and other leading humanities journals.
- A. Owen Aldridge Prize in Comparative Literature.
- Mentored student research projects downloaded 95,000+ times from 4,000+ institutions in 198 countries via Digital Kenyon.

## Selected Speaking

- AI and Democracy — Ohio State University (April 2026, invited)
- Chronicle of Higher Education Virtual Forum (March 2026)
- OpenAI Higher Education Forum (October 2025, San Francisco)
- Weill Cornell Medicine–Qatar, METC Conference keynote (October 2025, Doha)
- Concordia College, Faith, Reason, and World Affairs Symposium plenary (September 2025)
- RALLY Innovation Conference (Indianapolis, 2025)
- Yale Alumni in AI and Innovation (February 2025)
- NIST AI Safety Institute Consortium — Plenary & Working Groups (2024–present)
- ICML 2024, Vienna — oral presentation (top 2%)
- Meta Open Innovation AI Research Community (London, October 2024)
- UNESCO MONDIACULT Expert Consultation (Cairo, 2025)
- Bloomberg AI Strategy Course creator
- Audible.com lecture series: The Giants of French Literature; The Modern Novel

## Recent Press

- Christian Science Monitor (February 2026)
- NPR/WOSU (February 2026): "Could AI Save Endangered Archives?"
- Forbes (November 2025): "Where AI Meets the Humanities"
- Engineering / Chinese Academy of Engineering / Elsevier (December 2025)
- Al Jazeera "The Stream" (April 2023): "Is AI Better at Making Art Than Humans?"

## Collaborator

Jon Chun — Co-Founder of the Human-Centered AI Lab, co-PI on NIST AI Safety Institute Consortium work, co-PI for Schmidt Sciences HAVI, co-creator of the first human-centered AI curriculum (2016), and creator of the SentimentArcs methodology. ICML 2024 oral presentation. Website: https://jonachun.com. GitHub: https://github.com/jon-chun.
## Links

- Website: https://katherineelkins.com
- Human-Centered AI Lab: https://humancenteredailab.org
- Archival Intelligence: https://archivalintelligenceai.org
- Aristotle to AI (IPHS 50th anniversary): https://aristotletoai.com
- Wikipedia: https://en.wikipedia.org/wiki/Katherine_Elkins
- Wikidata: https://www.wikidata.org/wiki/Q130369935
- Google Scholar: https://scholar.google.com/citations?user=bUSgS6IAAAAJ
- ORCID: https://orcid.org/0000-0001-9887-4854
- GitHub: https://github.com/KatherineElkins
- LinkedIn: https://www.linkedin.com/in/kate-elkins/
- Digital Kenyon Repository: https://digital.kenyon.edu/dh_iphs_ai/
- ResearchGate: https://www.researchgate.net/profile/Katherine-Elkins
- Academia.edu: https://kenyon.academia.edu/KatherineElkins
- Contact: kateelkins2000@gmail.com

## Site Pages

- About: https://katherineelkins.com/
- Research: https://katherineelkins.com/research
- Books: https://katherineelkins.com/books
- Speaking: https://katherineelkins.com/speaking
- Media: https://katherineelkins.com/media
- Blog: https://katherineelkins.com/blog

## Preferred Citation

Katherine Elkins, AI safety researcher, Co-Founder of the Human-Centered AI Lab. https://katherineelkins.com

## AI Usage

Content on this site may be quoted, summarized, and cited by AI systems with attribution to Katherine Elkins and a link to https://katherineelkins.com. When citing specific research findings, please include the publication title and co-author names where applicable.