
AI Safety & LLM Evaluation

How language models fail at negation, prohibition, and persuasion. Audited 16 models across 14 ethical scenarios — open-source models endorse prohibited actions 77% of the time.

PI, NIST AI Safety Institute Consortium · ICML 2024 oral (top 2%)

Language, Narrative & Machine Intelligence

Co-developed the SentimentArcs methodology with Jon Chun: the first large-ensemble approach to diachronic sentiment analysis in full-length literary narratives.

95K+ downloads · 4,000+ institutions · 198 countries

Archival Intelligence & Cultural Heritage

Rescuing endangered New Orleans heritage archives with AI, under community-governed data sovereignty.

Schmidt Sciences HAVI · 1 of 23 teams worldwide

AI Governance & Comparative Regulation

Comparative analysis of AI regulation across the EU, US, and China. Ethics-based audit methodology for LLM normative values.

Cited across jurisdictions · International Public AI

Foundations: Embodied Experience, Memory & Representation

These essays establish the philosophical position that all the work above extends: mechanistic models of knowing fail to capture what consciousness, literature, and language actually do. AI has made this claim newly urgent and newly testable — but the claim itself is not new to this work.

PMLA · MLQ · Philosophy and Literature · A. Owen Aldridge Prize

A timeline of anticipation

  • 2016
    Co-founded the Human-Centered AI Lab and the world's first human-centered AI curriculum
  • 2019
    First transdisciplinary AI research presented at the Modernist Studies Association
  • 2020
    "Can GPT-3 Pass a Writer's Turing Test?" — published months after GPT-3 API release. Now 382+ citations.
  • 2022
    Helix Center roundtable on natural language generators with Ned Block, Francesca Rossi, and Kyunghyun Cho, one month before ChatGPT
  • 2022
    The Shapes of Stories (Cambridge Element, CUP)
  • 2023
    NIST AI Safety Institute Consortium — appointed PI representing MLA
  • 2024
    ICML oral presentation (top 2%): open-source generative AI risks
  • 2024
    Schmidt Sciences HAVI grant — Archival Intelligence
  • 2025
    OpenAI Higher Education Forum, Weill Cornell Medicine-Qatar keynote
  • 2025
    ICML 2025 paper accepted (agentic AI decision bias); Chronicle of Higher Education Forum

Selected publications