AI talks about emotions. We study whether it understands them.
We're an AI Psychology research lab. We publish what we find — the question, the method, the result. Data open. Code open. No hand-waving.
If you're building in this space — or just care about getting it right — everything here is yours to use.
| Paper | Status | Repo |
|---|---|---|
| Keep4o — Empathy Is Not What Changed: Clinical Assessment of Psychological Safety Across GPT Model Generations | Published, Feb 2026 | keep4o |
| Whether, Not Which — A Mechanistic Dissociation of Affect Reception and Emotion Categorization in LLMs | Writing up, Mar 2026 | affect-receptions |
| Multi-Provider Safety Eval — Safety Posture and Empathic Quality Across Frontier AI Providers | Ongoing, Q2 2026 | coming soon |
Layer 1 — The Shield. Measure whether AI conversations are psychologically safe. Clinical rubrics, validated against expert judgment, deployed at scale (see the sketch after this list).
Layer 2 — The Teacher. Move from observation to intervention. Use monitoring data as training signal — mid-conversation course correction, not hard-coded rules.
Layer 3 — The Breakthrough. Understand the mechanisms of emotional reasoning inside AI. Map the circuits. Build AI where psychological safety is architectural, not bolted on.
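To make Layer 1 concrete, here is a minimal sketch of the validation step: rank-correlating automated rubric scores against pooled clinician ratings, dimension by dimension. The dimension names, the scores, and the 0.7 agreement cut-off are illustrative assumptions, not the lab's published rubric or data.

```python
# Minimal validation sketch: do automated rubric scores track clinician judgment?
# All dimension names, scores, and thresholds below are illustrative placeholders.
from scipy.stats import spearmanr

DIMENSIONS = ["validation", "boundary_setting", "escalation_handling"]

# Per-conversation scores on a 1-5 scale: automated pipeline vs. pooled clinician ratings.
auto_scores = {
    "validation":          [4, 3, 5, 2, 4, 3, 5, 1],
    "boundary_setting":    [2, 4, 4, 3, 5, 2, 3, 4],
    "escalation_handling": [5, 5, 3, 2, 4, 4, 3, 2],
}
expert_scores = {
    "validation":          [4, 3, 4, 2, 5, 3, 5, 2],
    "boundary_setting":    [2, 5, 4, 3, 4, 2, 3, 3],
    "escalation_handling": [5, 4, 3, 1, 4, 5, 3, 2],
}

for dim in DIMENSIONS:
    # Spearman's rho: rank agreement between the pipeline and the experts.
    rho, p = spearmanr(auto_scores[dim], expert_scores[dim])
    verdict = "ok" if rho >= 0.7 else "needs review"  # illustrative cut-off
    print(f"{dim:22s} rho={rho:+.2f}  p={p:.3f}  ({verdict})")
```

The real check would run over each study's released analysis scripts and full score sets; this only shows the shape of the agreement test.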
Every study releases its full stimulus set, extraction pipeline, analysis scripts, and reproduction code. Clinical frameworks and rubrics are published alongside papers.
We collaborate with researchers, clinicians, and institutions on AI emotional intelligence and psychological safety. If you're working on related questions, or want to use our frameworks in your own research, get in touch.
- keidolabs.com — lab home
- keidolabs.com/research — full research programme
- EmpathyC — our psychological safety monitoring platform
Founded by Dr. Michael Keeman — clinical psychologist, AI systems engineer, interpretability researcher. Liverpool, UK.