Public safety scorecard and binary architectural tests for high-gain AI features (e.g. adult mode). Pre-launch criteria + AI model responses
Updated Mar 9, 2026 - Python
Practical field guide for working with — not underneath — AI. Sovereign thinking tools, safety protocols, and real collaboration skills.
A deterministic safety layer for probabilistic AI systems — preventing delusion reinforcement and AI-induced psychological harm through immutable governance
Documented human–AI narrative escalation case study (Taller Shell trilogy) with deterministic governance lessons
Public hub for Richard Porter’s free work on safe, sovereign human-AI collaboration. Everything here is voluntary and requires no technical skill.