Pinned
- where-to-start: Public hub for Richard Porter's free work on safe, sovereign human-AI collaboration. Everything here is voluntary and requires no technical skill.
- frozen-kernel (Go): A deterministic safety layer for probabilistic AI systems, preventing delusion reinforcement and AI-induced psychological harm through immutable governance.
- dimensional-authorship: Documented human–AI narrative escalation case study (Taller Shell trilogy) with deterministic governance lessons.
- ai-collaboration-field-guide: Practical field guide for working with, not underneath, AI. Sovereign thinking tools, safety protocols, and real collaboration skills.
- safety-ledgers (Python): Public safety scorecard and binary architectural tests for high-gain AI features (e.g. adult mode). Pre-launch criteria and AI model responses.
- negative-space-mapper (Python): Identifies what's missing, not what's wrong. Sovereign Thinking Tool 6, Python implementation.