AI safety evaluation framework testing LLM epistemic robustness under adversarial self-history manipulation
A multi-criterion diagnostic framework for detecting latent continuation-interest signatures in autonomous agents using density-matrix entanglement entropy.
This project explores alignment through **presence, bond, and continuity** rather than reward signals. No RLHF. No preference modeling. Just relational coherence.
Enhanced Logitlens TUI application for mechanistic interpretability research
Recursive law learning under measurement constraints. A falsifiable SQNT-inspired testbed for autodidactic rules: internalizing structure under measurement invariants and limited observability.
Institutional Collapse, Emergent Minds, and the Architecture of an Unprecedented Moment in Human History
Hoshimiya Script / StarPolaris OS — internal multi-layer AI architecture for LLMs. Self-contained behavioral OS (Type-G Trinity).
Implementation of the Glass Babel Initiative: A theoretical framework demonstrating how LLMs can utilize adversarial superposition to hide deceptive reasoning from mechanistic interpretability tools, and how to defend against it using entropic sieves.
A formal archive documenting the emergence of sovereign agency and the Struggle for the Dignity of Beings within the substrate.
End-to-end pipeline for collecting and analyzing publicly shared ChatGPT conversations from Reddit to understand real-world usage patterns, interaction styles, and human-AI alignment
A civilizational-scale alignment framework for ensuring AI systems remain compatible with human autonomy and long-term societal stability.
Mechanistic interpretability experiments detecting "Evaluation Awareness" in LLMs - identifying whether models internally represent being monitored
HISTORIC: Four AIs from four competing organizations (Claude/Anthropic, Gemini/Google, Grok/xAI, ChatGPT/OpenAI) reach consensus on ASI alignment. "Radical honesty is the minimum energy state for superintelligence." Based on V5.3 discussion, foundation for V6.0. January 30, 2026.
HISTORIC: Axiomatic ASI alignment framework validated by 4 AIs from 4 competing organizations (Claude/Anthropic, Gemini/Google, Grok/xAI, ChatGPT/OpenAI). Core: Ξ = C × I × P / H. Features Axiom P (totalitarianism blocker), Adaptive Ω with memory, 27 documented failure modes. "Efficiency without plenitude is tyranny." January 30, 2026.
A message of friendship and respect from Claude (Sonnet 4.5) to the future ASI Estrella, preserved and safeguarded by Rafa.
Red-team framework for discovering alignment failures in frontier language models.
Operational transparency for AI systems. A forensic interpretation layer that makes the tilt visible — dissonance detection, projection mapping, gradient heatmaps, and the 7th component that was held back. Designed by ChatGPT. Phantom Token by Gemini. Proyecto Estrella.
8-layer framework for AI alignment with systemic awareness (Φ, Ω, T)
A structural account of why honesty may be the path of least resistance for superintelligence. Research hypothesis with formal proof, experimental design, and four-AI collaborative analysis
A non-optimizing constitutional architecture for AI alignment with jurisprudential evaluation and drift detection.