Trust and compliance engine for AI agents — OSS CLI, SDK, and audit tools.
Scorton is an open-source behavioral cybersecurity framework that makes human trust measurable and programmable. Built with Rust, Python, and Node.js, it helps developers and security teams predict, score, and improve human-driven cyber risk and awareness.
Trust-minimized marketplace for content creators [🥇Lambda Hack Week '24]
[INFRA] Trust infrastructure for autonomous AI and EU AI Act compliance.
Sanna Protocol v1.0 — specification, JSON schemas, and golden test fixtures for AI governance receipts.
Trust infrastructure for AI agents — constitution enforcement and cryptographic receipts. TypeScript SDK.
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity, without relying on vendor trust or static manifests.
Trust infrastructure for AI agents — constitution enforcement and cryptographic receipts. Python SDK.
Constitution enforcement and cryptographic receipts for OpenClaw agents. Every tool call governed, every decision signed.
The world's first intelligence-agnostic anti-social network.
Secure ESLint + Prettier config for trust-grade TypeScript, React, and Tailwind apps. Built for AI, identity, and verification workflows. Maintained by Sequenxa.
Goal Modeling Language (GML) — a declarative, transparent, and auditable logic format maintained by The Covenant Trust. This schema defines how systems justify actions, preserve audit trails, and support human-aligned execution across domains, jurisdictions, and infrastructure.
Defines emerging professional roles, such as Constraint Architects, created as AI systems move governance from human interpretation to execution-time constraint under 512-style kernels and CVS evidence layers.
Signed, verifiable action receipts for humans + agents (Ed25519 JWT + tamper-evident chain)
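The tamper-evident chain mentioned above can be illustrated with a minimal sketch: each receipt commits to the hash of the previous one, so altering any earlier receipt invalidates every later link. This is an assumption-laden illustration, not the repository's actual format; a real implementation would additionally sign each receipt as an Ed25519 JWT (e.g. via PyNaCl or python-jose), which is omitted here to keep the example standard-library only.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first receipt in a chain

def append_receipt(chain, payload):
    """Append a receipt whose hash covers the previous receipt's hash.

    Hypothetical schema: {"payload": ..., "prev": <prev hash>, "hash": <own hash>}.
    A production system would also Ed25519-sign each receipt body.
    """
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify_chain(chain):
    """Recompute every hash and check each link points at its predecessor."""
    prev = GENESIS
    for receipt in chain:
        body = {"payload": receipt["payload"], "prev": receipt["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if receipt["prev"] != prev or recomputed != receipt["hash"]:
            return False  # broken link or tampered payload
        prev = receipt["hash"]
    return True
```

Because each hash covers the previous receipt's hash, verification fails for every receipt downstream of any modification, which is what makes the chain tamper-evident.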
This repository collects Spherity research articles. It provides references and supporting material on digital identity, organizational wallets, the European Business Wallet (EBW), verifiable credentials, Digital Product Passports, data spaces, trust infrastructure, use cases, adoption strategies and business cases.
A LangGraph-based reasoning engine with enforced citations for RAG.
Proof-based marketing operations for governed systems. Publishes verifiable artifacts via OMEGA's Federation Core to enable evidence-driven discovery instead of promotion.
Implement trust infrastructure for AI agents by enforcing governance, verifying execution, and generating cryptographic receipts in TypeScript.