Over the past 4 months we've been building an AI Codebase Maturity Model into our own project. The idea is simple: there are specific things a repo can have — CLAUDE.md, AGENTS.md, CI gates, test coverage thresholds, structured contribution guides — that make AI-assisted development actually work instead of producing garbage PRs.
We formalized those into a scoring framework (paper) and built a scanner that checks any public GitHub repo. Our own repo went from L1 to L4·21/31 by systematically adding the criteria, and the difference in AI PR quality is night and day.
We scanned notaryproject/notation and you're at L1 · 1/10. That's not a judgment — most repos start there.
**How to use it**

- **View your scan** — console.kubestellar.io/acmm?repo=notaryproject/notation shows a breakdown of all 60 criteria grouped by category (CI/CD, testing, documentation, governance, etc.). Green = detected in your repo, gray = missing.
- **Improve your score** — each missing criterion has an "Ask agent for help" button that launches a guided AI mission. For example, clicking it on "CLAUDE.md" walks you through creating one tailored to your repo's conventions — what to include, where to put it, what rules matter for your stack.
- **Track progress** — re-scan anytime to see your updated score. The badge below updates automatically.
**Add a badge to your README**
[](https://console.kubestellar.io/acmm?repo=notaryproject%2Fnotation&utm_source=github&utm_medium=issue&utm_campaign=acmm-outreach)
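The link above appears to have lost its badge image. If you do add one, the markdown would look something like this — note the image URL here is purely illustrative (I don't know the scanner's actual badge endpoint), so substitute the real one from the scan page:

```markdown
[![AI Codebase Maturity](https://console.kubestellar.io/acmm/badge-example.svg)](https://console.kubestellar.io/acmm?repo=notaryproject%2Fnotation&utm_source=github&utm_medium=issue&utm_campaign=acmm-outreach)
```

The outer link target is the same scan URL already in this issue; only the inner image path is a placeholder.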
Happy to open a PR with this change if you'd like — just say the word.
Feel free to close if not relevant.