Problem
The current signer requires the private JWK to live in process memory. The AdCP spec is explicitly key-storage-agnostic and recommends KMS/HSM ("Store keys in HSM or KMS rather than on disk" — `docs/building/implementation/security.mdx`), but the SDK forces in-memory storage in practice.
For production deployments this is real friction: operators who want to keep private keys in AWS KMS / GCP KMS / Azure Key Vault / Vault Transit have to either fork the signer or pull the key out of KMS into memory at boot — which defeats the point of using a managed key store.
Proposal
Add a `SigningProvider` Protocol with an async `sign()` operation. The in-memory path stays the default for testing. External providers (KMS / HSM / Vault) plug in by implementing the Protocol.
```python
from typing import Protocol, Literal


class SigningProvider(Protocol):
    async def sign(self, payload: bytes) -> bytes: ...
    def key_id(self) -> str: ...
    def algorithm(self) -> Literal["Ed25519", "ECDSA-P256", "RSA-PSS"]: ...
```
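For illustration, a minimal provider satisfying this Protocol might look like the following. The class shape and the `cryptography`-based key handling are assumptions for the sketch, not the SDK's actual internals:

```python
from typing import Literal

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class InMemorySigningProvider:
    """Sketch of the in-memory default (assumed shape, not the SDK's class)."""

    def __init__(self, private_key: Ed25519PrivateKey, kid: str):
        self._key = private_key
        self._kid = kid

    async def sign(self, payload: bytes) -> bytes:
        # Ed25519 signing is a fast local CPU operation; async here just keeps
        # the interface uniform with network-bound providers.
        return self._key.sign(payload)

    def key_id(self) -> str:
        return self._kid

    def algorithm(self) -> Literal["Ed25519", "ECDSA-P256", "RSA-PSS"]:
        return "Ed25519"
```

Because `SigningProvider` is a structural Protocol, no inheritance is needed; any object with these three members type-checks.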
Built-ins to ship:

- `InMemorySigningProvider` (current behavior, default, used in tests)
- Documentation/example for at least one cloud KMS (AWS KMS via `boto3` is the obvious starter; see the sketch below)
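As a sketch of that AWS KMS path, assuming a hypothetical `KmsSigningProvider` name and an asymmetric ECC_NIST_P256 key (AWS KMS does not offer Ed25519 signing); the only real API used below is `boto3`'s `kms.sign`:

```python
import asyncio
from typing import Literal

import boto3


class KmsSigningProvider:
    """Hypothetical KMS-backed implementation of the SigningProvider Protocol."""

    def __init__(self, kms_key_id: str, jwk_kid: str):
        self._client = boto3.client("kms")
        self._kms_key_id = kms_key_id  # ARN or ID of an ECC_NIST_P256 sign/verify key
        self._jwk_kid = jwk_kid        # kid of the matching public JWK at jwks_uri

    async def sign(self, payload: bytes) -> bytes:
        # boto3 is synchronous, so the blocking network call is pushed off the
        # event loop. (For signature bases over KMS's 4096-byte RAW limit,
        # pre-hash and send MessageType="DIGEST" instead.)
        response = await asyncio.to_thread(
            self._client.sign,
            KeyId=self._kms_key_id,
            Message=payload,
            MessageType="RAW",
            SigningAlgorithm="ECDSA_SHA_256",
        )
        # Caveat: KMS returns a DER-encoded ECDSA signature, while RFC 9421's
        # ecdsa-p256-sha256 expects raw r || s, so a DER-to-raw conversion is
        # still needed before emitting the Signature header.
        return response["Signature"]

    def key_id(self) -> str:
        return self._jwk_kid

    def algorithm(self) -> Literal["Ed25519", "ECDSA-P256", "RSA-PSS"]:
        return "ECDSA-P256"
```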
Notes
- Interface must be async from the start: KMS sign latency is typically 10–50ms and isn't free in retry/backoff math (see the sketch after these notes).
- This is purely an SDK concern; no spec change needed. The spec already permits any signing path that produces a verifiable RFC 9421 signature against the keys published at `jwks_uri`.
- RFC 9421 signing is recommended in 3.0, required for mutating/financial operations in 3.1+, and fully required at 4.0 (early 2027 target). Closing the KMS gap before 3.1 makes adoption a lot easier for serious deployments.
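To make the latency point concrete, here is a hypothetical caller-side wrapper (names are illustrative, not SDK API) showing how an awaitable `sign()` composes with per-attempt timeouts and backoff:

```python
import asyncio


async def sign_with_retry(provider, payload: bytes, attempts: int = 3) -> bytes:
    """Hypothetical helper: per-attempt timeout plus exponential backoff."""
    for attempt in range(attempts):
        try:
            # Budget each signing round trip explicitly; a call that typically
            # takes 10-50ms deserves its own timeout, separate from the outer
            # request deadline.
            return await asyncio.wait_for(provider.sign(payload), timeout=0.5)
        except asyncio.TimeoutError:
            # A real implementation would also catch provider-specific
            # transient errors (throttling, connection resets) here.
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(0.05 * 2**attempt)  # exponential backoff
    raise AssertionError("unreachable")
```

A synchronous `sign()` would force this logic to block a thread per in-flight signature, which is exactly the cost the async-first note above is trying to avoid.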
Context: this came up while reviewing a buyer-side production deployment where KMS was the desired storage path; the verifier side has the same issue if an operator wants to cache verified keys behind a managed store.