Problem
The current signer requires the private JWK to live in process memory. The AdCP spec is explicitly key-storage-agnostic and recommends KMS/HSM ("Store keys in HSM or KMS rather than on disk" — docs/building/implementation/security.mdx), but the SDK forces in-memory storage in practice.
For production deployments this is real friction: operators who want to keep private keys in AWS KMS / GCP KMS / Azure Key Vault / Vault Transit have to either fork the signer or pull the key out of KMS into memory at boot — which defeats the point of using a managed key store. This is especially relevant for the Go SDK since Go is the common choice for production agent surfaces (e.g. agentic-api) where KMS-backed signing is table stakes.
Proposal
Add a SigningProvider interface with a context-aware Sign() operation. The in-memory path stays the default for testing. External providers (KMS / HSM / Vault) plug in by implementing the interface.
```go
type SigningProvider interface {
	Sign(ctx context.Context, payload []byte) ([]byte, error)
	KeyID() string
	Algorithm() string // "Ed25519" | "ECDSA-P256" | "RSA-PSS"
}
```
Built-ins to ship:
- InMemorySigningProvider (current behavior, default, used in tests)
- Documentation/example for at least one cloud KMS (AWS KMS via aws-sdk-go-v2/service/kms is the obvious starter; GCP KMS via cloud.google.com/go/kms is a close second given common Go deployment targets)
Notes
- The interface takes context.Context from the start — KMS sign latency is typically 10–50ms, calls can fail or deadline, and retries / backoff need cancellation.
- This is purely an SDK concern; no spec change needed. The spec already permits any signing path that produces a verifiable RFC 9421 signature against the keys published at jwks_uri.
- RFC 9421 signing is recommended in 3.0, required for mutating/financial operations in 3.1+, and fully required at 4.0 (early 2027 target). Closing the KMS gap before 3.1 makes adoption a lot easier for serious deployments.
Context: came up while reviewing a buyer-side production deployment where KMS was the desired storage path; verifier side has the same issue if an operator wants to cache verified keys behind a managed store.