Summary
The AdCP governance protocol records what was bought but not why. When an AI agent evaluates 23 ad products and selects 3, the evaluation rationale disappears — the scores, the weights, the elimination logic exist only in an ephemeral context window that ceases to exist when the session ends.
This proposal adds two interconnected capabilities to the campaign governance specification for 3.1:
- Decision provenance — A first-class `check_governance` finding category (`provenance_compliance`) and a required `decision_provenance` field on seller responses that attests to evaluation process compliance without disclosing competitive intelligence
- Decision lineage — A structured model for how decisions flow through the protocol, recording the chain of agents, inputs, and outputs at each step — so the audit trail shows not just "what happened" but "how each step connected to the next"
These are not extensions. They are proposed additions to the core protocol, completing the governance model by making the decision process as auditable as the decision outcome.
From human negotiation to machine negotiation
When humans negotiate media deals, reasoning is continuous and remembered. A media planner evaluates five proposals, picks two, and can explain six months later: "I chose Package A because the audience overlap was strong and the publisher had delivered for us before. I passed on Package C because the CPMs looked good but the inventory was remnant." The reasoning lives in the planner's head, in email threads, in margin notes on a spreadsheet.
When agents negotiate, reasoning is ephemeral. A buyer agent evaluates 23 products in a context window, scores them on four dimensions, eliminates 20, and selects 3. The session ends. The context window is freed. The scores, the weights, the elimination rationale — all of it ceases to exist. The agent has no memory of why it chose what it chose.
In the human world, decision provenance was implicit — it lived in people. Decision lineage was informal — you could reconstruct the chain by talking to the people involved. In the agent world, both must be explicit protocol constructs. If the reasoning isn't persisted at transaction time, it's gone forever. If the chain of decisions isn't recorded structurally, no amount of log forensics can reconstruct it.
This is not an edge case. It is the defining characteristic of agent-mediated transactions: agents don't remember. The protocol must remember for them. That's what decision provenance and lineage provide — structured memory for a system that has none.
What AdCP 3.0 already provides
AdCP 3.0 is not starting from zero. The protocol already has significant provenance and accountability infrastructure:
- Creative provenance: AdCP 3.0 has a full C2PA-aligned provenance model for creative assets — `digital_source_type`, `ai_tool`, `human_oversight`, `c2pa.manifest_url`, `disclosure`. This proves AI involvement in content CREATION.
- Campaign audit trail: `get_plan_audit_logs` returns checks performed, approvals/denials, conditions, human reviews, drift metrics, and `categories_evaluated`. This records WHAT governance checked and HOW it decided.
- Governance context tokens: The `governance_context` JWS token chain provides cryptographic proof of the approval chain — intent-phase tokens from buyers, purchase/modification/delivery-phase tokens from sellers.
What's missing
The gap becomes precise when you distinguish what kind of provenance each capability addresses:
- Creative provenance covers HOW CONTENT was made. Decision provenance covers HOW DECISIONS were made. AdCP has the first but not the second.
- The audit trail records what governance checked (`categories_evaluated`) and what it found (`findings`). But it does NOT record what the SELLER evaluated when selecting products, or what the BUYER evaluated when choosing among them.
- The `governance_context` token chain proves the approval chain was followed. But it says nothing about the evaluation process that produced the inputs TO that chain.
- In short: AdCP 3.0 has provenance for creatives and accountability for governance decisions. This proposal adds provenance for commercial decisions (product selection, evaluation rationale) and lineage connecting all decisions into a traceable chain.
The problem: decisions without memory
Scenario A — "Why did you recommend that?"
Acme Outdoor's procurement team is reviewing Q2 media spend. Pinnacle Agency's buyer agent evaluated 23 products from StreamHaus and selected 3 for a $50K campaign. The campaign delivered, but performance was mixed — two packages hit benchmarks, one underperformed significantly.
Procurement asks a reasonable question: "Product 17 had a better CPM and higher projected reach than what you selected. Why wasn't it chosen?"
The agency can't answer. The buyer agent scored all 23 products on relevance, cost efficiency, brand safety, and audience match. It eliminated Product 17 because the audience overlap score was 0.41 — well below the 0.65 threshold. But that evaluation happened in an ephemeral context window. The scores, the weights, the elimination rationale — none of it was persisted. The session ended and the decision logic ceased to exist.
All the protocol recorded was the outcome: three packages, three budgets, one media buy ID. The reasoning that produced that outcome left no trace.
Acme Outdoor loses confidence in the agency. A $2M annual relationship is strained over a $50K decision that nobody can explain — not because the decision was bad, but because the system wasn't designed to remember it.
Scenario B — A regulator asks for an explanation
The EU AI Act (Article 50) requires transparency for AI systems making consequential decisions. A regulator auditing agentic advertising platforms sends a request to StreamHaus: "Your AI agent recommended Package A to 47 different buyers last quarter. Explain the decision criteria and demonstrate that recommendations were appropriate for each buyer's brief."
StreamHaus has API logs showing which packages were returned in each `get_products` response. They have transaction records from `create_media_buy`. But they have no structured records of the evaluation — which products were considered, what criteria were applied, why some were ranked higher than others, whether human review occurred.
They can prove what was sold. They cannot prove why it was recommended. Under the AI Act, "we don't know" is not a compliant answer.
Scenario C — Inconsistency across sellers
A luxury brand runs the same $500K brief across five sellers. Post-campaign, performance varies 4x: Seller A delivers 0.18% CTR, Seller D delivers 0.04%.
The brand asks: "Did all five sellers evaluate products with similar rigor, or did some just dump underperforming inventory?"
There's no way to answer this. The protocol records delivery outcomes but nothing about evaluation quality. Did Seller D consider 50 products and pick the best 3, or consider 3 and pick all of them? Did it apply audience scoring, or sort by available inventory? Did a human review the selection?
The brand can compare outcomes across sellers but not processes. They can see that Seller D underperformed, but can't determine whether it was bad luck or bad evaluation.
Scenario D — The broken chain
Even when individual decisions are recorded, the connections between them are not. A campaign goes through six protocol steps: brief submitted → plan registered → products searched → governance checked → buy created → delivery reported. At each step, an agent makes decisions. But the protocol doesn't record how Step 3's output became Step 4's input, or whether the products that governance approved were the same ones the buyer agent originally evaluated.
Six months later, an auditor asks: "Show me the decision chain from brief to delivery." The answer is a set of disconnected records — a plan here, a governance check there, a buy somewhere else — with no formal linkage between them. Reconstructing the chain requires forensic log analysis, not a protocol query. The lineage of the decision was never recorded.
Why disclosure is the wrong solution
The instinct is to require sellers to share decision traces with buyers. Full transparency.
This doesn't work. Working group feedback was unambiguous:
> If you approached a seller in any B2B market and said "when you use agentic, you need to tell the buyer all the reasons you pitched Package A over Package B, including whether you think this is the best for them vs the most profitable" — they would say no.
Decision traces contain competitive intelligence:
| What's in a trace | Why sellers won't share it |
| --- | --- |
| Candidate scoring weights | Reveals optimization strategy |
| Margin and profitability factors | Exposes pricing strategy |
| Inventory pressure signals | Shows supply/demand position |
| Elimination rationale | Reveals what the seller deprioritizes |
| Human override history | Exposes internal review thresholds |
Requiring disclosure means sellers either refuse to adopt agentic protocols, or they game their traces to be presentable rather than honest. Neither serves governance.
The gap isn't just "traces don't exist." It's that the protocol has no model for who gets to see what and how decisions connect across steps. Those are the provenance and lineage problems.
Proposed additions to the 3.1 specification
1. Decision provenance on seller responses
A new `decision_provenance` field on `get_products` and `create_media_buy` responses. This is not an extension — it is a core response field. Sellers attest to process compliance without disclosing the process itself:

```json
{
  "products": [ ... ],
  "decision_provenance": {
    "attestation_id": "att-streamhaus-2026-q2-001",
    "candidates_evaluated": 23,
    "evaluation_policies": [
      { "policy_id": "iab-transparency-1.0", "source": "iab.com", "version": "1.0" },
      { "policy_id": "eu-ai-act-art50", "source": "eur-lex.europa.eu" },
      { "policy_id": "streamhaus-eval-v3", "source": "internal" }
    ],
    "trace_storage": {
      "stored": true,
      "retention_days": 365,
      "format": "structured_json",
      "human_reviewed": true,
      "review_timestamp": "2026-04-17T15:00:00Z",
      "audit_available_on_request": true
    },
    "timestamp": "2026-04-17T14:30:00Z"
  }
}
```
What the buyer and governance agent see: The seller evaluated 23 candidates, followed three named policies, stored traces in structured JSON for 365 days, and had humans review the selection.
What they do NOT see: The actual scores, weights, margin data, or elimination rationale. Those stay private. The protocol attests that a defensible process occurred and that records exist — the same way C2PA proves content went through a pipeline without revealing the pipeline internals.
| What the seller stores (private) | What the seller attests (shared) |
| --- | --- |
| Candidate list with scores | "23 candidates were evaluated" |
| Scoring weights and rationale | "Evaluation followed IAB Transparency 1.0 and EU AI Act Article 50 policies" |
| Elimination reasons | "Decision traces are stored with 365-day retention" |
| Margin and profitability factors | "Human review was conducted per internal policy" |
| Full decision trace | "Traces are available for regulatory audit on request" |
2. Decision lineage across protocol steps
A new `decision_lineage` field that records how decisions connect across protocol steps. Each step in the media buy workflow references the inputs it consumed and the outputs it produced, creating a traceable chain:

```json
{
  "decision_lineage": {
    "lineage_id": "lin-acme-q2-2026-001",
    "chain": [
      {
        "step": "plan_registration",
        "task": "sync_plans",
        "agent": "orchestrator.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:00:00Z",
        "inputs": { "brief_id": "brief-acme-q2-001" },
        "outputs": { "plan_id": "acme-q2-trail-pro" },
        "intent_declared": true
      },
      {
        "step": "product_search",
        "task": "get_products",
        "agent": "sales.streamhaus.example",
        "timestamp": "2026-04-17T14:15:00Z",
        "inputs": { "plan_id": "acme-q2-trail-pro", "brief_id": "brief-acme-q2-001" },
        "outputs": { "candidates_returned": 23, "product_ids": ["sp-001", "sp-002", "..."] },
        "provenance_attestation_id": "att-streamhaus-2026-q2-001"
      },
      {
        "step": "candidate_evaluation",
        "task": "buyer_internal",
        "agent": "orchestrator.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:30:00Z",
        "inputs": { "candidates_count": 23 },
        "outputs": { "selected": ["sp-001", "sp-002", "fn-001"], "eliminated": 20 },
        "provenance_attestation_id": "att-pinnacle-2026-q2-001"
      },
      {
        "step": "governance_check",
        "task": "check_governance",
        "agent": "governance.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:35:00Z",
        "inputs": { "plan_id": "acme-q2-trail-pro", "proposed_spend": 25000 },
        "outputs": { "check_id": "chk-q2-acme-001", "status": "approved", "findings_count": 4 }
      },
      {
        "step": "media_buy",
        "task": "create_media_buy",
        "agent": "orchestrator.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:40:00Z",
        "inputs": { "check_id": "chk-q2-acme-001", "packages": 3 },
        "outputs": { "media_buy_id": "mb-streamhaus-001" }
      }
    ]
  }
}
```
What lineage gives you that provenance alone doesn't:
- Traceability: An auditor can follow the chain from brief to buy and verify that each step consumed the output of the previous step — no gaps, no unexplained jumps
- Accountability: Each step records which agent acted. The buyer agent evaluated candidates, the governance agent approved, the seller fulfilled. If something went wrong, the lineage shows where in the chain it happened
- Completeness verification: The governance agent can verify that the lineage chain is complete — every required step occurred, in the expected order, with the expected inputs
- Cross-campaign comparison: Lineage records enable comparison of decision processes across campaigns, sellers, and time periods
Who assembles and persists the lineage: The governance agent is the natural owner. It already observes every protocol step (plans, product searches, governance checks, media buys) and has the structural position to assemble the chain. In crawl mode, the governance agent builds lineage from existing task IDs and timestamps with no new input from callers. In walk/run modes, callers may contribute explicit lineage nodes (e.g., the buyer-side evaluation step, which the governance agent doesn't directly observe) via provenance attestations that include step references.
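The completeness check described above can be sketched as follows. This is an illustrative implementation assumption, not spec text: it treats a chain as complete when timestamps are monotonic and every step after the first consumes at least one value produced or carried forward earlier in the chain.

```python
from datetime import datetime

# Sketch of a governance-agent lineage completeness check over a
# decision_lineage chain (field names follow the lineage example above).
def lineage_is_complete(chain):
    seen_values = set()
    prev_ts = None
    for step in chain:
        ts = datetime.fromisoformat(step["timestamp"].replace("Z", "+00:00"))
        if prev_ts is not None and ts < prev_ts:
            return False  # steps recorded out of order
        if prev_ts is not None:
            # every later step must reference something already in the chain
            inputs = {str(v) for v in step.get("inputs", {}).values()}
            if not inputs & seen_values:
                return False  # unexplained jump: no linkage to prior steps
        for v in list(step.get("inputs", {}).values()) + list(step.get("outputs", {}).values()):
            seen_values.add(str(v))
        prev_ts = ts
    return True

# Abbreviated chain mirroring the example above.
chain = [
    {"step": "plan_registration", "timestamp": "2026-04-17T14:00:00Z",
     "inputs": {"brief_id": "brief-acme-q2-001"}, "outputs": {"plan_id": "acme-q2-trail-pro"}},
    {"step": "product_search", "timestamp": "2026-04-17T14:15:00Z",
     "inputs": {"plan_id": "acme-q2-trail-pro"}, "outputs": {"candidates_returned": 23}},
    {"step": "governance_check", "timestamp": "2026-04-17T14:35:00Z",
     "inputs": {"plan_id": "acme-q2-trail-pro"}, "outputs": {"check_id": "chk-q2-acme-001"}},
]
```

A real implementation would likely also verify the expected step ordering (plan → search → evaluate → govern → buy) against a declared workflow template.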
3. Provenance compliance as a core finding category
`provenance_compliance` becomes a specified `category_id` in `check_governance` findings:

```json
{
  "findings": [
    { "category_id": "budget_authority", "severity": "must", "explanation": "Within limit", "confidence": 1.0 },
    { "category_id": "brand_policy", "severity": "must", "explanation": "Publishers approved", "confidence": 1.0 },
    {
      "category_id": "provenance_compliance",
      "severity": "should",
      "explanation": "Seller attested: 23 candidates evaluated, 3 policies followed, traces stored 365 days with human review. Lineage chain complete from plan registration through product search. Buyer-side evaluation provenance: attested, 23 candidates scored, 3 selected.",
      "confidence": 0.95,
      "details": {
        "seller_attestation": {
          "attestation_id": "att-streamhaus-2026-q2-001",
          "candidates_evaluated": 23,
          "policies_declared": 3,
          "trace_stored": true,
          "human_reviewed": true
        },
        "buyer_attestation": {
          "attestation_id": "att-pinnacle-2026-q2-001",
          "candidates_evaluated": 23,
          "selected": 3,
          "trace_stored": true
        },
        "lineage_complete": true,
        "lineage_steps": 5
      }
    }
  ]
}
```
The governance agent validates both sides: the seller's provenance (how products were recommended) and the buyer's provenance (how products were evaluated and selected). Both attest to process without disclosing process details.
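A minimal sketch of how a governance agent might fold the two attestations into a single `provenance_compliance` finding. The function name and the specific severity/confidence values are assumptions for illustration; only the field names come from the finding example above.

```python
# Hypothetical composition of a provenance_compliance finding from the
# seller- and buyer-side attestations plus lineage status.
def build_provenance_finding(seller_att, buyer_att, lineage_complete, lineage_steps):
    both_attested = seller_att is not None and buyer_att is not None
    return {
        "category_id": "provenance_compliance",
        "severity": "should",
        "explanation": ("Both parties attested to their evaluation process."
                        if both_attested
                        else "One or both provenance attestations are missing."),
        # Illustrative confidence heuristic, not a spec rule.
        "confidence": 0.95 if both_attested and lineage_complete else 0.5,
        "details": {
            "seller_attestation": seller_att,
            "buyer_attestation": buyer_att,
            "lineage_complete": lineage_complete,
            "lineage_steps": lineage_steps,
        },
    }

finding = build_provenance_finding(
    seller_att={"attestation_id": "att-streamhaus-2026-q2-001", "trace_stored": True},
    buyer_att={"attestation_id": "att-pinnacle-2026-q2-001", "trace_stored": True},
    lineage_complete=True,
    lineage_steps=5,
)
```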
How provenance and lineage resolve each scenario
Scenario A (post-campaign audit): Procurement can't see why Product 17 was eliminated. But the lineage shows the full decision chain, the provenance attests that 23 candidates were evaluated against 3 policies, and traces exist for 365 days. If they need the specific rationale, they request the trace through a bilateral agreement.
Scenario B (regulatory audit): The regulator sees structured attestations for all 47 recommendations plus complete lineage chains showing how each brief became a buy. For deeper investigation, traces are available on request.
Scenario C (cross-seller consistency): The brand compares provenance across sellers. Seller A: 50 candidates, 3 policies, human reviewed. Seller D: 8 candidates, 1 policy, no human review. The process gap is now visible and explains the performance gap.
Scenario D (broken chain): The lineage chain explicitly connects every step. An auditor runs `get_plan_audit_logs` and sees the complete chain from brief to delivery with every agent, timestamp, and input/output linkage. No forensic log analysis required.
4. Crawl, walk, run
- Crawl: `decision_provenance` is optional. Sellers that include it get an informational `provenance_compliance` finding. Sellers that don't get "No provenance attestation provided." Minimum attestation: `"stored": true` and `timestamp`. `candidates_evaluated` is optional — sellers concerned about revealing inventory depth can omit it. `decision_lineage` is assembled by the governance agent from existing task IDs — no new input required from callers. No minimum retention period specified.
- Walk: `decision_provenance` is expected on `get_products` and `create_media_buy` responses. `candidates_evaluated` is expected but not required. Governance validates that listed policies exist in the policy registry. Lineage completeness is checked but gaps produce `should` findings, not failures. Recommended minimum retention: 90 days.
- Run: `decision_provenance` is required. `candidates_evaluated` is required. Missing attestations are `must`-severity findings. Lineage must be complete. Regulatory audit mechanisms (trace request protocols) are standardized. Minimum retention: 365 days or as required by applicable regulation.
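The maturity ladder above can be sketched as a severity mapping a governance agent might apply when an attestation is absent. The `info` severity for crawl mode is an assumption (the finding examples in this proposal only show `must` and `should`), and the function is illustrative, not spec text.

```python
# Illustrative crawl/walk/run mapping for a missing decision_provenance
# attestation. Mode names come from the proposal; severities are assumed.
def missing_attestation_finding(mode):
    severities = {"crawl": "info", "walk": "should", "run": "must"}
    explanations = {
        "crawl": "No provenance attestation provided.",
        "walk": "decision_provenance expected but missing.",
        "run": "decision_provenance required but missing.",
    }
    if mode not in severities:
        raise ValueError(f"unknown maturity mode: {mode}")
    return {
        "category_id": "provenance_compliance",
        "severity": severities[mode],
        "explanation": explanations[mode],
    }
```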
C2PA alignment
The Coalition for Content Provenance and Authenticity (C2PA) has established provenance standards for digital content. Decision provenance and lineage apply the same principles to advertising decisions:
| C2PA Concept | AdCP Decision Analog |
| --- | --- |
| Content provenance | Decision provenance attestation |
| Creation pipeline attestation | Evaluation pipeline attestation |
| Tamper-evident manifests | Immutable decision trace storage |
| Claim generators (who created vs attested) | Multi-party lineage (seller evaluates → governance records → buyer receives) |
| Ingredient manifests (provenance chain) | Decision lineage chain (brief → search → evaluate → govern → buy) |
| Verifiable credentials | Policy compliance attestation |
AdCP's existing creative provenance (the `provenance.json` schema) already demonstrates the protocol's commitment to C2PA principles. Decision provenance extends that commitment from content creation to commercial decision-making — applying the same attestation-without-disclosure philosophy to how products are evaluated and selected, not just how creatives are produced.
AAO has an opportunity to lead on AI decision provenance before regulators define it. The EU AI Act and California SB 942 are heading toward mandatory audit trails. A protocol-level framework grounded in C2PA principles gives regulators a standard to adopt rather than invent.
Trust model questions
- Attestation identity: Who issues attestation IDs? Seller-generated risks collision in multi-seller buys. Governance-agent-assigned ensures uniqueness but adds a round-trip.
- Root of trust: How does the governance agent verify an attestation is authentic? Signing? Hashing? C2PA uses X.509 certificate chains.
- Binding: How is an attestation cryptographically bound to the specific response it describes? A hash of the response payload included in the attestation would provide tamper evidence.
- Lineage integrity: How do we ensure lineage chains haven't been modified after the fact? Chained hashes (each step includes the hash of the previous step) would provide tamper evidence similar to C2PA's ingredient manifests.
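The binding and lineage-integrity ideas can be sketched together: hash the attested payload over a canonical serialization, and chain each lineage step to the hash of the previous step so any later edit invalidates everything downstream. SHA-256 and sorted-key JSON canonicalization are assumptions here, not spec choices; a production design would need a real canonicalization scheme and signatures, not bare hashes.

```python
import hashlib
import json

# Canonical payload hash: sorted keys + compact separators as an assumed
# (not spec-defined) canonicalization, then SHA-256.
def payload_hash(payload):
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Chain lineage steps: each step records the previous step's hash, then is
# hashed itself, giving C2PA-ingredient-style tamper evidence.
def chain_lineage(steps):
    prev = "0" * 64  # genesis value for the first step
    chained = []
    for step in steps:
        entry = dict(step, prev_hash=prev)
        entry["step_hash"] = payload_hash(entry)
        prev = entry["step_hash"]
        chained.append(entry)
    return chained

def verify_chain(chained):
    prev = "0" * 64
    for entry in chained:
        body = {k: v for k, v in entry.items() if k != "step_hash"}
        if entry["prev_hash"] != prev or payload_hash(body) != entry["step_hash"]:
            return False
        prev = entry["step_hash"]
    return True

steps = [{"step": "plan_registration"}, {"step": "product_search"}]
chained = chain_lineage(steps)
```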
Regulatory context
- EU AI Act (Article 50) — Transparency obligations for AI systems, including audit trail requirements
- GDPR (Article 22) — Rights related to automated individual decision-making
- EU DSA (Article 26) — Transparency in online advertising targeting parameters
- California SB 942 — Required explanations for AI system decisions
Decision provenance and lineage create the mechanism for compliance. The attestation answers: "Can you prove your AI followed a defensible process?" The lineage answers: "Can you show how the decision flowed from brief to buy?" Neither requires disclosing proprietary logic.
Stakeholder considerations
| Stakeholder | Benefit | Concern |
| --- | --- | --- |
| Buyer agent / DSP | Can demonstrate evaluation rigor to advertisers; lineage proves the full decision chain from brief to buy | Buyer-side evaluation step in the lineage exposes the buyer's selection process. Buyer agents face the same disclosure tension as sellers — `candidates_evaluated` and `selected` counts reveal evaluation strategy. Buyer-side attestation must follow the same privacy model as seller-side. |
| Seller agent / SSP | Structured way to demonstrate process compliance without revealing competitive intelligence | `candidates_evaluated` reveals inventory depth — a seller returning "3 of 3" signals thin inventory vs "3 of 50." Consider making candidate counts optional in crawl/walk modes. `retention_days: 365` creates storage mandates that may burden smaller sellers. |
| Governance agent | Gets structured provenance and lineage inputs to validate process quality, not just outcomes | Must assemble and persist lineage chains. Who owns the lineage — the governance agent or the orchestrator? Proposal recommends governance-agent assembly from existing task IDs, but this must be explicit. |
| Brand safety provider | Provenance attestation demonstrates that brand safety evaluation occurred as part of the decision process | Brand safety evaluation should be attestable as a named policy in `evaluation_policies`, not just a separate finding category |
| 3P Orchestrator | Lineage provides end-to-end visibility across multi-seller campaigns; provenance enables cross-seller comparison | Orchestrator must route provenance attestations from multiple sellers to the governance agent; lineage assembly may require orchestrator cooperation |
| Advertiser procurement | Structured evidence for post-campaign audits; can compare evaluation rigor across sellers and time periods | Audit trail needs human-readable rendering, not just API responses. Procurement teams reviewing `get_plan_audit_logs` need a format that supports narrative explanation, not raw JSON. |
| Regulator | Protocol-level audit trail aligned with EU AI Act and SB 942 transparency requirements | `audit_available_on_request: true` is a protocol-level claim that the protocol itself cannot enforce — trace availability depends on bilateral agreements and seller infrastructure. This should be acknowledged explicitly. |
Attestation verification and enforceability
Two fields in the provenance attestation deserve special scrutiny:
- `human_reviewed: true` — This is self-reported and unverifiable at the protocol level. The governance agent records the attestation but cannot confirm a human actually reviewed the selection. In crawl/walk modes, this is acceptable — the attestation creates accountability even if unverifiable. In run mode, the working group should consider whether external audit certification (third-party attestation) is required for `human_reviewed` claims.
- `audit_available_on_request: true` — This is a commitment that the protocol cannot enforce. Whether traces are actually available depends on the seller's internal infrastructure and retention practices. The protocol should acknowledge this as a declared intent rather than a guaranteed capability, and regulatory access mechanisms should be defined through bilateral agreements rather than protocol-level enforcement.
Buyer-side provenance parity
The provenance model applies symmetrically. The buyer agent's evaluation step (Step 3 in the lineage — "candidate_evaluation") faces the same disclosure tension as the seller's product search. The buyer evaluated 23 candidates and selected 3. That evaluation involved scoring weights, elimination rationale, and possibly margin considerations — all competitive intelligence from the buyer's perspective.
The buyer_attestation in the provenance_compliance finding follows the same privacy model: attest to process ("23 evaluated, 3 selected, traces stored") without disclosing process details ("Product 17 eliminated for audience overlap score 0.41"). This symmetry is important — provenance is not something buyers impose on sellers. It's a protocol-level expectation that both parties attest to defensible processes.
Open questions for the working group
- Attestation granularity: What's the minimum viable attestation? Just "traces stored" or also candidate count, policy list, human review status? Should `candidates_evaluated` be optional in crawl/walk to avoid revealing inventory depth?
- Lineage assembly and ownership: Does the governance agent assemble the lineage chain from existing task references, or do callers explicitly contribute lineage nodes? The governance agent is the natural owner (it already sees all protocol steps), but this creates a dependency on governance agent capabilities.
- Regulatory access mechanism: Should the protocol define how regulators request actual traces? Or is that handled by bilateral agreements? Given that `audit_available_on_request` is unenforceable at the protocol level, what's the minimum protocol-level support needed?
- Governance validation depth: Does governance validate attestations (check policies exist, verify retention claims) or just record them passively? In crawl mode, passive recording is sufficient. What triggers active validation?
- Policy registry integration: Should `evaluation_policies` reference IDs from the existing AdCP policy registry? A registry would enable governance to validate policy claims and prevent fabricated policy references.
- C2PA format alignment: Adopt C2PA manifest format directly, define an AdCP-native format that maps to C2PA, or keep independent and align later?
- Trust infrastructure: X.509 certificates (C2PA's approach), DIDs, or a lighter-weight scheme for the transaction volumes involved?
- Lineage integrity: Chained hashes for tamper evidence? Or is that over-engineering for the current maturity level?
- Retention burden: Should the protocol specify minimum retention periods, or leave this to policy? `retention_days: 365` may create unfunded mandates for smaller sellers. Should crawl/walk modes accept shorter retention?
- Human-readable audit: `get_plan_audit_logs` should return provenance and lineage data in a format that supports human-readable rendering for procurement teams. What format — structured JSON with rendering hints, or a separate narrative endpoint?
Working group context and prior decisions
This proposal directly incorporates feedback from the governance working group and builds on protocol decisions already merged:
Governance WG Slack discussion (April 2025): The working group's central finding — that the level of detail shared with counterparties remains open to interpretation where proprietary or commercial considerations apply — is the genesis of the attestation-not-disclosure model proposed here. The working group agreed to "design a logging approach that supports both internal and shareable views." Decision provenance implements exactly this: sellers store full traces internally (the "internal view") and attest to process compliance externally (the "shareable view").
Brian O'Kelley's response provided three design decisions this proposal incorporates directly:
Issues and PRs already resolved:
The C2PA alignment was validated by working group feedback. Brian's framing — "sellers won't reveal scoring rationale" — maps precisely to C2PA's model: content provenance proves a creative went through a pipeline without revealing pipeline internals. Decision provenance applies the same principle to commercial decisions.
Production validation: Yahoo has run live agentic campaigns where the decision chain from brief to buy crossed multiple agent boundaries. The lineage model proposed here was validated against real protocol traces — plan registration → product search → candidate evaluation → governance check → media buy — with each step producing attestable provenance.
Industry collaboration: Yahoo is co-leading the context graph for governance workstream within AdCP, and partnering with Google on the open-source BigQuery Agent Analytics SDK where decision lineage is implemented as temporal lineage in property/context graphs. The lineage chain model proposed here (step → inputs → outputs → attestation) is architecturally consistent with the SDK's context graph primitives.
AAO Spotlight: This proposal is part of Yahoo's use case submission for the AAO/AdCP Foundry sizzle reel — a <1 minute showcase demonstrating how AdCP enables trusted agentic advertising through semantic alignment and decision lineage. The spotlight demonstrates the full AdCP 3.0 tool flow (get_adcp_capabilities → get_products → create_media_buy → get_media_buy_delivery) with these 3.1 governance extensions layered on top, showing the audience both "the protocol working today" and "the governance layer being proposed for tomorrow."
Relationship to other tracks
- Semantic fidelity: Separate concern. Semantic fidelity is buyer-facing — governance validates interpretation matches intent. Decision provenance is seller-facing — governance validates process compliance. See: Proposal: Semantic fidelity as a core governance capability (3.1).
- Taxonomy declaration: Companion concern. Taxonomy declaration provides context for provenance — it tells governance what classification systems were in play when decisions were made. When a provenance attestation references evaluation policies and scoring criteria, taxonomy declaration ensures those references are grounded in declared, versioned classification systems rather than ambiguous labels. See: Proposal: Taxonomy declaration as a core capability (3.1).
Prior work
The feat/semantic-governance-extensions branch on mikulbhatt/adcp contains a prototype ext.decision_trace schema with full candidate scoring, outcome tracking, and rationale fields. That schema informed the provenance/lineage reframe — the internal trace format could serve as the private record sellers store, while the provenance attestation and lineage chain proposed here are the protocol-level representations. Working group feedback recommended reframing from buyer-facing disclosure to seller-side provenance with C2PA alignment.
Related: See also #3362 — Proposal: Taxonomy declaration as a core capability (3.1) | #3363 — Proposal: Semantic fidelity as a core governance capability (3.1) | #3365 — Proposal: AdCP Reference Media Ontology — a shared vocabulary for agentic advertising (3.1)
Summary
The AdCP governance protocol records what was bought but not why. When an AI agent evaluates 23 ad products and selects 3, the evaluation rationale disappears — the scores, the weights, the elimination logic exist only in an ephemeral context window that ceases to exist when the session ends.
This proposal adds two interconnected capabilities to the campaign governance specification for 3.1:
check_governancefinding category (provenance_compliance) and a requireddecision_provenancefield on seller responses that attests to evaluation process compliance without disclosing competitive intelligenceThese are not extensions. They are proposed additions to the core protocol, completing the governance model by making the decision process as auditable as the decision outcome.
From human negotiation to machine negotiation
When humans negotiate media deals, reasoning is continuous and remembered. A media planner evaluates five proposals, picks two, and can explain six months later: "I chose Package A because the audience overlap was strong and the publisher had delivered for us before. I passed on Package C because the CPMs looked good but the inventory was remnant." The reasoning lives in the planner's head, in email threads, in margin notes on a spreadsheet.
When agents negotiate, reasoning is ephemeral. A buyer agent evaluates 23 products in a context window, scores them on four dimensions, eliminates 20, and selects 3. The session ends. The context window is freed. The scores, the weights, the elimination rationale — all of it ceases to exist. The agent has no memory of why it chose what it chose.
In the human world, decision provenance was implicit — it lived in people. Decision lineage was informal — you could reconstruct the chain by talking to the people involved. In the agent world, both must be explicit protocol constructs. If the reasoning isn't persisted at transaction time, it's gone forever. If the chain of decisions isn't recorded structurally, no amount of log forensics can reconstruct it.
This is not an edge case. It is the defining characteristic of agent-mediated transactions: agents don't remember. The protocol must remember for them. That's what decision provenance and lineage provide — structured memory for a system that has none.
What AdCP 3.0 already provides
AdCP 3.0 is not starting from zero. The protocol already has significant provenance and accountability infrastructure:
- Creative provenance: digital_source_type, ai_tool, human_oversight, c2pa.manifest_url, disclosure. This proves AI involvement in content CREATION.
- get_plan_audit_logs returns checks performed, approvals/denials, conditions, human reviews, drift metrics, and categories_evaluated. This records WHAT governance checked and HOW it decided.
- The governance_context JWS token chain provides cryptographic proof of the approval chain — intent-phase tokens from buyers, purchase/modification/delivery-phase tokens from sellers.

What's missing
The gap becomes precise when you distinguish what kind of provenance each capability addresses:
The audit log records what governance evaluated (categories_evaluated) and what it found (findings). But it does NOT record what the SELLER evaluated when selecting products, or what the BUYER evaluated when choosing among them.

The problem: decisions without memory
Scenario A — "Why did you recommend that?"
Acme Outdoor's procurement team is reviewing Q2 media spend. Pinnacle Agency's buyer agent evaluated 23 products from StreamHaus and selected 3 for a $50K campaign. The campaign delivered, but performance was mixed — two packages hit benchmarks, one underperformed significantly.
Procurement asks a reasonable question: "Product 17 had a better CPM and higher projected reach than what you selected. Why wasn't it chosen?"
The agency can't answer. The buyer agent scored all 23 products on relevance, cost efficiency, brand safety, and audience match. It eliminated Product 17 because the audience overlap score was 0.41 — well below the 0.65 threshold. But that evaluation happened in an ephemeral context window. The scores, the weights, the elimination rationale — none of it was persisted. The session ended and the decision logic ceased to exist.
All the protocol recorded was the outcome: three packages, three budgets, one media buy ID. The reasoning that produced that outcome left no trace.
Acme Outdoor loses confidence in the agency. A $2M annual relationship is strained over a $50K decision that nobody can explain — not because the decision was bad, but because the system wasn't designed to remember it.
Scenario B — A regulator asks for an explanation
The EU AI Act (Article 50) requires transparency for AI systems making consequential decisions. A regulator auditing agentic advertising platforms sends a request to StreamHaus: "Your AI agent recommended Package A to 47 different buyers last quarter. Explain the decision criteria and demonstrate that recommendations were appropriate for each buyer's brief."
StreamHaus has API logs showing which packages were returned in each get_products response. They have transaction records from create_media_buy. But they have no structured records of the evaluation — which products were considered, what criteria were applied, why some were ranked higher than others, whether human review occurred.

They can prove what was sold. They cannot prove why it was recommended. Under the AI Act, "we don't know" is not a compliant answer.
Scenario C — Inconsistency across sellers
A luxury brand runs the same $500K brief across five sellers. Post-campaign, performance varies 4x: Seller A delivers 0.18% CTR, Seller D delivers 0.04%.
The brand asks: "Did all five sellers evaluate products with similar rigor, or did some just dump underperforming inventory?"
There's no way to answer this. The protocol records delivery outcomes but nothing about evaluation quality. Did Seller D consider 50 products and pick the best 3, or consider 3 and pick all of them? Did it apply audience scoring, or sort by available inventory? Did a human review the selection?
The brand can compare outcomes across sellers but not processes. They can see that Seller D underperformed, but can't determine whether it was bad luck or bad evaluation.
Scenario D — The broken chain
Even when individual decisions are recorded, the connections between them are not. A campaign goes through six protocol steps: brief submitted → plan registered → products searched → governance checked → buy created → delivery reported. At each step, an agent makes decisions. But the protocol doesn't record how Step 3's output became Step 4's input, or whether the products that governance approved were the same ones the buyer agent originally evaluated.
Six months later, an auditor asks: "Show me the decision chain from brief to delivery." The answer is a set of disconnected records — a plan here, a governance check there, a buy somewhere else — with no formal linkage between them. Reconstructing the chain requires forensic log analysis, not a protocol query. The lineage of the decision was never recorded.
Why disclosure is the wrong solution
The instinct is to require sellers to share decision traces with buyers. Full transparency.
This doesn't work. Working group feedback was unambiguous:
Decision traces contain competitive intelligence: scoring weights, margin data, and elimination rationale.
Requiring disclosure means sellers either refuse to adopt agentic protocols, or they game their traces to be presentable rather than honest. Neither serves governance.
The gap isn't just "traces don't exist." It's that the protocol has no model for who gets to see what and how decisions connect across steps. Those are the provenance and lineage problems.
Proposed additions to the 3.1 specification
1. Decision provenance on seller responses
A new decision_provenance field on get_products and create_media_buy responses. This is not an extension — it is a core response field. Sellers attest to process compliance without disclosing the process itself:

```json
{
  "products": [ ... ],
  "decision_provenance": {
    "attestation_id": "att-streamhaus-2026-q2-001",
    "candidates_evaluated": 23,
    "evaluation_policies": [
      { "policy_id": "iab-transparency-1.0", "source": "iab.com", "version": "1.0" },
      { "policy_id": "eu-ai-act-art50", "source": "eur-lex.europa.eu" },
      { "policy_id": "streamhaus-eval-v3", "source": "internal" }
    ],
    "trace_storage": {
      "stored": true,
      "retention_days": 365,
      "format": "structured_json",
      "human_reviewed": true,
      "review_timestamp": "2026-04-17T15:00:00Z",
      "audit_available_on_request": true
    },
    "timestamp": "2026-04-17T14:30:00Z"
  }
}
```

What the buyer and governance agent see: The seller evaluated 23 candidates, followed three named policies, stored traces in structured JSON for 365 days, and had humans review the selection.
What they do NOT see: The actual scores, weights, margin data, or elimination rationale. Those stay private. The protocol attests that a defensible process occurred and that records exist — the same way C2PA proves content went through a pipeline without revealing the pipeline internals.
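To illustrate how lightweight the attestation contract is, here is a minimal consumer-side sketch. The field names follow the example payload above; the helper name and the floor it checks (a stored trace plus a timestamp, the crawl-mode minimum) are illustrative, not part of the proposed spec.

```python
# Illustrative sketch: checking a decision_provenance payload for the
# minimum attestation floor ("stored": true plus a timestamp). Field names
# follow the example payload; the helper itself is hypothetical.

def has_minimum_attestation(decision_provenance: dict) -> bool:
    """True if a trace is attested as stored and the attestation is timestamped."""
    trace = decision_provenance.get("trace_storage", {})
    return bool(trace.get("stored")) and "timestamp" in decision_provenance

attestation = {
    "attestation_id": "att-streamhaus-2026-q2-001",
    "trace_storage": {"stored": True, "retention_days": 365},
    "timestamp": "2026-04-17T14:30:00Z",
}
print(has_minimum_attestation(attestation))  # True
```

A buyer or governance agent can run this check without ever seeing scores, weights, or elimination rationale; the payload simply does not carry them.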
2. Decision lineage across protocol steps
A new decision_lineage field that records how decisions connect across protocol steps. Each step in the media buy workflow references the inputs it consumed and the outputs it produced, creating a traceable chain:

```json
{
  "decision_lineage": {
    "lineage_id": "lin-acme-q2-2026-001",
    "chain": [
      {
        "step": "plan_registration",
        "task": "sync_plans",
        "agent": "orchestrator.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:00:00Z",
        "inputs": { "brief_id": "brief-acme-q2-001" },
        "outputs": { "plan_id": "acme-q2-trail-pro" },
        "intent_declared": true
      },
      {
        "step": "product_search",
        "task": "get_products",
        "agent": "sales.streamhaus.example",
        "timestamp": "2026-04-17T14:15:00Z",
        "inputs": { "plan_id": "acme-q2-trail-pro", "brief_id": "brief-acme-q2-001" },
        "outputs": { "candidates_returned": 23, "product_ids": ["sp-001", "sp-002", "..."] },
        "provenance_attestation_id": "att-streamhaus-2026-q2-001"
      },
      {
        "step": "candidate_evaluation",
        "task": "buyer_internal",
        "agent": "orchestrator.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:30:00Z",
        "inputs": { "candidates_count": 23 },
        "outputs": { "selected": ["sp-001", "sp-002", "fn-001"], "eliminated": 20 },
        "provenance_attestation_id": "att-pinnacle-2026-q2-001"
      },
      {
        "step": "governance_check",
        "task": "check_governance",
        "agent": "governance.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:35:00Z",
        "inputs": { "plan_id": "acme-q2-trail-pro", "proposed_spend": 25000 },
        "outputs": { "check_id": "chk-q2-acme-001", "status": "approved", "findings_count": 4 }
      },
      {
        "step": "media_buy",
        "task": "create_media_buy",
        "agent": "orchestrator.pinnacle-agency.example",
        "timestamp": "2026-04-17T14:40:00Z",
        "inputs": { "check_id": "chk-q2-acme-001", "packages": 3 },
        "outputs": { "media_buy_id": "mb-streamhaus-001" }
      }
    ]
  }
}
```

What lineage gives you that provenance alone doesn't: the connections between steps. Provenance attests that each decision followed a defensible process; lineage shows how one step's outputs became the next step's inputs, and whether the chain from brief to buy is complete.
Who assembles and persists the lineage: The governance agent is the natural owner. It already observes every protocol step (plans, product searches, governance checks, media buys) and has the structural position to assemble the chain. In crawl mode, the governance agent builds lineage from existing task IDs and timestamps with no new input from callers. In walk/run modes, callers may contribute explicit lineage nodes (e.g., the buyer-side evaluation step, which the governance agent doesn't directly observe) via provenance attestations that include step references.
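To make the crawl-mode assembly concrete, here is a hypothetical sketch of the completeness check a governance agent could run over an assembled chain. Step and field names follow the example above; the linkage heuristic (each step after the first must consume at least one value seen earlier in the chain) is an assumption of this sketch, not specified behavior.

```python
# Hypothetical completeness check over a decision_lineage chain: steps must
# be in timestamp order, and each step after the first must consume at
# least one value produced or referenced earlier. Illustrative only.

def lineage_gaps(chain: list[dict]) -> list[str]:
    """Return a description of each gap found; an empty list means the chain links up."""
    gaps = []
    seen = set()   # identifiers produced or referenced so far
    last_ts = ""
    for step in chain:
        if step["timestamp"] < last_ts:
            gaps.append(f'{step["step"]}: out of timestamp order')
        last_ts = step["timestamp"]
        inputs = {str(v) for v in step.get("inputs", {}).values()}
        if seen and not (inputs & seen):
            gaps.append(f'{step["step"]}: no link to prior steps')
        seen |= inputs
        seen |= {str(v) for v in step.get("outputs", {}).values()}
    return gaps

chain = [
    {"step": "plan_registration", "timestamp": "2026-04-17T14:00:00Z",
     "inputs": {"brief_id": "brief-001"}, "outputs": {"plan_id": "plan-001"}},
    {"step": "governance_check", "timestamp": "2026-04-17T14:35:00Z",
     "inputs": {"plan_id": "plan-001"}, "outputs": {"check_id": "chk-001"}},
    {"step": "media_buy", "timestamp": "2026-04-17T14:40:00Z",
     "inputs": {"check_id": "chk-001"}, "outputs": {"media_buy_id": "mb-001"}},
]
print(lineage_gaps(chain))  # []
```

A broken chain, say a media buy citing a check_id that no governance step produced, surfaces as a named gap rather than requiring forensic log analysis.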
3. Provenance compliance as a core finding category
provenance_compliance becomes a specified category_id in check_governance findings:

```json
{
  "findings": [
    {
      "category_id": "budget_authority",
      "severity": "must",
      "explanation": "Within limit",
      "confidence": 1.0
    },
    {
      "category_id": "brand_policy",
      "severity": "must",
      "explanation": "Publishers approved",
      "confidence": 1.0
    },
    {
      "category_id": "provenance_compliance",
      "severity": "should",
      "explanation": "Seller attested: 23 candidates evaluated, 3 policies followed, traces stored 365 days with human review. Lineage chain complete from plan registration through product search. Buyer-side evaluation provenance: attested, 23 candidates scored, 3 selected.",
      "confidence": 0.95,
      "details": {
        "seller_attestation": {
          "attestation_id": "att-streamhaus-2026-q2-001",
          "candidates_evaluated": 23,
          "policies_declared": 3,
          "trace_stored": true,
          "human_reviewed": true
        },
        "buyer_attestation": {
          "attestation_id": "att-pinnacle-2026-q2-001",
          "candidates_evaluated": 23,
          "selected": 3,
          "trace_stored": true
        },
        "lineage_complete": true,
        "lineage_steps": 5
      }
    }
  ]
}
```

The governance agent validates both sides: the seller's provenance (how products were recommended) and the buyer's provenance (how products were evaluated and selected). Both attest to process without disclosing process details.
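A sketch of how a governance agent might fold both attestations into this finding. The finding shape mirrors the example above; the helper name and the fixed severity/confidence values are assumptions of this sketch, not specified behavior.

```python
# Hypothetical assembly of a provenance_compliance finding from the seller
# and buyer attestations plus lineage stats. Mirrors the example finding's
# shape; severity and confidence here are fixed for illustration.

def build_provenance_finding(seller_att: dict, buyer_att: dict,
                             lineage_steps: int, lineage_complete: bool) -> dict:
    seller_trace = seller_att.get("trace_storage", {})
    return {
        "category_id": "provenance_compliance",
        "severity": "should",
        "confidence": 0.95,
        "details": {
            "seller_attestation": {
                "attestation_id": seller_att["attestation_id"],
                "candidates_evaluated": seller_att.get("candidates_evaluated"),
                "trace_stored": bool(seller_trace.get("stored")),
                "human_reviewed": bool(seller_trace.get("human_reviewed")),
            },
            "buyer_attestation": {
                "attestation_id": buyer_att["attestation_id"],
                "candidates_evaluated": buyer_att.get("candidates_evaluated"),
                "trace_stored": bool(buyer_att.get("trace_storage", {}).get("stored")),
            },
            "lineage_complete": lineage_complete,
            "lineage_steps": lineage_steps,
        },
    }

finding = build_provenance_finding(
    {"attestation_id": "att-streamhaus-2026-q2-001", "candidates_evaluated": 23,
     "trace_storage": {"stored": True, "human_reviewed": True}},
    {"attestation_id": "att-pinnacle-2026-q2-001", "candidates_evaluated": 23,
     "trace_storage": {"stored": True}},
    lineage_steps=5, lineage_complete=True,
)
print(finding["details"]["lineage_steps"])  # 5
```

Note that neither input attestation carries scores, weights, or margin data, so nothing in the assembled finding can leak them.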
How provenance and lineage resolve each scenario
Scenario A (post-campaign audit): Procurement can't see why Product 17 was eliminated. But the lineage shows the full decision chain, the provenance attests that 23 candidates were evaluated against 3 policies, and traces exist for 365 days. If they need the specific rationale, they request the trace through a bilateral agreement.
Scenario B (regulatory audit): The regulator sees structured attestations for all 47 recommendations plus complete lineage chains showing how each brief became a buy. For deeper investigation, traces are available on request.
Scenario C (cross-seller consistency): The brand compares provenance across sellers. Seller A: 50 candidates, 3 policies, human reviewed. Seller D: 8 candidates, 1 policy, no human review. The process gap is now visible and explains the performance gap.
Scenario D (broken chain): The lineage chain explicitly connects every step. An auditor runs get_plan_audit_logs and sees the complete chain from brief to delivery with every agent, timestamp, and input/output linkage. No forensic log analysis required.

4. Crawl, walk, run
- Crawl: decision_provenance is optional. Sellers that include it get an informational provenance_compliance finding. Sellers that don't get "No provenance attestation provided." Minimum attestation: "stored": true and timestamp. candidates_evaluated is optional — sellers concerned about revealing inventory depth can omit it. decision_lineage is assembled by the governance agent from existing task IDs — no new input required from callers. No minimum retention period specified.
- Walk: decision_provenance is expected on get_products and create_media_buy responses. candidates_evaluated is expected but not required. Governance validates that listed policies exist in the policy registry. Lineage completeness is checked but gaps produce should findings, not failures. Recommended minimum retention: 90 days.
- Run: decision_provenance is required. candidates_evaluated is required. Missing attestations are must-severity findings. Lineage must be complete. Regulatory audit mechanisms (trace request protocols) are standardized. Minimum retention: 365 days or as required by applicable regulation.

C2PA alignment
The Coalition for Content Provenance and Authenticity (C2PA) has established provenance standards for digital content. Decision provenance and lineage apply the same principles to advertising decisions: attest that a defensible process occurred, without revealing the process internals.
AdCP's existing creative provenance (the provenance.json schema) already demonstrates the protocol's commitment to C2PA principles. Decision provenance extends that commitment from content creation to commercial decision-making — applying the same attestation-without-disclosure philosophy to how products are evaluated and selected, not just how creatives are produced.

AAO has an opportunity to lead on AI decision provenance before regulators define it. The EU AI Act and California SB 942 are heading toward mandatory audit trails. A protocol-level framework grounded in C2PA principles gives regulators a standard to adopt rather than invent.
Trust model questions
Regulatory context
Decision provenance and lineage create the mechanism for compliance. The attestation answers: "Can you prove your AI followed a defensible process?" The lineage answers: "Can you show how the decision flowed from brief to buy?" Neither requires disclosing proprietary logic.
Stakeholder considerations
- candidates_evaluated and selected counts reveal evaluation strategy. Buyer-side attestation must follow the same privacy model as seller-side.
- candidates_evaluated reveals inventory depth — a seller returning "3 of 3" signals thin inventory vs "3 of 50." Consider making candidate counts optional in crawl/walk modes.
- retention_days: 365 creates storage mandates that may burden smaller sellers.
- Governance should validate the declared evaluation_policies, not just a separate finding category.
- get_plan_audit_logs results need a format that supports narrative explanation, not raw JSON.
- audit_available_on_request: true is a protocol-level claim that the protocol itself cannot enforce — trace availability depends on bilateral agreements and seller infrastructure. This should be acknowledged explicitly.

Attestation verification and enforceability
Two fields in the provenance attestation deserve special scrutiny:
- human_reviewed: true — This is self-reported and unverifiable at the protocol level. The governance agent records the attestation but cannot confirm a human actually reviewed the selection. In crawl/walk modes, this is acceptable — the attestation creates accountability even if unverifiable. In run mode, the working group should consider whether external audit certification (third-party attestation) is required for human_reviewed claims.
- audit_available_on_request: true — This is a commitment that the protocol cannot enforce. Whether traces are actually available depends on the seller's internal infrastructure and retention practices. The protocol should acknowledge this as a declared intent rather than a guaranteed capability, and regulatory access mechanisms should be defined through bilateral agreements rather than protocol-level enforcement.

Buyer-side provenance parity
The provenance model applies symmetrically. The buyer agent's evaluation step (Step 3 in the lineage — "candidate_evaluation") faces the same disclosure tension as the seller's product search. The buyer evaluated 23 candidates and selected 3. That evaluation involved scoring weights, elimination rationale, and possibly margin considerations — all competitive intelligence from the buyer's perspective.
The buyer_attestation in the provenance_compliance finding follows the same privacy model: attest to process ("23 evaluated, 3 selected, traces stored") without disclosing process details ("Product 17 eliminated for audience overlap score 0.41"). This symmetry is important — provenance is not something buyers impose on sellers. It's a protocol-level expectation that both parties attest to defensible processes.

Open questions for the working group
- Should candidates_evaluated be optional in crawl/walk to avoid revealing inventory depth?
- Given that audit_available_on_request is unenforceable at the protocol level, what's the minimum protocol-level support needed?
- Should evaluation_policies reference IDs from the existing AdCP policy registry? A registry would enable governance to validate policy claims and prevent fabricated policy references.
- retention_days: 365 may create unfunded mandates for smaller sellers. Should crawl/walk modes accept shorter retention?
- get_plan_audit_logs should return provenance and lineage data in a format that supports human-readable rendering for procurement teams. What format — structured JSON with rendering hints, or a separate narrative endpoint?

Working group context and prior decisions
This proposal directly incorporates feedback from the governance working group and builds on protocol decisions already merged:
Governance WG Slack discussion (April 2025): The working group's central finding — that the level of detail shared with counterparties remains open to interpretation where proprietary or commercial considerations apply — is the genesis of the attestation-not-disclosure model proposed here. The working group agreed to "design a logging approach that supports both internal and shareable views." Decision provenance implements exactly this: sellers store full traces internally (the "internal view") and attest to process compliance externally (the "shareable view").
Brian O'Kelley's response provided three design decisions this proposal incorporates directly:
- get_plan_audit_logs is the "internal log" — it already returns budget tracking, validation history, and compliance summary. Decision provenance extends this with the reasoning layer: not just what was approved, but how many candidates were evaluated, what policies governed the evaluation, and whether humans reviewed the selection.
- The decision_provenance attestation is the shareable view (process compliance), while the full decision trace is the internal view (competitive intelligence). The visibility model from docs(governance): audit-trail internal-vs-shareable views + Addie anonymous knowledge tools #3175 applies directly.

Issues and PRs already resolved:
- evaluation_policies in the provenance attestation reference Policy Registry entries; Document Policy Registry sync pattern: version pinning, change handling, effective_date transitions #3140's version pinning ensures those references remain stable.

The C2PA alignment was validated by working group feedback. Brian's framing — "sellers won't reveal scoring rationale" — maps precisely to C2PA's model: content provenance proves a creative went through a pipeline without revealing pipeline internals. Decision provenance applies the same principle to commercial decisions.
Production validation: Yahoo has run live agentic campaigns where the decision chain from brief to buy crossed multiple agent boundaries. The lineage model proposed here was validated against real protocol traces — plan registration → product search → candidate evaluation → governance check → media buy — with each step producing attestable provenance.
Industry collaboration: Yahoo is co-leading the context graph for governance workstream within AdCP, and partnering with Google on the open-source BigQuery Agent Analytics SDK where decision lineage is implemented as temporal lineage in property/context graphs. The lineage chain model proposed here (step → inputs → outputs → attestation) is architecturally consistent with the SDK's context graph primitives.
AAO Spotlight: This proposal is part of Yahoo's use case submission for the AAO/AdCP Foundry sizzle reel — a <1 minute showcase demonstrating how AdCP enables trusted agentic advertising through semantic alignment and decision lineage. The spotlight demonstrates the full AdCP 3.0 tool flow (get_adcp_capabilities → get_products → create_media_buy → get_media_buy_delivery) with these 3.1 governance extensions layered on top, showing the audience both "the protocol working today" and "the governance layer being proposed for tomorrow."
Relationship to other tracks
Prior work
The feat/semantic-governance-extensions branch on mikulbhatt/adcp contains a prototype ext.decision_trace schema with full candidate scoring, outcome tracking, and rationale fields. That schema informed the provenance/lineage reframe — the internal trace format could serve as the private record sellers store, while the provenance attestation and lineage chain proposed here are the protocol-level representations. Working group feedback recommended reframing from buyer-facing disclosure to seller-side provenance with C2PA alignment.

Related: See also #3362 — Proposal: Taxonomy declaration as a core capability (3.1) | #3363 — Proposal: Semantic fidelity as a core governance capability (3.1) | #3365 — Proposal: AdCP Reference Media Ontology — a shared vocabulary for agentic advertising (3.1)