DEV Community

Aamer Mihaysi

Mastercard and Google Are Building the Trust Layer for AI That Spends Money

Only 16% of U.S. consumers trust AI to make payments on their behalf. Not because they don't understand the technology, but because they don't know what the AI will actually do.

Will it book the flight I asked for, or also add travel insurance I didn't authorize? Will it buy the specific product I selected, or the "best" one according to criteria I never approved?

This isn't an AI capability problem. It's a trust infrastructure problem.

Mastercard and Google just open-sourced a piece of that infrastructure: Verifiable Intent.


What Verifiable Intent Actually Does

The framework creates cryptographic proof that an AI agent is operating within bounds a human explicitly authorized.

Think of it as a digitally signed power of attorney with machine-enforceable constraints:

  • Amount caps: The agent can't spend more than $X without re-authorization
  • Merchant allowlists: The agent can only transact with approved vendors
  • Category restrictions: The agent can't drift from "book my flight" to "book my vacation package"
  • Time windows: Authorization expires automatically

Each transaction carries proof that the specific action was within the scope of what the human approved.

No more "the AI decided to upgrade my booking because it seemed like what I'd want."


Why This Matters More Than Agent Payments Protocols

Stripe's Machine Payments Protocol (MPP) lets agents respond to HTTP 402 challenges and pay programmatically.

Visa's agent credit cards give AI agents autonomous spending power.

Ramp's corporate cards for AI let agents book flights and software subscriptions.

All of these assume the transaction was authorized.

But what does "authorized" mean when:

  • The human gave vague instructions
  • The model interpreted those instructions creatively
  • The business logic layer added upsells
  • The checkout flow had dark patterns

Verifiable Intent answers a different question than payment rails:

  • Payment protocols: Can this agent spend money?
  • Verifiable Intent: Did this agent stay within the bounds the human specified?

Both layers are necessary. Neither replaces the other.


The Technical Architecture

The framework works through signed authorization objects.

When you ask an agent to book a flight, you're not just giving natural language instructions. You're approving a structured authorization:

```json
{
  "intent": "book_flight",
  "constraints": {
    "max_amount": 500,
    "allowed_airlines": ["united", "delta", "american"],
    "departure_date": "2026-04-10",
    "return_date": "2026-04-15",
    "class": "economy"
  },
  "valid_until": "2026-04-06T23:59:59Z",
  "signature": "<human_approval_signature>"
}
```

The agent can't exceed the constraints without invalidating its proof. Merchants can verify the signature against the authorization. If the transaction doesn't match, it gets rejected, not by the payment network but by the trust layer.
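A merchant-side check might look like the following sketch. It uses an HMAC over the canonical JSON form of the authorization as a stand-in for the framework's real signature scheme (which is not specified here); the key, field names, and constraint checks are illustrative assumptions:

```python
import hashlib
import hmac
import json

def sign_authorization(auth: dict, key: bytes) -> str:
    """Sign the canonical JSON form of the authorization (HMAC stand-in)."""
    payload = json.dumps(auth, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_transaction(auth: dict, signature: str, key: bytes, txn: dict) -> bool:
    """Reject unless the signature is valid AND the transaction fits the constraints."""
    if not hmac.compare_digest(sign_authorization(auth, key), signature):
        return False  # authorization was tampered with or forged
    c = auth["constraints"]
    return (txn["amount"] <= c["max_amount"]
            and txn["airline"] in c["allowed_airlines"])

key = b"human-approval-key"  # hypothetical key held by the signing party
auth = {
    "intent": "book_flight",
    "constraints": {"max_amount": 500,
                    "allowed_airlines": ["united", "delta", "american"]},
}
sig = sign_authorization(auth, key)

print(verify_transaction(auth, sig, key, {"amount": 430, "airline": "delta"}))  # True
print(verify_transaction(auth, sig, key, {"amount": 620, "airline": "delta"}))  # False: exceeds cap
```

Note the two distinct failure modes: a bad signature means the authorization itself can't be trusted, while a valid signature with an out-of-bounds transaction means the agent drifted from what was approved.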


Why Mastercard and Google Open-Sourced This

They could have kept it proprietary. Made it a differentiator for Google Pay or Mastercard's agent payment products.

Instead, they released it as open source.

Because network effects matter more than moats in the agent economy.

Every agent transaction that fails due to trust issues hurts the entire ecosystem. Users lose confidence. Merchants lose sales. Payment volumes stagnate.

The more players adopt Verifiable Intent, the more:

  • Merchants trust agent-initiated transactions
  • Users feel comfortable delegating spending
  • Agent frameworks standardize on the same authorization model
  • Regulators accept that AI spending has guardrails

This is infrastructure, not product. Making it open grows the market for everyone.


What's Still Missing

Verifiable Intent solves one piece of the puzzle. It answers "did this specific agent action match what the human authorized?"

Two other pieces remain:

1. Agent Identity

Verifiable Intent doesn't prove who is running the agent. Is it the human who authorized it, or someone who compromised their credentials?

This is where Sam Altman's World AgentKit comes in: verifiable identity for AI agents linked to human owners.

2. Transaction Context

Authorization proofs work for structured purchases. They don't work well for:

  • Open-ended requests ("find me the best deal")
  • Multi-step transactions that compound
  • Agents that learn preferences over time

The framework handles bounded tasks well. Fuzzy tasks still need human judgment or a different authorization model.


The Takeaway

The agent payments conversation has been dominated by spending capability: Can agents pay? What payment rails support machine-to-machine transactions?

The real conversation should be about spending trust: How do humans know agents will do what they asked, and nothing more?

Verifiable Intent is the first credible answer to that question that's open, interoperable, and cryptographically sound.

Payment rails are coming fast. Stripe, Visa, and Ramp are racing to let agents spend.

The trust layer is what makes that spending safe enough for mainstream adoption.

Mastercard and Google didn't build the payment rail. They built the rail's guardrails.


The agent economy will work when users can answer one question with confidence: "If I let this agent spend my money, what exactly will it do?" Verifiable Intent makes that question answerable.

Top comments (1)

Sherif Kozman

The trust layer framing is right. The scope might need expanding, though.

Verifiable Intent solves pre-authorization trust: can the agent spend, within what limits, on whose authority. That's the consumer-confidence problem and it's the right thing to tackle first.

The gap I keep running into is post-settlement trust. Once the authorized transaction clears, you need to verify that what settled actually matches what was authorized. That sounds obvious until you're dealing with it in production: the amount that clears is rarely the approved amount (fees, FX, processor deductions), settlement timing varies by 1-3 days depending on the rail, and the transaction identifiers don't match across your payment processor, bank feed, and ledger.

For a human finance team, this is a manageable exception workflow. For a system running agent transactions at volume, it needs to be deterministic.

The authorization standard makes sense to open-source. The reconciliation layer is harder to standardize because it has to adapt to each company's ledger structure and each processor's data format. That's probably why it's getting less attention right now. It's also where the operational failures will show up as agent transaction volumes scale.

(I build in this space at NAYA, so I'm not a neutral observer.)