From AI in Wallets to Wallet for AI Agents

published on 14 October 2025

Building privacy‑preserving trust architectures for distributed intelligence in regulated ecosystems.

1. Introduction — Identity & Trust in the Agentic Web

The web is entering a phase where intelligence is not only served from a central model but distributed among autonomous pieces of software. These agents reason, negotiate, and transact on behalf of people and organizations. At the same time, Europe is rolling out the European Digital Identity Wallet (EUDI Wallet) under eIDAS v2, a harmonized trust layer that lets citizens and businesses hold and present verifiable credentials. These two trajectories are converging into what many call the agentic web: a network where humans, organizations, and AI systems exchange proofs — not just data — under shared principles of privacy, accountability, and interoperability.

The question in front of us is no longer "how do we verify humans online?" It is: "how do we let AI Agents operate inside the same trusted digital ecosystem, with privacy, verifiable delegation, and accountability by design?" This article charts the shift from AI in a wallet to a wallet for an AI Agent, and explores how architecture and regulation must change when we extend the EUDI Wallet Architecture and Reference Framework (ARF) to non‑human actors.

2. Why a Wallet for an AI Agent? — From Identity‑Centric to Agentic Web

Traditional identity systems — centralized or decentralized — treat natural and legal persons as the primary subjects. The agentic web adds a third: software agents acting under delegated authority. Imagine a legal‑assistant AI drafting and signing routine contracts within pre‑approved parameters; a personal AI negotiating energy tariffs against a published household policy; or a corporate digital twin executing procurement while respecting budget and compliance rules. In each case, the agent must authenticate itself, prove its mandate, minimize disclosure, and leave a verifiable trail.

A wallet for an AI Agent is the missing piece. Like a human wallet, it holds cryptographic keys and verifiable credentials, but it also manages delegation proofs that show what the agent is allowed to do, for how long, and under which policy. The wallet becomes the trust interface between an autonomous system and the rest of the web.

3. Privacy by Design for Distributed Agents

Privacy in an agentic environment is not a checkbox; it is an engineering constraint. Every interaction leaks context — identity, purpose, intent — unless the architecture is designed to limit it. That calls for selective disclosure so that only the necessary attributes are revealed; zero‑knowledge proofs so that claims can be verified without exposing underlying data; and ephemeral, unlinkable identifiers that prevent routine correlation across sessions.
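
Selective disclosure can be sketched with salted hash commitments, in the spirit of SD‑JWT‑style disclosures: the issuer commits to every attribute, and the holder later reveals only the salts and values a verifier actually needs. This is an illustrative Python sketch, not a production credential format.

```python
import hashlib, secrets

def commit(attributes: dict) -> tuple[dict, dict]:
    """Issuer side: salt and hash each attribute.
    Returns (public digests, private salted values kept by the holder)."""
    private, digests = {}, {}
    for name, value in attributes.items():
        salt = secrets.token_hex(16)
        private[name] = (salt, value)
        digests[name] = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
    return digests, private

def disclose(private: dict, names: list) -> dict:
    """Holder side: reveal only the requested attributes with their salts."""
    return {n: private[n] for n in names}

def verify(digests: dict, disclosed: dict) -> bool:
    """Verifier side: recompute digests for the disclosed attributes only."""
    return all(
        hashlib.sha256(f"{salt}:{value}".encode()).hexdigest() == digests[n]
        for n, (salt, value) in disclosed.items()
    )

digests, private = commit({"age_over_18": "true", "name": "Alice", "city": "Ghent"})
presentation = disclose(private, ["age_over_18"])   # name and city stay hidden
assert verify(digests, presentation)
```

The salts are what prevent a verifier from brute-forcing the hidden attributes, and fresh salts per issuance keep presentations unlinkable across sessions.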

Equally important is where reasoning happens. Agents should compute locally and exchange verifiable results (proofs, signatures, attestations) rather than raw data. This shifts the emphasis from federated learning toward federated decision‑making, which better preserves sovereignty. Because agents act on behalf of controllers, their delegations must be cryptographically provable, time‑bounded, and revocable. Current ARF protocols, such as OIDC4VC and Passkeys, are excellent for human presentations but were not designed for multi‑hop delegation, where an agent can sub‑delegate a subset of its authority. Filling this gap points toward capability‑based models.
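
A capability-based delegation chain with attenuation can be sketched structurally: each hop may only receive a subset of its delegator's scope, and the chain must anchor at the legal controller. This Python sketch omits signatures and uses hypothetical DIDs; a real chain would carry a cryptographic proof per hop.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    holder: str                           # DID receiving the authority
    scope: set                            # actions granted; must shrink down the chain
    parent: "Capability | None" = None    # None marks the root authority

def verify_chain(cap: Capability, root_controller: str) -> bool:
    """Walk the chain to the root, checking attenuation at every hop:
    a delegate may only receive a subset of its delegator's scope."""
    while cap.parent is not None:
        if not cap.scope <= cap.parent.scope:   # attenuation violated
            return False
        cap = cap.parent
    return cap.holder == root_controller        # chain must anchor at the controller

root = Capability(holder="did:example:acme", scope={"procure", "pay", "sign"})
agent = Capability(holder="did:example:agent", scope={"procure", "pay"}, parent=root)
sub = Capability(holder="did:example:subagent", scope={"procure"}, parent=agent)
assert verify_chain(sub, "did:example:acme")

# A hop that tries to escalate beyond its delegator's scope is rejected.
escalated = Capability(holder="did:example:rogue", scope={"sign"}, parent=sub)
assert not verify_chain(escalated, "did:example:acme")
```

The key property is that verification needs no round-trip to a central trust list: the chain itself carries the evidence, which is exactly what highly dynamic multi-agent ecosystems need.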

4. The EUDI Wallet ARF — What Fits and What Doesn’t (Yet)

The EUDI Wallet ARF sets a robust baseline for issuance, authentication, consent, and interoperability in human‑centric wallets. Extending it to autonomous agents surfaces three tensions. First, the ARF relies on central trust anchors (trust lists, QTSPs). That scales for citizens and companies, but highly dynamic multi‑agent ecosystems also need peer‑to‑peer cryptographic trust, where authority can be verified without round‑trips to central lists. Second, OIDC‑derived flows that suit login and presentation workflows are less expressive for complex delegation, lacking proof chaining, purpose scoping, and fine‑grained revocation with auditable context. Third, the ARF advances format and transport interoperability, yet semantic interoperability is thin: agents need shared vocabularies to express consent, roles, and obligations in machine‑interpretable policy.

These gaps do not argue against ARF; they show where it must be extended to support non‑human actors while keeping Europe’s trust guarantees.

5. Technical Foundations of a Wallet for an AI Agent

A practical agent wallet should implement four capabilities.

Agent identity and provenance. Each agent carries a Decentralized Identifier (DID) bound to its origin, version, and governance entity. Verifiable credentials attest to provenance, certification/compliance status, and operational scope.

Delegation by design. The wallet stores and presents verifiable delegation proofs, capturing who authorized what, for which purpose, and until when. Delegations must be granular, renewable, and revocable, and they should support chaining for controlled sub‑delegation.

Privacy‑preserving exchange. Selective disclosure, encrypted channels, and minimal‑disclosure protocols ensure only the necessary facts are revealed in each interaction.

Verifiable accountability. Every significant action generates a cryptographically verifiable audit trail — using Linked Data Proofs and confidential logs — so that accountability is demonstrable without exposing sensitive content.
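
The audit-trail idea can be illustrated with a hash-chained log, where every entry commits to its predecessor so any tampering is detectable. This is a minimal sketch; a deployed system would add signatures and confidential storage on top.

```python
import hashlib, json

def append_entry(log: list, action: dict) -> list:
    """Append an action to a hash-chained audit log: each entry commits to
    the previous entry's hash, so modification breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify_log(log: list) -> bool:
    """Recompute every link; any modified entry invalidates all later hashes."""
    prev = "0" * 64
    for entry in log:
        body = {"action": entry["action"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"type": "contract.sign", "delegation": "urn:deleg:42"})
append_entry(log, {"type": "payment.authorize", "amount": "120 EUR"})
assert verify_log(log)
log[0]["action"]["amount"] = "999 EUR"   # tamper with an earlier entry
assert not verify_log(log)
```

Publishing only the head hash is enough for a third party to later audit the whole trail, which is how accountability stays demonstrable without exposing log contents by default.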

A modern agentic stack combines complementary protocols. GNAP4VP focuses on the negotiation of grants between clients, resource servers, and authorization servers, making delegation explicit and rich in context. ZCAP‑LD represents capabilities as signed Linked Data documents, supporting attenuation (scoping) and delegation with verifiable chains. UMA 2.0 provides policy‑driven, user‑managed access across domains. OIDC extensions can bring purpose‑bound, agent‑to‑agent data access aligned with Solid’s pod model. None of these replaces the ARF; rather, they augment it with delegation semantics and privacy properties better suited to autonomous agents.
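
To make the capability model concrete, here is an illustrative, simplified delegated capability in the spirit of ZCAP‑LD, expressed as a Python dict. Field names approximate the ZCAP‑LD draft, and the identifiers and caveat types are hypothetical; treat the shape as a sketch rather than a conformant document.

```python
# A root capability over a contracts API, attenuated on re-delegation:
# the sub-agent receives only "read" where its delegator had read/write.
delegated_capability = {
    "@context": "https://w3id.org/zcap/v1",
    "id": "urn:uuid:cap-2",                 # hypothetical identifier
    "parentCapability": "urn:uuid:cap-root",
    "controller": "did:example:subagent",
    "invocationTarget": "https://api.example.org/contracts",
    "caveats": [
        {"type": "ValidUntil", "date": "2025-12-31T23:59:59Z"},
        {"type": "AllowedAction", "action": "read"},   # attenuated scope
    ],
    "proof": {
        "type": "DataIntegrityProof",
        "capabilityChain": ["urn:uuid:cap-root"],      # verifiable delegation path
        "verificationMethod": "did:example:agent#key-1",
    },
}

def allowed_actions(cap: dict) -> set:
    """Derive the action set a verifier would enforce from the caveats."""
    return {c["action"] for c in cap["caveats"] if c["type"] == "AllowedAction"}

assert allowed_actions(delegated_capability) == {"read"}
```

Because the capability is itself a signed Linked Data document, a relying party can check the whole delegation chain offline, which is precisely the property the ARF's presentation flows lack today.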

6. Regulatory Grounding — Compliance as Architecture

In Europe, technology scales only when it aligns with regulation. The ARF defines the deployability perimeter: if your credentials, signatures, and delegations do not fit its trust model, they are unlikely to reach production. Instead of treating compliance as paperwork, treat it as system design. This is the logic behind Trust by Design. Where privacy‑by‑design protects individuals, trust‑by‑design extends to the system: provenance is captured by protocol, consent for delegation is verifiable, and accountability is continuous through revocation registries and verifiable logs. With this posture, compliance becomes measurable, auditable, and portable across domains.

In Europe, technological feasibility without legal deployability is meaningless. Innovation must be built inside the regulatory framework, not around it.

The EU AI Act introduces obligations around traceability, transparency, human oversight, and conformity assessment. Today, many of these obligations are evidenced by static documentation. By aligning the EUDI Wallet, eIDAS trust services, and new AI Trust Service Providers (AITSPs), we can make compliance cryptographically verifiable. AI credentials can encode the model’s classification, governance entity, and intended use. Delegation proofs can record who authorized which action and under what scope. Verifiable logs can prove that safeguards were active at the time of a decision. The result is a shift from after‑the‑fact audits to continuous verification — regulation rendered as a machine‑readable protocol.
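
Such an AI credential might look as follows. The outer envelope follows the W3C Verifiable Credentials data model, but the `AIAgentCredential` type, the subject fields, and the issuer DID are hypothetical placeholders.

```python
# A sketch of a credential an AITSP might issue for an AI agent, encoding
# classification, governance entity, and intended use as verifiable claims.
ai_credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "AIAgentCredential"],
    "issuer": "did:example:aitsp",          # hypothetical AITSP identifier
    "validFrom": "2025-10-14T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:agent-1",
        "riskClassification": "limited",    # e.g. an AI Act risk tier
        "governanceEntity": "did:example:acme",
        "intendedUse": "contract drafting within pre-approved templates",
        "modelVersion": "2.3.1",
    },
}

def is_authorized_for(cred: dict, purpose: str) -> bool:
    """A verifier-side purpose check against the declared intended use."""
    return purpose in cred["credentialSubject"]["intendedUse"]

assert is_authorized_for(ai_credential, "contract drafting")
assert not is_authorized_for(ai_credential, "payments")
```

A relying party that checks this credential at request time is performing continuous verification: the AI Act obligation is evaluated per interaction rather than per annual audit.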

The agentic web will not stop at borders. Private‑sector initiatives — from payments to logistics — are already piloting agent‑driven interactions. Without cross‑border interoperability, however, each ecosystem risks becoming a silo. A wallet for AI Agents should therefore bridge public and private trust frameworks and align with global standards: W3C Verifiable Credentials for attestations, ISO/IEC identity standards for baseline interoperability, and mutual‑recognition mechanisms so that capability‑based delegations can be honored across jurisdictions without sacrificing privacy.

7. Actors & Architecture — AITSPs, DaaS, and the Trust Stack

The Qualified Trust Service Provider (QTSP) model under eIDAS v2 offers a template for AITSPs, specialized entities that certify agent identity, provenance, and governance. Around them, a neutral Delegation‑as‑a‑Service (DaaS) layer can manage issuance, chaining, and revocation of delegations at scale, exposing verifiable interfaces to relying parties. This layer should interoperate with GNAP/GNAP4VP, ZCAP‑LD, UMA 2.0, and OIDC so that policy and proof remain portable across ecosystems.
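
Scalable revocation, one of the DaaS layer's core duties, can be sketched with a compact status list in the spirit of W3C status-list credentials: each delegation maps to one bit in a shared array, so relying parties check status without enumerating individual delegations. The class and layout here are illustrative.

```python
class StatusList:
    """A bit array where index i holds the revocation status of delegation i."""

    def __init__(self, size: int = 1024):
        self.bits = bytearray(size // 8)    # one bit per delegation

    def revoke(self, index: int) -> None:
        self.bits[index // 8] |= 1 << (index % 8)

    def is_revoked(self, index: int) -> bool:
        return bool(self.bits[index // 8] & (1 << (index % 8)))

registry = StatusList()
registry.revoke(42)                          # the issuer revokes delegation #42
assert registry.is_revoked(42)
assert not registry.is_revoked(7)
```

Because verifiers fetch the whole list rather than querying one delegation, the registry learns little about which delegation is being checked, keeping the mechanism privacy-preserving at scale.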

Think of the architecture as five interacting layers. At the base sits human and organizational identity, anchored in eIDAS v2 and the EUDI Wallet, establishing legal accountability. Above it is the agent identity layer, which records AI provenance and attestation through DIDs, verifiable credentials, and AITSP certification. The delegation layer captures verifiable authority transfer — who may do what, for whom, and under what conditions. The privacy and proof layer provides selective disclosure, zero‑knowledge techniques, and confidential computation to minimize exposure while maximizing verifiability. Finally, the compliance graph layer aggregates proofs and logs into continuous audit and policy enforcement, so that assurances can be checked without disclosing sensitive content.
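
The layered evaluation can be sketched as a bottom-up pipeline, where a relying party accepts a presentation only if every layer's check passes. The predicate names below are stand-ins for real verification steps.

```python
# A toy relying-party check over the five layers described above.
def verify_presentation(p: dict) -> bool:
    layers = [
        ("human/org identity", lambda p: p.get("controller_verified", False)),
        ("agent identity",     lambda p: p.get("agent_credential_valid", False)),
        ("delegation",         lambda p: p.get("delegation_chain_valid", False)),
        ("privacy & proof",    lambda p: p.get("disclosure_minimal", False)),
        ("compliance graph",   lambda p: p.get("audit_proof_present", False)),
    ]
    # A presentation is accepted only if every layer's check passes.
    return all(check(p) for _, check in layers)

ok = {"controller_verified": True, "agent_credential_valid": True,
      "delegation_chain_valid": True, "disclosure_minimal": True,
      "audit_proof_present": True}
assert verify_presentation(ok)
assert not verify_presentation({**ok, "delegation_chain_valid": False})
```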

8. Research Priorities — Toward Verifiable AI Agents

Several research threads deserve coordinated investment. We need formal accountability models that describe responsibility chains when multiple agents collaborate. We need credential schemas for AI — issued by AITSPs — that capture provenance, versioning, and certification in a vendor‑neutral way. We need delegation verification registries that are privacy‑preserving yet responsive enough for real‑time checks. We need zero‑knowledge audit mechanisms that let authorities verify constraints without reading raw operational data. And we need interoperability sandboxes that connect EUDI, GNAP, UMA, OIDC, and AI‑Act workflows under realistic conditions.

9. Conclusion — Building Verifiable AI Agents, Not Just Smarter Software

The shift from AI in a wallet to a wallet for AI Agents is not a feature upgrade; it’s a governance redesign for the web. If agents are to negotiate, transact, and decide on our behalf, they must do so inside a fabric where identity is proven, delegation is explicit, privacy is preserved, and accountability is verifiable. Europe already has much of the scaffolding in place: the EUDI Wallet and eIDAS trust services give us deployable anchors; what’s missing is extending that trust to non-human actors without sacrificing the guarantees that make it valuable.

The architecture is clear. Give agents provenance (DIDs and verifiable credentials), give them capabilities (scoped, time-bound, revocable delegations), protect interactions with selective disclosure and zero-knowledge, and record outcomes as cryptographic audit trails. Rather than displacing the ARF, a capability-based layer augments it — bringing multi-hop, purpose-aware authorization to workflows that were designed for people, not software.

Regulation should not slow this down; it should shape it. Treat the EU AI Act and the ARF as protocols as much as policies: machine-readable credentials for classification and governance, verifiable proofs for who authorized what, and continuous logs that demonstrate safeguards were active at the moment of decision. This is Trust by Design in practice — compliance that computes, not paperwork after the fact.

To make it real, we need new actors and rails. AI Trust Service Providers (AITSPs) can certify agent identity, provenance, and governance. A neutral Delegation-as-a-Service (DaaS) layer can issue, chain, and revoke authority at internet scale while remaining privacy-preserving. And because the agentic web won’t stop at borders, we must align with global standards so proofs are portable and silos don’t reappear under new names.

If we build this stack — identity, delegation, privacy proofs, and continuous compliance — agents become first-class, accountable participants in the digital economy. The prize is not just safer automation; it’s a market where trustworthy autonomy scales. That is how Europe can lead: by proving that privacy and accountability are not trade-offs, but the operating system of verifiable agency.
