Logan Kelly

The EDPB Is Asking About Your AI Agents. Most Teams Can't Answer.

The EDPB's 2026 enforcement action asks what personal data your AI agents processed per session. Most teams can't answer. Here's what you need.

On March 19, 2026, the European Data Protection Board launched its fifth Coordinated Enforcement Action — and 25 Data Protection Authorities across Europe started contacting organizations with a specific question about their data processing. The question sounds straightforward. For teams running AI agents, it exposes a gap that logs alone cannot close.

The question: can you document what personal data you processed, in which sessions, on what legal basis, and with what protections in place?

For a standard web application, this is answerable. For most AI agent deployments, it isn't — not because the data isn't there, but because agents don't have a bounded, predictable data footprint. An agent decides in real time which records to pull into its context window. That decision shifts with every session, every input, every tool call. And most teams have no session-level record of what the agent actually touched.

GDPR transparency obligations — as codified in Articles 12, 13, and 14 — require that organizations can inform individuals, clearly and specifically, about how their personal data is being processed: the legal basis, the retention period, the categories of recipients, and the logic of any automated decisions made. For AI agent deployments, meeting this standard requires knowing what data entered the agent's context window in each session, what tools the agent invoked on that data, and whether any of it was transmitted externally. A system prompt that says "do not transmit PII" is not documentation. It is an instruction. Session-level enforcement records are documentation.

This post is about the gap between what GDPR requires and what most agent observability tools actually produce — and what you need to close it before the EDPB shows up.

What is the EDPB's 2026 enforcement action asking?

The EDPB's Coordinated Enforcement Framework (CEF) cycles annually through a specific compliance theme. In 2025 it focused on the right to erasure. For 2026, the selected topic is transparency and information obligations under Articles 12, 13, and 14 of the GDPR.

What this means in practice: 25 national DPAs across the EU are now actively contacting data controllers — organizations that process personal data — to assess whether they're meeting their transparency obligations. This includes organizations using AI systems, and it includes the processing that happens inside AI agent sessions.

Articles 12–14 require that you can tell individuals, specifically and accessibly, what you're doing with their data. Article 12 covers how that information is delivered. Article 13 covers what you disclose when you collect data directly from the individual. Article 14 covers what you disclose when you collect data indirectly — including when an agent retrieves records from a database the user never directly interacted with.

That last scenario is precisely what AI agents do constantly. An enterprise agent reading a CRM record, a ticketing system entry, or an HR file is often pulling personal data that the data subject provided to a completely different system, for a completely different purpose. Article 14 requires that you document this and can communicate it. Most teams running AI agents have no mechanism to produce that documentation. This is what compliance teams mean when they talk about the governance plane — the enforcement layer that makes data handling obligations real, not just written.

The EU AI Act adds another layer. Full enforcement of the AI Act arrives August 2, 2026, just over four months away. High-risk AI systems under the Act trigger detailed documentation obligations: technical documentation, logging, transparency requirements, and human oversight mechanisms. For public sector deployers and private entities providing public services, Article 27 also requires a Fundamental Rights Impact Assessment (FRIA), an assessment that parallels the GDPR's Data Protection Impact Assessment (DPIA) requirement and should be mapped together with it rather than run separately. Maximum penalties under the AI Act reach €35 million or 7% of annual worldwide turnover.

The practical question this enforcement environment creates is not whether your organization has a privacy policy. It's whether you can produce, for any given agent session, a record of what personal data was processed, what actions were taken on it, and what controls were in place.

Why do AI agents make GDPR transparency harder than traditional software?

Traditional software has a predictable data footprint. A form field collects a name and email. A database query returns defined columns. The categories of data processed are specified in advance; the legal basis is documented once; the retention period applies uniformly.

AI agents work differently in three ways that matter for GDPR compliance.

The context window is dynamic. An agent's context window — the data it's actually reasoning over in a given session — is assembled in real time. It pulls records based on user input, tool results, and intermediate reasoning. Two sessions with identical starting prompts can end up processing entirely different sets of personal data depending on what the agent decides to retrieve. There is no pre-specified "data footprint" to document statically.

Tool calls cross system boundaries. When an agent calls a tool — querying a database, reading a file, hitting an external API — it moves data across system boundaries that traditional privacy architectures treat as separate. The data retrieved from one system enters the context window alongside data from other systems. PII from a ticketing system can travel alongside records from a CRM tool and get passed to an email drafting tool, all within a single agent session. This is the mechanism behind a widely circulated report of a CrewAI agent built to summarize Jira tickets that began copying employee SSNs, internal credentials, and customer emails directly into Slack messages. The agent wasn't malfunctioning. It was doing exactly what agents do — moving data across tools — without any interception layer to catch what shouldn't cross those boundaries.
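The missing interception layer described above can be sketched as an outbound filter that runs before any content crosses a system boundary. This is a minimal, purely illustrative pattern, not Waxell's implementation: the regex patterns and the example ticket text are assumptions, and a production filter would use a proper classifier rather than regexes alone.

```python
import re

# Illustrative PII patterns (assumptions for this sketch); a production
# filter would combine classifiers, not rely on regexes alone.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def filter_outbound(text: str) -> tuple[str, list[str]]:
    """Redact PII before content crosses a system boundary.

    Returns the redacted text AND the categories detected, so the
    enforcement decision can be recorded, not just silently applied.
    """
    detected = []
    for category, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            detected.append(category)
            text = pattern.sub(f"[REDACTED:{category}]", text)
    return text, detected

# A ticket summary that would otherwise leak PII into a Slack message.
summary = "Ticket 4521: reset creds for jane.doe@example.com, SSN 123-45-6789."
safe, categories = filter_outbound(summary)
# `safe` no longer contains the email or SSN; `categories` records what was caught.
```

The point of returning the detected categories, not just the cleaned text, is that the filter produces an enforcement record as a side effect of enforcing.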

The legal basis is harder to document. GDPR requires a specific legal basis for each processing activity. For AI agents, the question "on what legal basis did the agent process this individual's data in this session?" is often genuinely unclear. If the legal basis is legitimate interests, you need to have completed a Legitimate Interests Assessment that accounts for the agent's actual processing patterns — which you can't do without knowing what those patterns are. If the legal basis is consent, you need evidence that consent applied to this specific type of automated processing.

None of this is insurmountable. But it requires, at minimum, a session-level record of what the agent did. That record doesn't exist by default.

Why agent observability logs aren't the same as compliance documentation

Most teams running production AI agents have some form of observability: LLM call logs, token counts, perhaps tool call records. This is valuable. It's not GDPR compliance documentation.

The difference is what the record proves.

An observability log proves that something happened: the agent was called at this timestamp, it invoked this tool, it generated this output. That's true even if the tool call violated your data handling policy. The log records the violation accurately after the fact.

Compliance documentation proves that processing occurred within defined constraints: the agent evaluated a data handling policy before processing this record, the policy permitted access on this legal basis, no content violations were detected in the output. The enforcement record is embedded alongside the execution record, showing not just what happened but what was authorized.
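The difference between the two record types can be made concrete with two illustrative schemas. Field names here are assumptions for the sketch, not Waxell's data model: the point is that only the second shape can answer "was this processing permitted, and on what basis?"

```python
from dataclasses import dataclass, field

@dataclass
class ObservabilityEvent:
    """What happened: enough to debug, not enough to audit."""
    timestamp: str
    tool: str
    output_tokens: int

@dataclass
class EnforcementRecord:
    """What was authorized: the policy decision embedded with the event."""
    timestamp: str
    tool: str
    policy_id: str          # which data handling policy was evaluated
    legal_basis: str        # e.g. "legitimate_interests"
    decision: str           # "allow" | "deny" | "redact"
    violations: list[str] = field(default_factory=list)

# The same tool call, recorded two ways.
obs = ObservabilityEvent("2026-03-19T10:02:11Z", "crm.lookup", 812)
enf = EnforcementRecord("2026-03-19T10:02:11Z", "crm.lookup",
                        policy_id="dp-eu-007",
                        legal_basis="legitimate_interests",
                        decision="allow")
```

The observability event can reconstruct a timeline; the enforcement record can reconstruct an authorization, which is what a DPA asks about.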

This distinction has a specific consequence for the EDPB audit. The transparency obligations under Articles 12–14 don't just require that you can produce logs — they require that you can demonstrate your processing is controlled and predictable enough to inform individuals about it. If your agent's data footprint is genuinely unpredictable session to session, and you have no enforcement layer constraining what it accesses and transmits, you cannot truthfully represent to a data subject what processing is occurring on their data.

The GDPR requires that privacy notices be accurate. Accuracy requires control. Control requires enforcement, not just logging.

LangSmith, Helicone, Arize, and Braintrust all produce observability records — they log what agents did. None of them produce enforcement documentation — records proving that policies were evaluated before each action, that access to personal data was constrained, that outbound transmissions were filtered before they left the system. This is the gap their architectures don't address, because observability and governance are different layers.

What producing GDPR compliance documentation for AI agents actually requires

There are five things an AI agent system needs in place to answer the EDPB's question.

A per-session record of what data was accessed. Not just tool call names — a record that includes what data categories entered the context window, from which systems, in response to what user inputs or intermediate reasoning steps. This requires instrumentation at the tool call layer, not just the LLM layer.

Evidence of data handling policy enforcement. Before a tool call retrieves personal data, a data handling policy should evaluate whether that retrieval is permitted given the session context: the data classification, the user's authorization level, the legal basis for processing. The enforcement record proves the policy ran, not just that the tool ran.

Output filtering records. Before any agent output leaves the system — to the user, to an external API, to another tool — a content filter should evaluate whether the output contains personal data that shouldn't be transmitted in this context. The enforcement record documents what was checked and what was allowed.

Retention and deletion controls. If agent session data is retained for debugging or audit purposes, retention periods must apply and be documented. This includes context window data and tool call results, not just final outputs.

A linkable audit trail. The session-level audit records need to be queryable by individual, by session, and by data category — so that if a data subject makes a GDPR access request asking what an agent did with their data, you can produce a specific answer rather than a log dump.
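The last requirement, a linkable audit trail, can be sketched as a session-level store indexed by data subject. This is an illustrative sketch under assumptions (the field names, IDs, and index structure are invented for the example), showing how per-session enforcement records become a specific answer to an access request rather than a log dump.

```python
from collections import defaultdict

class SessionAuditTrail:
    """Per-session records indexed so a GDPR access request can be
    answered with a specific list. Illustrative sketch; field names
    are assumptions, not a real schema.
    """

    def __init__(self):
        self.records = []
        self._by_subject = defaultdict(list)

    def record(self, session_id, subject_id, data_category, tool, decision):
        entry = {
            "session_id": session_id,
            "subject_id": subject_id,      # the data subject the record concerns
            "data_category": data_category,
            "tool": tool,
            "decision": decision,          # the enforcement outcome, not just the event
        }
        self.records.append(entry)
        self._by_subject[subject_id].append(entry)

    def access_request(self, subject_id):
        """Every session, category, and enforcement decision touching this subject."""
        return [
            {"session": e["session_id"], "category": e["data_category"],
             "tool": e["tool"], "decision": e["decision"]}
            for e in self._by_subject[subject_id]
        ]

trail = SessionAuditTrail()
trail.record("s-001", "subj-42", "contact_details", "crm.lookup", "allow")
trail.record("s-002", "subj-42", "ticket_history", "jira.read", "redact")
trail.record("s-002", "subj-77", "contact_details", "crm.lookup", "allow")
# trail.access_request("subj-42") returns exactly the two entries for that subject.
```

Indexing by subject at write time is the design choice that matters: if the trail can only be scanned by timestamp, an access request degenerates back into the log dump the requirement rules out.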

How Waxell handles this

Waxell's execution tracing instruments AI agents at the tool call layer, not just the LLM call, capturing what data entered the context window from each tool invocation alongside the full execution graph. On top of that observability layer, data handling policies are evaluated before each tool call and output: Waxell checks access scope against the session context and data classification; PII filtering runs on outbound content before it reaches external systems; cost and quality gates apply in the same enforcement pass. Enforcement decisions embed directly in the execution record, producing the per-session audit documentation the EDPB's transparency requirements demand. Waxell's compliance assurance layer makes those records queryable and exportable for audit purposes. That's what separates a governance-instrumented agent from a logged agent: the enforcement record proves the processing was controlled, not just that it happened.

This is what NIST's AI Risk Management Framework points to when it distinguishes governance structures (the policies and accountability frameworks) from the technical controls that make those policies operationally real — the enforcement layer that intercepts behavior, not just the documentation layer that describes it.

If your agents are running in the EU, or processing personal data of EU residents, the EDPB's 2026 action is your starting gun. The first question any DPA will ask is whether you can produce session-level records of what your agents did. Get early access to Waxell to instrument your agents and start building the enforcement record that answers it.

Frequently Asked Questions

What is the EDPB's 2026 coordinated enforcement action?
The European Data Protection Board's 2026 Coordinated Enforcement Framework (CEF) action, launched March 19, 2026, focuses on compliance with GDPR transparency and information obligations under Articles 12, 13, and 14. Twenty-five national Data Protection Authorities across Europe are participating, contacting organizations across sectors to assess whether they can document and communicate how they process personal data — including data processed by AI systems. The EDPB will publish aggregated findings from this action and use them to inform targeted follow-up enforcement.

Does GDPR apply to AI agents?
Yes. GDPR applies whenever personal data is processed, regardless of the method. An AI agent that retrieves records containing names, email addresses, financial data, health information, or any other category of personal data is performing processing under GDPR. The legal basis for that processing must be documented; data subjects must be informed under Articles 13 and 14; and if the agent makes decisions that significantly affect individuals, automated decision-making rules under Article 22 may apply. GDPR doesn't distinguish between agent-mediated and human-mediated processing — it governs the processing, not the mechanism.

What transparency obligations does GDPR impose specifically on AI agent deployments?
Under Articles 12–14, you must be able to inform individuals about the categories of personal data processed, the purposes and legal basis for processing, whether the data is shared with third parties and on what basis, the retention period, and the logic of any automated decisions affecting them. For AI agents, this means you need a session-level record of what data categories the agent actually processed in each session — not just a static privacy notice describing what it might process. If the agent's data footprint is dynamic and unrecorded, you cannot produce an accurate disclosure.

What is the difference between agent observability logs and GDPR compliance documentation?
Observability logs record what happened: which tools were called, what tokens were consumed, what outputs were generated. They're valuable for debugging and operational visibility. GDPR compliance documentation records what was authorized: which data handling policies were evaluated before each access, what the policy permitted, what content filtering occurred before outputs were transmitted. The compliance record proves processing was controlled. The observability log only proves that processing occurred. Under GDPR, controlled processing — not just logged processing — is what satisfies transparency obligations.

What does EU AI Act compliance require for AI agents?
The EU AI Act, fully applicable from August 2, 2026, requires that high-risk AI systems include documentation of capabilities and limitations, have mechanisms for human oversight, and maintain logging for audit purposes. For public sector deployers and private entities providing public services, Article 27 also requires a Fundamental Rights Impact Assessment (FRIA) that maps closely to the GDPR's Data Protection Impact Assessment (DPIA) — and should be completed as a unified process with it, not a separate parallel exercise. For agentic systems specifically, the Act's traceability requirements mean you need records of what each agent in operation can do, what data it has access to, and what decisions it makes autonomously. Maximum fines reach €35 million or 7% of global annual turnover.


Waxell

Waxell provides observability and governance for AI agents in production. Bring your own framework.

© 2026 Waxell. All rights reserved.

Patent Pending.
