Something changed this week and most of the people it will affect have not noticed yet.

Google announced that autonomous AI agents can now detect threats, hunt attackers, and execute security responses without waiting for a human to approve the action. The Triage and Investigation agent processed five million alerts last year. A thirty-minute manual analysis now takes sixty seconds. Three more agents are coming for threat hunting, detection engineering, and contextual enrichment.

The direction is not subtle. Human-in-the-loop defense is becoming human-aware defense. The agent acts. The human learns about it afterward.

This is the right response to a real problem. Mandiant’s M-Trends 2026 report showed that threat actors now hand off access from an initial breach to a secondary attacker in 22 seconds. Three years ago that took eight hours. At 22 seconds, waiting for a human to review an alert and approve a response is not a philosophy. It is a losing position.

So the agents make sense. That part is not the problem.

The problem starts the moment the agent acts.

Picture the scenario. An autonomous response agent on your network detects anomalous behavior, evaluates context, and executes a block. The threat is neutralized. Logs are generated. The agent moves to the next alert.

Three weeks later a regulator submits a request. They want the decision trail for that block. Who authorized it. What logic drove it. What information the system had at the time. What alternatives were considered and why they were not selected. Whether the action was proportionate to the threat. Whether it complied with relevant data handling obligations.

The agent cannot answer those questions. It executed a function. It does not carry accountability for the outcome.

Your team has the logs. But logs are not answers. Logs are raw material. Someone has to convert that raw material into an accountable explanation that holds up under scrutiny from a party that does not have to accept your interpretation of what happened. That someone is a human being inside your organization. And right now, in most organizations deploying agentic defense, that human being has not been identified, empowered, or given the tools to do that job.

That is the gap nobody announced at Google Cloud Next.

The natural response to this problem, and you will hear it more as agentic defense scales, is to use AI to audit AI. Run an agent over the decision logs. Generate an explanation. Automate the accountability record the way you automated the response.

It sounds elegant. It does not work. Not because the technology cannot produce a document. Because a document is not accountability.

Accountability requires someone who can be questioned, who can be held to a standard, and who can be sanctioned if the standard was not met. A machine-generated explanation of a machine-made decision satisfies none of those requirements. A regulator who understands what they are looking at will see the generated explanation for what it is. A record of what happened, produced by the same system whose behavior is under review, with no independent human judgment applied to it.

More fundamentally, if the decision happened too fast for human review, it also happened too fast for human accountability. Those are not two different problems. They are the same problem arriving at different times. The agent closes the speed gap. The accountability gap widens in its wake.

No framework published today answers this. Not NIST. Not ISO 42001. Not the EU AI Act. Not anything announced this week. They were not written for a world where the decision was made in milliseconds and the accountability question arrives weeks later from a party with legal authority to demand an answer.

That is not a criticism. It is a statement about where we are. The capability arrived. The governance is still in transit.

Here is the part that does not get said enough. The organizations deploying agentic defense are not making a reckless decision. They are making a rational one inside a system that has not yet built the infrastructure to support it.

The vendors built the response layer. Nobody built the accountability layer. That is not a failure of individual judgment. It is a structural condition. The tool arrived. The governance architecture that should exist alongside it has not been designed yet, let alone deployed.

What that architecture looks like is the question Vordan is here to work out. But its absence has a specific shape that is worth naming right now because understanding the shape of what is missing is the first step toward building it.

When an organization deploys an autonomous agent without accountability infrastructure, it is not just missing a policy. It is missing six specific things simultaneously, and each one compounds the others.

It is missing the record of where the decision came from. Who defined the agent’s criteria, when, under what authority, and against what standard. Not the vendor’s default settings. A deliberate organizational decision with a named owner and a documented rationale. A sketch of what such a record might capture follows these six gaps.

It is missing the input of the people closest to the risk before that decision was made. The practitioners who know what the agent will touch, what a false positive costs operationally, and what the downstream consequences of an automated block look like at three in the morning. Those people were almost certainly not in the room when the deployment scope was defined.

It is missing a trail that an outsider can follow. Logs that require an insider to interpret are not a trail. They are a liability dressed as documentation. The organizations that will fare best when the accountability question arrives are the ones that built their records for the party asking, not the party answering.

It is missing concurrent thinking. The accountability questions that feel urgent after an incident were answerable before deployment. They just were not asked. Not because the people involved were careless. Because the system they were operating inside gave them no structure for asking them at the right time.

It is missing a response architecture. When the agent does something unexpected, what happens next. Not in the vendor’s incident workflow. In the organization’s own process. With a human owner. With a timeline. With a record that demonstrates the correction actually happened.

And it is missing visibility for the practitioners working alongside these agents every day. The engineers and analysts who interact with the outputs of autonomous decisions and have no clear path to raise a concern, no confidence that the concern will reach someone with authority to act on it, and no understanding of the accountability structure they are operating inside.
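To make the shape of that missing artifact concrete, here is a minimal sketch of what a single decision record might capture, written in Python only because a data structure makes the fields explicit. Every field name below is hypothetical. It is not a schema from Google, NIST, or any vendor, and a real version would be shaped by counsel, regulators, and the practitioners named in the second and sixth gaps.

```python
# Illustrative sketch only. Field names are hypothetical; nothing here comes
# from a vendor schema or a published framework.
from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One accountable record per autonomous action, written for the party
    asking the questions rather than the party answering them."""

    # Gap one: where the decision criteria came from.
    criteria_owner: str             # named human who approved the agent's scope
    criteria_authority: str         # under what organizational authority
    criteria_rationale: str         # documented reason, not the vendor default

    # What the system had, and did, at the moment of action.
    observed_at: datetime
    inputs_summary: str             # the information the system had at the time
    action_taken: str               # what the agent actually executed
    alternatives_considered: list[str] = field(default_factory=list)
    proportionality_note: str = ""  # why the action matched the threat

    # Gap five: what happens when the action is questioned or goes wrong.
    accountable_owner: str = ""     # the human who answers for this record
    correction_log: list[str] = field(default_factory=list)

    def is_answerable(self) -> bool:
        """Rough test of whether an outsider could follow this record:
        every accountability field was filled in by a named human."""
        return all([
            self.criteria_owner,
            self.criteria_authority,
            self.criteria_rationale,
            self.proportionality_note,
            self.accountable_owner,
        ])


if __name__ == "__main__":
    record = DecisionRecord(
        criteria_owner="Head of detection engineering",
        criteria_authority="Security steering committee charter",
        criteria_rationale="Auto-containment limited to non-production segments",
        observed_at=datetime.now(timezone.utc),
        inputs_summary="Anomalous lateral movement from a service account",
        action_taken="Isolated the host and revoked the session token",
        alternatives_considered=["alert only", "rate-limit the account"],
        proportionality_note="Isolation is reversible; credential theft is not",
        accountable_owner="On-call security manager",
    )
    print("Answerable to an outsider:", record.is_answerable())
```

The point of the sketch is the check at the bottom. If any accountability field is empty, the record is raw material, not an answer.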

None of those gaps exist because organizations are negligent. They exist because the industry built the response layer first and assumed the accountability layer would follow. It has not followed. It has not even started.

The visibility layer is being built. The response layer is getting faster. The accountability structure underneath both of them is the work the industry has not started, and nothing announced this week replaces it.

At 22 seconds, the agent acts before you can stop it. The question that follows it moves at a different speed entirely. It arrives weeks later, from a party with the authority to demand an answer, in language the agent was never designed to speak.

Nobody has built the infrastructure to answer that question at scale yet. That is not a vendor failure or a practitioner failure. It is the accountability gap doing what it always does. The tool arrived before the rule. The only difference this time is that the tool is making decisions faster than any human ever has, and the rule has never been further behind.

That is the gap this publication exists to close.

Vordan publishes every Sunday. If someone in your network is the person in the room asking the accountability questions before the memo arrives, forward this to them.
