Blog / Threat Research
Threat ResearchDraft — Pending Review

Your SAST Passed. Your Agent Is Still Dangerous.

Key Takeaways

  • SAST tools were not built to assess autonomous decision-making — they check deterministic code paths
  • Agent tool permissions, fund access, and prompt injection surface are first-class security risks that standard scanners miss
  • Irreversible autonomous actions require a different security frame than patchable deterministic applications
  • The scan your pipeline is missing checks what agents can do, not just what the code contains

Traditional AppSec tools miss the agent security surface. Here's the gap and how to think about closing it before you ship.

VORO Research · 5 min read · April 10, 2026

This post explains why SAST, DAST, and dependency scanners are not built for autonomous agent systems — and what the threat model actually looks like for agents with tool access, prompt injection exposure, and irreversible action authority.

Your team just shipped an internal AI agent. It calls three external APIs, pulls context from your customer database, and because the finance team needed it, it has access to your payment processing system.

You ran your SAST scanner before release. It passed clean.

Nobody checked the agent's tool permissions. Nobody looked at the prompt injection surface. Nobody asked whether the fund access had any guard rails. You shipped anyway, because the scanner said you were done.

This is the gap that traditional application security was not built to close.

What AppSec Was Designed For

Application security tooling has evolved over decades around a specific mental model: code executes instructions in a deterministic sequence, and security tools find the places where those instructions can be exploited.

SAST scanners look for dangerous function calls, unsafe data flows, and insecure deserialization. Dependency checkers flag libraries with known CVEs. DAST tools probe a running application for injection points and authentication flaws. These are the right tools for the mental model they were designed for.

That mental model does not include autonomous decision-making.

What Agents Introduce

An AI agent is not executing a fixed sequence of instructions. It is reasoning about what to do, choosing between actions, calling external tools, and sometimes taking steps that cannot be undone. The security surface of that system is different in kind, not just in degree, from a traditional web application.

Three things change when you introduce an agent:

Tool permissions become a first-class risk. A traditional application has a defined set of actions it can take. An agent's tool set can expand. If a tool grants access to external APIs, payment systems, or file systems, the question of what that tool is actually authorized to do, and under what conditions, is a security question that standard SAST scanners have no concept for.

Prompt injection becomes an attack vector. An agent that processes external input before deciding what to do can be manipulated through that input. An attacker who controls what your agent reads can influence what your agent does. Traditional injection detection looks for SQL metacharacters or shell escapes. It is not looking for instruction hijacking inside a reasoning chain.

Autonomous action means some decisions are irreversible. A deterministic application can be patched. An agent that has already transferred funds, sent emails, or modified records has acted. The security frame for that kind of system needs to ask: which actions does this agent have the authority to take, and should any of them require human approval before execution?

The Scan That Doesn't See It

Standard SAST tools are not wrong. They are looking at the wrong layer.

They can tell you that your Python code does not have a known-insecure function call. They cannot tell you that your agent will route to an unconstrained payment tool when prompted with a specific input. They were not designed to assess whether an agent's autonomy is appropriately scoped, whether its fund access is guarded, or whether the external systems it calls introduce risk that should have been flagged at code review time.

The gap is not in the scanner's execution quality. It is in the threat model the scanner was built against. Traditional AppSec threat models were not written with agent autonomy in mind. They do not include fund safety as a risk dimension. They do not map to frameworks like OWASP Agentic or MITRE ATLAS that were designed specifically to address how autonomous systems can be exploited.

What Needs to Change

Closing this gap requires treating agentic behavior as a first-class security concern at the code level. That means:

  • Reviewing tool permissions as a security surface, not just an implementation detail
  • Scanning for prompt injection patterns specific to how agents process and act on input
  • Treating fund access and autonomous action as dimensions that require explicit analysis
  • Mapping findings against frameworks built for this threat class, including OWASP Agentic, OWASP LLM Top 10, and MITRE ATLAS

This is not a replacement for traditional AppSec. It is an addition to it. The SAST scan still matters. The dependency audit still matters. But neither one covers the layer where an autonomous agent becomes a risk by doing exactly what it was designed to do, under conditions nobody anticipated.

Where VORO Fits

VORO is a security intelligence platform built for this analysis. It scans code across 16 languages, including Python, TypeScript, Go, and Rust, the primary languages for agent development, using 647+ patterns and 9 security taxonomies. Those taxonomies include OWASP LLM Top 10, OWASP Agentic, and MITRE ATLAS, alongside the smart contract frameworks that cover fund-safety patterns in on-chain code.

Risk output is organized across six dimensions: fund_safety, access_control, external_risk, code_integrity, dependency_health, and agent_autonomy. Those dimensions exist because the threat model for an agent requires them. They surface findings that a general-purpose scanner will not flag.

VORO runs locally. No code leaves your environment. There is no cloud upload required and no account needed to get started.

If your team is shipping agents in Python, in TypeScript, on-chain, or anywhere in between, the scan your pipeline is missing is one that understands what those agents can do.

pip install agent-builder

VORO supports 16 languages with deep pattern coverage for Python, JavaScript/TypeScript, Go, Rust, and Solidity. 647+ active patterns. 14 external scanner integrations. Local-first, so no code leaves your environment.