From humans to agents: Why zero trust matters more than ever
Oct 14, 2025
Author

Abhijit Guha
Client Partner AI Client Services

Lakshmi Krishna
Principal Data Scientist AI Client Services

Gaurav Vijayvergiya
Lead Data Scientist AI Client Services
The new security paradigm
Traditional Zero Trust is founded on the principle of “never trust, always verify,” meaning no entity, inside or outside the network, is inherently trusted. Every access request requires ongoing authentication and authorization. This approach has become the foundation of enterprise security, especially in a cloud-first, distributed workforce environment.
However, with the rise of autonomous agents and Agentic AI, these assumptions need to be revisited. Agents are no longer static applications; they are adaptive, continuously learning, and capable of independent actions. They can call APIs, access databases, make decisions, engage with other agents, and collaborate across organizations. This increasing dynamism calls for a new, evolved form of Zero Trust, one that is not only procedural but also cognitive.
Emerging attack surfaces
Agentic AI broadens the threat landscape across five key dimensions: Influence, Cognition, Execution, Collaboration, and Exfiltration. These correspond to inputs, reasoning, actions, networks, and data.

The diagram below shows how these layers generate new attack surfaces, emphasizing the need for a Cognitive Zero Trust approach.

Why does security matter more now?
From securing data to securing decisions
Generative AI has advanced from simple chat interfaces to becoming agentic systems. These autonomous, goal-oriented entities can plan, reason, and carry out tasks across integrated enterprise systems. No longer passive, they actively act, adapt, and work together with humans and other agents.
The prevention strategies
The Agent Security Pyramid describes the layered safeguards essential for ensuring safe autonomy. At the foundation, input security defends data and context from manipulation, while model security strengthens reasoning engines against adversarial threats.
Orchestration security guarantees safe coordination of multi-agent workflows, while action security manages the tools and APIs agents are allowed to use. At the highest level is cognitive oversight, incorporating explainability and human-in-the-loop control. The slide below demonstrates how these layers form a comprehensive defense for agentic systems.
The agent security pyramid: A framework for safe autonomy

Our capability
In the age of Agentic AI, red teaming has become essential for establishing trust, not just optional. Autonomous agents now think, reason, and act in ways that introduce new vulnerabilities across prompts, tools, orchestration, and collaboration between agents.
Current static stress-testing methods for agentic applications are limited in scalability and speed, and can be costly because they rely on manual testing. Threat analysis must thoroughly examine attack vectors and their potential interactions, particularly considering the numerous behavioral possibilities of agentic systems.
A strong red teaming approach, as shared below, must therefore go beyond static penetration testing and continuously simulate adversarial conditions using curated attack libraries, adaptive simulation agents, and execution engines that stress-test every layer of the system. At Fractal, we are advancing proactive assurance strategies to strengthen agentic AI systems. Our approach is designed to uncover potential risks early, provide actionable insights, and reinforce trust before vulnerabilities can be exploited. While automation plays a key role, we also recognize the importance of human judgment in complex or novel contexts. As illustrated below, this balanced strategy enables agentic systems to evolve with resilience, transparency, and trusted accountability.
Agentic Red Teaming for cognitive systems: Our strategy
Trust is earned by simulating what could go wrong and breaking it first, so attackers can’t

Recognition and achievements