Article

Who controls the agent? Governing how AI thinks, acts, and decides


Jan 2026

Authors

Meenu Sharma, Principal Data Scientist, Fractal

Munish Kaushik, Principal Data Scientist, Fractal

In traditional software systems, access control was binary and user-centric. A human authenticated into an application, and the system checked whether that user could view a page or execute a function. 

In the Agentic Era, this model breaks down. 

Autonomous AI agents do not simply respond to prompts. They reason over enterprise data, retrieve knowledge dynamically, select tools, and execute multi-step workflows. When an agent is instructed to “prepare a quarterly financial report and email it to the board,” it is no longer executing a script; it is operating as a delegated decision-maker acting on behalf of a human. 

Governing such systems requires more than classic Role-Based Access Control (RBAC). Enterprises need Policy-Based Access Control (PBAC) to constrain autonomy at runtime, and AgentOps to observe, audit, and understand agent behavior in production. 

Effective agent governance must control three distinct stages of execution: 

Thinking → Acting → Deciding 

THINK: Governing Agent Reasoning via RAG (RBAC + PBAC) 

For enterprise agents, “thinking” primarily occurs through Retrieval-Augmented Generation (RAG). The agent retrieves documents from enterprise knowledge stores, injects them into the prompt, and reasons strictly over that retrieved context. 

What an agent retrieves defines what it can think about. 

Why access control at retrieval matters 

If an agent retrieves sensitive assets, such as payroll data, legal contracts, or board minutes, it may reason accurately while still violating organizational policy. Preventing this requires access control before retrieval, not after generation. 

Applying RBAC and PBAC to RAG 

  • RBAC restricts access by role (e.g., Finance, HR, Legal) 

  • PBAC adds contextual constraints such as clearance level, geography, project scope, or time window 

Documents that fail policy checks must never enter the LLM context window. 

Conceptual implementation: 

docs = vector_db.search(
    query,
    filters={
        "department": user.department,               # RBAC: business-function role
        "classification": {"$lte": user.clearance},  # PBAC: clearance level
        "region": user.region,                       # PBAC: geography
    },
)

ACT: Governing Tool Execution (RBAC)

Principle of Least Privilege for Agents

Autonomous agents must never have unrestricted tool access. Instead, tools should be organized into role-specific toolkits aligned with business function: 

  • Support agents create or update tickets 

  • Sales agents modify CRM records 

  • DevOps agents restart services or deploy infrastructure 

RBAC is enforced at execution time through tool wrappers, API gateways, or service mesh interceptors, ensuring unauthorized calls are blocked regardless of what the LLM proposes. 

Conceptual enforcement: 

def secure_tool_call(ctx, tool, args):
    # Enforce RBAC at execution time: deny the call if the agent's role
    # is not authorized for this tool, regardless of what the LLM proposed.
    policy.check(ctx.agent_role, tool.name)
    return tool.execute(args)


Here, RBAC governs what the agent is capable of doing, independent of its reasoning or intent.
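As a minimal, self-contained sketch of the role-to-toolkit idea (the ROLE_TOOLKITS mapping and ToolPolicy class are illustrative names, not part of any specific framework; production systems would back this with a policy engine or API gateway rather than an in-memory dict):

# Hypothetical RBAC policy: each agent role maps to the toolkit it may invoke.
ROLE_TOOLKITS = {
    "support_agent": {"ticket.create", "ticket.update"},
    "sales_agent":   {"crm.update_record"},
    "devops_agent":  {"service.restart", "infra.deploy"},
}

class ToolAccessDenied(Exception):
    pass

class ToolPolicy:
    def check(self, agent_role: str, tool_name: str) -> None:
        # Called by secure_tool_call before execution; raises if unauthorized.
        if tool_name not in ROLE_TOOLKITS.get(agent_role, set()):
            raise ToolAccessDenied(f"{agent_role} may not call {tool_name}")

policy = ToolPolicy()
policy.check("support_agent", "ticket.create")    # allowed
# policy.check("support_agent", "infra.deploy")   # would raise ToolAccessDenied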
 

DECIDE: Governing Autonomous Decisions and Authority (PBAC) 

Decision-making is where RBAC alone becomes insufficient. 

When an agent approves a refund, releases a payment, or triggers a deployment, who is accountable? 

Threshold-Based Autonomy with PBAC 

PBAC enables risk-aware autonomy, allowing agents to act independently only within defined policy thresholds: 

  • Low-risk actions execute automatically 

  • High-risk actions require Human-in-the-Loop (HITL) approval 

Example: 

  • Refund ≤ $50 → agent executes 

  • Refund > $50 → agent prepares justification and escalates 

PBAC allows autonomy to scale with risk and context, not just role. 
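A minimal, self-contained sketch of this threshold logic (the Decision type and evaluate_refund_policy function are illustrative, not part of any specific policy engine; real deployments would externalize the threshold into policy configuration):

from dataclasses import dataclass

AUTO_APPROVE_LIMIT = 50.00  # policy-defined risk threshold, not chosen by the agent

@dataclass
class Decision:
    allowed: bool         # may the agent act autonomously?
    requires_hitl: bool   # must a human approve first?
    reason: str

def evaluate_refund_policy(amount: float) -> Decision:
    # Mirrors the example above: <= $50 executes, > $50 escalates with justification.
    if amount <= AUTO_APPROVE_LIMIT:
        return Decision(True, False, f"amount {amount:.2f} within autonomous limit")
    return Decision(False, True, f"amount {amount:.2f} exceeds limit; HITL approval required")

print(evaluate_refund_policy(25.00))    # agent executes automatically
print(evaluate_refund_policy(120.00))   # agent prepares justification and escalates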

Where RBAC, PBAC, and AgentOps Fit in an Agentic System 

| Agent Stage | What Happens | Control Layer | Typical Implementation |
|-------------|--------------|---------------|-------------------------|
| Think (RAG) | Retrieve data for reasoning | RBAC + PBAC | Metadata filters, row-level security |
| Act (Tools) | Invoke APIs or workflows | RBAC | Tool wrappers, API gateways |
| Decide | Commit outcomes | PBAC | Policy engines, approval workflows |
| Audit | Record execution | RBAC + PBAC | Distributed tracing, delegated identity logs |


Identity Delegation and Auditability 

Agents must never operate as anonymous superusers. Instead, they should use short-lived delegated identities that clearly express accountability. 

Audit logs must explicitly record: 

“Agent_X acting on behalf of User_Y” 

This prevents privilege escalation and enables traceability during audits, incident response, and regulatory reviews. 
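A minimal sketch of such an audit record (field names are illustrative; in practice this would be emitted through structured logging or a tracing backend rather than built by hand):

import json, time, uuid

def delegated_audit_record(agent_id: str, user_id: str, action: str, resource: str) -> str:
    # Captures "Agent_X acting on behalf of User_Y" explicitly in every entry.
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "actor": agent_id,          # the agent that executed the action
        "on_behalf_of": user_id,    # the human whose authority was delegated
        "action": action,
        "resource": resource,
    }
    return json.dumps(record)

print(delegated_audit_record("Agent_X", "User_Y", "report.email", "board-distribution-list"))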

Why AgentOps Is Essential for Production AI 

Even with strong RBAC and PBAC, enterprises still need operational visibility: 

  • What actions did the agent actually take? 

  • Why did it choose one path over another? 

  • Where did it retry, fail, or escalate? 

  • What was the cost per outcome? 

AgentOps focuses on observed behavior, not just final outputs.

  • RBAC and PBAC define what is allowed 

  • AgentOps records what actually happened 

Together, they form the agent control plane, connecting policy, execution, and observability. 
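A minimal sketch of the step-level traces AgentOps tooling collects (the agent_span helper and its fields are illustrative; production systems typically use OpenTelemetry or a dedicated agent-observability platform):

import time
from contextlib import contextmanager

TRACE = []  # in-memory stand-in for a tracing backend

@contextmanager
def agent_span(step: str, **attrs):
    # Records what the agent did, how long it took, and whether it failed.
    start = time.time()
    status = "ok"
    try:
        yield
    except Exception:
        status = "error"
        raise
    finally:
        TRACE.append({"step": step, "status": status,
                      "duration_s": round(time.time() - start, 3), **attrs})

with agent_span("retrieve", query="Q3 revenue"):
    pass  # RAG retrieval would run here
with agent_span("tool_call", tool="email.send", on_behalf_of="User_Y"):
    pass  # tool execution would run here

print(TRACE)  # observed behavior: each step, its outcome, and its duration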

Conclusion 

  • RBAC defines who an agent is allowed to be 

  • PBAC defines when and how far it may act 

  • AgentOps explains what it actually did, and why 

Autonomous agents are not made safe by better prompts alone. 
They are made safe by explicit boundaries, enforced policies, delegated identity, and observable behavior.

This is the foundation of enterprise-grade, trustworthy agentic systems. 

Explore agent governance with us

