AutoPIL enforces access policy at retrieval time — before sensitive data enters your agent's context window. Every decision is logged, auditable, and observable.
Autonomous agents in financial services, healthcare, and legal retrieve data at runtime — from vector stores, databases, and APIs. That moment is unprotected by your existing governance stack.
Wrap your retrieval function. Every call is evaluated against your policies, written to an immutable audit log, and emitted as an OTEL span — without changing how your agent is structured.
```python
from autopil import ContextGuard, SensitivityLevel

guard = ContextGuard(
    policy_path="policies/financial_services.yaml",
    audit_db="autopil.db",
)

@guard.protect(
    agent_role="loan_underwriter",
    user_id="user_001",
    source_id="credit_scores",
    sensitivity_level=SensitivityLevel.HIGH,
    session_id=session_id,
)
def get_credit_score(customer_id: str) -> dict:
    return credit_db.query(customer_id)

# ALLOW — policy matched, audit event logged, OTEL span emitted
score = get_credit_score("cust_abc")

# DENY — raises PermissionError, denial logged, alert rules checked
# source_id="executive_communications" is on the denylist for this role
```
```yaml
policies:
  - name: loan_underwriter_policy
    agent_role: loan_underwriter
    allowed_sources:
      - credit_scores
      - loan_history
      - property_valuations
    denied_sources:
      - executive_communications
      - internal_risk_models
    max_sensitivity: high
```
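The decision semantics implied by a policy like this can be sketched in a few lines. This is an illustrative sketch only, not AutoPIL's evaluation engine: the `evaluate` helper, the hard-coded `POLICY` dict, and the sensitivity ordering are assumptions made for the example, derived from the policy fields above (explicit denylist wins, unknown sources are denied by default, and sensitivity is capped by `max_sensitivity`).

```python
# Illustrative sketch only — mirrors the semantics of the policy above,
# not AutoPIL's actual evaluation engine.
SENSITIVITY_ORDER = {"low": 0, "medium": 1, "high": 2}

POLICY = {
    "name": "loan_underwriter_policy",
    "agent_role": "loan_underwriter",
    "allowed_sources": {"credit_scores", "loan_history", "property_valuations"},
    "denied_sources": {"executive_communications", "internal_risk_models"},
    "max_sensitivity": "high",
}

def evaluate(policy: dict, agent_role: str, source_id: str, sensitivity: str) -> str:
    """Return "ALLOW" or "DENY" for a single retrieval attempt."""
    if agent_role != policy["agent_role"]:
        return "DENY"  # policy does not apply to this role
    if source_id in policy["denied_sources"]:
        return "DENY"  # explicit denylist wins
    if source_id not in policy["allowed_sources"]:
        return "DENY"  # default-deny for unlisted sources
    if SENSITIVITY_ORDER[sensitivity] > SENSITIVITY_ORDER[policy["max_sensitivity"]]:
        return "DENY"  # above the role's sensitivity ceiling
    return "ALLOW"
```

Under this sketch, `evaluate(POLICY, "loan_underwriter", "credit_scores", "high")` returns `"ALLOW"`, while any request for `executive_communications` is denied regardless of sensitivity.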
Built specifically for financial services, healthcare, and legal — where a data access violation is a compliance incident, not just a bug.
"Most enterprise data governance frameworks were designed for humans querying data. Autonomous agents operate at a fundamentally different speed and scale — and the governance tools have to catch up."
```bash
# What did the loan_underwriter agent access today?
curl "https://api.autopil.ai/v1/audit/events\
?agent_role=loan_underwriter\
&decision=ALLOW\
&limit=50" \
  -H "X-API-Key: apl_yourkey"

# Response
[
  {
    "event_id": "evt_abc123",
    "agent_role": "loan_underwriter",
    "source_id": "credit_scores",
    "decision": "ALLOW",
    "policy_name": "loan_underwriter_policy",
    "timestamp": "2026-03-26T14:22:01Z"
  },
  ...
]
```
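Once events with this shape are in hand, rolling them up per agent and source is straightforward. A minimal sketch, assuming events shaped like the response above; the `summarize` helper is hypothetical and not part of the AutoPIL SDK:

```python
# Illustrative: roll up audit events with the shape shown in the response above.
from collections import Counter

def summarize(events: list[dict]) -> dict:
    """Count decisions per (agent_role, source_id, decision) triple."""
    return dict(Counter(
        (e["agent_role"], e["source_id"], e["decision"]) for e in events
    ))

events = [
    {"agent_role": "loan_underwriter", "source_id": "credit_scores", "decision": "ALLOW"},
    {"agent_role": "loan_underwriter", "source_id": "credit_scores", "decision": "ALLOW"},
    {"agent_role": "loan_underwriter", "source_id": "internal_risk_models", "decision": "DENY"},
]
report = summarize(events)
# {('loan_underwriter', 'credit_scores', 'ALLOW'): 2,
#  ('loan_underwriter', 'internal_risk_models', 'DENY'): 1}
```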
```bash
# Alert if deny rate exceeds 20% in any 10-minute window
curl -X POST "https://api.autopil.ai/v1/alerts/rules" \
  -H "X-API-Key: apl_yourkey" \
  -d '{
    "rule_type": "high_deny_rate",
    "threshold": 0.20,
    "window_minutes": 10,
    "notify_url": "https://hooks.slack.com/..."
  }'
```
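The rule's semantics can be sketched as a trailing-window check: fire when the fraction of DENY decisions among all decisions in the last `window_minutes` exceeds `threshold`. This is an assumption-laden illustration of the rule's meaning, not AutoPIL's alerting engine:

```python
# Illustrative sketch of the high_deny_rate rule's semantics,
# not AutoPIL's actual alerting engine.
from datetime import datetime, timedelta

def deny_rate_breached(
    events: list[tuple[datetime, str]],  # (timestamp, "ALLOW" | "DENY")
    now: datetime,
    threshold: float = 0.20,
    window_minutes: int = 10,
) -> bool:
    cutoff = now - timedelta(minutes=window_minutes)
    window = [decision for ts, decision in events if ts >= cutoff]
    if not window:
        return False  # no traffic in the window, no alert
    denies = sum(1 for d in window if d == "DENY")
    return denies / len(window) > threshold
```

A webhook notification to `notify_url` would be triggered whenever this check flips to true.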
Embed the Python SDK directly in your agent. Use the TypeScript SDK to call the REST API from Node-based platforms, internal dashboards, or enterprise tooling.
```python
# pip install autopil
from autopil import ContextGuard

guard = ContextGuard(
    policy_path="policies.yaml",
    audit_db="autopil.db",
)

@guard.protect(
    agent_role="analyst",
    user_id="u1",
    source_id="reports",
)
def retrieve(query):
    return vectorstore.search(query)
```
```typescript
// npm install @autopil/sdk
import { AutoPilClient } from "@autopil/sdk"

const client = new AutoPilClient({
  baseUrl: "https://api.autopil.ai",
  apiKey: process.env.AUTOPIL_API_KEY,
})

const result = await client.context.evaluate({
  agent_role: "analyst",
  source_id: "reports",
  user_id: "u1",
  query: "Q3 earnings trend",
})

if (result.decision === "DENY") {
  throw new Error(result.reason)
}
```
AutoPIL grew out of a gap observed firsthand while leading enterprise AI and data governance programs at Citi, where autonomous agents were being deployed into regulated environments without any enforcement layer at retrieval time. The problem isn't hypothetical. The compliance and audit requirements are real, and existing tools weren't designed for how agents actually work.
Govern the context. Trust the agent.
Whether you're evaluating AutoPIL for a production deployment or exploring what retrieval-layer governance looks like for your organization — reach out.