AutoPIL enforces access policy at retrieval time — before sensitive data enters your agent's context window. Every integration path runs the same policy engine, writes to the same audit log, and fires the same alert rules.
Each request is bound to a session ID and agent role. The session TTL is resolved from the policy YAML, with a global fallback. Concurrent requests never bleed context — async variants use ContextVar for safe isolation.
The policy engine evaluates role, source, sensitivity level, and session age. Sensitivity decay rules tighten the effective ceiling as the session ages, with no operator action required. The decision is binary: ALLOW or DENY, with no partial access.
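As an illustration of the inputs the engine evaluates, a role policy with a decay rule might look like the sketch below. The field names are assumptions for this example, not AutoPIL's shipped schema:

```yaml
# Hypothetical policy sketch — keys are illustrative, not the actual schema
name: loan_underwriter_policy
role: loan_underwriter
session_ttl_minutes: 60        # overrides the global fallback
allowed_sources:
  - credit_scores
max_sensitivity: high
decay:
  # after 30 minutes of session age, the effective ceiling tightens
  - after_minutes: 30
    max_sensitivity: medium
```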
Every decision, ALLOW or DENY, is written to the audit log immediately. Each event includes the role, user, source, decision, policy name, timestamp, and event ID.
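A logged event could look like the following. The JSON shape is illustrative; only the fields listed above are confirmed:

```json
{
  "event_id": "evt_abc123",
  "timestamp": "2025-06-01T14:32:10Z",
  "agent_role": "loan_underwriter",
  "user_id": "user_001",
  "source_id": "credit_scores",
  "decision": "ALLOW",
  "policy_name": "loan_underwriter_policy"
}
```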
After the audit write, alert rules run against the event. Violations trigger configurable alerts — Slack, PagerDuty, webhook, or custom handlers.
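A hedged sketch of what an alert-rule configuration might look like, assuming a YAML rules file; the keys here are hypothetical and only illustrate the match-then-notify flow described above:

```yaml
# Hypothetical alert rules — field names are assumptions for illustration
alerts:
  - name: restricted_denials
    match:
      decision: DENY
      sensitivity_level: restricted
    channels:
      - type: slack
        webhook_url: ${SLACK_WEBHOOK_URL}
      - type: pagerduty
        routing_key: ${PD_ROUTING_KEY}
```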
Every enforcement decision contributes to the PIL Score, a 0–100 governance health index computed over a rolling 30-day window from five components: Scope Integrity, Governance Coverage, Isolation Safety, Source Registration, and Trend. The score, its band, and a 30-day sparkline are visible in the dashboard and queryable via the API.
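The document does not specify how the five components are weighted. As a sketch only, a composite index like this can be computed as a weighted mean of per-component scores; the equal weights below are an assumption, not AutoPIL's actual formula:

```python
# Illustrative only: equal weights over the five named components.
# AutoPIL's real weighting and component formulas are not specified here.
def pil_score(components: dict) -> float:
    """Composite 0-100 health index from per-component 0-100 scores."""
    weights = {
        "scope_integrity": 0.2,
        "governance_coverage": 0.2,
        "isolation_safety": 0.2,
        "source_registration": 0.2,
        "trend": 0.2,
    }
    # weighted mean: sum of weight * component score
    return sum(weights[k] * components[k] for k in weights)

score = pil_score({
    "scope_integrity": 90,
    "governance_coverage": 80,
    "isolation_safety": 100,
    "source_registration": 70,
    "trend": 60,
})
```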
```python
from autopil import ContextGuard, SensitivityLevel

# session_ttl_minutes is the global fallback;
# per-role TTL in policy YAML takes precedence
guard = ContextGuard(
    policy_path="policies/",
    audit_db="autopil.db",
    session_ttl_minutes=480,
)

@guard.protect(
    agent_role="loan_underwriter",
    user_id="user_001",
    source_id="credit_scores",
    sensitivity_level=SensitivityLevel.HIGH,
    session_id=session_id,
)
def get_credit_score(customer_id: str) -> dict:
    return credit_db.query(customer_id)

# ALLOW — audit event logged, session context hash updated
score = get_credit_score("cust_abc")
```
```python
# For async agents — concurrent-safe via ContextVar
@guard.protect_async(
    agent_role="analyst",
    user_id="user_002",
    source_id="reports",
    sensitivity_level=SensitivityLevel.MEDIUM,
    session_id=session_id,
)
async def fetch_report(query: str) -> list:
    return await vector_db.asearch(query)

results = await fetch_report("Q1 revenue")
```
Add the following to your agent's system prompt:

```
Before accessing any data source, call evaluate_context:
  agent_role: loan_underwriter
  user_id: <current user>
  source_id: <data source you want>
  sensitivity_level: high
  session_id: <conversation id>
Only proceed if the decision is ALLOW.
```
Example response:

```
✅ ALLOW — loan_underwriter may access 'credit_scores'.
Policy: loan_underwriter_policy
Event ID: evt_abc123
```
```shell
curl -X POST http://localhost:8000/v1/context/evaluate \
  -H "X-API-Key: apl_yourkey" \
  -H "Content-Type: application/json" \
  -d '{
    "agent_role": "loan_underwriter",
    "user_id": "user_001",
    "source_id": "credit_scores",
    "sensitivity_level": "high",
    "session_id": "sess_abc123"
  }'
```
```json
{"decision": "ALLOW", "policy_name": "loan_underwriter_policy", "event_id": "evt_abc123"}
```
```python
from fastapi import FastAPI
from autopil import ContextGuard, SensitivityLevel
from autopil.middleware import AutoPILMiddleware, RouteRule

guard = ContextGuard(policy_path="policies/")
app = FastAPI()

app.add_middleware(
    AutoPILMiddleware,
    guard=guard,
    rules=[
        RouteRule(
            path_pattern=r"^/api/credit/.*",
            agent_role="loan_underwriter",
            user_id_header="X-User-ID",
            source_id="credit_scores",
            sensitivity_level=SensitivityLevel.HIGH,
        ),
    ],
)
```
```python
from autopil import SensitivityLevel
from autopil.langchain_guard import LangChainGuard
from langchain_core.tools import tool

guard = LangChainGuard(policy_path="policies/")

@tool
@guard.protect(
    agent_role="research_analyst",
    user_id="user_001",
    source_id="market_data",
    sensitivity_level=SensitivityLevel.MEDIUM,
    session_id=session_id,
)
def get_market_data(ticker: str) -> dict:
    return data_api.fetch(ticker)
```
```python
from autopil import SensitivityLevel
from autopil.llamaindex_guard import LlamaIndexGuard
from llama_index.core.tools import FunctionTool

guard = LlamaIndexGuard(policy_path="policies/")

@guard.protect(
    agent_role="document_analyst",
    user_id="user_001",
    source_id="legal_contracts",
    sensitivity_level=SensitivityLevel.HIGH,
    session_id=session_id,
)
def retrieve_contract(clause: str) -> str:
    # str() because query() returns a Response object, not a plain string
    return str(index.as_query_engine().query(clause))

tool = FunctionTool.from_defaults(fn=retrieve_contract)
```
```python
import google.generativeai as genai
from autopil import SensitivityLevel
from autopil.gemini_guard import GeminiGuard

guard = GeminiGuard(policy_path="policies/")

@guard.protect(
    agent_role="content_reviewer",
    user_id="user_001",
    source_id="internal_docs",
    sensitivity_level=SensitivityLevel.MEDIUM,
    session_id=session_id,
)
def fetch_document(doc_id: str) -> str:
    return docs_api.get(doc_id)
```
```python
from agents import function_tool
from autopil import SensitivityLevel
from autopil.openai_agents_guard import OpenAIAgentsGuard

guard = OpenAIAgentsGuard(policy_path="policies/")

@function_tool
@guard.protect(
    agent_role="compliance_checker",
    user_id="user_001",
    source_id="regulatory_filings",
    sensitivity_level=SensitivityLevel.RESTRICTED,
    session_id=session_id,
)
def get_filing(filing_id: str) -> dict:
    return filings_db.fetch(filing_id)
```
```python
import boto3
from autopil import SensitivityLevel
from autopil.bedrock_guard import BedrockGuard

guard = BedrockGuard(policy_path="policies/")
boto_client = boto3.client("bedrock-agent-runtime")

# Wrap the boto3 client — guard intercepts inputText as the policy query
client = guard.wrap_invoke_agent(
    boto_client,
    agent_role="compliance_agent",
    user_id="user_001",
    source_id="regulatory_data",
    sensitivity_level=SensitivityLevel.HIGH,
    session_id=session_id,
)

# Identical to boto3 invoke_agent — guard runs before the call
response = client.invoke_agent(
    agentId="ABCDEF123",
    agentAliasId="TSTALIASID",
    sessionId=session_id,
    inputText="Summarize Q1 compliance filings",
)
```
```python
# Async variant with aioboto3
import aioboto3
from autopil import SensitivityLevel
from autopil.bedrock_guard import BedrockGuard

guard = BedrockGuard(policy_path="policies/")
session = aioboto3.Session()

async with session.client("bedrock-agent-runtime") as boto_client:
    client = guard.wrap_invoke_agent_async(
        boto_client,
        agent_role="compliance_agent",
        user_id="user_001",
        source_id="regulatory_data",
        sensitivity_level=SensitivityLevel.HIGH,
        session_id=session_id,
    )
    response = await client.invoke_agent(
        agentId="ABCDEF123",
        agentAliasId="TSTALIASID",
        sessionId=session_id,
        inputText="Summarize Q1 compliance filings",
    )
```
| Channel | source_type | Use case |
|---|---|---|
| Python Decorator | sdk | Python microservices, scripts, notebooks |
| Async Decorator | sdk | Async Python agents (FastAPI, async frameworks) |
| MCP Server | mcp | Claude Desktop, any MCP-compatible agent |
| REST API | rest | Any language: Go, Java, Ruby, PHP, .NET |
| ASGI Middleware | api | FastAPI / Starlette apps — HTTP-layer enforcement |
| LangChain | langchain | LangChain agents, chains, and LCEL pipelines |
| LlamaIndex | llamaindex | LlamaIndex query engines and retrievers |
| Gemini | gemini | Google Gemini function-calling agents |
| OpenAI Agents | openai_agents | OpenAI Agents SDK function tools |
| AWS Bedrock | bedrock | Bedrock Agents via boto3 / aioboto3 |
Self-hosted. Every channel enforces the same policy and writes to the same audit log.