
AutoPIL v0.6.0: The governance layer for enterprise AI is ready

Built across 12 industries: 135 pre-built policies, a tamper-evident audit chain, and integrations for every major framework. Here's what shipped, what building it taught us, and what we're opening up ahead of the public launch in May.

Every enterprise I talk to is somewhere between "we're piloting AI agents" and "we have dozens of them running in production." The deployment curve has gone vertical. The governance curve hasn't kept up.

The problem isn't that people don't care about governance. It's that the tools available were designed for a different era — model monitoring, output evaluation, prompt review. None of them intercept the moment that actually matters: when an AI agent requests access to sensitive data and something has to decide whether to hand it over.

That's the moment AutoPIL was built for. And with version 0.6.0, I think we've built something production-ready — not demo software, not a proof of concept, but infrastructure that can govern an enterprise AI program at scale.

What building across 12 industries taught us

When we started, the core problem looked technical: write the enforcement layer, attach it to the retrieval call, done. What building it actually taught us is that the technical problem is the easy part.

The hard part is understanding what "the right policy" means for a fraud investigator in financial services versus a clinical documentation specialist in healthcare versus a procurement officer in the public sector. Policies aren't just configuration. They encode a domain expert's understanding of where data boundaries have to hold — what a billing agent should never access during an active clinical encounter, what a SAR generator should never touch during a fraud investigation, what a field safety agent should never retrieve from a trading system.

Getting that right requires real industry knowledge, not just software. We worked through the regulations, the workflows, the failure modes across every vertical we support. 135 policies across 12 industries is what that work looks like as code.

What's in the policy library

Financial Services: 25 policies across consumer banking, fraud investigation, wealth, risk & compliance, and operations
Healthcare: 17 policies across clinical operations, compliance & privacy, and revenue cycle
Also covered: Insurance · Logistics · Retail · Energy · Manufacturing · Real Estate · Pharmacy · Public Sector · Telecom · Technology

Every policy ships with agent role definitions, source allowlists, denylists, sensitivity ceilings, and session TTLs. Load, customize, and extend via REST API, with no redeployment required.
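
To make "no redeployment required" concrete, here's a minimal sketch of what load-and-customize could look like over the API. The host, endpoint paths, and policy fields below are illustrative assumptions, not the documented surface.

```python
import requests

BASE = "https://api.example-autopil.dev"   # placeholder host
HEADERS = {"Authorization": "Bearer <api-key>"}

# Fetch a pre-built policy from the library (path and shape assumed).
policy = requests.get(f"{BASE}/v1/policies/healthcare/revenue-cycle",
                      headers=HEADERS).json()

# Customize: add a denylist entry and lower the sensitivity ceiling,
# then push the change back. No redeployment required.
policy["denylist"].append("clinical_notes.raw")
policy["sensitivity_ceiling"] = "internal"
requests.put(f"{BASE}/v1/policies/{policy['id']}",
             json=policy, headers=HEADERS)
```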

The capabilities that matter for production

Here's an honest summary of what's in the platform today — not a feature list, but the capabilities that actually matter when you're running AI agents against sensitive data in a regulated environment.

Pre-retrieval enforcement
Every data request is evaluated against the agent's assigned policy before any data is returned. ALLOW or DENY. Deny by default. The decision happens at the retrieval layer, before sensitive data enters the context window — which is the only point at which enforcement is actually meaningful.
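
In sketch form, the evaluation order is: explicit denies win, then the sensitivity ceiling, then the allowlist, and anything unlisted is denied. The policy shape below is an illustrative assumption, not the shipped schema.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    allowlist: set = field(default_factory=set)
    denylist: set = field(default_factory=set)
    sensitivity_ceiling: int = 0  # 0 = public ... 3 = restricted

def evaluate(policy: Policy, source: str, sensitivity: int) -> str:
    if source in policy.denylist:                 # explicit denies win
        return "DENY"
    if sensitivity > policy.sensitivity_ceiling:  # then the ceiling
        return "DENY"
    if source in policy.allowlist:                # then the allowlist
        return "ALLOW"
    return "DENY"  # deny by default: unlisted sources never return data
```
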
Tamper-evident audit chain
Every decision — allow or deny — is written with a SHA-256 hash chained to the previous record. No delete API. The chain cannot be selectively modified. GET /v1/audit/verify walks the full chain and returns exactly which record was broken and when. This is what makes the audit trail usable for compliance, not just debugging.
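
Conceptually, the chain and the verification walk work like the sketch below. The record fields are simplified for illustration; the mechanism, hashing each record together with its predecessor's hash, is the one described above.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({**record, "prev": prev, "hash": record_hash(record, prev)})

def verify(chain: list) -> int | None:
    """Walk the chain; return the index of the first broken record, if any."""
    prev = "0" * 64
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k not in ("prev", "hash")}
        if entry["prev"] != prev or entry["hash"] != record_hash(body, prev):
            return i
        prev = entry["hash"]
    return None
```
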
Session-level governance
Sessions carry TTLs, per-policy expiry, and sensitivity decay — the max sensitivity ceiling tightens automatically as a session ages. Cross-agent isolation is enforced at the session layer: a session locked to one agent auto-denies any other agent that tries to use it. Every agent in a pipeline shares one session ID, so the audit trail is a unified record of the full pipeline execution.
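
A toy model of that aging behavior, with an assumed linear decay schedule (the actual tightening rules are not spelled out here):

```python
import time

class Session:
    def __init__(self, agent_id: str, ttl_s: int, base_ceiling: int):
        self.agent_id = agent_id
        self.created = time.time()
        self.ttl_s = ttl_s
        self.base_ceiling = base_ceiling

    def expired(self) -> bool:
        return time.time() - self.created > self.ttl_s

    def effective_ceiling(self) -> int:
        # Sensitivity decay: drop one level per elapsed quarter of the
        # TTL (an assumed schedule), so older sessions can read less.
        age_fraction = (time.time() - self.created) / self.ttl_s
        return max(0, self.base_ceiling - int(age_fraction * 4))

    def authorize(self, agent_id: str) -> bool:
        # Cross-agent isolation: a session locked to one agent
        # auto-denies any other agent that tries to use it.
        return agent_id == self.agent_id and not self.expired()
```
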
Catalog integration
Nine catalog connectors ship out of the box: Unity Catalog, Snowflake, Collibra, Purview, Alation, DataHub, Informatica, Immuta, and Apache Polaris. Classifications sync into AutoPIL and feed directly into policy evaluation. Sensitivity mapping is per-connection and fully customizable — or falls back to per-catalog-type defaults.
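
In sketch form, per-connection mapping is a set of custom rules merged on top of the catalog-type defaults. Tag names and sensitivity levels below are illustrative.

```python
# Per-catalog-type defaults (tags and levels are illustrative).
CATALOG_DEFAULTS = {
    "unity_catalog": {"pii": 3, "confidential": 2, "internal": 1},
    "snowflake": {"PII": 3, "SENSITIVE": 2},
}

# Custom tag-to-sensitivity rules for one connection.
CONNECTION_OVERRIDES = {
    "prod-lakehouse": {"phi": 3, "finance": 2},
}

def sensitivity(connection: str, catalog_type: str, tag: str) -> int:
    # Connection rules override the catalog-type defaults; unmapped
    # tags fall through to the lowest level (an assumption).
    mapping = {**CATALOG_DEFAULTS.get(catalog_type, {}),
               **CONNECTION_OVERRIDES.get(connection, {})}
    return mapping.get(tag, 0)
```
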
Alert engine
Threshold rules on denial spikes, new source access, cross-agent isolation violations, and high deny rates. Webhook and email delivery. Per-rule cooldowns. Ten pre-built alert rules seed automatically on trial provisioning. The alert history is part of the audit trail.
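
A denial-spike rule with a per-rule cooldown reduces to something like this sketch; the threshold, window, and rule shape are assumptions for illustration.

```python
import time

class DenialSpikeRule:
    """Fire when denials in a sliding window cross a threshold,
    then stay quiet for the per-rule cooldown."""

    def __init__(self, threshold: int, window_s: int, cooldown_s: int):
        self.threshold = threshold
        self.window_s = window_s
        self.cooldown_s = cooldown_s
        self.denials: list[float] = []
        self.last_fired = 0.0

    def record_denial(self) -> bool:
        now = time.time()
        self.denials = [t for t in self.denials if now - t < self.window_s]
        self.denials.append(now)
        spike = len(self.denials) >= self.threshold
        cooled = now - self.last_fired > self.cooldown_s
        if spike and cooled:
            self.last_fired = now
            return True  # caller delivers the alert via webhook or email
        return False
```
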
Framework integrations
LangChain, LlamaIndex, LangGraph, OpenAI Agents SDK, AWS Bedrock, Google Gemini, and MCP all have native wrappers. One decorator, one policy, one audit trail — regardless of which framework your agent runs on. ASGI middleware covers FastAPI and Starlette apps at the HTTP layer.
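
The "one decorator" pattern reduces to a wrapper that runs the policy check and the audit write before the retrieval call ever executes. Everything named below (governed, evaluate_policy, write_audit) is a hypothetical stand-in, not the shipped SDK:

```python
import functools

def evaluate_policy(policy_id: str, source: str) -> str:
    # Stub standing in for the real pre-retrieval policy check.
    return "ALLOW"

def write_audit(policy_id: str, source: str, decision: str) -> None:
    # Stub standing in for the chained audit write.
    print(policy_id, source, decision)

def governed(policy_id: str, source: str):
    """Wrap a tool or retrieval function so every call is checked and audited."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            decision = evaluate_policy(policy_id, source)
            write_audit(policy_id, source, decision)
            if decision != "ALLOW":
                raise PermissionError(f"DENY: {source} under {policy_id}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@governed(policy_id="hc-revenue-cycle", source="claims_db")
def fetch_claims(patient_id: str) -> list:
    return []  # the framework-specific retrieval call would go here
```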

The healthcare proof point

The clearest way to show what this looks like in practice is the Hospital Revenue Cycle pipeline we shipped in 0.6.0. Six agents: a revenue orchestrator, a clinical documentation agent, a CDI specialist, a medical coding agent, a charge reconciliation agent, and a billing compliance agent. The pipeline covers the full arc from chart documentation through coding, reconciliation, and final compliance review.

Hospital revenue cycle: the 6-agent pipeline

revenue_orchestrator · routing & coordination
clinical_documentation · chart review
cdi_specialist · documentation integrity
medical_coding · ICD-10 / CPT
charge_reconciliation · PHI access blocked
billing_compliance · final review

Steps 2 and 3 in the demo deliberately trigger PHI access blocks — the charge reconciliation agent attempting to access patient clinical records that its policy doesn't permit at that stage of the workflow. The orchestrator detects the denial, reroutes correctly, and the full sequence — every allow, every deny, every reroute — is written to a single session-level audit trail.
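
Compressed to its skeleton, that deny-and-reroute behavior looks roughly like the sketch below; the agent policies and source names are illustrative, not the shipped pipeline.

```python
POLICIES = {  # which sources each agent may read (illustrative)
    "charge_reconciliation": {"charge_master", "claims_ledger"},
    "clinical_documentation": {"patient_clinical_records"},
}
AUDIT: list[dict] = []  # every agent in the pipeline shares one session ID

def run_step(session_id: str, agent: str, source: str) -> None:
    allowed = source in POLICIES.get(agent, set())  # deny by default
    AUDIT.append({"session": session_id, "agent": agent,
                  "source": source, "decision": "ALLOW" if allowed else "DENY"})
    if not allowed:
        raise PermissionError(f"{agent} -> {source}")

def orchestrate(session_id: str) -> None:
    try:
        run_step(session_id, "charge_reconciliation", "patient_clinical_records")
    except PermissionError:
        # The orchestrator sees the deny and reroutes the clinical read
        # to the agent whose policy permits it. Both records, the deny
        # and the allow, land in the same session-level trail.
        run_step(session_id, "clinical_documentation", "patient_clinical_records")
```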

This is what HIPAA-compliant AI looks like in practice. Not a policy document. Not a checkbox. A runtime enforcement layer that proves, with a cryptographically chained record, that the agents operating against your PHI were governed at every step.

The question regulators actually ask

It's not "do you have an AI governance policy?" It's "show me what data your AI agents accessed during this investigation, and prove that every access was authorized." A session-level audit chain answers that question. A checklist doesn't.
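
In practice, answering it is two calls: pull every record for the session, then verify the chain. The listing endpoint and record fields below are assumptions; GET /v1/audit/verify is the verification walk described above.

```python
import requests

BASE = "https://api.example-autopil.dev"   # placeholder host
HEADERS = {"Authorization": "Bearer <api-key>"}

# Every access the session's agents made, allow and deny alike
# (listing endpoint and field names are assumed).
records = requests.get(f"{BASE}/v1/audit",
                       params={"session_id": "sess-123"},
                       headers=HEADERS).json()
for r in records:
    print(r["timestamp"], r["agent"], r["source"], r["decision"])

# Prove integrity: walk the full chain end to end.
report = requests.get(f"{BASE}/v1/audit/verify", headers=HEADERS).json()
```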

Getting to 0.6.0 — what the platform has become

It's worth stepping back to describe the arc. We started with a core guard, a SQLite audit log, and four industry verticals. Each release added a layer of production infrastructure:

0.1.x: Tamper-evident hash chain, catalog integration, trial provisioning, API key scopes (SOC 2 CC6.3 least privilege), agent registry, and 12 industry verticals — covering 120 pre-built policies.

0.2.0: Full session lifecycle — TTLs, per-policy expiry, sensitivity decay, cross-agent isolation enforcement, and a session context hash. 32 agent definitions across 11 verticals, with framework assignments for every major integration.

0.3.0–0.4.0: PII masking, per-tenant retention policies, data glossary, admin provisioning, the Getting Started guided onboarding experience, and the Pilot Mode automated source discovery workflow.

0.5.0: Per-connection sensitivity mapping across all nine catalog connectors — every connection can now carry custom tag-to-sensitivity rules that override catalog defaults without re-fetching from the source.

0.6.0: The Hospital Revenue Cycle pipeline, a complete 6-section guided onboarding experience, and the SaaS test suite — 110+ new tests, core coverage enforced at 90%, SaaS coverage enforced at 80%. The platform now has the production infrastructure we're comfortable putting in front of design partners.

May launch — what we're opening up

We're targeting a public launch at the end of May 2026. Between now and then, we're opening a small number of design partner slots for organizations that want to go into production with AutoPIL as part of their agentic infrastructure buildout.

The industries we're prioritizing for design partnerships are financial services, healthcare, and life sciences — the verticals where the regulatory stakes are highest and where the gap between "we have AI agents" and "we can prove those agents are compliant" is widest.

If you're a CDO, CISO, or AI platform lead in one of those verticals and you're somewhere between proof-of-concept and production, this is a good moment to talk. The window between "piloting this" and "accountable for this in a regulatory context" is closing faster than most organizations expected.

The governance layer needs to be in place before the regulator asks the question.


Anil Solleti is a Managing Director and Head of Data & AI with over 25 years in enterprise technology, data strategy, and AI governance. AutoPIL was built to solve the governance gap between deploying AI agents and being able to prove they're operating within their intended boundaries. Reach out at anil@vibrantcapital.ai if you're working on this problem.

Ready to put governance in place before the regulator asks?

Start a free trial and have your first policy evaluation running in under 10 minutes. Or reach out directly if you want to talk about a design partnership ahead of the May launch.

Start Free Trial