Every enterprise I talk to is somewhere between "we're piloting AI agents" and "we have dozens of them running in production." The deployment curve has gone vertical. The governance curve hasn't kept up.
The problem isn't that people don't care about governance. It's that the tools available were designed for a different era — model monitoring, output evaluation, prompt review. None of them intercept the moment that actually matters: when an AI agent requests access to sensitive data and something has to decide whether to hand it over.
That's the moment AutoPIL was built for. And with version 0.6.0, I think we've built something production-ready — not demo software, not a proof of concept, but infrastructure that can govern an enterprise AI program at scale.
What building across 12 industries taught us
When we started, the core problem looked technical: write the enforcement layer, attach it to the retrieval call, done. Building it taught us otherwise: the technical problem is the easy part.
The hard part is understanding what "the right policy" means for a fraud investigator in financial services versus a clinical documentation specialist in healthcare versus a procurement officer in the public sector. Policies aren't just configuration. They encode a domain expert's understanding of where data boundaries have to hold — what a billing agent should never access during an active clinical encounter, what a SAR generator should never touch during a fraud investigation, what a field safety agent should never retrieve from a trading system.
Getting that right requires real industry knowledge, not just software. We worked through the regulations, the workflows, the failure modes across every vertical we support. 135 policies across 12 industries is what that work looks like as code.
Financial Services (25 policies across consumer banking, fraud investigation, wealth, risk & compliance, operations) · Healthcare (17 policies across clinical operations, compliance & privacy, revenue cycle) · Insurance · Logistics · Retail · Energy · Manufacturing · Real Estate · Pharmacy · Public Sector · Telecom · Technology. Every policy ships with agent role definitions, source allowlists, denylists, sensitivity ceilings, and session TTLs. Load, customize, extend via REST API — no redeployment required.
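To make the shape of those policies concrete, here is a minimal sketch of a policy record carrying the fields named above (agent role, source allowlist, denylist, sensitivity ceiling, session TTL) and a default-deny evaluation over it. The field names, sensitivity levels, and `evaluate` function are illustrative assumptions, not AutoPIL's actual schema or API.

```python
from dataclasses import dataclass

# Illustrative sensitivity ladder; real deployments define their own levels.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class Policy:
    agent_role: str
    source_allowlist: set
    source_denylist: set
    sensitivity_ceiling: str
    session_ttl_seconds: int

def evaluate(policy: Policy, source: str, sensitivity: str) -> str:
    """Return 'allow' or 'deny' for a single retrieval request."""
    if source in policy.source_denylist:
        return "deny"  # explicit deny always wins
    if source not in policy.source_allowlist:
        return "deny"  # default-deny anything outside the allowlist
    if SENSITIVITY[sensitivity] > SENSITIVITY[policy.sensitivity_ceiling]:
        return "deny"  # request exceeds the role's sensitivity ceiling
    return "allow"

billing = Policy(
    agent_role="billing_agent",
    source_allowlist={"claims_db", "charge_master"},
    source_denylist={"clinical_notes"},  # never during an active encounter
    sensitivity_ceiling="confidential",
    session_ttl_seconds=900,
)

print(evaluate(billing, "claims_db", "internal"))       # allow
print(evaluate(billing, "clinical_notes", "internal"))  # deny
```

The ordering matters: an explicit denylist entry overrides everything else, so a domain expert's "never touch this" rule can't be accidentally re-enabled by a broad allowlist.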
The capabilities that matter for production
Here's an honest summary of what's in the platform today — not a feature list, but the capabilities that actually matter when you're running AI agents against sensitive data in a regulated environment.
GET /v1/audit/verify walks the full chain and returns exactly which record was broken and when. This is what makes the audit trail usable for compliance, not just debugging.
The healthcare proof point
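The verification walk itself is a standard tamper-evident hash-chain check: each record's hash covers its own contents plus the previous record's hash, so changing any record breaks every hash after it. This sketch shows the general technique under assumed record and hashing conventions; it is not AutoPIL's wire format.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record's contents together with the previous record's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(records):
    """Walk the chain; return (True, None) or (False, index of first break)."""
    prev = "0" * 64  # genesis anchor for the first record
    for i, rec in enumerate(records):
        body = {k: v for k, v in rec.items() if k != "hash"}
        if record_hash(body, prev) != rec["hash"]:
            return False, i  # first record whose hash no longer matches
        prev = rec["hash"]
    return True, None

# Build a tiny valid chain, then tamper with the middle record.
chain, prev = [], "0" * 64
for event in ("allow", "deny", "allow"):
    body = {"event": event}
    h = record_hash(body, prev)
    chain.append({**body, "hash": h})
    prev = h

print(verify_chain(chain))   # (True, None)
chain[1]["event"] = "allow"  # silently flip a deny to an allow
print(verify_chain(chain))   # (False, 1)
```

This is why the endpoint can name the exact broken record: verification fails at the first index where the recomputed hash diverges from the stored one.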
The clearest way to show what this looks like in practice is the Hospital Revenue Cycle pipeline we shipped in 0.6.0. Six agents: a revenue orchestrator, a clinical documentation agent, a CDI specialist, a medical coding agent, a charge reconciliation agent, and a billing compliance agent. The pipeline covers the full arc from chart documentation through coding, reconciliation, and final compliance review.
Steps 2 and 3 in the demo deliberately trigger PHI access blocks — the charge reconciliation agent attempting to access patient clinical records that its policy doesn't permit at that stage of the workflow. The orchestrator detects the denial, reroutes correctly, and the full sequence — every allow, every deny, every reroute — is written to a single session-level audit trail.
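One way to picture that deny-and-reroute loop is a tiny orchestrator that runs each step through a guard, logs every decision to a session-level trail, and retries a denied step against a policy-compliant alternative. Everything here (the function names, the toy guard, the fallback map) is an illustrative assumption, not the actual pipeline code.

```python
def run_step(agent, source, guard, audit):
    """Execute one retrieval attempt and log its decision to the trail."""
    decision = guard(agent, source)
    audit.append({"agent": agent, "source": source, "decision": decision})
    return decision

def orchestrate(steps, guard, fallback):
    """Run pipeline steps; on a deny, log a reroute and retry via fallback."""
    audit = []
    for agent, source in steps:
        if run_step(agent, source, guard, audit) == "deny":
            alt = fallback[(agent, source)]  # policy-compliant alternative
            audit.append({"agent": agent, "reroute_to": alt})
            run_step(agent, alt, guard, audit)
    return audit

# Toy guard: the reconciliation agent may not read clinical records.
guard = lambda a, s: (
    "deny" if (a, s) == ("charge_recon", "clinical_records") else "allow"
)
fallback = {("charge_recon", "clinical_records"): "charge_summary"}

trail = orchestrate(
    [("coding", "chart"), ("charge_recon", "clinical_records")],
    guard, fallback,
)
# trail now holds every allow, deny, and reroute, in order
```

The point of the single trail is exactly what the demo shows: the deny and the recovery from it live in the same chronological record, so the reroute is provable, not just observed.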
This is what HIPAA-compliant AI looks like in practice. Not a policy document. Not a checkbox. A runtime enforcement layer that proves, with a cryptographically chained record, that the agents operating against your PHI were governed at every step.
The question regulators ask is changing. It's not "do you have an AI governance policy?" It's "show me what data your AI agents accessed during this investigation, and prove that every access was authorized." A session-level audit chain answers that question. A checklist doesn't.
Getting to 0.6.0 — what the platform has become
It's worth stepping back to describe the arc. We started with a core guard, a SQLite audit log, and four industry verticals. Each release added a layer of production infrastructure:
0.1.x: Tamper-evident hash chain, catalog integration, trial provisioning, API key scopes (SOC 2 CC6.3 least privilege), agent registry, and 12 industry verticals — covering 120 pre-built policies.
0.2.0: Full session lifecycle — TTLs, per-policy expiry, sensitivity decay, cross-agent isolation enforcement, and a session context hash. 32 agent definitions across 11 verticals, with framework assignments for every major integration.
0.3.0–0.4.0: PII masking, per-tenant retention policies, data glossary, admin provisioning, the Getting Started guided onboarding experience, and the Pilot Mode automated source discovery workflow.
0.5.0: Per-connection sensitivity mapping across all nine catalog connectors — every connection can now carry custom tag-to-sensitivity rules that override catalog defaults without re-fetching from the source.
0.6.0: The Hospital Revenue Cycle pipeline, a complete 6-section guided onboarding experience, and the SaaS test suite — 110+ new tests, core coverage enforced at 90%, SaaS coverage enforced at 80%. The platform now has the production infrastructure we're comfortable putting in front of design partners.
May launch — what we're opening up
We're targeting a public launch at the end of May 2026. Between now and then, we're opening a small number of design partner slots for organizations that want to go into production with AutoPIL as part of their agentic infrastructure buildout.
Design partners get:
- Direct access to the team — policy library customization for your specific use cases and agent workflows
- Production deployment support from day one, including Render-hosted SaaS or on-premises Docker
- Input into the roadmap — your use cases shape what we build next
- Preferred pricing when we formalize commercial terms at launch
The industries we're prioritizing for design partnerships are financial services, healthcare, and life sciences — the verticals where the regulatory stakes are highest and where the gap between "we have AI agents" and "we can prove those agents are compliant" is widest.
If you're a CDO, CISO, or AI platform lead in one of those verticals and you're somewhere between proof-of-concept and production, this is a good moment to talk. The window between "piloting this" and "accountable for this in a regulatory context" is closing faster than most organizations expected.
The governance layer needs to be in place before the regulator asks the question.
Anil Solleti is a Managing Director and Head of Data & AI with over 25 years in enterprise technology, data strategy, and AI governance. AutoPIL was built to solve the governance gap between deploying AI agents and being able to prove they're operating within their intended boundaries. Reach out at anil@vibrantcapital.ai if you're working on this problem.