We Shipped Autopilot 10 Weeks Ago. Here's the Unexpected Thing Customers Want


We shipped Autopilot 10 weeks ago. Autopilot is our autonomous AI agent for identity governance, designed to continuously monitor identity environments, evaluate risk in context, and take action without waiting for human review.
Since then, what's surprised me most isn't the product itself. It's what enterprise security leaders actually want from autonomy, and how dramatically that differs from what the identity industry has been selling them for the last decade.
What follows are notes from inside that learning: real conversations with the security teams running Autopilot today, plus the CISOs, Heads of IAM, and identity architects evaluating it for the next wave, across retail, financial services, healthcare, hospitality, and Big Tech. The patterns showed up faster than I expected. They were more uniform than I expected. And one of them surprised me.
The pattern across every conversation
Different industries. Different sizes. Different titles. The same line, in slightly different words.
A CISO at a global financial services firm put it most directly: "I don't need another alert and a warning. I need something to take action."
A senior identity architect at a Fortune 500 retailer framed the other side of the same coin: "I need a log of every action the agent takes, with the reasoning. We still work with auditors, and 'the system decided' isn't a good enough answer."
On the surface those two statements look opposite. One asks for autonomy. The other asks for documentation. But they're the same insight, said from two seats: enterprises want autonomous action, and they want a clean audit trail of every action that gets taken. Autonomy without auditability is a non-starter in any regulated environment. Auditability without autonomy is the status quo we've all been stuck with for a decade.
The thing that's been mis-sold for ten years is that autonomy and accountability are opposites. They're not. They're complementary. Customers are not asking us to choose between "fully automated" and "humans review everything." They're asking for a system that does the work and shows its work, at the same time, every time.
Almost nobody in the legacy identity governance market has built that combination. They've built two products: rules engines that fire alerts, and access reviews that get rubber-stamped quarterly. Neither is autonomous. Neither produces the kind of action-level audit trail a regulated environment can defend. Both are exhausting.
We built Autopilot to do both: take the action, and produce a complete, defensible audit log of every step it took and why. That's the unlock.
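To make "shows its work" concrete, here is a minimal sketch of what an action-level audit record could look like. This is an illustrative assumption, not Autopilot's actual schema: the `AuditRecord` structure and its field names are invented for this example.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Illustrative only: the structure and field names are assumptions,
# not Autopilot's actual audit schema.
@dataclass
class AuditRecord:
    agent: str              # which narrow agent acted
    action: str             # what it did
    target_identity: str    # the identity the action affected
    inputs: dict            # the data the agent evaluated
    reasoning: str          # human-readable rationale for the decision
    policy: str             # the policy or context that triggered it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_log_line(self) -> str:
        """Serialize to one JSON line an auditor can read and a SIEM can ingest."""
        return json.dumps(asdict(self))

record = AuditRecord(
    agent="admin-drift-monitor",
    action="revoke_admin_grant",
    target_identity="svc-deploy-runner",
    inputs={"grant_source": "manual", "peer_group_admin_rate": 0.02},
    reasoning="No JIT request or access profile justifies this elevation.",
    policy="admin-elevation-requires-justification",
)
print(record.to_log_line())
```

The point of the shape: the reasoning travels with the action, so "the system decided" is never the answer an auditor gets.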
The crawl-walk-run shape they're choosing themselves
I expected we'd have to convince buyers to start small with autonomy. To meet them where they were, hand-hold them through a phased rollout, and prove value before unlocking more.
Instead, security teams are articulating the pattern back to us before we pitch it.
A Head of Security at a healthcare enterprise said it clearly: "Trust in autonomy builds over time. So maybe it prompts me first, but as we get comfortable with it, this just needs to run. I've got an agent for that. It just happens."
Almost every conversation followed this shape. Phase 1: the agent investigates, surfaces a recommendation, a human clicks. Phase 2: an admin pre-approves classes of action, and the agent executes. Then, eventually, full autonomy on narrow, well-bounded tasks.
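That three-phase shape can be written down as a simple policy gate. A minimal sketch, assuming hypothetical phase names and pre-approved action classes; none of these identifiers come from Autopilot itself:

```python
from enum import Enum

class AutonomyPhase(Enum):
    RECOMMEND = 1      # Phase 1: agent investigates, human clicks to approve
    PRE_APPROVED = 2   # Phase 2: admin pre-approves classes of action
    AUTONOMOUS = 3     # Phase 3: full autonomy on narrow, well-bounded tasks

# Hypothetical: action classes an admin has pre-approved for Phase 2.
PRE_APPROVED_ACTIONS = {"revoke_stale_entitlement", "disable_dormant_account"}

def may_execute(phase: AutonomyPhase, action_class: str) -> bool:
    """Return True if the agent may act without waiting for a human click."""
    if phase is AutonomyPhase.AUTONOMOUS:
        return True
    if phase is AutonomyPhase.PRE_APPROVED:
        return action_class in PRE_APPROVED_ACTIONS
    return False  # Phase 1: everything routes to a human recommendation
```

The design choice worth noticing: the gate widens per action class, not globally, which is exactly how the teams described building trust.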
This is buyers leading the architecture, not vendors prescribing it. That's a tell. It means the market has matured past the question of whether autonomy belongs in identity governance and moved to the question of how to operationalize it without losing accountability.
The teams we're working with don't need convincing. They need a credible path. We've built that path into Autopilot from day one.
What buyers are rejecting
Every demo we ran ended in a comparison. Security leaders held Autopilot up against their existing IGA stack, and named what was breaking.
The list rhymes across industries.
Quarterly access reviews are theater. A senior security leader at a global financial data firm asked his team, almost rhetorically, "Would you rather have compliance or security?" The framing was sharp because it was honest. Quarterly UAR cycles exist for auditors, not for defenders. Everybody in the room knows it. Nobody in legacy IGA has the architecture to fix it, because their architecture is built around the cycle. Ours isn't.
Rules-based systems produce more alerts, not more action. Several CISOs described arriving at security programs that had ten thousand identity rules and one human staring at the dashboard. The rules weren't wrong. They were just disconnected from the question of who actually had the authority to act on them. Autopilot collapses that gap.
Legacy deployments take years and don't reduce manual work. One enterprise security leader described getting more value in three weeks of running Linx than in three years of running two of the largest legacy IGA platforms combined. I'm not going to name the platforms. The point wasn't that those tools are bad. It's that they were built for a different era of identity, one where humans had time to be in the loop on every decision. That era is over.
One enterprise we engaged with was running five identity products simultaneously. Five. They had to deprovision one because it was disrupting the others. This is what the end-state of "buy more tools" looks like. It's not a security program. It's a tax on the security team. We replace that stack with one platform that does what those five together couldn't do.
These aren't edge cases. They're the pattern.
The agents teams want first
When teams pick their first Autopilot deployment, three patterns dominate.
Admin Drift Monitor. Listens for any access change that elevates someone to admin. Runs a peer comparison and a JIT/access-profile check. Only fires if it finds no justification for the elevation. The reason this one wins is concrete: it produces almost zero false positives, because the bar for "should this human be admin" is exceptionally well-defined inside any mature security program. Teams audit the agent's reasoning easily. They trust it within days. Then they extend it.
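The decision logic described above reduces to a short chain of checks: fire only when an elevation has no JIT grant, no access-profile entitlement, and no peer-group support. A hedged sketch, with assumed function names, fields, and threshold:

```python
# Illustrative sketch of the decision logic described above; the function,
# field names, and threshold are assumptions, not the agent's interface.

PEER_ADMIN_THRESHOLD = 0.5  # assumed: elevation is "normal" if most peers hold it

def should_fire(elevation: dict) -> bool:
    """Return True only when no justification for the admin elevation is found."""
    if elevation["has_jit_grant"]:
        return False  # covered by a just-in-time access request
    if elevation["in_access_profile"]:
        return False  # the role's access profile already includes admin
    if elevation["peer_group_admin_rate"] >= PEER_ADMIN_THRESHOLD:
        return False  # peers in the same role commonly hold this access
    return True  # no justification found: act, with full reasoning logged
```

Every early return is a justification, which is why the false-positive rate stays near zero: the agent only fires when all three defenses for the elevation fail.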
UAR Reviewer Classifier. Continuously evaluates entitlements during access review campaigns and pre-recommends approve/deny before the human even opens the review. The pattern: stop asking humans to be the first decision-makers. Make them the second. The human's time is worth far more on the close calls than on the obvious ones.
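The "human as the second decision-maker" pattern can be sketched as a pre-classifier that settles the obvious entitlements and routes only the close calls to a reviewer. The risk score and thresholds below are assumptions for illustration, not the classifier's actual model:

```python
def pre_classify(entitlements: list[dict]) -> dict:
    """Partition entitlements into pre-recommendations and close calls.

    Each entitlement carries an assumed risk score in [0, 1]; the obvious
    ends of the range get a pre-recommendation, the middle goes to a human.
    """
    recommendations = {"approve": [], "deny": [], "human_review": []}
    for e in entitlements:
        if e["risk"] <= 0.2 and e["recently_used"]:
            recommendations["approve"].append(e["id"])       # obvious keep
        elif e["risk"] >= 0.8 and not e["recently_used"]:
            recommendations["deny"].append(e["id"])          # obvious revoke
        else:
            recommendations["human_review"].append(e["id"])  # the close calls
    return recommendations
```

The human still decides every case; the classifier just ensures their first look lands on the entitlements where judgment actually matters.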
Access Profile Tuner. The next agent we're shipping. It continuously refines access profiles based on real usage patterns, tightening over-provisioned access automatically and surfacing the gap between what someone has and what they actually use day to day. Same architectural pattern as the first two: narrow scope, accountable action, the human as the second decision-maker, not the first.
What links these three agents is what they aren't. They aren't general-purpose AI assistants. They aren't conversational chatbots. They aren't models that "help you think about identity." They're narrow, accountable, single-purpose agents. They do one thing. They do it well. They show their work.
This is the part of the next 12 months in identity that I think the industry is going to get wrong. The future isn't going to be one giant AI that handles all of identity governance. It's going to be a fleet of narrow agents, each one auditable, each one deployed when the team is ready, each one retired or replaced as the threat shape changes.
The phrase I've started using internally is "the control plane is the agent fleet, not the model." That's the architectural bet behind Autopilot. We've built it that way because the security teams running it are operating that way.
We make autonomy boring. Boring in the way that fire suppression in a data center is boring. Specifically engineered, well-instrumented, mostly invisible, deeply trusted. That's the bar we set for Autopilot, and it's the bar we're meeting.
What this means for the next twelve months
A few predictions, from where I sit 10 weeks in.
The identity governance category is going to fragment along the autonomy axis. Vendors who ship generic "AI-powered" features bolted onto rules engines will lose share to vendors who ship narrow, accountable agents that customers can audit, deploy, and extend. The winners will be the ones who make autonomy explicable, not the ones who make it impressive. We've made our bet, and the market is validating it in real time.
The "control plane" framing is the right one, but it goes beyond agents. Microsoft, Okta, and others are now naming their agent control surfaces. That's a market-defining moment. The deeper truth, though, is that the agent control plane only works if the identity control plane underneath it is unified. You can't govern an agent's actions if you don't have a unified record of every human, machine, service account, and agent identity in your environment. Identity becomes the substrate. The agent layer is the workload. Linx is the only platform built for both.
CISOs are going to keep telling us they want autonomy and verification together. I expect this signal to get louder, not quieter. The boards of the companies our customers serve are starting to ask "are we governable?" instead of "are we secure?" That's a more sophisticated question, and it's going to drive procurement priorities for the next two years. The platforms that answer it well will define the next decade of security.
The bar for what counts as "autonomous identity security" is going to rise quickly. Six months from now, the demo bar will be entirely different from where it is today. The bar is being set by the platforms shipping today, with skin in the game. We are one of them.
Closing
We shipped Autopilot 10 weeks ago. The conversations are different from the ones I had a year ago. Different from six months ago. Different from 10 weeks ago.
The market is moving, and it's moving toward something specific: autonomous identity governance that earns trust by showing its work. That's exactly what we built.
If you're a CISO or Head of IAM thinking about how autonomy fits into your identity program (what to deploy first, how to phase trust, where the audit trail needs to live), we'd be glad to compare notes. The companies that figure this out first won't be the ones who buy the most tools. They'll be the ones who deploy autonomy with discipline. We're doing it now. Come see what 10 weeks of shipping autonomous identity governance actually looks like.
10 weeks in, that's what I'm certain of.
Frequently asked questions
What is autonomous identity security?
Autonomous identity security is the use of AI agents to continuously monitor identity environments, evaluate risk in context, and take action in real time without waiting for human review. It replaces the periodic, alert-driven model of legacy identity governance with a continuous, agent-driven model that operates at machine speed and produces a complete audit trail of every action taken.
What is Linx Security's Autopilot?
Autopilot is Linx Security's autonomous AI agent for identity governance. It runs as a fleet of narrow, single-purpose agents, including Admin Drift Monitor, UAR Reviewer Classifier, and Access Profile Tuner, that each perform a specific identity governance task continuously, with full action logging and customer-readable rationale on every decision. Autopilot is shipping today, with deployments and active engagements across retail, financial services, healthcare, hospitality, and Big Tech.
How is autonomous identity security different from legacy IGA?
Legacy identity governance platforms are built around quarterly access review cycles, rule-based alerting, and human-in-the-loop decision making. Autonomous identity security is built around continuous monitoring, AI-driven contextual risk evaluation, and direct action by accountable agents. Linx Security delivers value in weeks rather than the months or years typically required by legacy IGA platforms.
Which Autopilot agents do customers deploy first?
The two most common first deployments are Admin Drift Monitor, which detects unauthorized administrative privilege elevation and only fires when it finds no business justification, and UAR Reviewer Classifier, which pre-classifies entitlements during access review campaigns to reduce human review time. Access Profile Tuner is the next agent shipping, continuously refining access profiles based on actual usage patterns. Teams typically extend to additional agents over the following 30 to 90 days as trust in the platform builds.
How does autonomous identity governance work with auditors?
Autonomous identity security only works in regulated environments if every action the system takes produces a complete, defensible audit trail. Linx Security's Autopilot logs every action with full reasoning, the data inputs the agent used, and the policy or context that triggered the decision. This satisfies SOC 2, ISO 27001, NIST, and most regulatory frameworks while maintaining continuous autonomous operation.
Who should consider autonomous identity security?
Autonomous identity security is most relevant for CISOs, Heads of Identity and Access Management, and security architects at enterprises with more than 1,000 employees or significant non-human identity sprawl across service accounts, machine identities, AI agents, and contractor access. Companies operating in regulated industries (financial services, healthcare, retail, hospitality) and those running multiple legacy identity tools simultaneously typically see the fastest value from migrating to a single autonomous platform.

