Linx Blog

Company News

We Just Raised Series B. Here's What it Means for the Future of IGA.

Mar 31, 2026

We're at one of those rare moments where an entire software category gets rewritten from scratch. Not improved. Replaced. AI isn't making identity governance faster - it's making the old architecture obsolete.

When Niv and I started Linx two years ago, we made a bet: that the identity governance category was overdue for a fundamental rethink, and that AI-native architecture - not AI bolted onto legacy infrastructure - would be what made that possible. That the future of IGA wasn't periodic reviews and manual workflows. It was continuous, autonomous, and built for a world where humans, machines, and AI agents all coexist inside the same enterprise.

Today, I'm proud to announce that Linx Security has raised a $50M Series B, led by Insight Partners, with continued support from Cyberstarts and Index Ventures - bringing our total funding to $83 million. And alongside this round, we've launched Linx Autopilot: the industry's first AI agent purpose-built for Identity Governance and Administration.

This isn't just a funding milestone. It's a signal that the IGA category is at an inflection point - and that Linx is leading it.

Why Now

The identity landscape has been transformed by three forces converging at once.

First, AI agents are proliferating inside every enterprise - not as experiments, but as active participants in business workflows. They hold credentials. They access sensitive systems. They act with autonomy. And almost none of today's governance frameworks were built to manage them.

Second, the attack surface has exploded. One breach, one over-privileged service account, one dormant credential - and the damage can be catastrophic. Boards know it. CISOs feel it daily. The compliance frameworks are finally catching up.

Third - and this is what excites me most - the technology is finally ready. AI-native architecture makes it possible to do in seconds what traditional tools take weeks to accomplish: detect, evaluate risk in context, and act. Not reactively. Continuously.

IGA was always treated as a necessary evil. A compliance checkbox. Something you suffered through. We built Linx on the premise that it doesn't have to be that way.

What We're Building - and Why It Matters Now

The enterprise of 2026 doesn't look like the enterprise IGA was designed for. AI agents are being provisioned inside every workflow. Non-human identities now outnumber human ones. The attack surface isn't growing linearly - it's multiplying. And the governance frameworks built for a world of on-prem directories and annual access reviews were simply never designed for this reality.

Linx is built AI-native from the ground up - not AI layered onto legacy architecture. That distinction matters more than it might sound. It's what allows us to move from periodic, reactive governance to something fundamentally different: continuous, autonomous identity security that operates at the speed of the business and the speed of the threat.

Think of it as having a security operator working 24/7 on your behalf - one that monitors every identity in your environment, detects risk in context as it emerges, and acts before the damage is done. When a privileged account behaves unexpectedly, it responds. When an AI agent is provisioned with excessive permissions, it sees it. When an employee moves roles and leaves ghost access behind, it remediates - before an attacker finds it first.

Security teams don't lose control. They gain leverage. The tedious, repetitive work gets handled autonomously. The decisions that require human judgment get escalated. That's what modern identity governance looks like - and that's what we're delivering.

To the People Who Made This Possible

None of this happens without the people.

To Niv - twenty years of shared history, and I still learn something from you every week. Building this company alongside you has been one of the great privileges of my career. You push this product to places I wouldn't have imagined.

To Sarit - your technical vision and relentless standards are woven into every line of this platform. What you've built with the engineering team is something we'll be proud of for a long time.

To our entire Linx team - 100 people who bet on a vision and made it real. Every customer win, every product breakthrough, every late night - that's us, together. I'm incredibly proud of what we've built as a team.

To Teddie, Elan and the Insight Partners team - your belief in where this market is going gave us a true partner for the next chapter. And to Gili at Cyberstarts, and Shardul at Index Ventures - you've been with us from the beginning, and your conviction in this vision has never wavered. We don't get here without all of you.

And to our customers - the security leaders and identity practitioners who chose to build with us early, challenged us to be better, and trusted us with what matters most. You are the reason we do this. Your trust is the highest validation we know.

What Comes Next

The market isn't just ready, it's asking for it. Every security leader we talk to, every enterprise scrambling to govern AI agents they provisioned last quarter with no visibility into what they can access, confirms what we believed two years ago: this category was overdue, and the moment is now.

What comes next is simple to say and hard to execute: we scale. We're growing the team, accelerating the Autopilot roadmap, and going deeper with the enterprises already trusting us to govern millions of identities in production.

The IGA category is being rewritten. The window to define what the next generation looks like is open.

We intend to define it.

- Israel Duanis, CEO & Co-Founder, Linx Security

Company News

Linx Security Raises $50M Series B as Identity Becomes Security’s Biggest Failure Point

Mar 31, 2026

NEW YORK, March 31, 2026 - Linx Security, a pioneer in modern identity security and governance solutions, today announced a $50 million Series B financing round led by global software investor Insight Partners, with continued participation from Cyberstarts and Index Ventures. This brings Linx’s total funding to $83 million. The 100-person startup has already signed multimillion-dollar contracts with banks, healthcare companies, and Fortune 500 firms, governing millions of identities globally.

As enterprises adopt cloud, automation, and AI, the number of identities inside organizations has exploded, now spanning not just employees, but machines, services, and AI agents, which outnumber humans by roughly 80 to 1. Traditional identity governance tools, built for a smaller and more static environment, have struggled to keep up, leaving security teams with limited visibility, slower response times, and expanding risk at a time when nearly 90% of security incidents involve identity-related failures.

Founded in 2023 by cybersecurity veterans Israel Duanis and Niv Goldenberg, the company provides an AI-native platform that continuously maps, monitors, and governs all identities across the enterprise, human, non-human, and agents alike. By replacing manual processes and periodic reviews with real-time detection and automated remediation, Linx enables organizations to reduce identity risk without slowing down the business.

“Identity governance has shifted from a back-office compliance function to a core pillar of enterprise security,” said Israel Duanis, CEO and co-founder of Linx Security. “This funding allows us to scale faster and meet the growing demand from organizations that need real-time visibility and control over every kind of identity operating in their environment.”

Linx recently introduced Linx Autopilot, the first autonomous AI agent designed to fundamentally change how identity governance is managed. Moving away from the constraints of manual oversight and reactive processes, Autopilot continuously monitors identity activity, detects meaningful changes in real time, and takes action, either resolving issues automatically or escalating when needed. By operating across human, machine, and AI agent identities, it enables security teams to move from periodic control to continuous, intelligent enforcement, without adding operational overhead.

The new funding will support Linx’s next phase of growth, including expanding its global footprint, scaling enterprise go-to-market efforts, and accelerating product development around autonomous identity governance.

"Linx is reimagining IGA architecture to tackle the emerging problem of agent governance. The company’s AI-first approach, along with the introduction of Linx Autopilot, well positions Linx in this critical category and we're thrilled to partner on this journey,” said Teddie Wardi, Managing Director at Insight Partners.

"We backed Linx at inception because we believe identity would become the core control layer of modern security,” said Gili Raanan, Founding Partner at Cyberstarts. “AI agents are rapidly expanding the number of identities operating inside organizations, turning identity governance from a back-office compliance task into a board-level risk. Linx is building the platform to govern that new reality.”

About Linx Security

Linx Security is the AI-native identity security and governance platform built for the era of AI agents and non-human identities. Founded in 2023 and headquartered in New York, the company delivers unified visibility, continuous risk detection, and autonomous remediation across every identity in the enterprise - human, non-human, and AI. Backed by Insight Partners, Index Ventures, and Cyberstarts, Linx Security is trusted by identity-intensive enterprises globally to eliminate identity risk without slowing the business. For more information, visit www.linx.security.

About Insight Partners

Insight Partners is a global software investor partnering with high-growth technology, software, and internet startup and scale-up companies that are driving transformative change in their industries. As of June 30, 2025, the firm has over $90B in regulatory assets under management. Insight Partners has invested in more than 875 companies worldwide and has seen over 55 portfolio companies achieve an IPO. Headquartered in New York City, Insight has a global presence with leadership in London, Tel Aviv, and the Bay Area. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with tailored, hands-on software expertise along their growth journey, from their first investment to IPO. For more information on Insight and all its investments, visit insightpartners.com or follow us on X @insightpartners.

AI Agents

The Shift to Truly Autonomous Identity Security: Introducing Autopilot

Mar 18, 2026

TL;DR

  • Traditional identity governance relies on periodic review cycles, but point-in-time checks detect risks and misconfigurations long after they are introduced. Organizations need to take a new, modern approach to securing identity.
  • Current AI-powered identity security systems are not autonomous. They show alerts and generate recommendations but rely on a human trigger before they start taking action.
  • Truly autonomous identity security is a fundamental shift, and that’s where Linx Security’s revolutionary new Autopilot AI comes in. Autopilot evaluates access, assesses risk, and either initiates remediation or escalates to a human when oversight is required.

What Are the Limits of Reactive Identity Security?

Reactive identity security and point-in-time checks can’t keep up with the constant change that characterizes modern identity environments, especially at scale. Employees change roles, contractors rotate in and out, and machine identities created to perform a specific task are no longer needed once the task is done.

Periodic review cycles made sense in a world where identity was changing slowly and the blast radius of a compromised account was limited. But today, a single compromised identity can cascade across different cloud environments, SaaS platforms, and CI/CD pipelines in minutes. 

The 2024 Midnight Blizzard breach at Microsoft proves this point. During this attack, threat actors compromised a single test tenant account, then moved laterally to high-value assets like cybersecurity team accounts and even executives’ accounts. 

The difficult truth? Identity is now the quickest path attackers can take to reach critical systems, and reactive security isn’t enough. (Learn more about why identity breaches are preferred by attackers here.)

How Do Identity Risks Emerge Between Reviews?

Identity risk arises from the slow accumulation of misconfigurations and access changes that happen between governance reviews.

Typically, role drift and privilege accumulation are the most common sources of identity risk in any organization. Even though an access grant for a specific engineer might have been legitimate when it was approved, permissions often persist long after a role change makes them irrelevant.

Access entitlements across multiple systems exacerbate this issue, as a single user might have multiple identities and permissions across different cloud providers, SaaS applications, CI/CD platforms, and other tools. 

Risks don’t live in these systems in isolation. Think of a user who has read-only access to a production AWS account but admin access to a CI/CD pipeline that can deploy resources to that account. Human reviewers and review tools that look at systems independently won’t catch this escalation path.
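Escalation paths like this are really a graph problem: model direct entitlements per system, add edges for "admin here implies write access there," then look for identities whose indirect reach exceeds their direct grants. A minimal sketch with illustrative system names and sample data (not Linx's implementation):

```python
# Direct entitlements per identity (illustrative sample data).
entitlements = {
    "alice": {"aws-prod": "read", "ci-cd": "admin"},
    "bob": {"aws-prod": "read"},
}
# Admin on the key system implies write access to the value system,
# e.g. a CI/CD pipeline that deploys into the production account.
deploy_edges = {"ci-cd": "aws-prod"}

def effective_write_access(user):
    """Systems the user can modify, directly or via a system they administer."""
    direct = {s for s, level in entitlements[user].items() if level == "admin"}
    reachable = set(direct)
    for system in direct:
        if system in deploy_edges:
            reachable.add(deploy_edges[system])
    return reachable

def find_escalation_paths(user):
    """Systems where the direct grant is read-only but an indirect path
    still gives effective write access - the gap reviewers miss when
    they look at each system in isolation."""
    read_only = {s for s, level in entitlements[user].items() if level == "read"}
    return read_only & effective_write_access(user)
```

Here `find_escalation_paths("alice")` returns `{"aws-prod"}`: her read-only grant on the production account is undermined by her admin rights on the pipeline that deploys to it.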

And the problem compounds when time enters the equation. When someone is granted permanent elevated access to address a particular issue instead of JIT admin access, the window between that change and the next governance review becomes especially dangerous. 

For example, a developer might get admin access to a production environment to help troubleshoot an outage. Though the incident is resolved within hours, the elevated permissions persist. 

If an attacker compromises this account, the blast radius can be significant: They’ll have access to all applications, secrets, and workloads that are running in that production environment. Identity solutions that conduct periodic reviews will eventually catch over-privileged access, but there might be months of exposure in the meantime.
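The structural fix is for elevated grants to carry an expiry, so break-glass access revokes itself. A minimal sketch of a JIT grant store (class and method names are illustrative, not a Linx API):

```python
import time

class JITGrantStore:
    """Tracks elevated grants with an expiry. Expired grants count as
    revoked, so forgotten break-glass access self-heals."""

    def __init__(self):
        self._grants = {}  # (user, system) -> expiry timestamp

    def grant(self, user, system, ttl_seconds):
        self._grants[(user, system)] = time.time() + ttl_seconds

    def has_access(self, user, system):
        expiry = self._grants.get((user, system))
        return expiry is not None and time.time() < expiry

# A developer gets admin on prod only for the incident window.
store = JITGrantStore()
store.grant("dev-1", "prod", ttl_seconds=3600)
```

Once the hour passes, `has_access` returns False without anyone having to remember the cleanup.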

Finally, department restructures happen all the time. In fact, with AI adoption, they’re more frequent than ever. These organizational changes shift the access context entirely. For instance, a team that used to need access to a particular environment may no longer exist in the same form. Despite this shift, their permissions usually stay in place until the next review cycle, resulting in over-privileged access on a team-wide scale.

What Is Reactive Tooling? What Is the Alternative?

Many enterprises believe that they’re keeping pace with risks because they’ve invested heavily in Identity Governance and Administration (IGA) platforms and Privileged Access Management (PAM) solutions. But these tools flag risks long after they’ve been introduced. 

Even the newer generation of identity security tools that have AI and machine learning (ML) capabilities still function as analysis engines. They identify issues and give you recommendations on how to solve them, but they don’t act on your behalf. 

Without automated provisioning and deprovisioning tied directly to lifecycle events, permissions drift between review cycles with no option to correct them.
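Tying deprovisioning to lifecycle events means reacting to HR or directory changes the moment they happen, instead of waiting for the next review. A hedged sketch, where the event shapes and the injected `revoke` callback stand in for whatever IdP API you actually use:

```python
def on_lifecycle_event(event, revoke, grants):
    """React to an identity lifecycle event immediately.
    `revoke(user, system)` stands in for your IdP's revocation call;
    `grants` maps each user to the systems they currently access."""
    user = event["user"]
    if event["type"] == "termination":
        # Remove every entitlement the moment the identity leaves.
        for system in grants.get(user, []):
            revoke(user, system)
        grants[user] = []
    elif event["type"] == "role_change":
        # Keep only the entitlements that belong to the new role.
        allowed = set(event["new_role_systems"])
        for system in list(grants.get(user, [])):
            if system not in allowed:
                revoke(user, system)
                grants[user].remove(system)
```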

The organizations that are effectively slashing identity risks are those embracing AI identity security automation in 2026: continuous, always-on coverage from autonomous AI that can detect, prioritize, and remediate access issues in real time, with minimal human oversight.

Why Should You Move From AI Assistance to Autonomous Execution?

Most of what the market today calls "AI-powered identity security" is actually AI-assisted security. As we’ve seen, these tools detect anomalies and generate recommendations. They might identify that a particular user has more privileges than most of their peers or that a service account hasn’t been used for a long period of time. These insights are useful, but AI-assisted tools leave a critical gap between identifying an issue and remediating it.

Depending on a human for input isn’t always the wrong move. Yet workflows where humans have to analyze and act on every notification from AI tools keep engineers trapped in a cycle of alerts. After all, human bandwidth will never be able to match the pace at which identity risks are growing.

To free engineers up to innovate and turbocharge remediation speed, autonomous systems handle straightforward fixes and repetitive actions. They determine when human input isn’t required by evaluating context. Then, they decide on an appropriate response and execute the corresponding workflow. 

By leveraging an autonomous security agent, the entire identity security workflow shifts from “send an alert and a recommendation to a human” to “assess the problem, decide what to do about it, and act accordingly.”
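That shift can be sketched as a triage policy: findings that are well understood and low risk get remediated autonomously, everything else goes to a human. The finding types and risk threshold below are illustrative, not Linx's actual policy:

```python
# Finding types the agent is allowed to fix on its own (illustrative).
AUTO_REMEDIABLE = {"dormant_service_account", "orphaned_grant"}
RISK_THRESHOLD = 0.7  # above this score, a human decides

def triage(finding):
    """Return the action for a finding: autonomous remediation when the
    fix is well understood and low risk, escalation otherwise."""
    if finding["type"] in AUTO_REMEDIABLE and finding["risk"] < RISK_THRESHOLD:
        return "remediate"
    return "escalate"
```

The point of the allowlist plus threshold is governed autonomy: the agent only acts where the playbook is unambiguous, and everything novel or high-stakes still reaches a person.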

Introducing Autopilot

With Linx Security’s Autopilot, teams can now deploy AI agents that work continuously on their behalf: monitoring their identity environments 24/7, detecting meaningful changes as they happen, evaluating risk in context, and taking action in real time whenever there are issues.

What Does Autopilot Offer?

  • Speed and Control: Autopilot evaluates access, assesses risk, and either initiates remediation or escalates to a human when oversight is required, solving the speed-control paradox.
  • Governed Autonomy: Autonomy demands trust. Autopilot is designed with that in mind, featuring guardrails and intelligent oversight mechanisms that ensure each autonomous action is carefully controlled. 
  • Reduced Alert Fatigue: Unlike AI-assisted platforms, Autopilot reduces alert fatigue by looping in humans only when it’s truly necessary.
  • Task-Specific Agents: Each Autopilot agent is an expert at a core identity task, such as identification of access drift, profile tuning, and JIT access approvals.
  • A Comprehensive Suite of Tools: Autopilot is part of a three-tier AI architecture, alongside AI enhancements that constantly optimize and refine your data and AI Copilot, a personal AI assistant that makes engineers Linx system superusers.

“Security teams don’t need more noise—they need meaningful leverage,” says Niv Goldenberg, Chief Product Officer and Co-Founder at Linx Security. “Autopilot allows organizations to modernize identity security responsibly, combining continuous AI-driven execution with human expertise.”

Conclusion

In a periodic review model, there’s a gap between when identity risks emerge and when governance catches up. Access changes constantly, governance occurs quarterly, and attackers operate within this window.

With autonomous identity security, this gap is closed by agents that monitor access changes in real time, evaluate them against an organization's active policies, and take immediate action to resolve any issues.

Autonomous identity security is where Linx stands apart.

“Autopilot marks the beginning of a new chapter for Linx,” says Israel Duanis, CEO and Co-Founder of Linx Security. “Our vision is to build a security platform that doesn’t just inform teams—it operates alongside them. The future of identity security isn’t more alerts or more manual reviews. It’s intelligent systems that continuously strengthen posture while keeping humans in control. This launch establishes Linx as a leader in autonomous identity security and sets the foundation for where our platform is headed.”

If you want to see Autopilot in action, join us for an in-person demonstration during the RSA Conference (March 23–26). We’ll also be hosting a live virtual demonstration on April 9th at 11 a.m. ET.

To see Autopilot live virtually, register for our upcoming webinar on April 9th: Autopilot: Closing the Identity Risk Gap with Autonomous AI, or schedule a demo to get a personalized demonstration.

Identity Security

Anatomy of an Identity Breach: The 7 Steps Attackers Repeat (With Real Examples)

Feb 9, 2026

TL;DR

  • Attackers typically follow seven steps to carry out an identity attack, and there are ways to protect yourself at each stage of the kill chain.
  • Always check if your credentials have appeared in data leaks and change them, implement phishing-resistant MFA, take advantage of JIT for admin accounts, and use the principle of least privilege.
  • Preventing attacks is just one piece of the puzzle; you should also take measures that limit the blast radius, ensure you can detect issues if they pass your prevention mechanisms, and leverage automated workflows that respond to issues.

Why Do Attackers Prefer Identity-Based Attacks?

Identity is now the fastest route to critical systems: Humans, non-human identities (like service accounts, workloads, and API keys), SaaS apps, cloud control planes, and AI agents all operate through permissions and tokens that can give attackers a dangerous foothold.

Raising the stakes, identity attacks are more likely to succeed than other attacks, and they’re also harder to detect. When a threat actor uses one of your credentials, they blend in with legitimate traffic, and most security tools miss the subtle signs that point to a compromise.

While it’s impossible to build perfect prevention against all of these attacks, you can make them far harder to pull off. The key is to take a layered approach. With defense-in-depth strategies in place, when one layer is compromised, another layer will block the attack, whether it stems from phishing, credential stuffing, token harvesting, or another identity attack vector.

In this article, we’ll explore the practical steps attackers take to compromise identities and provide hands-on advice for thwarting them at each stage of the identity kill chain.

What Are the 7 Steps Attackers Use in an Identity Breach?

Attackers typically follow these steps to carry out an identity attack:

  1. Initial access
  2. MFA or “friction” bypass
  3. Privilege gain
  4. Lateral movement via identity
  5. Persistence
  6. Taking action on objectives (data access, fraud, ransomware enablement)
  7. Evasion and reentry

Each step links together, enabling the next step in the chain. As a result, a minor compromise can lead to widespread breaches because of privilege escalation, lateral movement, and persistent actions.

Let’s take a look at each step in detail.

Step 1: Initial Access (Credentials or Foothold)

An attacker can obtain access to credentials through phishing campaigns, reused passwords, accidentally exposed secrets in VCS systems or CI/CD pipelines, or by purchasing compromised accounts on the dark web. 

Reused passwords are especially problematic. Despite security training programs, many employees continue to use the same passwords across personal and professional accounts. This practice creates a domino effect: Compromised access to one service compromises access to many others.

What’s a Real-World Example?

In 2021, attackers gained access to Colonial Pipeline’s systems by using a compromised password for a VPN account that didn’t have MFA enabled. This account actually belonged to a former employee, but it was never disabled after their termination. The threat actors used this foothold for a ransomware attack against the company, which provides fuel for about half of the East Coast. System outages cascaded into fuel shortages, and a state of emergency was declared in 17 states and Washington, D.C. Restoring operations took a $4.4 million ransom payment.

How Can Organizations Keep Systems Safe?

  • Prevent: Identify and disable all inactive accounts, as they can also pose security risks if compromised. Ensure MFA is enabled for all your users.
  • Limit the Blast Radius: Reduce the number of externally accessible services, and require additional passwords and MFA for anything important.
  • Detect: Monitor for unusual activity, like authentication attempts from unfamiliar locations or devices or numerous failed login attempts that signal credential stuffing.
  • Respond: Leverage automated workflows to immediately disable compromised accounts.
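The Respond bullet above can be sketched as a small event handler: count failed logins per account and disable any account that crosses a credential-stuffing threshold. The event shape and the injected `disable_account` callback are placeholders for your identity provider's actual API:

```python
FAILED_LOGIN_THRESHOLD = 10  # illustrative credential-stuffing signal

def handle_auth_events(events, disable_account):
    """Count failed logins per account and disable any account that
    crosses the threshold. `disable_account` is injected so the same
    logic works against any identity provider's API."""
    failures = {}
    disabled = []
    for event in events:
        if event["result"] == "failure":
            account = event["account"]
            failures[account] = failures.get(account, 0) + 1
            if failures[account] == FAILED_LOGIN_THRESHOLD:
                disable_account(account)
                disabled.append(account)
    return disabled
```

In production you would also feed in impossible-travel and unfamiliar-device signals, but the shape is the same: detection logic decides, an injected action executes.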

Step 2: MFA or “Friction” Bypass

MFA is just the first line of defense, and it’s not a silver bullet. When attackers encounter MFA, they can employ tactics to get around it. For example, fatigue attacks involve sending a flood of MFA approval requests to your users until they accept.

Social engineering isn’t the only risk, though. Phone-based MFA is vulnerable to SIM swap attacks, which could allow attackers to intercept your SMS codes.

What’s a Real-World Example?

In 2022, Uber experienced a data breach that began when a hacker purchased an employee’s credentials on the dark web. After encountering MFA, the attacker impersonated a security employee, initiated a fatigue attack, and asked the compromised user to accept the MFA requests he sent. Once the fatigue attack proved successful, the attacker gained access to Uber’s VPN; from there, he moved laterally, ultimately gaining full admin privileges.

How Can Organizations Keep Systems Safe?

  • Prevent: Use strong MFA mechanisms (authenticator apps, hardware keys, or passkeys) for all accounts if possible, or at least for privileged ones. Implement phishing-resistant MFA, and establish strict proof-of-identity requirements for help desk employees.
  • Limit the Blast Radius: Require multiple approvals for high-privilege account resets; require additional passwords for sensitive services.
  • Detect: Implement MFA monitoring that automatically denies a flood of requests, and require human approval (with identity verification) before users can add a new authentication device.
  • Respond: Whenever you detect suspicious MFA activity, temporarily restrict access for your user until verification is complete.
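The flood denial described in the Detect bullet amounts to a sliding-window rate limit on push requests per user. A sketch with illustrative limits:

```python
from collections import deque

class MFAFloodGuard:
    """Deny MFA push requests when too many arrive for one user within
    a short window - the signature of a fatigue attack."""

    def __init__(self, max_requests=5, window_seconds=300):
        self.max_requests = max_requests
        self.window = window_seconds
        self._recent = {}  # user -> deque of request timestamps

    def allow(self, user, now):
        """Record a push request at time `now`; return False to deny it."""
        q = self._recent.setdefault(user, deque())
        # Drop requests that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_requests
```

A denial here is also a detection signal worth alerting on: a user who would never send six pushes in five minutes probably isn't the one sending them.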

Step 3: Privilege Escalation

Accounts with permanent administrative rights are exactly what malicious actors are looking for. Instead of standing privileges, a better move is to grant temporary admin privileges through a mechanism like just-in-time access. 

Another problem to look out for? When secrets hygiene is not implemented consistently, and secrets like API keys are stored in VCS systems or wikis, there are simple opportunities for privilege escalation.

What’s a Real-World Example?

In October 2023, Okta experienced a breach after an attacker compromised a customer support engineer’s account. This account had administrative rights, allowing the attacker to view HTTP Archive (HAR) files containing cookies and session tokens uploaded by customers during support troubleshooting sessions. By stealing session tokens, the attacker was able to impersonate legitimate users across different organizations.

How Can Organizations Keep Systems Safe?

  • Prevent: Implement just-in-time (JIT) access for administrative accounts.
  • Limit the Blast Radius: Ensure admin accounts are specific to a single service and don’t have cross-service privileges.
  • Detect: Implement alerts for role changes or permission modifications.
  • Respond: Build in automation that responds to a suspicious account by revoking elevated access and reviewing recent actions.

Step 4: Lateral Movement via Identity (SSO, SaaS, Cloud Control Plane)

It goes without saying: When attackers gain elevated privileges, what they’re really gaining is the ability to move laterally through your connected systems. For example, a compromised SSO account can unlock access to dozens of applications, and cloud control planes can be accessed from anywhere with valid tokens.

What’s a Real-World Example?

In 2023, an attacker known as Storm-0558 leveraged forged Microsoft authentication tokens to access enterprise email accounts. The mechanism of attack? Lateral movement from MSA (customer) keys to the Azure AD enterprise system. The breach affected approximately 25 organizations, primarily government agencies, including U.S. State Department email accounts. 

How Can Organizations Keep Systems Safe?

  • Prevent: Avoid creating “super admin” accounts that can access all your systems.
  • Limit the Blast Radius: Remove unnecessary permissions that might offer access to systems your users don’t actually need access to.
  • Detect: Implement monitoring for unusual access patterns, especially accounts accessing systems they’ve never accessed before.
  • Respond: When you detect lateral movement, isolate the compromised identity and review access logs.

Step 5: Persistence (Tokens, OAuth Apps, Service Principals, Backdoor Identities)

As soon as an attacker gains access to your systems, they’ll look for ways to maintain it if the original entry point is detected and blocked. Persistence techniques include the creation of OAuth applications, service principals, and API keys. These mechanisms are highly effective because they are often mistaken for legitimate administrative objects and can even survive password resets.

What’s a Real-World Example?

In 2025, Salesforce warned customers that a group called ShinyHunters was using vishing (voice phishing) to trick help desk staff into resetting MFA on privileged accounts. Once they got a foothold in a Salesforce instance, the attackers created malicious OAuth applications that allowed them to maintain persistent access.

How Can Organizations Keep Systems Safe?

  • Prevent: Control who can create OAuth applications, and establish lifecycle governance for service principals to ensure they have expiration dates.
  • Limit the Blast Radius: Restrict the permissions that can be granted to OAuth applications (for example, in AWS, use permission boundaries or service control policies to limit what IAM roles your OAuth apps can assume); ensure your service principals respect the principle of least privilege (PoLP).
  • Detect: Alert on the creation of new applications that require extensive permissions.
  • Respond: Maintain an inventory of authorized OAuth apps and service principals, and remove any new apps that are created outside of your process.
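The inventory check in the Respond bullet reduces to a set difference between the OAuth apps present in the tenant and the ones your process approved. A sketch with illustrative app IDs:

```python
def unauthorized_apps(tenant_apps, approved_inventory):
    """OAuth app IDs present in the tenant that were never approved -
    candidates for removal and investigation as persistence mechanisms."""
    return sorted(set(tenant_apps) - set(approved_inventory))

# Illustrative IDs: one app was created outside the approval process.
tenant = ["app-ci", "app-backup", "app-unknown-77"]
approved = ["app-ci", "app-backup"]
```

Run on a schedule against your tenant's app registration list, anything this returns should page someone, since a rogue OAuth app is exactly the persistence mechanism described above.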

Step 6: Action on Objectives (Data Access, Fraud, Ransomware Enablement)

Identity compromise is rarely the final objective for an attacker. Usually, it’s only a stepping stone on the way to accessing data, committing fraud, or enabling ransomware.

What’s a Real-World Example?

In September 2023, MGM Resorts experienced a devastating ransomware attack that led to more than a week of operational problems across 30 resorts, like shut-down slot machines, offline ATMs, and locked-out guests (the downside of digital hotel keys). Attackers gained access by researching employees on LinkedIn, then calling the help desk to request a password reset in their names.

How Can Organizations Keep Systems Safe?

  • Prevent: Implement PoLP on both the infrastructure and data layer; require additional verifications before a user can perform sensitive actions (e.g., ask users to reauthenticate with MFA or ask them for a manager’s approval).
  • Limit the Blast Radius: Prevent the creation of “super admins.” If any exist in your systems, downgrade their privileges. 
  • Detect: Alert on mass downloads or unusual queries against sensitive databases.
  • Respond: Implement automation that can quickly restrict access when suspicious data access is detected.
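The step-up verification idea in the prevent step can be sketched as a simple gate on sensitive actions. The session shape, the `StepUpRequired` exception, and the 5-minute freshness window are all hypothetical choices for illustration:

```python
# Illustrative step-up verification: sensitive actions demand a recent
# MFA challenge. The session dict and 5-minute window are assumptions.
import time

MFA_MAX_AGE_SECONDS = 300  # require an MFA challenge within the last 5 minutes

class StepUpRequired(Exception):
    """Raised when a sensitive action needs a fresh MFA challenge."""

def require_recent_mfa(session: dict) -> None:
    last_mfa = session.get("last_mfa_at", 0)
    if time.time() - last_mfa > MFA_MAX_AGE_SECONDS:
        raise StepUpRequired("re-authenticate with MFA before this action")

def export_customer_records(session: dict) -> str:
    require_recent_mfa(session)  # step-up gate before the sensitive action
    return "export started"

fresh_session = {"last_mfa_at": time.time()}          # MFA just completed
stale_session = {"last_mfa_at": time.time() - 3600}   # MFA an hour ago
```

In a real system the gate would redirect to an MFA challenge rather than raise, but the shape of the check is the same.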

Step 7: Cover, Repeat, Expand (Defense Evasion + Re-Entry)

Sophisticated attackers reduce their visibility as much as possible by altering audit logs and disabling security tools. They also wreak havoc by creating multiple re-entry points. This often goes unnoticed: in the wake of a breach, organizations can get tunnel vision and focus only on the initial entry point.

What’s a Real-World Example?

In 2023, the LockBit ransomware group demonstrated impressive defense-evasion techniques, and had accounted for roughly $91 million in ransom payments in the U.S. alone. The secret to their success? They played the long game: after gaining access to victims' systems, they didn't deploy ransomware right away; they first covered their tracks and expanded their foothold. Malware deployment and ransom demands came weeks or months later.

How Can Organizations Keep Systems Safe?

  • Prevent: Implement audit logging, and forward logs to immutable storage.
  • Limit the Blast Radius: Ensure that no one can disable security monitoring, not even for testing purposes.
  • Detect: Alert on log-retention policy changes and treat them as high-priority security incidents.
  • Respond: Implement automation that can quickly revoke access for a compromised identity across all systems.
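The detect step above can be sketched as a baseline comparison over logging settings. The config dict, field names, and baseline values are illustrative assumptions:

```python
# Sketch: treat any weakening of audit-log settings as a high-priority
# incident. The config shape and thresholds are illustrative, not a
# specific product's schema.

BASELINE = {"retention_days": 365, "forwarding_enabled": True}

def audit_logging_config(current: dict) -> list[str]:
    """Compare live logging settings against the baseline and return
    high-priority alerts for anything that weakens auditability."""
    alerts = []
    if current.get("retention_days", 0) < BASELINE["retention_days"]:
        alerts.append("P1: log retention shortened - possible defense evasion")
    if not current.get("forwarding_enabled", False):
        alerts.append("P1: forwarding to immutable storage disabled")
    return alerts

# A weakened config should trip both alerts; the baseline should trip none.
alerts = audit_logging_config({"retention_days": 30, "forwarding_enabled": False})
```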

What Are Best Practices for Reducing Identity Breaches?

Follow this checklist to cut your identity risk:

  • Start by Gaining Visibility: You can’t protect what you don’t see, so inventory your identity sprawl and identify password-only external access.
  • Review Admin Privileges: Determine who has admin rights, and analyze if they actually need all those permissions.
  • Test How Fast You Respond to Issues: Identify how much time it takes to revoke all access for a specific identity. Use this test result as a baseline for improvement.
  • Deploy Phishing-Resistant MFA: Phishing-resistant MFA needs to be implemented everywhere, as attackers often compromise lower-priority systems first and then move laterally.
  • Eliminate Exposed Credentials and Leaked Secrets: Scan your code repositories, wikis, and shared documents for exposed credentials. Implement automated scanning in your CI/CD pipelines to prevent secret leaks.
  • Protect Audit Logs: Audit logs should be stored in immutable storage to ensure they cannot be altered after creation.
  • Create Alerts: Alert on role changes, app consents, unusual MFA behavior, and federation changes.
  • Implement JIT Elevation: You don’t need persistent admin permissions. Administrative access should be granted on demand for a specific time period.
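The credential-hygiene item in the checklist above can be approximated with a small scanner. The two patterns here are illustrative only; production scanners ship with far larger rule sets:

```python
# Minimal secret-scanning sketch for code, wikis, and shared documents.
# The two patterns below are illustrative; real scanners use hundreds of rules.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.IGNORECASE
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of all secret patterns found in the text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

hits = scan_text('key = "AKIAABCDEFGHIJKLMNOP"\napi_key: "x9f3k2j8d7s6a5p4o3i2u1yq"')
```

Wiring a check like this into CI/CD pre-merge hooks is what turns it from a one-off audit into the continuous prevention the checklist calls for.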

Conclusion

Identity breaches are the easiest way in for attackers, and they usually follow a predictable pattern.

To disrupt this pattern, shifting left with stronger prevention is a start, but it’s not enough. You’ll also need to build powerful detection capabilities and automate quick responses to threats. Your motto should be, “Make it harder to get in, harder to escalate, harder to persist, easier to detect, and faster to contain.”

At Linx Security, we help organizations build robust identity security that addresses each stage of the attack chain. Book a demo with one of our engineers to learn more about how we can keep your systems safe from identity breaches.

AI-Native Databases: The Missing Layer Behind Reliable CoPilots Cover
AI Agents

AI-Native Databases: The Missing Layer Behind Reliable CoPilots

Jan 30, 2026

For the past two years, I've been building agents that expose data residing in different databases. Here, I'd like to share some actionable insights I've gathered along the way.

At Linx, we had to handle extremely high-scale databases for large enterprises. Building agents that perform well with low latency and high accuracy is hard, and the list of challenges is long.

Which model should you use? Should you fine-tune? How do you consume historical query data, and should you perform active learning? What about orchestration: do you go with an agentic framework or keep it vanilla? How do you respond quickly to investigations running against high-scale databases? And how do you rationalize cross-domain information spanning Business, Security, Governance, and Compliance?

These are just a few of the questions we had to answer.

While all of these topics are important, I'd like to focus on a different angle, one that turned out to be even more crucial.

When building an agent, engineers tend to equip it with tools that let it query the data and expose the schema, then assume the agent will perform well from there. In practice, it won't.

Imagine exposing that same schema to a junior analyst who's proficient in your database query language. Will they be able to answer questions about the data correctly? Usually not. In the following sections, I'll explain why, and how we solved it.

What Differentiates AI from Humans?

Jeremiah Lowin, in his excellent talk, presents criteria for how LLMs differ from humans when consuming data from APIs. Below is my version for the database problem, which differs slightly.

Real-Life Examples of Non-AI-Friendly Cases

Bad Naming: We had a field called is_external, which actually means “is the email domain external to the organization.” It does not mean the user is external; it's a property of the email itself. That naming alone caused repeated mistakes when the AI was asked about external users: it assumed such users were guests, leading to incorrect security audit reports.

Different Lingo: We use a graph database. The relationship between a user and their accounts was represented by an edge named owner_of, but the relationship between a user and their secrets was named responsible_for. When someone asked "Who owns this secret?", the agent generated a query using owner_of, even though we explicitly described which edge types exist and how they operate. The query returned no results, even though the data existed.

Design for Performance: We have an accounts collection, where each account represents a human in a specific application. We chose to keep the application name and data in a separate collection, storing only the app ID in the account document so the two could be joined when the app name was needed. This was done to support renaming an app without migrating many documents. In reality, since 98% of queries against accounts required the app name, the same join query was generated over and over, wasting a huge number of tokens. (We also found it non-performant for the non-agent use case.)

Fields That Shouldn't Be Exposed: We had many internal and legacy fields for feature flags, processing states, migration leftovers, and version counters. Humans learn to ignore fields like read_for_processing and migrated. The agent, in some cases, treats them as meaningful and starts weaving them into answers; in others, they just waste tokens.

Why Database Schemas Are Built This Way

Your database dialect is set by how your company talks and names things, but customer language doesn't always match your schema's language. The moment users ask questions their way, the gap shows up immediately. While engineering teams optimize for performance, storage, and clean abstractions—all valid priorities—AI agents need something entirely different: clarity and self-explanatory semantics. This disconnect persisted because, until recently, these systems were not customer-facing, and engineers who had to query the data saw no reason to complain.

Why Building a View Is Not Enough

In the MCP presentation, the suggestion is to curate a new API to be consumed by an agent alongside the existing API. One might say, "Let's build a view on top of the existing database and enforce these concepts there." However, I don't think this approach works here, for several reasons:

  • Views drift. New fields get added to the real model, someone forgets to update the view, and suddenly the CoPilot can't answer questions about new features. If the product and the AI's "understanding" (the view) diverge by even 5%, customers perceive the agent as unreliable.
  • Views may require duplicating the data, which is costly (depending on the type of view).
  • These concepts aren't only good for an agent; they help anyone accessing the database. The same mistakes an agent makes are also made by engineers who build queries and assume they correctly understand the schema.

This doesn't mean we can't create new fields to be consumed solely by the AI, or that there won't be fields the AI should ignore. But the majority of data should be streamlined with a process designed to ensure it is AI-Native. There's nothing more frustrating than finding out a day after a new feature was released that it's not exposed by the CoPilot.

Why Can't We Use a DAL (Data Abstraction Layer)?

A Data Abstraction Layer (DAL) is a software layer that sits between the application and the database, providing a simplified interface that hides the complexity of data storage and retrieval.

A DAL addresses many of the issues raised above. It focuses on outcomes, inner joins are already set, fields that should be ignored are removed, and it's usually explainable and optimized for performance.

However, using a query language is almost like writing code. You can do much more, and DALs are always limited by how they were designed and built. With open-ended queries, the possibilities are as broad as the database creator allows—which is usually what customers expect when asking a CoPilot about their data.

DALs are rigid; AI needs the flexibility of SQL but the safety of a DAL.

How We Solved These Challenges at Linx

At Linx, we weren't just dealing with flat tables; we were managing a massive Identity Graph. This added a layer of complexity where "truth" isn't found in a single row, but in the relationships between disparate domains—merging Business, Security, Governance, and Compliance data into a single coherent view.

We decided to build multiple tools to help our CoPilot answer customer questions. On the database side, we expose everything by default—new fields require a description and should be immediately discoverable by the agent. Alongside this, we also expose built-in APIs to save time for simple cases.

We have different mechanisms for reducing end-to-end latency, built around RAG and active learning, but I won't go into them in this post; they target a different angle of improving CoPilot performance and reliability and deserve a post of their own.

Engineers can also explicitly hide specific fields from the CoPilot.

The AI-Native Data Lifecycle: From Code to Production

*Yes, I used Gemini to make this ridiculous diagram; it seemed only fitting given the topic. And it gets the job done!

1. Intentional Design: The journey begins with a cultural shift in how our engineers view data. During the initial design phase, engineers must explicitly classify every new field as either exposable or restricted. This ensures that data exposure is never a side effect, but always a conscious decision.

2. Static Enforcement: We utilize static analyzers to enforce our documentation standards: if a field is marked for exposure but lacks a clear description, the build is blocked. This rigid enforcement prevents "schema drift," ensuring that no new data points are silently added or forgotten without a clear contract.

3. Agentic Semantic Validation: We have developed a custom internal agent specifically designed to validate our data integrity. Rather than relying on basic syntax checks, this agent performs deep semantic analysis:

  • Consistency Checks: Validates that field names align perfectly with their descriptions.
  • Logic Verification: Analyzes calculated fields to ensure the underlying logic matches what the name implies to an LLM.
  • Confusion Matrix: Proactively flags near-duplicate fields or ambiguous naming conventions that could cause "hallucinations" or mix-ups during inference.

4. Production Monitoring: Finally, we maintain standard production monitoring to identify and resolve any edge-case issues or anomalies in real time.
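The build gate in step 2 can be sketched as a simple check over field definitions. The dict format here is an assumption for illustration:

```python
# Sketch of the static enforcement in step 2: fail the build when a field
# is marked exposable but has no description. The field format is assumed.

def check_schema(fields: list[dict]) -> list[str]:
    """Return build-blocking errors for exposed fields without descriptions."""
    errors = []
    for f in fields:
        if f.get("ai_exposed", True) and not f.get("description", "").strip():
            errors.append(f"BLOCKED: field '{f['name']}' is exposed but undocumented")
    return errors

errors = check_schema([
    {"name": "app_name", "description": "Display name of the application"},
    {"name": "risk_score", "description": ""},           # exposed, undocumented -> blocked
    {"name": "migration_v2_done", "ai_exposed": False},  # hidden, no description needed
])
```

In CI, a non-empty result would fail the pipeline, which is what makes AI-readiness a conscious decision rather than a side effect.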

Many Databases, Many Truths

Okay, so far so good, right? I wish it were that easy.

As I continued building, I found that this gets harder as systems mature. Real products don't query a single database. You have an operational DB, an analytics store, a warehouse, a lakehouse, documentation, and now APIs via MCP. The same concept ends up in multiple places with slightly different names or shapes. The model has to guess whether account, tenant, and org are the same thing or three different ones. We check for that too: the same entity exposed under different names across different sources, creating ambiguity.
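That cross-source check can be sketched with a hand-maintained synonym map; the source names, fields, and synonyms below are illustrative assumptions:

```python
# Sketch: flag concepts that appear under different names in different
# data stores. The synonym map and source schemas are illustrative.

# canonical concept -> names treated as synonyms across sources
SYNONYMS = {"organization": {"account", "tenant", "org", "organization"}}

def find_ambiguous_concepts(sources: dict[str, set[str]]) -> dict[str, dict[str, str]]:
    """Map each canonical concept to the (source -> local name) pairs using it,
    reported only when sources disagree on the name."""
    report = {}
    for concept, names in SYNONYMS.items():
        usage = {src: next(iter(fields & names))
                 for src, fields in sources.items() if fields & names}
        if len(set(usage.values())) > 1:  # same concept, different names
            report[concept] = usage
    return report

report = find_ambiguous_concepts({
    "operational_db": {"tenant", "user_id"},
    "warehouse": {"org", "user_id"},
})
```

Running a check like this across the operational DB, warehouse, and MCP-exposed APIs surfaces exactly the account/tenant/org ambiguity the model would otherwise have to guess at.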

Principles for AI-Native Infrastructure

To sum it up, when building CoPilots that run Text2SQL tasks, we should follow principles that make the CoPilot more reliable (alongside the well-known database metrics we follow, such as performance). Just as we follow SOLID principles when writing code, below is a suggested modified SOLID (or SDDID) for AI-Native infrastructure:

Semantic Naming: Table and field names must be self-explanatory. If is_external refers to an email domain and not a user's status, it must be renamed or aliased for the AI.

Dialect Alignment: The schema should match the mental model of the user. If your customers ask about "Ownership," don't hide that relationship behind technical jargon like responsible_for. Your database dialect must speak the same language as your business.

Documentation: Every exposable field must have a description attribute. This metadata shouldn't live in a separate Wiki; it should be part of the database contract.

Intentional Exposure: Not all data is for AI. Use "AI-Exposability" flags to hide internal flags, version counters, and migration leftovers that confuse the model and waste tokens.

Drift Detection: Implement automated "Semantic Tests" in your CI/CD. If a new field is added without a description or violates naming conventions, the build fails. AI-readiness is a first-class citizen.