
Agentic AI is no longer on the horizon for enterprise security teams; it is already inside the building. According to the Cyber Security Tribe Annual Report, 73% of organizations are already using or developing agentic AI within cybersecurity, up from 59% the prior year. The conversation has shifted from "should we?" to "how far should we go?"
That's a harder question. And it's exactly the one Cyber Security Tribe put to senior security leaders at RSAC 2026. AIBound CEO and co-founder Niall Browne was among the experts who responded — and his perspective cuts to the heart of what makes agentic AI both a force multiplier and a governance challenge at the same time.
The Trajectory Is Clear, and Irreversible
Niall's starting point is direct: the 73% of organizations using agentic AI today will become 100%. This isn't speculation — it's the natural trajectory of where enterprise software is headed. Just as the average smartphone user now runs close to 80 apps, every employee will soon operate alongside a comparable number of AI agents. The capability is coming regardless of whether security teams are ready for it.
That reality creates both enormous opportunity and genuine risk. Agents are, by their very nature, autonomous and nondeterministic. As Niall notes, "you are never entirely sure what you will get." The question isn't whether to adopt agentic AI — it's whether your organization has the controls in place to govern it responsibly as adoption accelerates.
The Right Access. The Right Guardrails. The Right Balance.
The governance challenge Niall articulates is not a binary one. You want agents to have the right access, data, and identities to do their jobs effectively — but you need guardrails that prevent them from acting beyond their remit. Getting that balance wrong in either direction is costly: over-restrict agents and you lose the operational efficiency gains; under-restrict them and you introduce cascading risk into your environment.
Absolute technical security controls for AI don't yet exist, and waiting for a perfect solution isn't a viable strategy. The practical path forward is smart, adaptive governance: scoped identities with least-privilege access, runtime behavioral monitoring, and human-in-the-loop checkpoints for high-risk actions. Organizations that build these guardrails now — rather than waiting — will be the ones who can safely accelerate as agentic capability matures.
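The guardrails described above can be sketched in a few lines of code. This is a minimal, illustrative example only: the names (`AgentIdentity`, `evaluate`, the scope strings) are hypothetical and do not represent AIBound's product or any real API. It shows how a scoped identity, a least-privilege scope check, and a human-in-the-loop checkpoint for high-risk actions can compose into a single policy decision.

```python
from dataclasses import dataclass

# Hypothetical sketch: a policy gate combining least-privilege scopes
# with a human-in-the-loop checkpoint for high-risk actions.
# All names here are illustrative, not a real product API.

HIGH_RISK_ACTIONS = {"revoke_access", "modify_prod_infra"}

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_scopes: set  # least privilege: only what this agent needs

@dataclass
class Decision:
    allowed: bool
    needs_human_approval: bool
    reason: str

def evaluate(identity: AgentIdentity, action: str, scope: str) -> Decision:
    # 1. Scope check: the agent may only touch resources in its remit.
    if scope not in identity.allowed_scopes:
        return Decision(False, False, f"{identity.agent_id} lacks scope '{scope}'")
    # 2. Risk check: high-risk actions are never fully autonomous.
    if action in HIGH_RISK_ACTIONS:
        return Decision(False, True, f"'{action}' requires human confirmation")
    return Decision(True, False, "within remit")

triage_agent = AgentIdentity("soc-triage-01", {"alerts:read", "tickets:write"})
print(evaluate(triage_agent, "enrich_alert", "alerts:read"))   # routine work proceeds
print(evaluate(triage_agent, "revoke_access", "alerts:read"))  # escalates to a human
```

The key design point, in line with the balance Niall describes, is that the gate has three outcomes rather than two: allow, deny, or escalate to a human, so governance does not collapse into an all-or-nothing switch.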
This is exactly the problem AIBound was designed to solve. The AI Control Plane gives security teams the visibility and enforcement layer they need to govern agent identities, monitor runtime behavior, and enforce policy boundaries — making it possible to say yes to agentic AI without losing control of it.
Where Agents Belong — and Where They Don't
Niall draws a clear line between use cases where agentic autonomy creates strategic advantage and those where it introduces unacceptable risk:
High-value, lower-risk use cases:
- SOC triage — high-volume, pattern-driven work that benefits from machine speed and consistency
- Threat hunting — continuous analysis across large data surfaces where agents outperform human analysts on volume
- Automated vulnerability scanning — repeatable, structured work with well-understood decision criteria
High-risk use cases requiring human oversight:
- Autonomous access revocation — wrong decisions can lock out legitimate users at critical moments
- Production infrastructure changes — errors can propagate faster than any human can intervene
- Any action that is difficult to reverse — when the blast radius of a mistake is large, human judgment must stay in the loop
This framework aligns closely with what the security leaders quoted in the Cyber Security Tribe article consistently described: the line isn't "AI can do this" versus "AI can't do this" — it's between decisions where being wrong is recoverable and decisions where being wrong creates cascading damage.
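That recoverable-versus-cascading line can be expressed as a one-rule routing function. This is a deliberately simplified sketch (the function name and the three-level blast-radius scale are assumptions for illustration, not anything from the article): full autonomy only when a mistake is both reversible and contained.

```python
# Illustrative sketch of the framework above: grant full autonomy only
# when a wrong decision is cheap to reverse AND the blast radius is small.
# The 'low' / 'medium' / 'high' scale is an assumed, simplified rubric.
def autonomy_allowed(reversible: bool, blast_radius: str) -> bool:
    return reversible and blast_radius == "low"

# SOC alert triage: a mis-triaged alert can be re-queued.
print(autonomy_allowed(True, "low"))    # True

# Autonomous access revocation: locking out a legitimate user
# at a critical moment is hard to undo in time.
print(autonomy_allowed(False, "high"))  # False
```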
Perfection Is the Enemy of Good
Perhaps the most important message in Niall's perspective: don't let the absence of a perfect solution become a reason to delay governance. The risk of inaction is just as real as the risk of moving too fast.
Organizations that build their AI governance framework incrementally — starting with scoped identities, behavioral monitoring, and clear escalation paths — are far better positioned than those waiting for a comprehensive solution that may never fully arrive. Every agent deployed without governance is a risk that compounds as the number of agents grows.
What This Means for Your Security Program
The organizations that will get the most from agentic AI are the ones that treat governance as a prerequisite, not an afterthought. That means:
- Establishing agent identity infrastructure before scale — every agent needs its own scoped identity with least-privilege access, not borrowed credentials from a human user
- Instrumenting runtime behavior so you know what agents are actually doing, not just what they were designed to do
- Drawing explicit lines between autonomous actions and those that require human confirmation, and enforcing those lines through policy
- Measuring outcomes continuously — agentic AI should be held to the same accountability standards as any other security control
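The second item above — knowing what agents are actually doing, not just what they were designed to do — comes down to comparing observed behavior against declared behavior. The sketch below is hypothetical (the `declared_behavior` registry and `record_action` helper are invented for illustration) and shows the core idea: append every runtime action to an audit log and flag anything outside the agent's declared design for review.

```python
import time
from collections import defaultdict

# Illustrative sketch, not a real product API: compare what an agent
# actually does at runtime against what it was declared to do.
declared_behavior = {"soc-triage-01": {"enrich_alert", "close_ticket"}}
audit_log = []                  # every action, with a timestamp
deviations = defaultdict(list)  # actions outside the declared design

def record_action(agent_id: str, action: str) -> None:
    audit_log.append({"ts": time.time(), "agent": agent_id, "action": action})
    if action not in declared_behavior.get(agent_id, set()):
        deviations[agent_id].append(action)  # escalate for human review

record_action("soc-triage-01", "enrich_alert")   # expected behavior
record_action("soc-triage-01", "revoke_access")  # deviation: flagged
print(deviations["soc-triage-01"])  # ['revoke_access']
```

In practice the deviation list would feed an alerting pipeline rather than a print statement; the point is that instrumentation makes the gap between design and behavior observable at all.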
Agentic AI is one of the most consequential capability shifts in enterprise security in years. Getting the governance right now — while adoption is still accelerating — is the difference between a strategic advantage and a systemic liability.
AIBound is the AI Control Plane for enterprise security teams — providing the visibility, governance, and enforcement needed to deploy AI agents safely at scale. Learn more →
Read the full Cyber Security Tribe article →