For years, the promise of AI in cybersecurity felt more like a vendor pitch than a practical reality. Security teams were drowning in alerts, stretched thin, and staring at dashboards that generated noise faster than analysts could process them.
AI was supposed to fix all of that. In many cases, it didn’t. At least not right away.
But that’s changing. According to a recent conversation published by Dark Reading, security leaders are increasingly moving past the early stumbles to find genuine, measurable value in AI deployments.
“I got to say, like a year ago, when I started asking the question to both practitioners and security leaders, the leaders were more motivated at what was possible, of course, than necessarily the practitioners were,” reflected Omdia analyst Dave Gruber to Dark Reading. “There was a fair amount of just nervousness and cautiousness in approaching things. But boy, I’ll tell you, over the last three research cycles that I’ve gone through right up until now, now there’s, I’ll call it what it is, it’s excitement about what’s possible, not only excitement about like how I can get my job done better, but excitement about the promise for making my life better and maybe my career prospects better going forward too.”
The challenge now isn’t whether to use AI. It’s how to use it well.
From Security Operations’ Runbooks to Real-Time Response
One of the most practical applications emerging in security operations is the conversion of existing runbooks into AI-driven workflows. As Frederick Lee, CISO of Reddit, explains, teams are “literally taking some of the run books they have today, feeding those into LLMs, and turning those into agents to actually continue some of the operations.”
It’s not glamorous, but it’s effective. Existing institutional knowledge becomes the foundation for autonomous response, reducing the manual burden on analysts and extending operational coverage beyond normal working hours.
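To make the runbook-to-agent idea concrete, here is a minimal, hypothetical sketch. The runbook text, step keywords, and handler functions are all invented for illustration; in a real deployment, an LLM would map free-form runbook language to a vetted set of tool calls rather than the simple keyword matching used here.

```python
# Hypothetical sketch: turning a written runbook into an executable agent
# workflow. Everything below is illustrative, not a real product API.

RUNBOOK = """
1. Isolate the affected host from the network.
2. Collect recent process and login activity from the host.
3. Open a ticket and notify the on-call analyst.
"""

def isolate_host(host):
    return f"isolated {host}"

def collect_activity(host):
    return f"collected activity from {host}"

def notify_oncall(host):
    return f"ticket opened for {host}"

# Map runbook language to approved actions: the "skills" the agent may use.
SKILLS = {
    "isolate": isolate_host,
    "collect": collect_activity,
    "notify": notify_oncall,
}

def run_runbook(runbook: str, host: str) -> list:
    """Walk the runbook line by line and invoke the matching skill."""
    results = []
    for line in runbook.strip().splitlines():
        for keyword, skill in SKILLS.items():
            if keyword in line.lower():
                results.append(skill(host))
                break
    return results

print(run_runbook(RUNBOOK, "host-42"))
```

The key point is that the institutional knowledge already lives in the runbook; the agent layer only has to translate it into a constrained set of pre-approved actions.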
The Dark Reading conversation also highlighted how AI is expanding the reach of stretched security teams, not only in terms of attack surface coverage but also in terms of hours. AI can now respond to end users around the clock in ways that a human team alone simply cannot sustain today.
Incident Summarization and Threat Intelligence: Where AI Shines
One horizontal use case that has proven broadly valuable is incident summarization. What once required an analyst to painstakingly document a case can now happen quickly and accurately with AI assistance: pulling context from multiple sources, synthesizing timelines, and translating technical detail into something consumable. This is exactly the kind of task that general-purpose AI excels at, freeing analysts to focus on higher-priority work.
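The mechanical half of that work, merging events from several telemetry sources into one ordered timeline, can be sketched in a few lines. The source names, timestamps, and event details below are invented for the example; in practice, an LLM would then turn such a timeline into a narrative summary for the case record.

```python
# Illustrative sketch of incident timeline synthesis: merge events from
# multiple (invented) telemetry feeds and sort them chronologically.
from datetime import datetime

endpoint_events = [
    {"time": "2025-06-01T09:14:00", "detail": "suspicious process spawned"},
]
network_events = [
    {"time": "2025-06-01T09:12:30", "detail": "beacon to unknown domain"},
]
cloud_events = [
    {"time": "2025-06-01T09:20:05", "detail": "anomalous API token use"},
]

def build_timeline(*sources):
    """Merge events from all sources into one chronological list."""
    merged = [event for source in sources for event in source]
    merged.sort(key=lambda e: datetime.fromisoformat(e["time"]))
    return [f'{e["time"]}  {e["detail"]}' for e in merged]

for line in build_timeline(endpoint_events, network_events, cloud_events):
    print(line)
```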
The vertical use cases are equally compelling, particularly in threat intelligence analysis. Operationalizing threat intelligence has historically been one of the most difficult challenges in security. Any delay between gaining access to intelligence and being able to act on it adds direct risk to the organization. AI is beginning to close that gap, helping teams move faster from insight to action.
Real-World SOC Deployments: Wins and Lessons Learned
Real-world SOC pilots are beginning to produce concrete data, according to a separate Dark Reading article, “AI in the SOC: What Could Go Wrong?” A cybersecurity leader at a Fortune 500 food manufacturing company ran a six-month AI trial inside her SOC and saw:
- Mean time to discovery improved by 26% to 36%
- Mean time to response improved by 22%
- False positives reduced by 16 points
This was all accomplished while maintaining strict human oversight and audit controls.
The approach was deliberate: AI was embedded as a read-only triage assistant inside the security case management workflow, synthesizing alerts from endpoint, network, cloud, and OT monitoring feeds.
It was never allowed to interact directly with production equipment like programmable logic controllers (PLCs) or industrial control systems like SCADA.
That discipline of knowing where AI should and should not operate proved critical to the deployment’s success. In one instance, the AI detected a suspicious file at an endpoint, determined it contained potential malware, and quarantined it autonomously. That kind of proactive prevention, running continuously and without fatigue, represents a genuine shift in what security operations can look like.
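That kind of guardrail can be expressed as an explicit action policy. The sketch below is a hedged illustration of the principle described above, not the deployment's actual logic: the AI may act autonomously only on pre-approved asset/action pairs, and anything touching OT assets such as PLCs or SCADA always goes to a human. The asset classes and action names are invented.

```python
# Illustrative guardrail policy: autonomous action only where explicitly
# allowed; OT assets are always escalated to a human. Names are invented.

AUTONOMOUS_ALLOWED = {("endpoint", "quarantine_file")}
HUMAN_ONLY_ASSETS = {"plc", "scada"}

def decide(asset_class: str, action: str) -> str:
    """Return 'execute' only for pre-approved asset/action pairs;
    everything else, especially OT assets, escalates to a human."""
    if asset_class in HUMAN_ONLY_ASSETS:
        return "escalate_to_human"
    if (asset_class, action) in AUTONOMOUS_ALLOWED:
        return "execute"
    return "escalate_to_human"

print(decide("endpoint", "quarantine_file"))  # execute
print(decide("plc", "quarantine_file"))       # escalate_to_human
```

Note the default: any pair not explicitly allow-listed escalates, which is what keeps the system safe as new asset types and actions appear.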
The lesson from these early deployments: start with clear guardrails, measure rigorously, and let results, not assumptions, drive expansion.
The Problem With Most AI Tools Today
Despite the progress, a core frustration persists across the industry. Justin Foster, CTO of Forescout, put it plainly: “In 2025, we saw the peak of inflated expectations for AI in cybersecurity. AI promised relief, but most solutions focused on making individual tools smarter, while leaving the work itself just as fragmented and manual.”
That’s the crux of the problem. Adding AI features to existing tools does not automatically reduce the cognitive load on analysts, eliminate the need to chase signals across a dozen dashboards, or tell a network operator what they actually need to do today. It just makes individual steps in a still-broken workflow slightly faster.
Go deeper: Watch ZK Research principal technology analyst Zeus Kerravala interview Forescout CTO Justin Foster about AI fatigue and prompt engineering:
Forescout VistaroAI: Building AI Around How Security Work Actually Happens
This is precisely the challenge that Forescout set out to solve with VistaroAI™, our newly introduced skills-based agentic AI suite. The central design philosophy is a departure from how most AI tools work: rather than requiring security professionals to learn prompt engineering or adapt their workflows to the AI, VistaroAI adapts to them.
The platform is role-based and skills-based by design. Roles available today include network operator, network security analyst, security manager, SOC analyst, biomedical engineer, and compliance officer. Each role receives a personalized, daily view of what requires attention — prioritized based on that organization’s specific environment and packaged with the context needed to act. A network operator sees operational shifts. A security manager sees risk movements tied to business impact. A compliance officer sees policy deviations and approaching deadlines. The same underlying data is shaped differently depending on who needs to see it and what they are responsible for doing.
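The "same data, shaped per role" idea can be illustrated with a toy example. The role names follow the article, but the findings, fields, and filtering rules below are invented for the sketch; they are not VistaroAI's actual data model.

```python
# Toy illustration of role-based shaping of one shared dataset.
# Findings and filter rules are invented for the example.

FINDINGS = [
    {"kind": "config_drift", "risk": "low", "compliant": True},
    {"kind": "exposed_service", "risk": "high", "compliant": False},
]

# Each role sees the same findings, filtered for its responsibilities.
ROLE_VIEWS = {
    "network_operator": lambda f: f["kind"] == "config_drift",
    "security_manager": lambda f: f["risk"] == "high",
    "compliance_officer": lambda f: not f["compliant"],
}

def daily_view(role: str) -> list:
    """Return the findings a given role should see today."""
    keep = ROLE_VIEWS[role]
    return [f["kind"] for f in FINDINGS if keep(f)]

print(daily_view("security_manager"))  # ['exposed_service']
```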
As CEO Barry Mainz describes it: “VistaroAI flips that model: we’ve encoded agent-based skills with guardrails to handle the complexity behind the scenes and paired them with role-based personas that deliver recommendations and next steps aligned to real needs and responsibilities. Value is delivered on day one.”
This is significant. Most AI deployments require weeks or months of tuning before they deliver a meaningful signal. A role-based, skills-first approach means analysts are not handed a blank AI canvas and told to figure out what to ask of it. They are handed a structured, daily work queue that reflects their actual responsibilities.
At the same time, VistaroAI keeps human judgment at the center. The platform is designed to augment professional decision-making, not replace it. It surfaces risk signals more clearly, structures daily work, and removes friction, so the analyst, manager, or compliance officer remains in control of what happens next. This human-in-the-loop architecture is not a limitation. It’s the feature.
In a domain where the consequences of a wrong decision can be severe, responsible AI means AI that escalates to humans rather than acting unilaterally on high-stakes decisions.
What This Means for the Security Profession
The trajectory of AI in cybersecurity is becoming clearer. The early phase of bolting AI onto existing tools to make individual features smarter is giving way to a more integrated model, one where AI is built around how security work actually happens, from the ground up.
That shift requires meeting security professionals where they are: accounting for their specific roles, the specific environments they are defending, and the specific actions they need to take each day. It requires delivering prioritized clarity instead of more volume. And it requires keeping humans in the loop, not as a regulatory checkbox, but as a genuine architectural commitment.
The security leaders who are seeing results are the ones who have approached AI not as a silver bullet, but as a force multiplier. It works best when it is anchored in real workflows, measured against real outcomes, and paired with the human expertise that no model can replicate.
AI in cybersecurity is no longer just a promise. For the teams getting it right, it is becoming the operating model.