The patterns were immediately apparent as I walked the RSA Conference show floor this year: identity as the primary attack surface and the security chaos unleashed by agentic AI.
While these topics aren’t new, the sense of urgency is.
At Bospar’s invitation-only “Decoding RSAC and Driving Strategy” analyst relations (AR) event on the conference’s final day, analysts from Omdia and Frost & Sullivan unpacked just how fast organizations are moving, and how wide the gap remains between ambition and execution.
“What struck me most was how candid the conversations were, with analysts and security leaders cutting straight to the hard questions about AI risk and identity,” said Alex Soejarto, analyst-in-residence at Bospar. “That’s what good AR produces. Not talking points, but real dialogue that sharpens how both sides think about what’s coming next.”

Identity: The New Front Door for Attackers
Credential-based attacks have surged to the point where identity has become the primary security layer. Booth after booth showcased tools promising visibility into non-human identities, entitlement sprawl and the murky territory where cloud permissions meet real-world risk. As one analyst noted, identity governance and observability will have "a lot to do in the near future," especially as AI agents multiply across enterprise environments without security as a prerequisite.
90% Doing AI. 10% Doing It Well.
Agentic AI was the show’s inescapable buzzword.
Omdia’s IT Enterprise Insights survey of 6,000 respondents found 90% of organizations are using AI. But only 10% have reached the optimization stage. The rest are stuck somewhere between experimentation and coherent strategy.
"There's an expectation gap and an execution gap," Adam Etherington, global cybersecurity practice leader at Omdia, told the Bospar AR gathering. Boards want innovation and risk mitigation, and CISOs want guardrails. Vendors are layering AI into everything from SOC automation to cloud workload protection, while buyers scramble to figure out what actually reduces risk.
The Agentic Governance Vacuum
AI agents are executing tasks across enterprise systems, touching sensitive data and making API calls with permissions that security teams may not have fully mapped. The conversations at the Bospar event zeroed in on the governance vacuum: Who owns the risk when an AI agent acts on its own? How do you audit decisions made at machine speed? What happens when an adversary compromises an agent's access?

Jarad Carleton, Frost & Sullivan’s global cybersecurity program leader, offered a vivid example. Some organizations are encouraging employees to experiment freely with AI, to the point of spinning up local instances on devices connected to corporate networks. The upside is innovation. The downside is that a compromised instance becomes a force multiplier for attackers, as it collects data from every connected system and exfiltrates it with minimal additional effort.
Regulated Industries Are Moving Quickly
Analysts noted that heavily regulated industries are advancing the fastest with agentic AI because their policies and operating procedures are already in place and standardized. They have the documentation to feed AI systems and the compliance frameworks to govern them. Less regulated sectors don’t have that advantage.
CISOs expecting budget increases are instead being tasked with funding AI innovation while keeping the enterprise secure, a balancing act that demands hard choices about where to optimize and where to invest.
Without Structure, There Is Just Risk
On the RSAC 2026 show floor, identity vendors pitched deeper observability, AI security startups pitched governance frameworks and platform players pitched consolidation. And everyone pitched AI.
The takeaway from the analyst roundtable was blunt: Speed without structure is just risk by another name. Organizations racing to deploy agentic AI without locking down identity, data access and governance aren't moving faster; they're simply accumulating exposure.