Bip Milwaukee Local News


Why Cybersecurity Must Rethink Defense in the Age of Autonomous Agents

May 13, 2026  Twila Rosenbaum

In March 2026, San Francisco once again became the epicenter of the cybersecurity world. Thousands of practitioners, vendors, and investors gathered at Moscone Center for the RSA Conference, where one theme dominated every keynote, panel, and booth conversation: Agentic AI. Not just AI as a tool, but AI as an actor.

From autonomous code generation to decision-making systems that initiate actions without human intervention, the industry is entering a new phase. Developments like Mythos, a next-generation AI framework capable of orchestrating complex, multi-step cyber operations, highlight both the promise and the risk of this shift. The Cloud Security Alliance predicts a surge in simultaneous AI-powered attacks and urges defenders to fight AI with AI. OpenAI has responded by scaling its Trusted Access for Cyber program to support thousands of verified defenders and hundreds of security teams. Gartner reinforces this trend, forecasting AI spending to grow by 44 percent in 2026 and reach $47 trillion by 2029, far exceeding its projected $238 billion for information security and risk management solutions in 2026.

The Dual-Use Reality of Agentic AI

Technologies like Mythos reveal a fundamental truth: the same capabilities that benefit defenders also empower attackers. Adversaries are already using AI to enable autonomous reconnaissance and lateral movement, real-time adaptation to defenses, and scalable, low-cost attacks with minimal human involvement. This is not theoretical. Early rogue AI agents are probing environments, exploiting misconfigurations, and mimicking legitimate users. Attackers no longer need to control every step; they can deploy agents that behave like identities.

The dual-use nature of AI is a well-documented phenomenon. In cybersecurity, offensive tools often become defensive ones and vice versa. For example, large language models can generate phishing emails or malware code, but they also power detection algorithms. The difference now is speed and autonomy. Agentic AI can operate at machine speed, making decisions in milliseconds, which outpaces human reaction. As a result, security teams must prepare for attacks that evolve faster than traditional signature-based defenses can adapt.

The Risk of “One More Tool”

Every major shift in cybersecurity has led to a wave of point solutions. The result is predictable: tool sprawl, siloed visibility, and operational complexity. These gaps often benefit attackers. Agentic AI is following the same path. Early signs are already visible: AI security posture management tools, AI runtime protection platforms, AI-specific anomaly detection engines, and AI governance solutions. Each may provide value, but adding more tools increases friction. Organizations do not need more dashboards. They need better context and control over the entities operating in their environments, whether human or machine.

At the parallel AGC Cybersecurity Investor Conference, AI experts and industry leaders reached a more pragmatic conclusion: organizations should treat AI like an identity. This perspective cuts through the hype. Rather than viewing AI as a new tool category that requires entirely separate security stacks, it places AI within the established and critical domain of identity security. Fundamentally, agentic AI behaves like an identity: it authenticates (via APIs, tokens, or credentials), it accesses systems and data, it performs actions within an environment, and it can be compromised, misused, or go rogue. Once you accept this, the path forward becomes clearer—and far less fragmented.

Identity Threat Detection as the Foundation

If AI is treated as an identity, identity threat detection and risk mitigation solutions become the logical control plane. This approach focuses on analyzing behavior across credentials and systems. It combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform. Applied to AI, this enables behavioral visibility to detect anomalies such as unusual access, privilege escalation, or data exfiltration; risk-based controls to adjust access, enforce additional verification, or isolate suspicious agents; unified policy enforcement across human and machine identities; and lifecycle management to prevent orphaned or unmanaged agents.
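The risk-based controls described above can be sketched in a few lines. The following is a minimal illustration, not any vendor's implementation: the signal names, weights, and thresholds are all invented for the example, and a real platform would tune them against observed agent behavior.

```python
from dataclasses import dataclass, field

# Hypothetical risk signals and weights (illustrative values only).
WEIGHTS = {
    "unusual_access": 40,
    "privilege_escalation": 35,
    "bulk_data_read": 25,
}

@dataclass
class AgentActivity:
    """A snapshot of one AI agent's recently observed behavior."""
    agent_id: str
    signals: set = field(default_factory=set)

def risk_score(activity: AgentActivity) -> int:
    """Sum the weights of every risk signal observed for the agent."""
    return sum(WEIGHTS.get(s, 0) for s in activity.signals)

def respond(activity: AgentActivity) -> str:
    """Map the score to a graduated, risk-based control."""
    score = risk_score(activity)
    if score >= 60:
        return "isolate"   # quarantine the agent's credentials
    if score >= 30:
        return "step_up"   # require additional verification
    return "allow"
```

The point of the sketch is that the decision logic is identical for a human account and an AI agent; only the telemetry feeding the signals differs.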

As rogue AI agents emerge—whether compromised or malicious—identity-driven security provides a practical defense. It enforces least privilege, continuously validates access, detects abnormal behavior, and automates response actions. These capabilities already exist in modern identity security frameworks and can be extended to AI without introducing new silos. For instance, tools like conditional access policies, user and entity behavior analytics (UEBA), and privileged access management (PAM) can be adapted to monitor and control AI agents. A compromised AI agent that suddenly attempts to escalate privileges or access sensitive data would trigger alerts just as a compromised human account would.
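A UEBA-style check of the kind described above can be reduced to comparing an identity's current actions against its learned baseline. The sketch below uses a hard-coded baseline table and invented action names purely for illustration; a real UEBA engine would learn baselines statistically over time.

```python
# Illustrative per-identity baselines of "normal" actions, as a UEBA
# engine might learn them over time (hypothetical identities and data).
BASELINES = {
    "svc-reporting-agent": {"read:sales_db", "write:report_store"},
    "alice": {"read:wiki", "read:sales_db"},
}

# Actions treated as high-severity deviations, e.g. privilege escalation.
SENSITIVE = {"grant:admin_role", "read:customer_pii", "export:all_tables"}

def detect_anomalies(identity: str, actions: set) -> list:
    """Flag actions outside the identity's baseline, ranking sensitive
    deviations first. Works the same for human and machine identities."""
    baseline = BASELINES.get(identity, set())
    deviations = actions - baseline
    return sorted(deviations, key=lambda a: (a not in SENSITIVE, a))
```

Here a service agent that suddenly issues `grant:admin_role` surfaces at the top of the alert list, exactly as a compromised human account would.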

This approach also addresses a critical gap: many organizations lack visibility into non-human identities. Service accounts, API keys, and now AI agents often fly under the radar of traditional security monitoring. By treating them as identities, companies can inventory, monitor, and govern them using existing identity governance frameworks. This reduces the attack surface without requiring a bespoke security stack for every new technology.
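Inventorying non-human identities, as described above, is largely a matter of asking two governance questions of every entry: does it have an owner, and has it expired? The sketch below assumes a hypothetical inventory format; the identity names and fields are invented for the example.

```python
from datetime import date

# Hypothetical identity inventory, mirroring fields an identity
# governance tool might track for accounts, keys, and AI agents.
IDENTITIES = [
    {"name": "alice", "kind": "human", "owner": "alice", "expires": None},
    {"name": "svc-backup", "kind": "service", "owner": "it-ops",
     "expires": date(2026, 12, 31)},
    {"name": "agent-triage", "kind": "ai_agent", "owner": None, "expires": None},
    {"name": "key-billing-api", "kind": "api_key", "owner": "finance",
     "expires": date(2025, 1, 1)},
]

def ungoverned(identities, today):
    """Return non-human identities missing an owner or already expired --
    the ones that typically fly under the radar of traditional monitoring."""
    findings = []
    for ident in identities:
        if ident["kind"] == "human":
            continue
        if ident["owner"] is None:
            findings.append((ident["name"], "no_owner"))
        elif ident["expires"] and ident["expires"] < today:
            findings.append((ident["name"], "expired"))
    return findings
```

Running such a sweep regularly turns "unknown AI agents in the environment" into a tractable governance report rather than a blind spot.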

Why the Identity-Centric View Succeeds

The identity-centric model succeeds because it is based on a simple truth: the ability to act is the ability to cause harm. Whether the actor is a human, a software bot, or an AI agent, the underlying security principle remains the same—verify and authorize every action, and monitor for anomalies. This is the essence of Zero Trust, which has gained mainstream acceptance. The National Institute of Standards and Technology (NIST) defines Zero Trust as a security model that eliminates implicit trust and continuously validates every stage of a digital interaction. Applying this to AI agents means that no agent is trusted by default, and each agent must prove its identity and intent before accessing resources.
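The "no trust by default" principle can be made concrete in a few lines: deny everything, then allow only authenticated identities performing explicitly granted actions. This is a toy policy check under assumed names (the agent IDs, actions, and resources are invented), not a full Zero Trust architecture, which would also evaluate device posture, session risk, and context.

```python
# Explicit grants: (identity, action, resource) tuples that are
# authorized. Anything not listed here is denied (hypothetical policy).
POLICY = {
    ("agent-support", "read", "ticket_db"),
    ("agent-support", "write", "ticket_db"),
    ("agent-netopt", "read", "telemetry"),
}

def authorize(identity: str, action: str, resource: str,
              authenticated: bool) -> bool:
    """Deny by default: an action is permitted only when the identity is
    authenticated AND the exact grant exists in the policy."""
    if not authenticated:
        return False
    return (identity, action, resource) in POLICY
```

Note that the check is evaluated per action, not per session: an agent that was allowed to read a database a moment ago still cannot delete from it unless that grant exists.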

Furthermore, identity-centric defense aligns with regulatory trends. Emerging regulations in the European Union, such as the EU AI Act, require organizations to ensure that AI systems are transparent, accountable, and secure. By integrating AI into existing identity security and governance frameworks, companies can demonstrate compliance with these requirements without reinventing the wheel. This also simplifies auditing and reporting, as all access and activity can be logged and reviewed within a single system.

The conversations in San Francisco this March made one thing clear: the future of cybersecurity will be shaped by entities that can act independently. Some will be human. Many will not. As technologies like Mythos continue to push the boundaries of what AI can do, the industry must evolve its defensive mindset accordingly. The most effective strategy may also be the simplest: If it can act, it should be treated like an identity. By anchoring AI security within identity threat detection and risk mitigation frameworks, organizations can protect against rogue agents—without adding yet another fragmented tool to an already complex defense arsenal.

Dr. Torsten George, an internationally recognized IT security expert with over 30 years of experience, has been vocal about this shift. In his writings, he has emphasized that identity security must evolve to address the machine identity challenge. He co-authored the book "Zero Trust Privilege for Dummies" and regularly provides commentary on data breaches, insider threats, and compliance frameworks. His insights align with industry reports from Gartner and Forrester, which identify identity and access management as a top priority for cybersecurity spending in the coming years.

The implications extend beyond security. Organizations that embrace an identity-centric approach to AI will be better positioned to innovate safely. They can deploy autonomous agents for customer support, network optimization, and threat hunting while maintaining strict control and visibility. This balance between agility and security is crucial in an era where cyberattacks can cripple business operations within minutes.

Examples of AI-powered attacks are already emerging in the wild. Earlier this year, researchers demonstrated an AI agent that could autonomously hack cloud systems with minimal oversight. The Mythos framework, as highlighted by the Cloud Security Alliance, is capable of carrying out multi-stage attacks that include reconnaissance, exploitation, and data exfiltration without human intervention. These aren't theoretical exercises; they are proof-of-concept tools that will inevitably be weaponized by malicious actors.

In response, security vendors are racing to develop AI-first defenses. However, the market is already witnessing fragmentation. A report from a major cybersecurity analyst firm lists over 50 startups focusing solely on AI security, each offering a point solution. This proliferation mirrors the early days of cloud security, where a similar boom in point tools led to integration headaches and security gaps. The identity-centric approach offers a way out of this cycle by providing a unified control plane that can govern all types of actors.

Moreover, the identity-centric model scales with organizational growth. As companies expand their use of AI agents for tasks like code generation, data analysis, and customer interaction, the number of non-human identities grows exponentially. Managing these via separate tools becomes unfeasible. By contrast, an identity platform that handles both human and machine identities can scale elastically, using automation to onboard, monitor, and decommission agents as needed. This reduces operational overhead and prevents orphaned accounts that could be exploited.
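The lifecycle automation described above—onboard, monitor, decommission—can be sketched as a small registry. The class below is a minimal illustration with invented agent and owner names; a production system would back this with the organization's identity directory rather than an in-memory dict.

```python
class AgentRegistry:
    """Minimal lifecycle registry: onboard agents with a named owner,
    decommission them, and surface orphans whose owner is gone."""

    def __init__(self):
        self.agents = {}  # agent_id -> owning human or team

    def onboard(self, agent_id: str, owner: str) -> None:
        """Register a new agent; every agent must have an owner."""
        self.agents[agent_id] = owner

    def decommission(self, agent_id: str) -> None:
        """Remove a retired agent so its identity cannot linger."""
        self.agents.pop(agent_id, None)

    def orphans(self, active_owners: set) -> list:
        """Agents whose owner is no longer active -- prime candidates
        for automated decommissioning before they can be exploited."""
        return sorted(a for a, o in self.agents.items()
                      if o not in active_owners)
```

An orphan sweep run whenever an employee offboards is what prevents the "unmanaged agent" problem the article warns about from accumulating silently.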

The time to act is now. The cybersecurity industry is at a crossroads. One path leads to adding layers of complexity and cost, as we have done with previous technology shifts. The other path leads to simplification and effectiveness, by leveraging existing identity security capabilities. The choice is clear: treat AI as an identity, and build the defense around that principle. The companies that do will be the ones that can trust their autonomous agents—and trust that those agents are not being used against them.


Source: SecurityWeek News


