“AI’s going to put you out of work.” I get that a lot these days when I tell people I help tech brands create marketing content.
But I don’t believe it. I’m an AI optimist—following the space, diving into every new model released. As I watch the field mature, I’m more and more convinced that AI is a multiplier, not a replacer, so I don’t feel threatened in the least.
Of course, AI has created new threats—along with new ways to defend our organizations. But while newer security products offer controls for simpler threats, what about complex AI security risks?
We’ve all heard about deepfakes, AI-powered phishing, and malware chatbots. But what about these three “off-the-beaten-path” twists and turns when it comes to AI and cybersecurity—for both attackers and defenders?
Let’s take a look—through the eyes of an optimist.
1. New Attack Vector: AI-Generated Code Risk
AI coding assistants (GitHub Copilot, ChatGPT, Amazon CodeWhisperer) speed development but also introduce vulnerabilities at scale.
Developers believe AI makes their code more secure, according to a Stanford University study. In fact, the study found the opposite: participants who used an AI assistant wrote significantly less secure code. Separate research on AI coding assistants has found roughly 40% of generated code vulnerable to MITRE’s top 25 CWEs.
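To make that concrete, here’s a deliberately simple, hypothetical illustration of the kind of flaw that shows up again and again in AI-suggested code: an SQL query built with string formatting (CWE-89) instead of a parameterized query.

```python
# Hypothetical example of a common AI-suggested pattern (CWE-89, SQL injection)
# and the parameterized alternative. Table and column names are illustrative.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Building the query with an f-string leaves it open to injection
    # whenever `username` comes from untrusted input.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # A parameterized query lets the driver handle escaping.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, email TEXT, username TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice@example.com', 'alice')")
    print(find_user_safe(conn, "alice"))    # [(1, 'alice@example.com')]
    print(find_user_unsafe(conn, "alice"))  # same result, but injectable
```

Both functions “work,” which is exactly why the unsafe version so often sails through review.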
The danger goes beyond developers trusting AI-generated snippets too readily. AI also lowers the bar for polymorphic malware: attackers can generate endless variations of malicious code, each slightly different in syntax, structure, or logic, making them tough for traditional, signature-based security tools to detect. A poisoned AI model could inject vulnerabilities into local codebases or even popular open-source projects, spreading undetected threats around the world in hours.
True, many systems now also include AI-driven code audit solutions to test for accidental or deliberate weaknesses, but they’re far from perfect. The real challenge here will be building smarter security controls within AI applications themselves.
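For a sense of what the thinnest layer of such auditing looks like, here’s a toy, purely illustrative pattern check; real code-audit tools rely on parsing, data-flow analysis, and increasingly on models of their own.

```python
# Toy illustration of a pattern-based audit pass over AI-generated snippets.
# Real code-audit tools go far beyond regexes; the patterns here are examples.
import re

RISKY_PATTERNS = {
    r"f\"SELECT .*\{": "possible SQL injection via f-string (CWE-89)",
    r"\bos\.system\(": "shell command execution; check for injection (CWE-78)",
    r"verify\s*=\s*False": "TLS certificate verification disabled (CWE-295)",
}

def audit_snippet(code: str) -> list[str]:
    findings = []
    for pattern, message in RISKY_PATTERNS.items():
        if re.search(pattern, code):
            findings.append(message)
    return findings

sample = "requests.get(url, verify=False)"
print(audit_snippet(sample))  # ['TLS certificate verification disabled (CWE-295)']
```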
2. New Defense Vector: AI Agents
Security tools have been moving toward automation since well before GenAI made its mark. Soon, AI agents will be able to act autonomously, including on the open internet.
In the SecOps space, we’ll see AI agents operating as penetration testers, automating reconnaissance, vulnerability exploitation, and lateral movement. They’ll go way beyond traditional automation, learning from failures and adapting attacks dynamically, much as human pen testers do, but at machine speed and scale.
AI agents will also be able to handle Level 1 triage, acting as part of the SOC team. True, this will come with a price tag, but they’ll never need coffee breaks or long weekends.
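As a rough sketch of what that might look like (the alert fields, enrichment sources, and thresholds below are all hypothetical, and the simple score stands in for whatever model a real agent would call), a Level 1 triage agent is essentially a loop that enriches an alert, scores it, and either closes it with a documented rationale or escalates it to a human analyst:

```python
# Minimal sketch of a Level 1 triage loop; names and thresholds are hypothetical.
# A real agent would call out to an LLM, threat intel, and SOC tooling.
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    source_ip: str
    rule: str
    severity: int  # 1 (low) to 10 (critical)

def enrich(alert: Alert) -> dict:
    # Placeholder enrichment: in practice this would query threat intel,
    # asset inventory, and identity context.
    return {"known_bad_ip": alert.source_ip.startswith("203.0.113."),
            "critical_asset": False}

def triage(alert: Alert) -> str:
    context = enrich(alert)
    score = alert.severity
    score += 4 if context["known_bad_ip"] else 0
    score += 3 if context["critical_asset"] else 0
    if score >= 8:
        return f"escalate {alert.id} to human analyst (score={score})"
    return f"auto-close {alert.id} with documented rationale (score={score})"

if __name__ == "__main__":
    print(triage(Alert("A-1024", "203.0.113.7", "impossible-travel", 5)))
```

The key design choice is that the agent never acts silently: every auto-close or escalation carries the context it used, so analysts can audit the decision later.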
It’s not all rosy: “agentic attackers” are another big industry prediction right now. But the good news is that many of these attackers will aim for low-hanging fruit like unpatchable legacy systems and poorly secured public cloud configurations. Defenders can stay one step ahead by isolating what can’t be patched and keeping everything else up to date.
3. New AI Risk: Compliance Complications
AI-driven cybersecurity tools, like those powering SOAR solutions, are making real-time decisions about risk, fraud, and access control—but how are those decisions being made?
Security vendors don’t document exactly how their AI works, both to safeguard intellectual property and to avoid exposing system vulnerabilities. Yet compliance requires documentation, transparency, and accountability.
New AI governance laws are coming in across many jurisdictions, raising several transparency and privacy challenges for AI tools:
- Storing security training data (as required for compliance) expands the attack surface.
- Accessing data across regions creates sovereignty issues.
- Permitting user opt-outs (required in some jurisdictions) adds complexity.
- Documenting AI systems or security incidents could expose confidential security algorithms.
Plus, on a day-to-day basis, security teams can’t confidently act on alerts raised by AI-driven SecOps tools if they can’t see how the AI reached its conclusions.
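One practical mitigation, sketched below with purely hypothetical field names, is to attach a machine-readable audit record to every automated decision: the inputs considered, the model version, and a plain-language rationale that an analyst or auditor can review without needing access to the model’s internals.

```python
# Hypothetical sketch of an auditable decision record for an AI-driven
# SecOps action; field names are illustrative, not from any specific product.
import json
from datetime import datetime, timezone

def record_decision(alert_id: str, action: str, model_version: str,
                    inputs: dict, rationale: str) -> str:
    record = {
        "alert_id": alert_id,
        "action": action,
        "model_version": model_version,
        "inputs_considered": inputs,
        "rationale": rationale,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # In practice this would go to an append-only, access-controlled store so
    # compliance teams can review decisions without exposing model internals.
    return json.dumps(record, indent=2)

print(record_decision(
    alert_id="A-1024",
    action="block_ip",
    model_version="triage-model-2025.04",
    inputs={"source_ip": "203.0.113.7", "rule": "impossible-travel"},
    rationale="Source IP matched threat-intel feed; asset not business-critical.",
))
```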
All this leads to a difficult question: How can AI-driven cybersecurity tools comply with privacy regulations and maintain transparency while continuing to innovate?
It’s tough to predict how these inherent conflicts will play out, making this a fascinating space to watch over the next few months.
The Future of AI Risk
Like any arms race, AI creates a vicious cycle:
- We embrace the benefits of AI
- AI exposes us to new types of attack, plus model drift and decay
- We throw more AI at the problem
- And so on…
Like I said, I’m an optimist. But as I noted after last year’s AWS GenAI summit in London, I’m also a realist about what AI can and can’t do. That means that while pursuing innovation, we also need to fast-track solutions to the problems of ethics and transparency.
The inevitable “Spy vs. Spy” escalation of attack and defense capabilities shouldn’t stop any of us from taking advantage of AI. There’s no looking back. In security, as in so many other fields, AI will keep on driving innovation (both for attackers and defenders), and it’s going to be a fascinating ride.
Come talk security, AI, and content with us at RSAC 2025 in San Francisco. Schedule a meeting.
*This post was originally published on the RSAC blog.