
AI coding assistants went from experiment to enterprise standard faster than almost any technology in recent memory. In a recent StackHawk survey of 250+ AppSec stakeholders, 87% of respondents report their organizations have adopted tools like GitHub Copilot, Cursor, or Claude Code. Over a third are already at widespread or full adoption.
The productivity gains are real. So are the security implications. But the conversation about AI coding risk stays stuck on whether AI “writes vulnerable code” — which misses the deeper shifts in how software gets built and how it needs to be secured.
The Good
I think this one is obvious. Velocity matters when it comes to product differentiation and innovation—and AI delivers it. Developers are producing significantly more code than they did six months ago. Features that used to take weeks now ship in days.
AI can also improve baseline code quality. Assistants trained on millions of repositories have internalized common patterns, including secure ones. For routine stuff — input validation, standard auth flows, common API patterns — AI-generated code is often more consistent than what a junior developer writes from scratch. The “AI writes insecure code” narrative ignores that human-written code was never a security gold standard either.
And boilerplate security is getting automated. Parameterized queries, standard encryption patterns, OAuth scaffolding — these are exactly where AI assistants shine. The repetitive security hygiene that developers used to shortcut because it was tedious now gets generated correctly by default.
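To make the parameterized-query point concrete, here is a minimal sketch in Python using the standard-library sqlite3 module. The table, column, and payload are hypothetical; the pattern is what matters: the driver binds user input as data, so an injection payload is treated as a literal string rather than as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

# Vulnerable pattern: string interpolation lets attacker-controlled input
# rewrite the query, e.g.:
#   query = f"SELECT id FROM users WHERE email = '{user_input}'"

# Parameterized pattern: the ? placeholder binds the value as data, never as SQL.
user_input = "alice@example.com' OR '1'='1"  # classic injection payload
rows = conn.execute(
    "SELECT id FROM users WHERE email = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matched nothing because it was not executed as SQL
```

This is exactly the kind of tedious-but-critical hygiene that assistants now tend to generate by default instead of the interpolated version.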
The Bad
The context gap is real and growing. When you write code line by line, you develop intuition about how it works, what it touches, where the edge cases live. When you review AI-generated code, you’re asking a different question: “Does this work?” Not “Is this secure?” Not “How does this interact with our authorization model?” Developers who accept complete implementations without deeply understanding them carry a fundamentally different risk profile than developers who build those implementations themselves.
Documentation and institutional knowledge suffer. AI-assisted development often means less time spent in the codebase. Developers understand features at a functional level but may not trace the security implications. That knowledge gap compounds—six months later, nobody quite remembers why a particular API endpoint exists or what data it can access.
Manual processes can’t keep pace. When development velocity increases 5-10x, everything downstream breaks. Security reviews, architecture approvals, asset documentation, attack surface tracking—any process that relies on humans keeping pace with development is now permanently behind. Our survey found “keeping up with rapid development velocity and AI-generated code” was the number one challenge cited by AppSec stakeholders.
The Risky
The risk isn’t the code—it’s the confidence. The real danger isn’t that AI writes vulnerable code (though it can). It’s that organizations ship faster while understanding less about what they’re shipping. Tests pass, code reviews approve, features deploy—but the security team’s mental model of the application diverges further from reality with each AI-assisted sprint.
Shadow applications multiply faster than ever. That weekend proof-of-concept an engineer spun up “just to test something”? AI assistants make it trivially easy to build, which means trivially easy to forget. Our survey found only 30% of AppSec stakeholders are “very confident” they know 90%+ of their attack surface. AI-assisted development makes that number worse, not better.
Security teams are triaging, not securing. When code volume increases but AppSec headcount doesn’t, something has to give. Our data shows 50% of AppSec teams spend 40% or more of their time just triaging and prioritizing findings—determining what’s real before they can address what matters. That ratio was already unsustainable. AI development velocity breaks it completely.
What This Means for Security Leaders
The organizations getting this right aren’t trying to slow down AI adoption—that ship has sailed. They’re adapting their security programs for a world where:
- Visibility is foundational. You can’t secure what you don’t know exists. Automated attack surface discovery from source code isn’t a nice-to-have when developers ship faster than documentation can track.
- Runtime validation matters more than ever. When developers have less context about the code they’re shipping, you need testing that validates how applications actually behave—not just how code looks statically.
- Intelligence beats volume. The answer to 5x more code isn’t 5x more findings to triage. It’s smarter prioritization that connects vulnerabilities to business risk, so finite AppSec resources focus on what actually matters.
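The runtime-validation point above can be sketched in a few lines: instead of trusting how the code looks, you send a real request and assert on how the application actually behaves. This is a deliberately tiny, self-contained illustration using only the Python standard library; the /admin endpoint, the X-Token header, and the sloppy handler are all hypothetical stand-ins for a real service and a real security requirement.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen
from urllib.error import HTTPError

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical app logic: /admin is supposed to require a token.
        if self.path == "/admin" and self.headers.get("X-Token") != "secret":
            self.send_response(401)
        else:
            self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

# Spin up the app on an ephemeral port so the check is fully self-contained.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Runtime assertion: an unauthenticated request to /admin must be rejected,
# regardless of what the source code appears to do.
try:
    urlopen(Request(base + "/admin"))
    status = 200
except HTTPError as e:
    status = e.code
server.shutdown()

print(status)  # 401 -- the endpoint enforces auth at runtime
```

A real program would run checks like this against deployed environments at scale, but the principle is the same: observed behavior, not source inspection, is the ground truth.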
AI coding assistants aren’t going away. The productivity benefits are too significant, and the adoption curve is already behind us. The question isn’t whether to embrace them—it’s whether your security program is built for the world they’ve created.




