CISOs were already struggling to help developers keep up with secure code principles at the speed of DevOps. Now, with AI-assisted development reshaping how code gets written and shipped, the challenge is rapidly intensifying.
Whereas only about 14% of enterprise software engineers regularly used AI coding assistants two years ago, that figure is projected to skyrocket to 90% by 2028, according to Gartner. And research from analytics firms like Faros AI shows what that wide-scale adoption looks like in practice: developers using AI are merging 98% more pull requests (PRs).
For security teams, this velocity creates a compounding problem. There’s more code, it’s produced faster, and there’s less time for review. In theory, AI tooling can help automate many of the more manual parts of the code review process, but in practice that isn’t happening with much fidelity yet. And even as AI-driven code review becomes more effective, it won’t make developer training obsolete.
The training just needs to change. As AI tools get better at catching and fixing common code-level flaws, the focus of developer security training shifts to more fundamental principles around threat modeling for systemic software risks. What does need to be thrown out are traditional training methods. The consensus among security leaders is that developer training needs to be bite-sized, hands-on, and largely embedded in developer toolchains.
Refocusing from output to outcomes
As AI-assisted coding matures, the mechanics of catching common code-level vulnerabilities are increasingly going to be handled by the tools themselves. AI coding assistants paired with static analysis and automated remediation will be able to identify and fix many of the line-by-line flaws that developer security training has traditionally focused on: the pesky issues like SQL injection, cross-site scripting, and insecure configurations that security teams have nagged developers about for decades.
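To make that concrete, here is a minimal, hypothetical Python sketch of the kind of line-by-line flaw automated tooling increasingly handles on its own: the classic SQL injection pattern alongside the parameterized rewrite that scanners and AI assistants typically propose. The function names and schema are invented for illustration.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flawed pattern: user input is interpolated directly into the SQL string,
    # the classic injection bug that scanners and AI assistants now flag.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # The mechanical fix: a parameterized query keeps data out of the SQL
    # grammar. Automated remediation can usually propose this rewrite.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```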
This should have CISOs rethinking how they approach developer enablement and training. Even if automated scanning and remediation become table stakes in AI-assisted development, the review process at check-in is still likely to miss plenty of security weaknesses elsewhere.
“AI-generated code could be syntactically correct while contextually reckless,” says Ankit Gupta, senior security engineer at Exeter Finance and an AppSec advocate who’s worked to help developers deploy more secure software. “Developers are left to sift through AI output that is ‘plausible but untrusted.’ This shifts the focus of secure development to be more of a validation exercise than a creation exercise.”
Rather than focus on preparing developers for line-by-line code review, the emphasis moves toward evaluating whether their features and functions behave securely in the context of their deployment conditions, says Hasan Yasar, a secure DevOps advocate and the technical director of Rapid Fielding of High Assurance Software at the Carnegie Mellon University Software Engineering Institute. He says developers especially need to be able to pick up on risks in integration points, architecture, and logic.
“We are shifting from output to outcomes,” Yasar says, explaining that the goal is to get developers to look critically at how their systems work in actual runtime. “Outcomes are the features we are delivering to the users — do these functions or features work the way they’re supposed to?”
Emilio Pinna, director and co-founder of developer security training platform SecureFlag, says this represents a fundamental shift in what security awareness training needs to cover. “Five years ago, industry training taught specific patterns: ‘Don’t do this. Always do that,’” he says. “Today, training should also focus on the underlying principles so developers can evaluate any code, regardless of how it was generated.”
Developers need to recognize when AI-generated code introduces unsafe assumptions, insecure defaults, or integrations that can scale vulnerabilities across systems. And with more security enforcement built into automated engineering pipelines, developers should ideally also be trained to understand what automated gates catch, and what still requires human judgment. “Security awareness in engineering has shifted to a system-level approach rather than focusing on individual vulnerabilities,” Pinna says. “This includes issues such as identity and access control, dependencies, and supply-chain risks.”
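As a hedged illustration of what “plausible but contextually reckless” output can look like, consider the hypothetical Flask snippet below. Nothing in it is syntactically wrong, yet it carries the unsafe assumptions and insecure defaults Pinna describes; the endpoint, data, and settings are invented for illustration, not drawn from any real codebase.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/internal/export")
def export_all_records():
    # Plausible-looking output: the endpoint works, but it assumes a trusted
    # caller. There is no authentication or authorization check before it
    # returns sensitive data (a system-level gap, not a syntax error).
    return jsonify({"records": "...all customer records..."})

@app.after_request
def add_cors(response):
    # Insecure default: a wildcard CORS policy that quietly widens exposure
    # once this service is integrated with other systems.
    response.headers["Access-Control-Allow-Origin"] = "*"
    return response

if __name__ == "__main__":
    # Debug mode and binding to all interfaces are reasonable on a laptop,
    # reckless in the deployment context the model knows nothing about.
    app.run(debug=True, host="0.0.0.0")
```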
Threat modeling as a core competency
This system-level thinking should also elevate the need for greater developer fluency in threat modeling, says Yasar. He notes that threat modeling has historically been difficult for product security and engineering teams to operationalize at scale. One of the longstanding barriers to practical threat modeling was the knowledge required to build effective threat models. Teams struggled to understand enough about the organizational context of how applications were being used, the architecture, and the relevant risks to tie it all together and identify the most relevant potential threats.
AI may actually help here. By synthesizing organizational context and architectural patterns, AI can make it easier to build threat models that would previously have required extensive manual effort, Yasar says. But while AI can accelerate the mechanics of threat modeling, developers still need to understand the fundamentals: how to think about trust boundaries, how to identify assets worth protecting, and how to anticipate how attackers might abuse a feature. CISOs looking to shift developer training away from vulnerability avoidance may want to start treating threat modeling as a core competency instead.
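One way to make those fundamentals tangible for developers is a lightweight, structured threat model entry kept next to the feature it describes. The sketch below is an assumption about form rather than a prescribed template: the field names and the password-reset scenario are invented to show how assets, trust boundaries, and abuse cases might be captured.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    # One "how could this be abused?" scenario, STRIDE-style.
    category: str          # e.g. "Spoofing", "Tampering", "Information disclosure"
    description: str
    mitigation: str

@dataclass
class ThreatModelEntry:
    feature: str                  # the feature or function being shipped
    assets: list[str]             # what is worth protecting here
    trust_boundaries: list[str]   # where data crosses privilege or network lines
    threats: list[Threat] = field(default_factory=list)

reset_flow = ThreatModelEntry(
    feature="Password reset flow",
    assets=["credential store", "reset tokens", "user email addresses"],
    trust_boundaries=["public internet -> API gateway", "API -> email provider"],
    threats=[
        Threat(
            category="Spoofing",
            description="Attacker requests resets for arbitrary accounts",
            mitigation="Rate-limit requests and use single-use, expiring tokens",
        )
    ],
)
```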
This means that CTOs and CISOs need to help developers and the rest of the engineering team cultivate “threat modeling intuition,” says Michael Bell, founder and CEO of Suzu Labs. “It cannot be a simple ‘does this code work?’ check, but needs to morph into ‘how could this be abused?’,” he says. “We are offloading a large portion of the mental load to write the code, so let’s focus that opened time and opportunity to review the code being output.”
Bell believes that building threat modeling intuition requires more hands-on, immersive training, such as work in cyber ranges that shows developers how attackers would target their applications. “As AI handles more of the routine coding work, the human value shifts to judgment,” he says. “Hands-on training builds judgment in a way that lectures and videos don’t.”
Baking training cues into guardrails
The real trick to hands-on training is figuring out how to serve it up to developers in a high-velocity engineering environment. AI-assisted coding is only accelerating workflows and pushing delivery expectations to an even more breathless pace. A CISO asking to slow things down for training will get considerable side-eye from CTOs under the gun.
“Traditional, static, one-time courses don’t work in today’s development lifecycle,” says Pinna. “What’s proving effective is continuous, hands-on training in labs with realistic engineering scenarios. They also need contextual, just-in-time learning.”
The emerging approach among secure coding leaders is to blend platform engineering with targeted developer enablement, embedding security guidance directly into the workflows and tools developers already use. Rather than expecting developers to remember what they learned in last year’s training, security teams should be building guardrails that teach as they enforce, Pinna says.
“Security teams are creating guardrails that scale across development pipelines,” says Pinna. “These guardrails turn risks into guidance for developers and make sure that automated tools reinforce training. The goal is for training and enforcement to work together, so coming across a guardrail also helps developers understand security principles.”
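What a guardrail that teaches as it enforces might look like in practice is sketched below: a small, hypothetical check that could run in CI or as a pre-commit hook, blocking risky patterns while printing the reasoning and a link to a short lesson. The rule patterns, IDs, and training URLs are placeholders, not references to any real platform.

```python
import re
import sys
from pathlib import Path

# Hypothetical rule set: each guardrail pairs an enforcement pattern with the
# explanation and micro-lesson a developer sees when it fires.
RULES = [
    {
        "id": "SQL-001",
        "pattern": re.compile(r"execute\(\s*f[\"']"),
        "why": "f-string SQL mixes data into query syntax and enables injection.",
        "fix": "Use parameterized queries (placeholders plus a parameter tuple).",
        "lesson": "https://training.example.internal/lessons/sql-injection-5min",
    },
    {
        "id": "SECRET-001",
        "pattern": re.compile(r"(api_key|password)\s*=\s*[\"'][^\"']+[\"']"),
        "why": "Hardcoded credentials end up in git history and build artifacts.",
        "fix": "Load secrets from the environment or a secrets manager.",
        "lesson": "https://training.example.internal/lessons/secrets-10min",
    },
]

def check(paths: list[str]) -> int:
    failures = 0
    for path in paths:
        text = Path(path).read_text(errors="ignore")
        for rule in RULES:
            if rule["pattern"].search(text):
                failures += 1
                # The block message doubles as just-in-time training.
                print(f"[{rule['id']}] {path}: {rule['why']}")
                print(f"    Fix: {rule['fix']}")
                print(f"    5-minute refresher: {rule['lesson']}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1:]))
```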
Gupta describes a similar vision: “Instead of expecting users to read documentation, security expectations are built into pipelines, with pop-up explanations justifying the presence of a control and describing how to comply.”
It may even expand beyond a pop-up. Delivering on-demand micro-learning in five-, ten-, and fifteen-minute increments based on the exact issue the developer has run into can be incredibly powerful. “The tools I’m using should help me out to learn,” Yasar says.
The data from triggered guardrails and controls can be used by the AppSec team to drive the creation and delivery of more in-depth but targeted education. When the same vulnerability or integration pattern pops up again and again, that’s a signal for focused training on that subject.
“AppSec teams play a critical role in connecting automated findings to training,” Bell says. “When the same issue appears repeatedly, that’s a training opportunity.”
The CISO’s new training agenda
Smart CISOs likely already understand that the vibe-coding landscape is going to demand more rather than less security savvy from the dev team. This will require security leaders to work more closely than ever with engineering leadership to influence a shift in the content and delivery mechanisms of security awareness training.
Beyond the basics already described here, security pundits say there’s another new security training wildcard that CISOs will desperately need to address as AI-assisted coding takes hold within their organizations: developers will now need training in how to work securely within the AI tools themselves.
“CISOs need to ask: how can I train my engineers to use AI tools with a security mindset?” says Yasar. “How can I teach them to evaluate and verify what they’re asking and what they’re receiving from these tools? That’s going to come down to governance.”
This means working with CTOs and other relevant stakeholders to establish clear policies that define when AI-assisted code requires human review, what types of data can be used with AI tools, and how AI usage is governed before code reaches production. Gupta says organizations are already starting to formalize these rules as part of their broader developer enablement programs.
There’s also an opportunity here to finally make good on long-unachieved secure-by-design goals. CISOs can work with engineering teams to use prompt engineering guidance to embed security requirements at the point of code generation. Security teams that offer developers training and ready-made prompt language will help them produce more secure software from the start.
“Now I can bake compliance into my prompt. I can build up compliance by design into my architectures,” Yasar explains. “If I’m a developer I can prompt the tool to build me a web login and make sure that web login follows HITRUST compliance guidelines. I can say ‘here are the guidelines in detail.’ That’s going to give us a very good opportunity to insert compliance by design into the prompt itself.”
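A minimal sketch of what baking requirements into the prompt could look like follows. The control list is illustrative rather than an actual HITRUST mapping, and the assistant call is a stand-in for whatever AI tooling a team actually uses.

```python
# Illustrative only: the requirements below are placeholders approved-in-spirit
# by a security team, not a real compliance mapping.
SECURE_LOGIN_REQUIREMENTS = """
When generating authentication code, always:
- Hash passwords with a modern adaptive algorithm (e.g., bcrypt or argon2).
- Enforce account lockout and rate limiting on failed logins.
- Issue session tokens that are random, expiring, and httponly.
- Log authentication events without recording credentials.
"""

def build_prompt(developer_request: str) -> str:
    # Prepend the security-team-approved requirements so every generation
    # request carries the compliance context with it.
    return f"{SECURE_LOGIN_REQUIREMENTS}\n\nTask: {developer_request}"

prompt = build_prompt("Build a web login endpoint for our Flask service.")
# ask_coding_assistant(prompt)  # hypothetical call to the team's AI tooling
```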
In this way, CISOs can harness the shift to AI-assisted coding in a way that helps build more resilient software than ever.
The bottom line is that developer training is here to stay. But CISOs need to put in the work to influence changes that embed security judgment into engineering culture. This means working hand-in-hand with CTOs to weave threat modeling, guardrails, and AI governance directly into the tools developers use every day.