In my recent articles for CSO, I’ve talked about the limits of current SOC models and the importance of rehearsal. This time, I want to focus on something that’s becoming increasingly clear: purple teaming has lost its depth.
We’ve turned one of the most powerful tools for resilience into a transactional exercise that feels reassuring but reveals very little about how an organization will cope when the pressure is real.
Care and attention have become rare assets in our world. Distraction dominates both the consuming and supplying sides of cybersecurity. Clients are pulled into complexity and novelty, while service providers are pulled into deadlines and deliverables.
Meanwhile, attackers — increasingly powered by AI — are becoming faster, quieter, and more determined.
When threats accelerate, surface-level testing is no longer enough.
The absence of findings is not the absence of risk
I’ve seen this pattern everywhere: a purple team engagement produces a set of impressive outcomes. The report looks good. Findings correlate with expectations. Leadership feels reassured.
But a result is often treated as the result, as if the absence of findings meant the absence of risk. That assumption is the flaw.
The industry’s default approach is shaped by time pressure, commercial constraints, and scopes that are too narrow. None of this is malicious; it’s simply how the system has evolved. Providers deliver what they’re contracted to deliver, and clients take the report as a sign of depth.
Omissions, often caused by time pressure or lack of mental space, are invisible. And invisible omissions are the most dangerous kind.
Two clients who “shouldn’t have been breakable”
Recently, we worked with two extremely mature organizations. On paper, both looked close to unbreakable.
Instead of running a standard purple team, we co-designed the engagement with them. We looked at the problem as a determined attacker would, and we shared tacit knowledge openly, both our own and theirs. Crucially, everyone involved had visibility into the controls in place. It was a genuine cybersecurity partnership, not an audit.
And both organizations were compromised — deeply — with almost no sign of compromise.
In one case, there was a single indicator of compromise: “domain admin.” Nothing about how it happened. Nothing about what to do next. No instinctive or automated response. Just a light turning red with no playbook behind it.
In the other case, the SOC detected multiple signals but never acted in time. Detection without action is just noise.
The experience was humbling. And it forced a blunt question: “You saw us. So what?”
That’s the real test. Not whether the SOC sees something. Whether it does something — fast enough and accurately enough — to stop the damage.
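The gap between seeing and doing can be made concrete. Here is a minimal sketch of the idea that a detection only counts if it resolves to rehearsed next steps; the names (`Alert`, `PLAYBOOKS`, `respond`) are illustrative assumptions, not drawn from any real SOAR product:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    # A hypothetical detection signal, e.g. "domain_admin_granted"
    signal: str
    context: dict = field(default_factory=dict)

# Each rehearsed scenario maps a signal to concrete, pre-agreed actions.
PLAYBOOKS = {
    "domain_admin_granted": [
        "isolate_source_host",
        "disable_new_admin_account",
        "page_incident_commander",
    ],
}

def respond(alert: Alert) -> list[str]:
    """Return the rehearsed actions for a signal, or surface the gap."""
    steps = PLAYBOOKS.get(alert.signal)
    if steps is None:
        # A light turning red with no playbook behind it: the failure
        # mode described above. Escalate loudly instead of just logging.
        return [f"NO PLAYBOOK for '{alert.signal}' - escalate to humans"]
    return steps
```

The point of the sketch is the `None` branch: an alert that doesn’t map to a rehearsed playbook is itself a finding, not background noise.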
Standard purple teaming can’t get you there
Purple teaming should be the discipline that reveals these realities, but the current model rarely does. Service providers tend to focus on the bypass, the exploit, the “win.” Clients focus on closing tickets, finishing the engagement, and getting the report.
Neither mindset creates the space needed for deep thinking.
Had we rushed through our work, we would never have found what we did. Time pressure shapes outcomes more than most organizations realize. When testing is constrained by a standard 9-to-5 schedule, it limits how far teams can explore the conditions that lead to real compromise.
Resilience is the “brake” moment
Imagine you’re driving, and you see the car ahead braking suddenly. Awareness helps, but it’s your immediate reaction that avoids the collision. Insurance plans don’t matter at that moment. Nor do compliance reports or dashboards.
Only vigilance and rehearsal matter.
Cyber resilience works the same way. You can’t build the instinct required to act by running one simulation a year. You build it through repetition. Through testing how specific scenarios unfold. Through examining not only how adversaries get in, but also how they move, escalate, evade, and exfiltrate.
This is the heart of real purple teaming.
AI didn’t help either organization
Both clients had AI embedded in their SOCs. And it made no difference.
AI can accelerate analysis, but it can’t replace intuition, design, or the judgment required to act. If the organization hasn’t rehearsed what to do when the signal appears, AI only accelerates the moment when everyone realises they don’t know what happens next.
This is why so much testing today only addresses opportunistic attacks. It cleans up the low-hanging fruit. But if organized crime had wanted these organizations, it would have had them. And that’s not an easy sentence to write.
A model that creates false confidence
The standard testing model traps everyone involved:
- One-off tests create false confidence.
- Scopes limit imagination.
- Time pressure eliminates depth.
- Commercial structures discourage collaboration.
- Tooling gives the illusion of capability.
- Compliance encourages the appearance of rigour instead of the reality of it.
This is why purple teaming often becomes “jump out, stabilize, pull the chute, roll on landing.” But what about the hard scenarios? What about partial deployments? What about complex failures? That’s where resilience is built.
And today, resilience is the only meaningful metric.
New mindset: slow, consistent, engaged, outcome-driven
In my experience, purple teaming that works requires:
- Co-ownership of the mission.
- Tacit knowledge shared on both sides.
- Full visibility into controls.
- Scenarios designed, not bought.
- Repetition and rehearsal.
- Space for thinking.
- Disciplined simplicity.
- A focus on the “so what,” not the bypass.
This is systems thinking. Engineering. Psychology. It is, in every sense, harder work than the standard model.
But the seemingly impossible becomes possible when both sides push each other, and when the aim is not to produce a report but to reveal reality.
Purple teaming is about getting in, sure. But it’s also about what happens after that. Without a different approach, focused on consistency and outcomes, organizations will keep passing tests while failing in practice.