Over the last four years, I’ve watched organizations get blindsided by threats that originated in a third-party network. More than 35% of data breaches are caused by a compromised vendor or partner, not by any failure in the organization’s own controls. Many organizations already know that the biggest threats to their security come from forces entirely outside their control, and that risk is accelerating this year.
Some of those forces come from beyond their network and even far beyond their region. International conflict is influencing attacker behavior in ways that are showing up far from conflict zones. AI-driven automation is reducing the effort required to exploit systems and people. Third-party risk continues to be the most common reason well-defended organizations still suffer serious incidents.
Together, these three factors are heightening cybersecurity risk. I work with organizations that invest in security, quantify risk and take resilience seriously. Yet when something truly disruptive happens, it is rarely because a basic control was missing. Security is only as strong as the weakest link in a chain that extends far beyond an organization’s firewall, and those weak links are multiplying.
Geopolitics amplify cyber risk, particularly for OT networks
For a long time, geopolitical conflicts felt like a separate category of risk. If you did not operate in or near a conflict zone, it was easy to assume it posed little risk to your organization or your security posture. In my experience, that assumption no longer holds.
In my previous position, we had an office in Israel, so I was always alert and aware of tensions and conflicts in that area. What I see consistently is that techniques used in active geopolitical conflicts do not stay contained to that geographic area or digital environment. The techniques and tactics are tested, refined and then used by criminal groups and other threat actors. Eventually, they surface in environments that have nothing to do with the original conflict.
The 2026 WEF Global Cybersecurity Outlook reflects this shift, identifying geopolitical instability as a primary driver of cyber risk and describing how those tensions have translated into repeated cyber and kinetic disruptions across sectors such as energy, telecom and water. While it is much less likely that the U.S. will be hit with a kinetic attack, we are and have been getting hit with battle-tested cyberattacks. These attacks often target operational technology (OT) networks and IoT devices. The report correctly ties this to real safety and continuity impacts, not just data loss, which matches my experience. As a leader, you need to expect some kind of spillover from active warzones into your environment and plan accordingly. Defense in depth is more than a slogan; it’s a way to avoid, mitigate or transfer this kind of risk.
What I have seen forward-looking organizations do is elevate OT security to the board level, adding OT risk to the risk register under board oversight. Organizations I work with where life and health concerns are top of mind have segmented their networks to reduce the blast radius of an attack. The best defense is a ransomware-resilient backup solution built on a 3-2-1-1 strategy: three copies of your data, on two different media, with one copy offsite and one additional copy that is immutable. Once the board has been made aware of the risk, the budget typically follows.
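The 3-2-1-1 rule above can be made concrete. The following is a minimal, hypothetical sketch of how a team might audit a backup inventory against it; the `BackupCopy` data model is invented for illustration, and a real check would query your backup platform’s own API or reports rather than a hand-built list.

```python
from dataclasses import dataclass

# Hypothetical data model for illustration only; real inventories come
# from your backup platform, not a hand-maintained list.
@dataclass
class BackupCopy:
    media: str        # e.g. "disk", "tape", "cloud"
    offsite: bool     # stored outside the primary site?
    immutable: bool   # write-once / object-lock protected?

def meets_3_2_1_1(copies: list[BackupCopy]) -> bool:
    """Check the 3-2-1-1 rule: at least three copies, on at least two
    media types, with at least one offsite and at least one immutable."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
        and any(c.immutable for c in copies)
    )
```

A sketch like this is useful mainly as a recurring compliance check: run it against the live inventory on a schedule, and alert when any of the four conditions stops holding.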
AI is accelerating both attackers and defenders, but governance is often missing
What I see generative AI doing in cybersecurity is accelerating what attackers can do and lowering the cost of entry for new criminal gangs. Cyberattacks are more potent because the technology makes it easier to target victims, create deepfake videos, generate explicit images and clone voices. Cyber defense tools are getting better, but make no mistake: we are in an arms race with attackers, criminals and nation-states.
At the same time, organizations are expanding their attack surface by leaps and bounds through internal AI adoption. Chatbots, AI assistants, GPT models and internal AI tools are all new vectors for attack. Agentic AI tools are very easy to build, but they are often given more access and privileges than they need. Agents that can read and compose emails, and create and delete appointments and contacts, can provide significant benefits while also wreaking havoc if there isn’t a human in the loop or proper governance in place. Many organizations are deploying AI faster than they can secure it.
In practice, organizations often follow one of two paths. The first is a big, splashy AI project that usually costs millions of dollars but often lacks a clear goal, clean data or appropriate governance before it starts. In this case, the project stalls while policies and decisions are made. Budgets are blown, timelines slip, and the project is eventually either abandoned or brought in-house once internal staff have learned enough from the consultants.
The second path is slower, smaller and often internally driven. These projects are more organic and focused on a particular need of the organization. Their budget is minimal until the proof of concept is viable and can demonstrate an ROI. Because the smaller project moves more slowly, policy and governance can be developed alongside it.
Leaders should assume that AI models can be manipulated and exploited. Without robust data governance policies and controls, AI models will leak data. Prompt-injection attacks cannot be fully prevented; you need strong guardrails, and you need to evaluate both the model and those guardrails on a regular basis.
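To make the guardrail idea concrete, here is a minimal, hypothetical sketch of two screening checks that could wrap an LLM call: one on user input before it reaches the model, one on model output before it reaches the user. The patterns and function names are invented for illustration; real guardrails layer far more than this (classifiers, allow-lists, human review), and pattern matching alone is trivially bypassed, which is exactly why the article stresses regular re-evaluation.

```python
import re

# Illustrative patterns only; attackers rephrase, so a static list like
# this will always be incomplete.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]

def screen_input(user_text: str) -> bool:
    """Return True if the input shows no known injection pattern and may
    be forwarded to the model."""
    return not any(p.search(user_text) for p in INJECTION_PATTERNS)

def screen_output(model_text: str, secrets: list[str]) -> bool:
    """Return True if the model output echoes none of the known secret
    strings (a crude data-leakage check)."""
    return not any(s in model_text for s in secrets)
```

In practice these checks would sit in a gateway in front of the model, their hit rates would be logged, and the pattern and secret lists would be refreshed as part of the periodic guardrail evaluation the article recommends.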
What I have seen successful companies do to address the governance gap is create an AI risk council that includes, at a minimum, the CISO, CAIO, CTO, legal and risk. The council has veto power over model releases and is given time to conduct AI penetration testing. Organizations with strong risk programs have tended to adopt the NIST AI Risk Management Framework; it’s a great guide for starting an AI governance program for any project.
Cyber inequity is a systemic business risk
Even if you do everything right, a partner that’s connected to your network can create a vulnerability. Vendors and partners might not operate with the same level of cyber maturity that you do.
I have firsthand experience with four organizations that have suffered data breaches, even though their own security programs held. In each case, a vendor or partner was compromised first. The effect was the same every time. Company data was compromised, and the organization had to respond as if it had failed. Pointing the finger at the third party was not going to help customers who had data stolen or restore their trust.
Criminal gangs are opportunistic; they will attack the weakest link. If your suppliers do not invest in the level of controls that you do, or that you need, that is a risk to your business the moment systems are integrated. Many leaders still underestimate this exposure because it feels indirect, but cyber inequity is a systemic risk to your business.
I saw this become more visible during the pandemic, when supply chain disruptions forced organizations to examine dependencies that they had not fully mapped. The lesson remains relevant now. As organizations rely more heavily on external partners, the gap between internal and external exposure continues to widen.
If these risks feel overwhelming, don’t panic. This quickly evolving threat landscape is the new normal, and cybersecurity professionals thrive in dynamic environments. We like change. The organizations that respond best have realistic incident response and business continuity plans that assume a partner will eventually be compromised. They involve internal teams early and work closely with trusted partners so that when disruption occurs, the response is coordinated rather than reactive.
Organizations can’t eliminate these external pressures, but they can plan for them. The leaders I see succeed assume disruption, invest in resilience and prepare for failures that start outside their control.
This article is published as part of the Foundry Expert Contributor Network.