This year will mark the turning point where artificial intelligence stops assisting and starts acting. We will witness a qualitative leap toward agentic AI, capable of making autonomous decisions, managing complex workflows, and executing end-to-end tasks without constant intervention. However, this autonomy carries a serious warning for businesses: the ability to operate alone exponentially multiplies the impact of any error or security breach.
According to ISACA’s Tech Trends and Priorities Pulse Poll, 59% of IT and cybersecurity professionals anticipate AI-driven cyber threats in 2026. This is no small matter; it shows that the industry’s own experts are the most cautious about AI’s effects.
Given this scenario, the debate should no longer be whether to use AI, but how to deploy it without losing perspective and control in real-world applications. At a recent roundtable, I argued for the need, if you’ll pardon the paradox, to put certain “gates in the field.” Implementing AI for critical processes undeniably saves time and money, but it requires absolute visibility into what we connect, how we do it, and with whom we share our information. This obliges us to train people and govern what happens in the company, always keeping human responsibility at the center of the equation.
With the advent of agentic AI, this premise goes from being a prudent recommendation to a survival imperative. The risk is no longer limited to models that generate text; it now extends to agents that execute actions on systems, customer databases, and supply chains. Herein lies a dangerous disconnect: according to the same poll, only 13% of professionals consider their organization “very prepared” to manage these risks. That alarming statistic reveals that the vast majority of companies are rushing into the AI race while operating in an unacceptable zone of vulnerability.
That is why I will never tire of repeating that disruptive advances such as agentic AI require that every step of this evolution be grounded in governance. Governance here does not mean bureaucracy that slows agility; it means the set of rules that defines limits, responsibilities, and required evidence: which use cases are approved, what data agents can work with, which controls are mandatory, how automated decisions are supervised, and who is accountable when something goes wrong.
Within this complex landscape, the good news is that the market is beginning to mature in its reading of the situation. It is true that the use of AI in areas such as cybersecurity can alleviate operational burdens, but it also generates an inevitable implementation toll. IT teams must lead the deployment of AI solutions and the development of policies governing their use, with the goal of safe and responsible adoption, which requires time, resources, and vision.
There is also a limiting factor we cannot ignore: the shortage of specialized talent and the fatigue of the talent we have. One figure that should concern any company is that, according to an ISACA study, a staggering 79% of IT professionals experience burnout. Employer involvement is therefore decisive: supporting these teams is not a matter of “workplace well-being” but a factor directly tied to the company’s resource allocation and its ability to retain people.
Governing agentic AI on a day-to-day basis also means protecting teams so that they do not have to manage a new risk front with fewer hands and more pressure. Where to start? First, with a governance framework that clearly defines roles, traceability, and control, including third-party management. Second, with real, specific training; let’s not forget that a lack of training is one of the main causes behind the most common privacy breaches.
And third, through resilience. It is no coincidence that business continuity and operational recovery have been established as strategic priorities for 2026.
Ultimately, agentic AI can represent the ultimate leap in efficiency for organizations or a leap into the void in terms of exposure and vulnerability. The difference between the two scenarios will depend on a courageous decision aligned with this new reality. Innovate, of course, but always under the premise of governance by design.
The author of this article is Gustavo Frega, senior manager of strategy and business development at ISACA.