Nearly half (47%) of organizations reported a cyberattack or data breach involving a third party accessing their network in the 12 months to mid-2025, according to a report from Imprivata and the Ponemon Institute. As organizations increasingly rely on services providers to help manage critical systems and security operations – from cloud infrastructure and data platforms to managed security and AI services – the risk of exposure also grows.
Security leaders face mounting pressure from boards to provide assurance about third-party risks, while services provider vetting processes are becoming more onerous — a growing burden for both CISOs and their providers. At the same time, AI is becoming integrated into more business systems and processes, opening new risks.
CISOs may be forced to rethink their vetting processes, maintaining a focus on risk reduction while treating partnerships as a shared responsibility.
Why vetting services providers is growing more complex
Managed services providers (MSPs) help augment internal resources, achieve cost savings, provide round-the-clock coverage and fill specialist gaps. More than half of organizations (52%) turn to MSPs when their number of security tools becomes unmanageable, and 51% rely on them to evolve their cybersecurity strategy as they grow, according to Barracuda’s MSP Customer Insight Report 2025.
Naturally, such critical reliance requires comprehensive vetting processes.
Christina Cruz, director of cybersecurity at media investment company Advance, describes a comprehensive process that covers industry frameworks, GRC checks, privacy, data protection, incident response, and business continuity and disaster recovery plans. It must identify who’s in the provider’s leadership and whether there’s a dedicated cybersecurity function, and evaluate risk assessments, security controls, the software development lifecycle, vulnerability management, resiliency, service-level agreements and other contractual obligations.
“It’s a very extensive framework we use — and those are only the high-level categories,” she says.
The services outsourced are also becoming more complex, from security operations centers to threat hunting and incident response. There’s now also data management, stretching from designing and architecting systems through to day-to-day operations.
“This can include data warehousing, monitoring and reporting, security metrics and providing tuning for applications,” she says.
A recent project involved a six-month timeline for consulting, design, and managing a Snowflake environment, which included risk assessments, legal negotiations, project management, and moving towards a steady state. “Performing and evaluating a risk assessment and validating they can meet the technical requirements, going through the contractual agreement, and moving into the implementation phase and steady state was a very big lift,” she tells CSO.
Should risk assessment be about questionnaires or conversation?
David Stockdale, director of cybersecurity at the University of Queensland (UQ), needs services providers to understand the make-up and complexity of a higher education institution.
“Because of the size and research intensity of the university, we tend to build a lot in-house. Where we do use service providers, it’s usually for specific layers on top of our own services,” he says. “Researchers have different requirements to corporate or teaching units, so a cookie-cutter approach doesn’t work. The providers we work with have to understand that and be willing to adapt.”
Risk evaluation is embedded across UQ’s procurement and governance processes for all third parties. The process goes through multiple layers of governance. “Risk evaluations for third parties are consolidated up into the cyber risks, which are then consolidated up into IT risks, and then into university-wide risks. Every three months we review the whole of UQ’s risk register, with a summary going to the board quarterly.”
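The layered consolidation Stockdale describes, with third-party findings feeding the cyber register, then the IT register, then the university-wide register, amounts to a severity rollup. The sketch below illustrates the idea; the tier structure and the "worst rating wins" rule are assumptions for illustration, not UQ's actual methodology.

```python
# Roll third-party risk ratings up through governance tiers.
# Assumed rule: each tier is summarized by its worst-case rating.
SEVERITY = {"low": 1, "medium": 2, "high": 3}

# Hypothetical third-party risk evaluations
third_party_risks = {
    "vendor_a": "medium",
    "vendor_b": "high",
    "vendor_c": "low",
}

def rollup(ratings):
    """Summarize a tier of ratings as its highest-severity entry."""
    return max(ratings, key=lambda r: SEVERITY[r])

cyber_rating = rollup(third_party_risks.values())   # feeds the cyber risk register
it_rating = rollup([cyber_rating, "medium"])        # cyber plus other IT risks
org_rating = rollup([it_rating, "low"])             # IT plus other org-wide risks
print(cyber_rating, it_rating, org_rating)          # high high high
```

The point of the structure is traceability: a single high-severity third-party finding remains visible at every tier of the quarterly review rather than being averaged away.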
When looking to engage a services provider, his vetting process starts with building relationships first and then working towards a formal partnership and delivery of services. He believes dialogue helps establish trust and transparency and underpin the partnership approach.
“A lot of that is ironed out in that really undocumented process. You build up those relationships first, and then the transactional piece comes after that.”
Stockdale says the evaluation cycle must stay flexible to allow for emerging risks. He stresses that effective vetting depends on realism and partnership. “I’m a great believer in putting yourself in the other person’s shoes,” he says. “If you were in their position, would you share that information or allow that audit? Probably not. So, it’s about building a relationship where there’s trust, openness, and a lot more to-ing and fro-ing of information.”
From the vendor’s side, partnership is equally critical, guiding formal assurance and shared responsibility around managing risk. Fred Thiele, CISO at Interactive, says that assurance depends on more than just the data gathered in questionnaires; it needs to include the engagement that follows. He encourages CISOs to use the vetting process to open a dialogue about shared risk and ongoing improvement, not just tick boxes.
“If your questions stop once the form is complete, you’ve missed the chance to understand how a partner really thinks about security,” Thiele says. “You learn a lot more from how they explain their risk decisions than from a yes/no tick box.”
Transparency and collaboration are at the heart of stronger partnerships. “You can’t outsource accountability, but you can become mature in how you manage shared responsibility,” Thiele says.
Questions that can guide CISOs in the vetting process
Thiele believes many enterprises have built elaborate risk frameworks that satisfy auditors but struggle to turn them into meaningful assurance.
He cautions about a growing “cottage industry” of third-party risk tools and compliance templates that create paperwork rather than partnership. “They drive behavioral change over time, but how much they actually improve posture is questionable.”
In his experience, vetting practices reveal as much about an organization’s maturity as they do about a provider’s security posture. Thiele’s list of suggested questions will guide CISOs to get a handle on service provider security in the vetting stage:
- Leadership and accountability: Who is accountable for cybersecurity, where do they report, and how often to the executive or board?
- Framework and standards for cybersecurity policy: Do you align with recognized frameworks and how do you validate your alignment? Have you performed a SOC audit and if so, to what level?
- Risk management: How do you identify, assess, and prioritize cyber risks in your environment?
- Data protection: How do you protect customer data at rest, in transit, and in use?
- Access control: How do you ensure only authorized people can access your systems and customer data?
- Incident response: What is your process for a cyber incident that impacts customers and how quickly do you notify impacted parties?
- Third-party risk: How do you assess the security of your own suppliers and partners?
- Testing and assurance: Do you regularly test your security posture? Please provide y/n for the following and share high-level results if possible: penetration testing, crisis management exercises, IT general controls, SOC1/SOC2.
- Training: What training regime is in place for ensuring your employees stay current on cyber threats and how to prevent them?
- Continuous improvement: Biggest security improvement in the past 12 months and what’s planned for the next 12?
“I really like the first, second, and last because they show whether the leadership is engaged, the frameworks are real, and the organization is actually improving,” Thiele says.
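A questionnaire like Thiele's is easier to track across multiple providers when it is captured as structured data rather than a document. The sketch below is one hypothetical way to do that; the category names come from the list above, while the data structure, example answers and "open items" logic are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class VettingItem:
    category: str                 # e.g. "Leadership and accountability"
    question: str
    answer: str = ""              # free-text response from the provider
    evidence: list = field(default_factory=list)  # e.g. SOC 2 report, pen test summary

# A trimmed checklist using three of the article's categories
CHECKLIST = [
    VettingItem("Leadership and accountability",
                "Who is accountable for cybersecurity, and how often do they report to the board?"),
    VettingItem("Framework and standards",
                "Do you align with recognized frameworks, and to what SOC audit level?"),
    VettingItem("Continuous improvement",
                "Biggest security improvement in the past 12 months, and what's planned next?"),
]

def open_items(checklist):
    """Return items with no answer or no supporting evidence.
    These become talking points for the follow-up conversation,
    not automatic fail marks."""
    return [i for i in checklist if not i.answer or not i.evidence]

# Record one hypothetical response with evidence attached
CHECKLIST[0].answer = "CISO, reports quarterly to the board"
CHECKLIST[0].evidence.append("org chart excerpt")

print([i.category for i in open_items(CHECKLIST)])
```

Treating unanswered items as conversation starters rather than pass/fail gates mirrors Thiele's point that how a provider explains its risk decisions matters more than the tick boxes themselves.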
How far is too far for transparency?
What happens when organizations want access to sensitive information such as pen test results or vulnerability reports? Negotiations typically happen with an NDA in place, but there are still limits. Transparency and trust can sometimes take negotiation from both sides.
For Thiele, a request to view the enterprise risk register may be a ‘no’, but for a request to review pen test results at a high level, the answer is more likely to be a ‘yes’. “We’re happy to give you a summary, but not the detailed findings. It’s not that we’re hiding anything — it’s that the less detail that’s out there, the better,” Thiele tells CSO.
When requests involve detailed reports or assessments running to more than 200 questions, the contract needs to warrant the time and effort required to fulfil them. “We’ve started to put bounds around it,” he says. “If it’s a multimillion-dollar engagement, sure. But if it’s small, we’ll point them to our online portal instead.”
Stockdale, having once accepted assurances at face value, now requests solid evidence. In practice, that means UQ’s cybersecurity team prefers standards-based assurance as part of its due diligence. In the past, they’ve asked for pen test results and sometimes been refused. “So we tend to go for that more standards-based approach — ISO 27001, SOC 2 — as part of our third-party risk assessment.”
AI adds risk — and new ways to assess it
AI is another area where organizations are increasingly engaging services providers — and it presents a paradox when it comes to risk assessments. On the one hand, it has the potential to automate parts of the process, save time and identify gaps or other issues. At the same time, AI is spreading into more tools and services, expanding the risk surface for organizations. Security teams are having to adapt, and quickly, to take account of generative AI.
“We’re now very focused on evaluating any potential partner for the use of generative AI and it’s a new category that’s been added to our evaluation,” Cruz says.
With AI, Cruz has started to monitor vendors acquiring ISO 42001 certification for AI governance. “It’s a trend I’m seeing in some of the work that we’re doing,” she says.
Cruz says a steering committee handles big-picture oversight and a working group develops recommendations and more of the hands-on execution. “Depending on the recommendations coming out of that group, we update specific areas in our program to incorporate the requirements needed to govern the use of AI and also protect the organization’s data. The important point is that it takes a cross-functional group within an organization to build out what’s needed and what should be evaluated and reported on,” Cruz adds.
Thiele says generative AI can assist organizations to research and verify prospective partners. “With Gen AI, you can surface a lot of what’s already in the public domain — certifications, breach disclosures, even employee profiles — and use that to check whether what you’re being told actually holds up,” he says.
The same technology that creates risk can also improve visibility, helping CISOs cut through generic assurances and spot inconsistencies before contracts are signed. “It’s there to enhance the conversation, not replace it,” he adds.