Responsible disclosure is built on an assumption that “doing the right thing” will be met with timely action, fair treatment, and professional respect, if not a bounty award. Increasingly, that assumption is failing. And when it does, organizations alienate researchers and create regulatory, legal, and reputational risk.

Over the past few years, security researchers have found themselves waiting months, sometimes more than a year, for companies to acknowledge responsibly disclosed vulnerabilities, even as the same flaws quietly put customers at risk. In several cases, frustration over silence, disputed severity assessments, or shifting scope boundaries pushed researchers toward public disclosure, legal escalation, or questionable behavior companies later characterized as extortion.

As vulnerability reporting becomes slower, more bureaucratic, and less rewarding, the line between cooperative research and adversarial pressure is blurring. For CISOs, this is no longer an ethics debate. It is a governance and risk-management problem.

A recent flashpoint

Most recently, the React2Shell vulnerability (CVE-2025-55182) illustrated how responsible disclosure can work when the right structures are in place. The flaw was privately reported to the React maintainers on 29 November 2025. The disclosure triggered a coordinated response involving the React team, Next.js maintainers at Vercel, and major cloud providers including Amazon Web Services (AWS) and Cloudflare, allowing patches to be developed and tested ahead of public disclosure.

Despite the prompt acknowledgment and remediation efforts, the vulnerability was quickly exploited in the wild. Responsibility for mitigation was effectively distributed across maintainers, framework integrators, and downstream users. Because React sits at the core of the modern web stack, the flaw rippled across development and security teams globally, highlighting how even well-handled disclosures can still produce widespread operational risk.

React benefits from strong institutional support through the React Foundation and backing from multiple large technology companies. That support enables coordinated fixes, communication, and sustained maintenance.

The more difficult question is what happens when a researcher uncovers a similarly critical flaw in a widely used open-source project that has no corporate backing, no formal security team, and no bounty program.

In those cases, exploitation is clearly unethical, but reporting the issue often means unpaid labor with uncertain outcomes. The dilemma raised in practitioner circles after React2Shell was not about this specific incident, but about the broader incentive gap. If responsible disclosure offers neither compensation nor assurance of timely action, what realistically motivates researchers to continue doing the right thing?

The question resonated not because it is new, but because it reflects a growing disconnect between how vulnerability disclosure is supposed to function and how it increasingly works in practice.

Enter the gray zone of ethical disclosure

The result is a growing gray zone between ethical research and adversarial pressure. Based on years of reporting on disclosure disputes, that gray zone tends to emerge through a small set of recurring failure modes.

Silent treatment and severity warfare: Researchers submit detailed reports and receive no response for months, or face disputes over CVE scope and CVSS scoring that turn technical discussions into negotiations. Researchers feel compelled to defend impact claims aggressively just to be taken seriously, while vendors push back against what they view as inflated risk. In some cases, bounty hunters preemptively elevate severity, anticipating resistance and delays.

Process as denial of service: Automated scanners, AI-assisted fuzzing, and largely theoretical bugs increasingly flood maintainers and security teams with low-signal reports — a dynamic repeatedly highlighted by Daniel Stenberg, the founder of the cURL project. As a defensive response, maintainers demand ever more concrete proof of exploitability, raising the threshold for engagement even for legitimate findings. In some cases, projects begin questioning whether bug bounties meaningfully improve security, or simply externalize triage cost under the guise of incentives.

Coercive escalation: Finally, when established disclosure channels appear unresponsive or dismissive, some researchers resort to public pressure, legal threats, or ethically ambiguous demonstrations to force action.

Each of these failure modes seems rational in isolation. Together, they erode trust and steadily push responsible disclosure toward a more adversarial posture.

Case studies from the fault line

In 2025, a responsibly reported email spoofing flaw affecting a major delivery platform was deemed out of scope, triggering a dispute over severity and impact. The underlying issue was not whether the bug existed, but whether it crossed the organization’s internal threshold defining risk. The disclosure process stalled, and frustration escalated on both sides, with the vulnerability reporter ultimately barred from the bug bounty program over conduct the company characterized as extortion.

A similar pattern appeared at a ride-hailing company, where multiple researchers independently reported a flaw that allowed emails to be sent appearing to originate from the company’s domain. Despite clear reproduction steps and repeated follow-ups, the reports went unanswered for more than a year. Ethical disclosure was met not with remediation, but with silence.

Elsewhere, disputes have emerged over overlapping CVE claims, with multiple parties arguing over attribution for the same underlying issue. What is meant to be a coordination mechanism instead became a contest for recognition, further distorting narratives.

More troubling are cases where researchers crossed ethical boundaries entirely, for example by hijacking open-source libraries to harvest cloud credentials or by taking control of legitimate packages to embed job-application messages, compromising downstream users in the process. Such actions are indefensible, but they are best understood as symptoms of a disclosure ecosystem that increasingly rewards escalation, visibility, or leverage over patience and cooperation.

Why is this happening now?

It would be easy to frame these disputes as a breakdown in professional norms, but what is happening beneath the surface is the convergence of several structural forces.

Vulnerability report volume has surged. Automated scanners and AI-driven fuzzing tools now generate vast numbers of technically valid but operationally irrelevant findings. Maintainers and security teams are forced to triage at scale, often under significant time and resource constraints.

At the same time, compliance pressures have hardened organizational responses. Once a CVE is reported, it is often treated as a problem by default, before context or exploitability is assessed. High severity scores can trigger build failures, audits, or executive escalation regardless of practical impact, a common frustration for developers using software composition analysis (SCA) tools that block builds over edge cases that ultimately need to be ignored or waived.

The CVSS base score itself is mechanically calculated and intentionally environment-agnostic, meaning low-impact edge cases can score similarly to actively exploited flaws, which contributes to alert fatigue and skepticism.
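
To see why, consider how a CVSS v3.1 base score is assembled. The sketch below is a simplified, hypothetical calculator for scope-unchanged vectors, using the metric weights from the public v3.1 specification; it is an illustration, not a replacement for an official implementation. What matters is what the formula never asks for: nothing in its inputs reflects whether a flaw is actually being exploited or how the affected component is deployed.

```python
# Simplified CVSS v3.1 base-score calculation (scope-unchanged vectors only).
# Metric weights from the public v3.1 specification; illustrative, not official.
import math

AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC  = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required (scope unchanged)
UI  = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for a scope-unchanged vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # The spec rounds the result up to one decimal place
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10

# A largely theoretical crash bug in a parser your services never expose to untrusted input:
print(base_score("N", "L", "N", "N", "N", "N", "H"))  # 7.5
# A remote code execution flaw under active exploitation:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
# Neither call has an input for "is this actually being exploited?" or "how is it
# deployed?"; that context lives in the temporal and environmental metrics,
# which most tooling never applies.
```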

Finally, open-source infrastructure remains structurally underfunded. Many critical components are maintained by a small number of individuals with no obligation, or capacity, to absorb the operational cost imposed by global dependency chains.

In this environment, demanding proof of real-world impact is a form of noise control, rather than hostility. That seemingly reasonable demand, however, has downstream consequences.

When proof becomes unpaid consulting

In many disputes, disclosure breaks down not because a vulnerability does not exist, but because proving its real-world impact requires environment-specific analysis that neither side budgeted for.

Researchers are asked to build realistic PoCs, demonstrate exploit chains, or validate assumptions across configurations they do not control. Maintainers are asked to reason about downstream usage patterns far beyond their original design scope. Both are performing system-level analysis without compensation.

Maintainers are justified in pushing back against low-signal reports. Researchers are justified in feeling that the bar for engagement keeps rising. The system offers no obvious place to send the cost.

Why should CISOs care and what can they do?

For cybersecurity leaders, the implications are concrete.

When disclosure channels are perceived as slow, dismissive, or adversarial, researchers disengage. Some go quiet. Others escalate publicly. A few take ethically questionable paths. None of these outcomes improve security posture.

In practice, most of the levers that determine these outcomes sit with software vendors, platform providers, and open-source stewards. In those environments, CISOs oversee product security incident response teams (PSIRTs), vulnerability intake, disclosure timelines, and researcher engagement. This is where incentives are set, researcher experience is shaped, and triage decisions determine whether cooperation compounds or collapses.

For CISOs operating in vendor, platform, and open-source environments, there is no single fix. Outcomes improve materially when disclosure is treated as an operational function rather than a moral expectation.

Practical steps that CISOs in this space can take include:

  1. Establish and honor service-level expectations for acknowledgement and triage, even when fixes take time.
  2. Assign clear ownership for the researcher experience, not just vulnerability intake.
  3. Publish severity triage criteria and document rationale when disagreeing with reports.
  4. Avoid treating CVSS scores as deployment gates without environmental context (see the sketch after this list).
  5. Use third-party disclosure programs or coordinators to absorb overflow and reduce friction.
  6. Offer meaningful non-cash recognition where bounties are not feasible.
  7. Commit to upstreaming fixes when patching dependencies internally.
  8. Provide legal safe harbor language for good faith testing to reduce adversarial escalation.
  9. Fund the open-source dependencies your organization relies on, whether through sponsorship, contracts, or consortiums.
  10. Be explicit about what level of proof is expected and what isn’t.
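
To make step 4 concrete, here is a minimal, hypothetical sketch of a triage gate that combines a CVSS score with environmental context before failing a build. The Finding fields, thresholds, and blocking policy are illustrative assumptions, not a standard; the known-exploited flag could be fed from a source such as CISA’s Known Exploited Vulnerabilities catalog, and exposure and reachability from your own asset inventory and SCA tooling.

```python
# Hypothetical triage gate: a finding blocks the build only when severity and
# environmental context agree, not on the CVSS base score alone.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float        # vendor- or NVD-supplied base score
    known_exploited: bool   # e.g., listed in a known-exploited-vulnerabilities feed
    internet_facing: bool   # the affected service is exposed to the internet
    reachable: bool         # the vulnerable code path is actually invoked

def should_block_build(f: Finding) -> bool:
    """Fail the build only when severity and environmental context agree."""
    if f.known_exploited and f.reachable:
        return True   # exploited in the wild and the vulnerable path is in use
    if f.cvss_base >= 9.0 and f.internet_facing and f.reachable:
        return True   # critical, exposed, and reachable
    return False      # otherwise: log it, track it, schedule remediation

# A 9.8 in a library whose vulnerable function is never called gets tracked, not blocked:
print(should_block_build(Finding("CVE-0000-0000", 9.8, False, True, False)))  # False
# A 7.5 that is actively exploited on a reachable path fails the build:
print(should_block_build(Finding("CVE-0000-0000", 7.5, True, False, True)))   # True
```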

None of these steps require endorsing exploit sales or paying ransoms for vulnerabilities. They require acknowledging that ethical behavior does not scale on goodwill alone.

For CISOs in healthcare, finance, education, and other consuming organizations, the risk manifests differently but no less acutely. When disclosure breaks down upstream, it surfaces downstream as delayed patches, brittle compensating controls, and security decisions driven by incomplete or distorted signals.

Left unaddressed, those gaps can become governance failures. Organizations may be unable to explain why known vulnerabilities remained unpatched, why risk signals were discounted, or why vendor assurances were accepted without scrutiny.

Enterprise CISOs influence this system through procurement requirements, vendor accountability, and how rigorously vulnerability data is contextualized before triggering disruption. Treating disclosure quality as a third-party risk factor is no longer optional.
