The acting director of the US Cybersecurity and Infrastructure Security Agency uploaded sensitive government contracting documents to a public version of ChatGPT last summer, triggering automated security alerts and raising questions about AI governance at the agency responsible for defending federal networks and critical infrastructure.
Madhu Gottumukkala, who has led CISA since May 2025, uploaded at least four documents marked “for official use only” to OpenAI’s ChatGPT platform between mid-July and early August, Politico reported. The documents contained contracting information not intended for public release.
Cybersecurity sensors detected the activity in early August, generating several alerts in the first week alone, according to the report, which cited four Department of Homeland Security officials.
The incident occurred despite Gottumukkala having personally requested special permission to use ChatGPT shortly after joining CISA. At the time, the AI tool was blocked for most DHS employees over concerns that sensitive information could be retained outside federal systems, the report added, citing the DHS officials.
Data entered into the public version of ChatGPT can be incorporated into the model’s training data and potentially exposed to its hundreds of millions of users. Unlike DHS-approved AI tools, which have controls preventing inputs from leaving federal networks, the public version retains uploaded information on OpenAI’s servers.
Enterprise AI governance failures exposed
The incident highlights systemic failures in how government agencies, and by extension enterprises, manage AI tool exceptions for senior officials, security analysts said.
“FOUO is not classified, but it is still sensitive government information,” said Arjun Chauhan, practice director at Everest Group. “Uploading it to a public AI tool creates real exposure: loss of data control, expanded exposure surface, secondary misuse risk, and policy boundary collapse.”
The pattern mirrors early enterprise incidents where employees pasted confidential material into ChatGPT, Chauhan said. The critical difference is that controls reportedly existed at CISA, and the breach occurred through an exception pathway. “That highlights a core governance failure. Exceptions and senior access are often where AI controls break down.”
Federal agencies now have AI policies and governance bodies, but the gap appears to be in execution rather than intent, according to Chauhan. Safe, approved AI tools are not always the default or most usable option, and enforcement varies by role and seniority.
Sunil Varkey, advisor at Beagle Security, said the incident reflects a broader organizational challenge. “Leadership teams may reference these tools positively for learning, productivity, and communication refinement, which unintentionally normalizes their use,” he said. “As a result, such platforms have rapidly become de facto productivity applications without being treated with the governance rigor typically applied to enterprise systems handling sensitive information.”
The tension between convenience and security often drives such incidents, Varkey added. Because “for official use only” data is not formally classified, users frequently underestimate its operational, contractual, or reputational impact.
Jaishiv Prakash, director analyst at Gartner, said the biggest risk when officials upload FOUO-marked documents to public AI platforms is losing control over the data. “You have no visibility into how long it’s retained, whether it can ever be deleted, or if it becomes exposed during legal holds or discovery.”
Organizations must provide employees with licensed, governed AI platforms featuring supplier-agreed data residency, strict no-training guarantees, and minimal retention, Prakash said. “Without that, people will continue turning to public AI tools out of convenience, putting sensitive information at risk.”
Leadership credibility questioned
The uploads triggered an internal DHS assessment involving the department’s then-acting general counsel Joseph Mazzara and chief information officer Antoine McCord, along with CISA’s chief information officer Robert Costello and chief counsel Spencer Fisher, the report said. The outcome has not been disclosed.
According to the report, CISA spokesperson Marci McCarthy confirmed that Gottumukkala received approval to use ChatGPT under DHS safeguards and described the usage as “short-term and limited.” She said he last used the tool in mid-July 2025 under an authorized temporary exception and that CISA’s default policy blocks ChatGPT access unless an exception is granted.
The fact that automated alerts fired shows the controls can detect misuse, analysts said, but an incident at the leadership level raises accountability questions.
“Because this involves the head of the civilian cybersecurity agency, the impact is largely reputational,” Chauhan said. “Leaders set behavioral norms. Deviations undermine compliance culture and weaken credibility when advising other agencies and critical infrastructure operators.”
The ChatGPT incident adds to mounting controversies surrounding Gottumukkala’s brief tenure. In December, Politico reported that he failed a counterintelligence polygraph test in late July and that DHS subsequently suspended six career staffers, with the department characterizing the polygraph as “unsanctioned.”
CISA has lost a significant portion of its workforce since the Trump administration took office, with headcount dropping from over 3,300 to around 2,200 through buyouts, early retirements, and layoffs. The agency also faces proposed budget cuts of nearly $500 million for fiscal year 2026.
Gottumukkala previously served as South Dakota’s chief information officer under then-Governor Kristi Noem, now DHS secretary. CISA did not immediately respond to a request for comment.