Trump's Cyber Chief Uploaded Sensitive Files to ChatGPT — What Just Happened
By DailyAIWire Desk | 3 min read
The person responsible for protecting America’s digital infrastructure just triggered a major security alarm. And it involves ChatGPT.
Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency (CISA), uploaded sensitive government documents to the public version of ChatGPT last summer. The files weren’t classified, but they were marked “for official use only” — government-speak for information that shouldn’t be shared publicly.
Multiple automated security warnings went off immediately. These are the same alerts designed to catch hackers or prevent accidental data leaks from federal networks.
Here’s the twist: ChatGPT was banned for all Department of Homeland Security employees when Gottumukkala joined CISA in May. He had to request special permission to use it. And then he uploaded sensitive contracting documents anyway.
Why This Matters More Than You Think
When you upload something to the public version of ChatGPT, it doesn’t just disappear into the void. OpenAI, the company behind ChatGPT, can use that data to improve responses for other users. With nearly 800 million active users worldwide, that’s a lot of potential exposure.
Government agencies have internal AI tools specifically designed to keep data locked down. DHS has its own chatbot called DHSChat that prevents anything you type from leaving federal networks. But Gottumukkala wanted ChatGPT instead.
One DHS official told Politico bluntly: “He forced CISA’s hand into making them give him ChatGPT, and then he abused it.”
This isn’t just bureaucratic drama. CISA’s job is defending federal networks against sophisticated hackers from Russia, China, and other adversaries. The acting director uploading sensitive files to a public AI platform raises obvious questions about judgment and security protocols.
What Happened Behind Closed Doors
Senior officials at DHS launched an internal review in August to figure out if any government infrastructure was compromised. The department’s then-acting general counsel and chief information officer got involved to assess potential damage.
Gottumukkala met with CISA’s leadership to discuss what he uploaded and proper handling of sensitive materials. The outcome of the review remains unknown.
CISA spokesperson Marci McCarthy defended Gottumukkala’s use of the tool in a statement, saying he “was granted permission to use ChatGPT with DHS controls in place” and that the use was “short-term and limited.” She said he last used ChatGPT in mid-July 2025 under an authorized exception.
But there’s a discrepancy. Politico’s sources say the security alerts were triggered in August, with several warnings in just the first week alone.
The Bigger Picture for Workers
This incident comes at a moment when AI tools are flooding workplaces everywhere. A new Gallup poll shows 12% of American adults now use AI daily at their jobs. That number is climbing fast.
But here’s the problem: most employees don’t understand the security implications. What feels like a helpful productivity tool can become a data leak nightmare, especially in government, healthcare, finance, or any other field handling sensitive information.
If the head of America’s cybersecurity agency can make this mistake, what does that say about millions of other workers using ChatGPT to summarize contracts, draft emails, or analyze documents?
Companies are scrambling to create policies. Some are building internal AI tools with guardrails. Others are banning public AI platforms entirely. The middle ground is messy and confusing.
What Changes Now
This story puts pressure on federal agencies to clarify AI usage policies. When even leadership doesn’t seem to understand the boundaries, rank-and-file employees are left guessing.
For students and young professionals entering the workforce, this is a wake-up call. Understanding AI security isn’t optional anymore. It’s a core professional skill.
Gottumukkala previously served as South Dakota’s chief information officer under then-Governor Kristi Noem, who now leads DHS. His background adds another layer of irony — someone with IT leadership experience should theoretically understand these risks better than most.
There’s also the polygraph issue. At least six CISA staffers were placed on leave last year after Gottumukkala reportedly failed a polygraph test he requested. He’s denied failing it, telling a congressman he didn’t “accept the premise of that characterization.”
What Happens Next
Federal employees will likely face stricter AI usage rules. The incident proves that permission alone isn’t enough — agencies need clear technical controls and ongoing monitoring.
OpenAI and other AI companies will face renewed questions about enterprise security. Can they offer government-grade versions that don’t feed data back into public models? Some already exist, but adoption is slow.
For everyday users, the lesson is simple: think twice before uploading anything sensitive to ChatGPT or similar tools. Work documents, personal information, proprietary data — once it’s in the system, you’ve lost control.
The irony is almost too perfect. America’s top cybersecurity official, someone whose entire job is preventing exactly this kind of exposure, made a rookie mistake with the world’s most popular AI chatbot.
It’s a reminder that no matter how sophisticated our defenses get, human error remains the biggest vulnerability in cybersecurity.