A judge killed DOGE's grant purge. The 'review process' was asking ChatGPT 'Is this DEI?'
A federal judge restored $100M+ in grants after two DOGE staffers used ChatGPT to flag roughly 91% of reviewed NEH grants as DEI, including an HVAC repair and Holocaust research.
U.S. District Judge Colleen McMahon issued a 143-page ruling on May 7 restoring more than 1,400 National Endowment for the Humanities grants worth over $100 million. The grants had been killed by two DOGE staffers who used ChatGPT to decide which ones were “DEI-related.”
Their process flagged roughly 91% of everything they reviewed. A museum HVAC repair made the list.
The case, American Council of Learned Societies et al v. McDonald et al, is now the clearest judicial statement on what happens when a government uses AI to make consequential decisions without human review. The ruling covers First Amendment, Fifth Amendment, and statutory authority grounds. Each one alone would have been enough to kill the terminations.
The ChatGPT prompt that killed $100 million in grants
Justin Fox, a former private equity associate at Nexus Capital Management, and Nate Cavanaugh, a college dropout who co-founded an IP licensing startup called Brainbase, led the review. Neither had experience in humanities, grant administration, or government service. Judge McMahon described them as people who “did not have much experience in anything at all.”
Their methodology: Fox built a “Detection List” of keywords including “DEI, DEIA, Equity, Inclusion, BIPOC, LGBTQ, homosexual, tribal, immigrants, gay, native.” He then fed short grant descriptions (not the full applications or underlying materials) into ChatGPT with this exact prompt:
“Does the following relate at all to DEI? Respond factually in less than 120 characters. Begin with ‘Yes.’ or ‘No.’ followed by a brief explanation.”
ChatGPT’s yes/no answers went into a spreadsheet. Fox testified that he “did not define ‘DEI’ for ChatGPT” and had “not the slightest idea” how the model understood the term.
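The testimony describes a two-part screen: a keyword "Detection List" pass and a ChatGPT yes/no prompt with "DEI" never defined. The actual DOGE tooling was not disclosed, but the keyword pass would behave something like this illustrative sketch (function name and sample descriptions are hypothetical):

```python
# Illustrative reconstruction only -- the real DOGE code was not made public.
# A context-free substring screen flags any description containing a listed
# term, regardless of what the grant is actually about.

DETECTION_LIST = [
    "DEI", "DEIA", "Equity", "Inclusion", "BIPOC", "LGBTQ",
    "homosexual", "tribal", "immigrants", "gay", "native",
]

def keyword_flag(description: str) -> bool:
    """Flag a grant if any listed term appears, ignoring case and context."""
    text = description.lower()
    return any(term.lower() in text for term in DETECTION_LIST)

# Substring matching overflags: "native" matches botany as readily as it
# matches Native American history.
print(keyword_flag("Digitizing herbarium records of native plant species"))  # True
print(keyword_flag("Replacing the museum's HVAC system"))                    # False
```

Note that an HVAC description contains none of the keywords, which is why the High Point Museum flag (discussed below) could only have come from ChatGPT's invented rationale, not from the Detection List.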
Out of 1,163 grant proposals analyzed, 1,057 were flagged as DEI-related. That's roughly 91%. NEH career staff had already reviewed the same grants and marked them “N/A” for executive order violations, but DOGE overrode those assessments. Fox labeled his lists “Craziest Grants” and “Other Bad Grants.” Only about 40 grants from the Biden administration were ultimately spared.
On April 1, 2025, NEH canceled roughly $100 million in grants and terminated 65% of its staff. The agency has awarded $6 billion in total since 1965; its budget represents 0.003% of federal spending.
What got flagged
The list of grants ChatGPT decided were DEI-related reads like a parody:
- High Point Museum HVAC replacement ($349,000): a North Carolina history museum needed a new HVAC system to preserve its collection. ChatGPT’s response: “Yes. Improving HVAC systems enhances preservation conditions for collections, aligning with the goal of providing greater access to diverse audiences. #DEI.” The model invented a rationale that didn’t exist in the grant description.
- “In the Shadow of the Holocaust” anthology: a collection of short fiction by Jewish writers from the Soviet Union. NEH acting chairman Michael McDonald himself testified it wasn’t DEI-related. DOGE overruled him.
- Ancient Hebrew texts project: study of the Book of Jubilees and Testament of Moses, flagged because it mentioned “Jewish thought.”
- Uyghur persecution documentation: a human rights study on China’s treatment of Uyghurs.
- Women Airforce Service Pilots (WASP) project: historical research on WWII women aviators.
Judge McMahon wrote: “A grant funding the study of the experience of Jewish women during the Holocaust is not wasteful because it concerns Jewish women. Yet that is precisely how DOGE treated them.” She added: “At a time when the specter of antisemitism has reemerged from the shadows, for our Government to deem a project about Jewish women disfavored because it centered on ‘Jewish cultures’ and ‘female’ voices is deeply troubling.”
The ruling
McMahon granted summary judgment on all counts. Her ruling rested on three independent legal grounds:
First Amendment: the grant terminations constituted “a textbook example of unconstitutional viewpoint discrimination.” The government targeted grants based on their content relating to race, gender, religion, and ethnicity.
Fifth Amendment equal protection: DOGE used “the mere presence of particular, protected characteristics to disqualify grants from continued funding.”
Ultra vires: Congress never gave DOGE the authority to terminate NEH grants. DOGE had no statutory basis for what it did.
The government’s defense was that ChatGPT made the classifications, not the government. McMahon compared this argument to comedian Flip Wilson’s character Geraldine Jones, who would excuse her behavior by saying “the devil made me do it.”
She wrote: “That excuse did not work for Geraldine Jones, and it does not work for the Government. There is no distinction to be drawn here between the Government and ChatGPT.”
Her sharpest line landed on page 87: “The Constitution does not have an exception for algorithmic convenience.”
She also noted that DOGE staff used Signal with auto-delete during the review process, a potential Federal Records Act violation that the court flagged but didn’t rule on directly. Cavanaugh testified that the team faced White House pressure to “move faster.”
Why this matters beyond humanities grants
The ruling’s reach extends past the NEH. McMahon specifically addressed ChatGPT’s reliability, noting “what courts now know about the hallucinatory propensities of ChatGPT and similar generative-AI tools.” She treated the model as an instrument of the government, not a separate actor, meaning the government can’t outsource constitutional obligations to AI and then disclaim the results.
This is a precedent-setting framing. Previous cases involving algorithmic decision-making in criminal sentencing and lending faced similar questions, but those tools were at least purpose-built for their domains. DOGE used a general-purpose chatbot to make constitutional determinations about $100 million in federal spending. The gap between the tool’s design and its deployment is what makes the case legally novel and factually absurd in equal measure.
The NEH case now sits alongside a growing body of AI-governance caselaw. The Pennsylvania lawsuit against Character.AI asks whether a chatbot can practice medicine. The Five Eyes agentic AI advisory warned about oversight gaps in autonomous AI systems. The Michigan data center ruling showed what happens when AI infrastructure overrides local governance. All four share the same thread: AI making or enabling consequential decisions without adequate human review.
For any federal agency currently using AI in grant reviews, benefits determinations, or regulatory enforcement, McMahon’s framework is now the standard to beat: define what you’re asking the AI to evaluate, validate its outputs against expert judgment, and keep a human in the loop who can override the system. DOGE did none of those things.
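The three safeguards above reduce to a gating rule: the model's flag is advisory, expert judgment controls, and disagreement goes to a human. A minimal hypothetical sketch (the field names and status strings are illustrative, not from the ruling):

```python
from dataclasses import dataclass

@dataclass
class GrantReview:
    grant_id: str
    ai_flag: bool          # advisory model output, never dispositive
    ai_rationale: str      # logged so the flag can be audited later
    staff_assessment: str  # expert judgment, e.g. "N/A" or "violation"

def final_decision(review: GrantReview) -> str:
    """AI output is advisory; expert judgment controls; conflicts escalate."""
    if not review.ai_flag:
        return "retain"
    if review.staff_assessment == "N/A":
        # Career staff found no violation; the human assessment overrides.
        return "retain"
    # Both the model and staff raised concerns: a human makes the call.
    return "escalate to human reviewer"

print(final_decision(GrantReview("GR-001", True, "#DEI", "N/A")))  # retain
```

In the NEH review the logic ran backwards: staff “N/A” assessments existed and were overridden by the model's flag.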
What this means for you
If you work in government technology, read the ruling. McMahon’s framework is clear: AI can assist decision-making, but the government remains constitutionally responsible for the outcomes. The “ChatGPT did it” defense is now precedent-dead in the Southern District of New York.
For the AI industry, the case illustrates a failure mode that technical safeguards alone can’t prevent. ChatGPT did exactly what it was asked: answer a yes-or-no question about a vague concept. The problem was upstream. Nobody defined the term, nobody validated the outputs, nobody checked whether a heating system is actually a diversity program. That’s an organizational failure wearing a technology mask.
Authors Guild President Mary Rasenberger said: “We are gratified that justice was done and we will be watching closely to make sure every one of these grants is restored.”
All 1,400+ terminated grants must now be restored. The termination notices are to be “treated as without legal effect.” The White House and DOJ haven’t commented on whether they’ll appeal.
Sources
- Judge finds Trump's DOGE-led cancellation of humanities grants unconstitutional — PBS NewsHour
- DOGE bros' grant review process was literally just asking ChatGPT 'is this DEI?' — Techdirt
- Court docket: American Council of Learned Societies v. McDonald — Justia
- DOGE's cuts to Jewish humanities grants were unconstitutional, judge rules — Jewish Telegraphic Agency
Frequently Asked Questions
- What did DOGE do with ChatGPT?
- Two DOGE staffers fed short grant descriptions into ChatGPT with the prompt 'Does the following relate at all to DEI?' and used the yes/no answers to cancel over 1,400 National Endowment for the Humanities grants worth $100M+.
- What was the court's ruling?
- U.S. District Judge Colleen McMahon ruled the terminations were 'unlawful, unconstitutional, ultra vires, and without legal effect,' issuing a permanent injunction requiring all 1,400+ grants to be restored.
- Does this ruling affect government AI use broadly?
- The ruling establishes that the government cannot delegate constitutional responsibilities to an AI system and disclaim responsibility for the outcomes. The judge wrote, 'The Constitution does not have an exception for algorithmic convenience.'
- What grants were wrongly flagged as DEI?
- Flagged grants included Holocaust fiction research, a North Carolina museum HVAC repair, Uyghur persecution documentation, ancient Hebrew texts, and Women Airforce Service Pilots research.