Vercel got breached through a third-party AI tool's OAuth app. Here's what leaked.
A Context.ai compromise let attackers take over a Vercel employee's Google Workspace. Non-sensitive env vars were exposed, and a ShinyHunters persona is asking $2M.
Vercel confirmed on April 19 that attackers reached into its internal systems through a compromised OAuth integration tied to Context.ai, a small AI analytics tool a Vercel employee had connected to their corporate Google Workspace. The attackers pulled some environment variables and 580 employee records. A threat actor using the ShinyHunters name put the data up for $2 million on a hacking forum the same day.
What we know
- The entry point was Context.ai, not Vercel. Vercel’s bulletin says the attacker compromised a “small, third-party AI tool” and pivoted from there into an employee’s Google Workspace. The Hacker News identified the app as Context.ai, with the OAuth client ID `110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com`. That same OAuth app reaches “hundreds of users across many organizations,” per Vercel.
- Once inside the Workspace account, the attacker read Vercel environments and pulled environment variables that were not marked “sensitive”. Vercel encrypts sensitive-marked env vars at rest and says there’s no evidence those values were accessed.
- 580 employee records were exfiltrated, covering names, Vercel email addresses, account status, and activity timestamps, per BleepingComputer’s read of the forum listing. The threat actor also claims access keys, source code, database samples, NPM and GitHub tokens, and Linear data, but Vercel hasn’t confirmed that scope.
- A “limited subset of customers” was affected directly, and Vercel is contacting them individually. Customers outside that subset are not believed to be compromised, though Vercel is urging a full secrets review regardless.
- Vercel engaged Mandiant and law enforcement, and is working with Context.ai to map the full blast radius. Services like Next.js and Turbopack were confirmed unaffected.
- A ShinyHunters-tagged actor posted the data with a $2M asking price. People linked to the actual ShinyHunters crew told BleepingComputer they had nothing to do with it. The name gets borrowed a lot.
What we don’t know
- The customer count. Vercel has not published a number, and the “limited subset” language tells you nothing about whether that’s 30 customers or 3,000.
- Which specific env vars leaked. Vercel is emailing affected customers with their individual scope, but there’s no public inventory.
- Whether the ShinyHunters claim of source code, database samples, and GitHub tokens is real or embellishment. The forum listing is one screenshot away from fiction.
- How Context.ai itself got compromised. Vercel says the OAuth-app compromise is “ongoing and broader,” which iTnews noted means more victims are likely to surface in the coming weeks.
Sources
Vercel’s knowledge-base bulletin is the authoritative timeline and is being updated as the investigation runs. BleepingComputer has the attacker-side framing and the $2M listing details. The Hacker News tied the OAuth client ID to Context.ai by name.
Why the “sensitive” toggle matters more than you think
Vercel has offered a Sensitive Environment Variables feature for two years. Teams have widely ignored it. Sensitive-marked values are stored encrypted and aren’t readable from the dashboard or the API after they’re set. Non-sensitive values are readable, logged, and apparently reachable by anyone who can get into the right Workspace account.
The breach doesn’t prove the feature is a panacea. It proves that not using it costs you a recovery window. In this incident, every secret without the flag is now in the “rotate immediately” bucket; every secret with the flag is in the “probably fine, but check” bucket. That’s a meaningful difference when your ops team is rotating credentials across production at 2am.
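That two-bucket triage can be scripted against your own projects. A minimal sketch, assuming the shape of Vercel's `GET /v9/projects/{id}/env` REST response (an `envs` array whose items carry `key` and `type` fields, with `"sensitive"` as the type for sensitive-marked values); verify the field names against the API docs before relying on this:

```python
# Triage sketch: split env vars into "rotate now" vs "verify" buckets.
# The record shape here is an assumption modeled on Vercel's
# project-env API response; check the official docs first.

def triage(envs: list[dict]) -> dict[str, list[str]]:
    buckets = {"rotate_now": [], "probably_fine_but_check": []}
    for var in envs:
        # Only sensitive-typed values are encrypted and unreadable after
        # creation; treat everything else as burned in this incident.
        if var.get("type") == "sensitive":
            buckets["probably_fine_but_check"].append(var["key"])
        else:
            buckets["rotate_now"].append(var["key"])
    return buckets

# Hypothetical sample data for illustration.
sample = [
    {"key": "DATABASE_URL", "type": "encrypted"},
    {"key": "STRIPE_SECRET_KEY", "type": "sensitive"},
    {"key": "NEXT_PUBLIC_APP_NAME", "type": "plain"},
]
print(triage(sample))
```

Anything that lands in the first bucket goes on this week's rotation list.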
What this means for you
If you use Vercel, assume every environment variable without the sensitive flag is burned and rotate it this week. Start with anything that can write to a database, call a payment API, or grant repo access.

Then go into Google Workspace’s OAuth console and review which third-party apps have drive.readonly, gmail.readonly, or broad Workspace scopes on your employees’ accounts: the Context.ai vector is reusable against any vendor with a quiet analytics app plugged into your staff’s mail. And turn on the sensitive-env-var flag on every real secret from now on, not just the ones you remember at 3pm.

One more thing: the ShinyHunters-brand claim of source code and database samples probably outruns the truth. Wait for Vercel’s per-customer emails before you front-run a full incident to your own customers. But run the rotation now anyway. The cost is a rollout window; the alternative is finding out the hard way that you had tokens in the “description” field of a build env.
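The Workspace review is scriptable too. A sketch of the scope filter, assuming token records shaped like the Workspace Admin SDK Directory API `tokens.list` response (`clientId`, `displayText`, `scopes` fields); the risky-scope list is illustrative, not exhaustive:

```python
# Sketch: flag third-party OAuth grants with broad mail/drive scopes.
# Record shape is an assumption modeled on the Admin SDK tokens.list
# response; confirm against the Directory API reference.

RISKY_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive.readonly",
    "https://mail.google.com/",
}

def flag_risky_grants(tokens: list[dict]) -> list[str]:
    flagged = []
    for tok in tokens:
        hits = RISKY_SCOPES.intersection(tok.get("scopes", []))
        if hits:
            flagged.append(
                f"{tok['displayText']} ({tok['clientId']}): "
                + ", ".join(sorted(hits))
            )
    return flagged

# Sample data; the first clientId is the one The Hacker News tied to Context.ai.
sample = [
    {"clientId": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
     "displayText": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/gmail.readonly"]},
    {"clientId": "calendar-sync.example",
     "displayText": "Calendar Sync",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
for line in flag_risky_grants(sample):
    print(line)
```

Run something like this across every employee account, not just the obvious admins; the Context.ai grant lived on a regular employee's mailbox.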