Supply Chain · April 25, 2026 · 5 min read · By Forum Desk

Vercel's Context.ai Breach Pins the Real Cost of an 'Allow All' OAuth Click

A Vercel employee gave a small AI productivity tool full Google Workspace permissions. Months later, that tool was breached — and the attacker rode that OAuth grant straight into Vercel's environment.

  • #supply-chain
  • #ai-agents
  • #identity
  • #oauth
*Image: A hand inserting a small modern key into an ornate brass keyhole on a dark wooden table, lit by a single candle — OAuth grant as access metaphor.*

Cloud hosting platform Vercel disclosed last week that an attacker reached its internal environment through a third-party AI tool that one of its engineers had connected to a corporate Google Workspace account with the broadest permissions the OAuth screen offered. The chain that started with an “Allow All” click on a productivity sidekick ended with attackers enumerating environment variables inside Vercel’s production tenant — and a BreachForums actor using the handle ShinyHunters trying to flip the haul for around $2 million.

Anatomy of an OAuth chain

Per reporting from The Hacker News and Help Net Security, the initial compromise was not at Vercel at all. In February 2026, an employee at Context.ai — a small AI office-suite vendor — was hit by the Lumma infostealer. Logs from the infected machine reportedly showed the user searching for and downloading Roblox auto-farm scripts and game executors, a notorious distribution channel for that malware family. The Lumma haul gave attackers credentials and OAuth tokens belonging to a Context.ai operator with elevated access.

From there, the path back to Vercel was a single OAuth grant. A Vercel employee had previously signed up for Context.ai’s tool using their corporate Google Workspace account and accepted the broadest scope on the consent screen, granting the AI vendor effectively “all of Workspace” on their behalf. With Context.ai compromised, the attacker rode that grant into the employee’s Workspace identity, pivoted into adjacent Vercel-internal resources, and decrypted a set of non-sensitive environment variables. Vercel says no customer source code was accessed, but credentials for a “limited subset of customers” were exposed. Google Mandiant and other firms are assisting the investigation.

The scope problem nobody is owning

Two systemic failures stack on top of each other in this incident, and neither is unique to Vercel. The first is that consumer-style OAuth grants — “Allow this app to read your email, files, and calendar” — are still treated as one-click consent inside enterprises, with no central review of which third-party apps hold which scopes against which corporate identities. The second is that AI tools, by their nature, request maximally broad scopes because their value proposition is “everything in your Workspace, summarised.” That combination turns every small AI vendor into a potential single-token doorway into the customer estate.
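The gap between a narrow, read-only grant and an "Allow All" grant is visible in the scope strings themselves. The sketch below triages a hypothetical consent request: the scope URLs are real Google OAuth scope identifiers, but which scopes count as "broad", and the example request, are illustrative assumptions.

```python
# Hypothetical triage of the scopes an app requests on a consent screen:
# which of them grant full read/write access rather than a narrow slice.
# The scope URLs are real Google OAuth scope identifiers; the "broad"
# threshold and the sample request are assumptions for illustration.

# Full-access Workspace scopes that amount to "Allow All" for a service.
BROAD_SCOPES = {
    "https://mail.google.com/",                 # full Gmail access
    "https://www.googleapis.com/auth/drive",    # full Drive access
    "https://www.googleapis.com/auth/calendar", # full Calendar access
}

def broad_grants(requested_scopes):
    """Return the subset of requested scopes that are full-access."""
    return sorted(s for s in requested_scopes if s in BROAD_SCOPES)

# An "everything in your Workspace" consent request, like the one described:
requested = [
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar",
    "https://www.googleapis.com/auth/userinfo.email",  # narrow: identity only
]

print(broad_grants(requested))
```

A consent screen that trips this check for three services at once is exactly the pattern the AI sidekick in this incident relied on.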

The Cloud Security Alliance’s CSAI Foundation and Dark Reading’s recent identity-security coverage have both flagged this pattern: AI agents are forcing identity teams to govern third-party app authorization the same way they govern human employees, and most enterprises do not yet have the tooling to do so at the granularity required. Allow lists for connected apps, periodic OAuth grant reviews, and mandatory scope-minimisation are still considered “advanced hygiene” rather than table stakes.

What this means for practitioners

For security teams, the immediate to-do list is unglamorous. Inventory every OAuth-connected app touching corporate Workspace and Microsoft 365 tenants, especially anything labelled “AI.” Revoke any grant scoped to “everything.” Disable end-user consent for high-scope apps so admin approval is required, and log every grant change to the SIEM. For vendors, the lesson is that the size of the customer is no defence — Vercel is not a small target — and that a single compromised contractor inside a small AI startup can be the most valuable asset in the supply chain.
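The grant review above can be run mechanically once an inventory exists. In Google Workspace, such an inventory can be built from the Admin SDK Directory API (the per-user `tokens.list` method); the record layout, app names, and revocation criterion below are assumptions for the sketch.

```python
# Sketch of the OAuth grant review described above, run against an
# exported inventory of third-party app grants. In Google Workspace the
# raw data can come from the Admin SDK Directory API's per-user
# tokens.list method; the dict layout and app names here are assumptions.

# Scopes treated as "everything" for the purpose of this review.
FULL_ACCESS_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/calendar",
}

def grants_to_revoke(inventory):
    """Flag (user, app) pairs where an app holds any full-access scope."""
    return [
        (grant["user"], grant["app"])
        for grant in inventory
        if FULL_ACCESS_SCOPES & set(grant["scopes"])
    ]

inventory = [
    {"user": "alice@example.com", "app": "ai-notes-sidekick",
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"user": "bob@example.com", "app": "calendar-viewer",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

for user, app in grants_to_revoke(inventory):
    print(f"revoke: {app} for {user}")
```

Feeding each flagged pair back through the same Directory API (its `tokens.delete` method) closes the loop; logging both the flag and the revocation to the SIEM is what makes the review auditable rather than one-off.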

The ShinyHunters claim of “the largest supply chain attack ever” is marketing. The pattern under it — Lumma stealer at a small AI vendor, OAuth grant at a big customer, environment-variable exfiltration as the payoff — is the part that should be sobering.