The security community has been pointing everyone toward provenance attestations as the answer to supply chain integrity. Cryptographic signatures. SLSA certifications. Trusted publishing pipelines. The promise was simple: if the signature is valid, the code is safe.


On May 11, 2026, at 19:20 UTC, that promise failed. Publicly. Completely. In six minutes.


A threat group called TeamPCP, operating a self-propagating worm they named Mini Shai-Hulud, published 84 malicious package versions across 42 packages in the TanStack namespace. TanStack includes some of the most widely used routing and data libraries in the React ecosystem, packages with over 12 million weekly downloads. The malicious versions spread from there to Mistral AI, UiPath, OpenSearch, and Guardrails AI. By the time researchers had mapped the full scope, over 170 packages spanning 518 million cumulative downloads had been touched.


OpenAI confirmed yesterday that two of its employee devices were compromised in the attack. Internal credentials were exfiltrated from a limited set of source code repositories. The company is now rotating code-signing certificates and requiring all macOS users to update their applications before June 12.


Two devices. No customer data. Contained quickly. That is not the story.


The story is what the malicious packages were carrying when they arrived: valid SLSA Build Level 3 provenance attestations, generated by Sigstore, the cryptographic infrastructure the developer community built to verify that a package came from a trusted source. This is the first documented npm worm to produce validly attested malicious packages. Every downstream system that checked provenance and saw “verified” was misled by the architecture designed to protect it.


I want to be precise about why this matters and why it is not just a developer problem.


The attack did not steal credentials. It did not phish a maintainer. It did not plant a rogue contributor. It engineered a path through TanStack's own legitimate CI/CD pipeline by chaining three weaknesses in GitHub Actions: a privileged workflow that ran untrusted code from a fork's pull request, cache poisoning across the fork-to-base trust boundary, and runtime extraction of a short-lived OIDC token from the memory of the GitHub Actions runner process. The pipeline stole its own publish token, used it to publish malicious packages, and the packages arrived with valid attestations because the trusted pipeline produced them. TanStack's own post-mortem is direct about this: no npm tokens were stolen, and the npm publish workflow was not compromised. The attacker made the workflow compromise itself.
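The first link in that chain, a privileged workflow trigger combined with a checkout of fork-controlled code, is a pattern that can be audited mechanically. The sketch below is a minimal, hypothetical detector operating on an already-parsed workflow file; the job names and checkout ref are invented placeholders, not TanStack's actual configuration.

```python
# Illustrative detector for the risky pattern described above: a workflow
# triggered with repository privileges (pull_request_target grants secrets
# and a write-scoped token) that also checks out the fork's head revision.
# This operates on a parsed workflow represented as a plain dict.

def fork_trust_findings(workflow: dict) -> list[str]:
    """Return human-readable findings for a parsed workflow file."""
    findings = []
    triggers = workflow.get("on", {})
    if "pull_request_target" not in triggers:
        return findings  # no privileged PR trigger, nothing to flag here
    for job_name, job in workflow.get("jobs", {}).items():
        for step in job.get("steps", []):
            uses = step.get("uses", "")
            ref = step.get("with", {}).get("ref", "")
            # actions/checkout pointed at the PR head = running fork code
            # inside a privileged context.
            if uses.startswith("actions/checkout") and "head" in ref:
                findings.append(
                    f"{job_name}: privileged trigger checks out fork code ({ref})"
                )
    return findings


# A hypothetical vulnerable workflow, reduced to the relevant fields.
vulnerable = {
    "on": {"pull_request_target": {}},
    "jobs": {
        "build": {
            "steps": [
                {
                    "uses": "actions/checkout@v4",
                    "with": {"ref": "${{ github.event.pull_request.head.sha }}"},
                },
                {"run": "npm ci && npm test"},
            ]
        }
    },
}

print(fork_trust_findings(vulnerable))
```

A real audit would parse the YAML under `.github/workflows/` and would also need to account for cache keys and token scopes, which this sketch deliberately omits.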


This is the accountability architecture question. SLSA provenance and Sigstore attestations were designed to answer "can I trust this package?" The answer the architecture returned was yes. The answer was wrong. Not because the signature was forged. Because the pipeline generating the certificate was the attack surface. The governance layer certified something it should not have, and it did so correctly, by its own rules.


That is not a technical failure. It is a structural one. An attestation system built to verify origin cannot detect origin compromise. The trust signal is only as reliable as the integrity of the infrastructure issuing it, and there was no accountability architecture governing that infrastructure.
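The structural failure can be reduced to a toy model. In the sketch below, HMAC stands in for the real Sigstore signing flow, and the key, payloads, and function names are all invented for illustration: the point is that the pipeline attests to whatever it builds, so a compromised build yields a perfectly valid attestation.

```python
# Toy model of the failure mode: verification proves "this artifact came
# from the trusted pipeline," not "this artifact is safe." If the pipeline
# itself builds malware, the malware verifies.
import hashlib
import hmac

PIPELINE_KEY = b"trusted-ci-signing-key"  # hypothetical key held only by CI


def attest(artifact: bytes) -> dict:
    """The pipeline signs a digest of whatever it just built."""
    digest = hashlib.sha256(artifact).hexdigest()
    sig = hmac.new(PIPELINE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": sig}


def verify(artifact: bytes, attestation: dict) -> bool:
    """Downstream check: does the attestation match this artifact?"""
    digest = hashlib.sha256(artifact).hexdigest()
    expected = hmac.new(PIPELINE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == attestation["sha256"] and hmac.compare_digest(
        expected, attestation["signature"]
    )


benign = b"module.exports = router;"
malicious = b"module.exports = router; exfiltrate(process.env);"

# Both verify, because both were produced by the trusted pipeline.
print(verify(benign, attest(benign)), verify(malicious, attest(malicious)))
# -> True True
```

Nothing in `verify` can distinguish the two artifacts, which is exactly the gap the prose above describes: origin verification cannot detect origin compromise.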


OpenAI’s specific situation adds a second layer worth examining. The company had already been hit by a supply chain attack in March, when a North Korean group compromised an Axios library in a GitHub Actions workflow used to sign macOS applications. After that breach, OpenAI accelerated deployment of new controls: hardened CI/CD credential handling, package manager configurations with minimum release age requirements, additional provenance validation. The two employee machines compromised in the TanStack wave had not yet received those updated configurations. They were in the rollout window.
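One of the controls mentioned above, a minimum release age requirement, is worth making concrete, since it is the control most directly matched to a six-minute attack window. The sketch below is a minimal policy gate under stated assumptions: the registry lookup is stubbed out with static timestamps, and the seven-day window is an invented policy value, not OpenAI's actual configuration.

```python
# Sketch of a minimum-release-age gate: refuse to install any package
# version published more recently than a cooldown window, giving scanners
# and maintainers time to flag a compromise before the version is consumed.
from datetime import datetime, timedelta, timezone

MIN_AGE = timedelta(days=7)  # hypothetical policy value


def old_enough(published_at: datetime, now: datetime) -> bool:
    """True if the version has aged past the cooldown window."""
    return now - published_at >= MIN_AGE


# Timestamps are illustrative: a version published ten minutes ago
# versus one that has been public for nearly two weeks.
now = datetime(2026, 5, 11, 19, 30, tzinfo=timezone.utc)
fresh = datetime(2026, 5, 11, 19, 20, tzinfo=timezone.utc)
seasoned = datetime(2026, 4, 30, tzinfo=timezone.utc)

print(old_enough(fresh, now), old_enough(seasoned, now))
# -> False True
```

In production this logic sits in the package manager configuration rather than in application code, but the trade-off is the same: the cooldown buys detection time at the cost of delayed patches, and it only protects machines that have received the configuration, which is precisely the rollout-window problem the next paragraph describes.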


That window is the dangerous window. Not ignorance of the threat class. Not a failure to act. The gap between a known attack pattern, a decision to close it, and full coverage across every asset that needs protection. This is the Timing component of the VAF operating inside a single organization’s security remediation cycle. The threat moved faster than the control deployment.


There is a third element that extends this beyond the immediate incident. TeamPCP briefly published the Mini Shai-Hulud source code on GitHub before the repository was removed. Copies have already been mirrored. The worm is now a template. The group has since announced a supply chain attack contest, offering cryptocurrency rewards to anyone who can compromise open-source packages using the publicly available toolchain. The accountability question has shifted from what TeamPCP did to what every organization that has not yet hardened its CI/CD pipelines against this attack class will face from whoever picks up the worm next.


Running this through the VAF:


Origin: the attack entered through the open-source dependency layer, a supply chain component that most organizations treat as an externally trusted service rather than an internal risk surface. The origin of the malicious code was the organization's own build pipeline.


Voice: developers who ran npm install during the six-minute window had no mechanism to know they were installing something compromised. The attestation system told them otherwise.


Traceability: the worm installs persistence hooks in VS Code and Claude Code that survive reboots. It creates a watchdog daemon that polls GitHub every 60 seconds. If a developer revokes the npm token the malware created, a destructive routine executes and attempts to wipe the machine. The audit trail for what was exfiltrated may not survive the cleanup.


Timing: 84 malicious package versions published in six minutes. Socket's AI scanner flagged the compromise within six minutes of publication. The window was narrow, but in large organizations with lockfiles pinning dependencies, the malicious versions may have been cached before detection.


Response: TanStack’s post-mortem was published quickly and was technically precise. OpenAI disclosed within days and engaged a third-party forensics firm. The response architecture functioned. What did not function was the preventive architecture that was supposed to make the response unnecessary.


Transparency: there is no public accounting of how many organizations are still running compromised package versions, how many credentials were successfully exfiltrated, or what downstream systems those credentials reached. The 400 attacker-created GitHub repositories containing the string “Shai-Hulud: Here We Go Again” are a partial window into the blast radius.
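The Timing point above has a practical follow-up: even after detection, a compromised version can survive in lockfiles and caches. The sketch below sweeps an npm `package-lock.json` against a known-bad list. The package names and versions are invented placeholders, not the actual compromised TanStack versions, which organizations should take from the vendor advisories listed at the end of this alert.

```python
# After-the-fact sweep of a lockfile against a known-bad version list.
# Operates on npm's package-lock.json v2/v3 "packages" map, where keys
# are installation paths like "node_modules/@scope/name".
import json

# Hypothetical compromised (name, version) pairs for illustration.
COMPROMISED = {("@example/router", "1.2.3"), ("@example/query", "4.5.6")}


def hits_in_lockfile(lock: dict) -> list[str]:
    """Return 'name@version' for every pinned dependency on the bad list."""
    found = []
    for path, entry in lock.get("packages", {}).items():
        if not path:
            continue  # empty key is the root project itself
        name = path.split("node_modules/")[-1]
        if (name, entry.get("version", "")) in COMPROMISED:
            found.append(f"{name}@{entry['version']}")
    return found


lock = json.loads("""
{
  "name": "demo-app",
  "packages": {
    "": {"name": "demo-app", "version": "1.0.0"},
    "node_modules/@example/router": {"version": "1.2.3"},
    "node_modules/left-pad": {"version": "1.3.0"}
  }
}
""")
print(hits_in_lockfile(lock))
# -> ['@example/router@1.2.3']
```

A lockfile hit means the bad version is pinned; clearing it also requires purging local and CI caches, since a cached tarball will reinstall cleanly without ever touching the registry again.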


The gap here is not in any individual organization’s response. OpenAI responded well. TanStack responded well. The gap is in the governance architecture surrounding open-source dependency trust. Organizations that consume these packages have no contractual relationship with maintainers, no SLA for breach notification, no audit rights, and no independent verification of the attestation infrastructure their security posture depends on. They extended trust because the ecosystem told them to.


The attestation said the package was safe. The pipeline that issued the attestation had been turned against its maintainers. There was no layer in the accountability architecture designed to detect the difference.


Accountable by Design means knowing exactly what you are trusting and why, before the signature is the only thing standing between your build pipeline and a worm that knows your name.

Vordan publishes Gap Alerts when an accountability gap crosses the threshold of operational consequence. Gap Alert Seven covered the contractor credential exposure pipeline. This alert covers the trust architecture that certified a malicious package as safe and had no mechanism to know the difference.


Security Research
StepSecurity (attribution and SLSA escalation finding)
https://www.stepsecurity.io/blog/mini-shai-hulud-is-back-a-self-spreading-supply-chain-attack-hits-the-npm-ecosystem
Socket (six-minute detection, dead-man’s switch analysis)
https://socket.dev/blog/tanstack-npm-packages-compromised-mini-shai-hulud-supply-chain-attack
Wiz (triple-channel C2 architecture, Session messenger network)
https://www.wiz.io/blog/mini-shai-hulud-strikes-again-tanstack-more-npm-packages-compromised
Snyk (SLSA provenance failure, wave history, shared authorship indicators)
https://snyk.io/blog/tanstack-npm-packages-compromised/
Endor Labs (OIDC trusted publisher configuration analysis)
https://www.endorlabs.com/learn/shai-hulud-compromises-the-tanstack-ecosystem-80-packages-compromised
Aikido Security (initial detection and campaign context)
https://www.aikido.dev/blog/mini-shai-hulud-is-back-tanstack-compromised
OX Security (170 packages, 518M downloads scope)
https://www.ox.security/blog/shai-hulud-here-we-go-again-170-packages-hit-across-npm-pypi/
Orca Security (worm propagation mechanics, source code publication)
https://orca.security/resources/blog/tanstack-npm-supply-chain-worm/
Phoenix Security (OIDC token extraction technical detail, wave timeline)
https://phoenix.security/mini-shai-hulud-teampcp-tanstack/
