Attackers exploited the leak of Anthropic's Claude Code source to create malicious GitHub repos promising 'enterprise features.' The ZIP archive installs the Vidar info-stealer and GhostSocks proxy malware, a direct consequence of the March npm leak.
CVE-2026-33017, a critical code injection flaw in the Langflow AI agent framework, was weaponized within hours of disclosure, and CISA has added it to the Known Exploited Vulnerabilities (KEV) catalog. Here's what OpenClaw users need to know about the accelerating AI supply-chain threat.
CrowdStrike attributes the supply-chain attack on Axios, one of npm's most popular HTTP libraries, to STARDUST CHOLLIMA, a DPRK-nexus threat actor. The compromise deployed the cross-platform ZshBucket malware to Linux, macOS, and Windows — and Axios is downloaded tens of millions of times per week.
Mercor — which provides training data to OpenAI, Anthropic, and Meta — confirmed it was compromised via the TeamPCP supply-chain attack on LiteLLM. Lapsus$ claims 4TB of stolen data including source code, Slack logs, and recordings of AI-contractor conversations. This is the first confirmed high-profile casualty of the attack we covered last week.
A loose crew of young hackers called TeamPCP cascaded through Trivy, LiteLLM, Checkmarx KICS, and Telnyx in March 2026 — stealing cloud credentials from millions of AI developers. The FBI issued a critical alert. Here's what happened, what it means for AI infrastructure, and what OpenClaw users should check.
CERT/CC disclosed four vulnerabilities in CrewAI — including a critical RCE rated CVSS 9.6 — that chain together through prompt injection. The flaws expose a systemic pattern: AI agent frameworks that silently downgrade security when infrastructure isn't perfect.
Snyk unveils Agent Security and the GA of Evo AI-SPM at RSAC 2026: a full-lifecycle enforcement architecture that secures AI coding agents like Claude Code, Cursor, and Devin across the environment, artifact, and behavior layers, via Agent Scan, Studio, and Agent Guard.
After 39 malicious skills delivered macOS malware through OpenClaw registries, Chainguard is applying its container security playbook to AI agent skills — with continuous hardening, scoped permissions, and full audit trails.
Five malicious Rust crates targeted CI/CD pipelines to steal developer secrets. Meanwhile, an AI-powered bot called hackerbot-claw exploited GitHub Actions to hijack the Trivy security scanner and weaponize AI coding assistants against their own users.
Huntress researchers discovered malicious OpenClaw installers promoted through Bing AI search results, delivering info-stealers and proxy malware. Here's what happened and how to protect yourself.
Two critical CVEs in Anthropic's Claude Code abused MCP configuration to achieve remote code execution and API key theft. What OpenClaw users should know about supply-chain attacks on AI agents.