LiteLLM Backdoored, LM Studio Flagged: AI Tools Face Supply Chain Threats

Two major security incidents hit the AI developer ecosystem within hours of each other on March 24, 2026: backdoored versions of LiteLLM, the popular LLM API proxy, were published to PyPI carrying a multi-stage credential stealer, while LM Studio users reported Windows Defender flagging the local AI tool as a trojan. Together, the incidents underscore how the software supply chain powering AI workflows has become a high-value target for attackers.


[Illustration generated by AI: a software supply chain attack, visualized as interconnected package nodes with compromised nodes among safe ones]

LiteLLM: A Real Supply Chain Compromise

LiteLLM is an open-source Python library used by thousands of developers and enterprises to route API calls across LLM providers like OpenAI, Anthropic, and Google. On the morning of March 24, the threat actor group TeamPCP published two backdoored versions — 1.82.7 and 1.82.8 — to the Python Package Index (PyPI).

The attack was the final link in a chain that began with TeamPCP’s earlier compromise of Trivy, an open-source security scanner. LiteLLM used Trivy in its CI/CD pipeline; through that compromised dependency, TeamPCP obtained a maintainer’s PyPI credentials and used them to push malicious releases.

The two versions used different injection techniques:

  • Version 1.82.7 embedded a base64-encoded payload inside litellm/proxy/proxy_server.py, which executed whenever anything imported the proxy module.
  • Version 1.82.8 added a .pth file (litellm_init.pth) that runs automatically on every Python process startup when LiteLLM is installed — regardless of whether the library is actually imported.
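The .pth technique is worth understanding because it runs code with no import at all. Below is a benign sketch of the mechanism: Python's site machinery executes any line in a .pth file that begins with `import`, and `site.addsitedir` triggers the same processing that happens at interpreter startup. The file name demo_init.pth and the marker file are invented for this demo; the real attack used litellm_init.pth.

```python
import pathlib
import site
import tempfile

# A .pth file whose line begins with "import" is executed by the
# interpreter's site machinery, not merely appended to sys.path.
tmp = pathlib.Path(tempfile.mkdtemp())

# Benign stand-in payload: write a marker file to prove the line ran
# even though nothing ever imported a library.
marker = tmp / "ran.txt"
(tmp / "demo_init.pth").write_text(
    f"import pathlib; pathlib.Path({str(marker)!r}).write_text('executed')\n"
)

# addsitedir processes .pth files the same way startup does.
site.addsitedir(str(tmp))

print(marker.read_text())  # prints "executed"
```

This is why the 1.82.8 variant was the more dangerous of the two: every Python process on an affected machine, whether or not it touched LiteLLM, would run the payload at startup.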

The payload was a three-stage attack: a credential harvester sweeping SSH keys, cloud credentials, Kubernetes secrets, cryptocurrency wallets, and .env files; a Kubernetes lateral-movement toolkit deploying privileged pods to every node in a cluster; and a persistent systemd backdoor polling a command-and-control domain for additional binaries.

The compromised versions were available for approximately three hours before PyPI quarantined the package. Berri AI, which maintains LiteLLM, has engaged Google Mandiant for forensic analysis and paused all new releases pending a full supply-chain review. Users of the official LiteLLM Proxy Docker images were not affected, as those images pin dependency versions.

If you installed LiteLLM via pip during this window, the LiteLLM team urges rotating all credentials that were present as environment variables or config files on any affected system, inspecting filesystems for litellm_init.pth, and pinning to version 1.82.6 or earlier.
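Those checks are easy to script. The sketch below is a hypothetical triage helper, not official Berri AI tooling: it checks whether a compromised version is installed via importlib.metadata and sweeps site-packages directories for the litellm_init.pth persistence artifact.

```python
import site
from importlib import metadata
from pathlib import Path

# Versions named in the advisory.
COMPROMISED = {"1.82.7", "1.82.8"}


def check_litellm():
    """Return a list of findings for the March 24 LiteLLM advisory."""
    findings = []
    try:
        version = metadata.version("litellm")
        if version in COMPROMISED:
            findings.append(f"compromised litellm version installed: {version}")
    except metadata.PackageNotFoundError:
        pass  # litellm not installed in this environment
    # Sweep every site-packages directory for the persistence artifact.
    for d in site.getsitepackages() + [site.getusersitepackages()]:
        pth = Path(d) / "litellm_init.pth"
        if pth.exists():
            findings.append(f"suspicious .pth file: {pth}")
    return findings


print(check_litellm() or "no indicators found")
```

A clean result here does not replace credential rotation: the payload exfiltrated secrets during the exposure window, so anything present in environment variables or config files on an affected host should be treated as compromised.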

LM Studio: False Alarm, Real Anxiety

Separately, users of LM Studio 0.4.7 — the popular desktop application for running local LLMs — reported that a Windows Defender update began flagging the app as Trojan:JS/GlassWorm.ZZ!MTB, quarantining files and rendering the application unusable.

The timing was alarming because GlassWorm is a real and active threat: a supply-chain campaign that has compromised over 400 GitHub repositories, npm packages, and VS Code extensions since late 2025. GlassWorm uses invisible Unicode characters to hide malicious payloads in source code and leverages the Solana blockchain as a decentralized command-and-control channel.
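The invisible-character trick is detectable with simple tooling. As a rough illustration (not GlassWorm-specific tooling), the sketch below scans source text for characters in Unicode category Cf — format characters such as zero-width spaces and bidi controls that render as nothing in most editors — and reports where they hide.

```python
import unicodedata


def find_invisible(source: str):
    """Return (line, column, codepoint) for each invisible character.

    Characters in category Cf (format) — zero-width joiners, BOMs,
    bidirectional controls — are invisible in most editors, which is
    how GlassWorm-style payloads hide inside readable source code.
    """
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if unicodedata.category(ch) == "Cf":
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits


# Hypothetical snippet with a zero-width space hidden in an identifier.
tainted = "const token\u200b = fetchSecret();\n"
print(find_invisible(tainted))  # [(1, 12, 'U+200B')]
```

Running a check like this in CI over vendored dependencies and extension code is a cheap guard against this class of concealment.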

However, security analysis determined that the LM Studio detection was a false positive. Only 1 out of 62 antivirus engines on VirusTotal flagged the file, and the flagged code contained only legitimate application strings — standard webpack-bundled Electron patterns that triggered the overly broad GlassWorm signature. The LM Studio team confirmed they do not use LiteLLM and stated the detection stemmed from obfuscated JavaScript patterns common in bundled Electron apps. Microsoft has been advised to adjust the detection signature.

What This Means for AI Developers

These incidents highlight a growing pattern: as AI tools become critical infrastructure, their supply chains become prime targets. The LiteLLM compromise is particularly notable because the attackers didn’t target LiteLLM directly — they compromised a security tool (Trivy) that LiteLLM relied on, then pivoted through the dependency chain. Meanwhile, the LM Studio false positive shows how legitimate AI tools can become collateral damage when threat signatures are too broad.

Practical steps for AI developers:

  • Pin dependency versions in production environments and CI/CD pipelines
  • Enable two-factor authentication on package registry accounts (PyPI, npm)
  • Monitor for unexpected .pth files in Python site-packages directories
  • Audit CI/CD dependencies — security scanners and linters are high-value targets precisely because they run with elevated trust
  • Use lockfiles and hash verification to detect tampered packages before installation
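The last point is the one that would have caught this attack even with valid maintainer credentials: a re-uploaded artifact cannot match a previously pinned hash. The sketch below shows the core of what pip's `--require-hashes` mode does; the file name and contents are invented for the example.

```python
import hashlib
import tempfile
from pathlib import Path


def verify_artifact(path, expected_sha256):
    """Compare a downloaded package file against a pinned hash.

    Mirrors the idea behind `pip install --require-hashes`: the
    install is refused unless every artifact matches a hash recorded
    in the lockfile, so a tampered upload is caught before it runs.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected_sha256


# Hypothetical example: hash a local file and verify it round-trips.
sample = Path(tempfile.mkdtemp()) / "sample_pkg.tar.gz"
sample.write_bytes(b"example package contents")
pinned = hashlib.sha256(b"example package contents").hexdigest()

print(verify_artifact(sample, pinned))  # True
```

In practice this means generating a requirements file with hashes (e.g. via `pip-compile --generate-hashes`) and installing with `pip install --require-hashes -r requirements.txt`, so a .pth-carrying re-release of a pinned version fails at install time rather than executing at startup.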
