LiteLLM CVE-2026-42208: SQL Injection Exploited in 36 Hours — AI Gateway Credentials at Risk

CVE-2026-42208 in LiteLLM — the open-source AI gateway with 45K GitHub stars — was exploited within 36 hours of disclosure with no public PoC. A successful attack yields OpenAI org keys, Anthropic workspace admin keys, and AWS Bedrock credentials.


A critical SQL injection vulnerability in LiteLLM — the popular open-source AI gateway with 45,000 GitHub stars — was exploited within 36 hours of disclosure. Sysdig assesses that, given the high-value API keys stored in LiteLLM's database, a successful attack amounts to cloud account compromise rather than an ordinary web application data breach.

SAN FRANCISCO — Sysdig Threat Research published findings on April 30, 2026 documenting active exploitation of CVE-2026-42208, a critical SQL injection vulnerability in LiteLLM, the widely deployed open-source AI gateway used to route requests across OpenAI, Anthropic, AWS Bedrock, Azure OpenAI, and dozens of other LLM providers. Exploitation was observed within 36 hours of public disclosure, with attackers building working exploits directly from the advisory description; no proof-of-concept code was publicly available at the time. The blast radius of a successful database extraction goes well beyond a typical web application breach: a single LiteLLM credentials row typically holds an OpenAI organization key with five-figure monthly spend caps, an Anthropic console key with workspace admin rights, and an AWS Bedrock IAM credential. That makes successful exploitation closer to a full cloud account compromise than a standard SQL injection.


Vulnerability Profile

Vulnerability Intelligence: CVE-2026-42208 — LiteLLM SQL Injection
CVE: CVE-2026-42208 — critical SQL injection in the LiteLLM AI gateway
Affected Platform: LiteLLM — open-source AI gateway; 45,000+ GitHub stars, 7,600+ forks; used to proxy requests to OpenAI, Anthropic, AWS Bedrock, Azure, and 60+ LLM providers
Time to Exploitation: 36 hours from public disclosure — no public PoC was available; attackers built the exploit directly from the advisory description
Attack Type: SQL injection via error logs — untrusted input reaches the vulnerable query through the error-logging path
Blast Radius: Full litellm_credentials database extraction — OpenAI org keys (five-figure spend caps), Anthropic workspace admin keys, AWS Bedrock IAM credentials, Azure API keys
Prior Campaign Context: LiteLLM was previously compromised in the March 2026 TeamPCP supply chain attack — attackers are already familiar with LiteLLM's credential architecture
Interim Mitigation: Set "disable_error_logs: true" under "general_settings" in the LiteLLM config — removes the vulnerable query path until the patch is applied
Patch Status: Patch available in the latest LiteLLM release — update immediately; no version-pinning workaround is sufficient
Exploitation Pattern: Sysdig TRT observed attackers connecting to the vulnerable endpoint and beginning credential extraction within 3 minutes of initial access
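The advisory identifies the flaw only as SQL injection reachable through the error-logging path; LiteLLM's actual vulnerable query has not been published. As a generic, hypothetical illustration of the bug class (not LiteLLM's code), the sketch below shows how SQL assembled by string formatting lets a crafted input dump an entire credentials table, while the parameterized equivalent treats the same input as inert data:

```python
import sqlite3

# Stand-in schema loosely modeled on a credentials store; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE credentials (owner TEXT, api_key TEXT)")
conn.executemany("INSERT INTO credentials VALUES (?, ?)",
                 [("alice", "sk-alice"), ("bob", "sk-bob")])

def lookup_unsafe(owner: str):
    # VULNERABLE pattern: attacker-controlled text is spliced into the SQL
    # string, so a crafted value can rewrite the WHERE clause.
    return conn.execute(
        f"SELECT api_key FROM credentials WHERE owner = '{owner}'").fetchall()

def lookup_safe(owner: str):
    # Parameterized query: the driver treats the value as data, never as SQL.
    return conn.execute(
        "SELECT api_key FROM credentials WHERE owner = ?", (owner,)).fetchall()

payload = "nobody' OR '1'='1"
lookup_unsafe(payload)   # returns every key in the table
lookup_safe(payload)     # returns [] — no row has that literal owner name
```

The unsafe variant turns the lookup into `WHERE owner = 'nobody' OR '1'='1'`, which matches every row — the same shape of failure that lets a single injectable query path yield a full table extraction.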

Why LiteLLM Credentials Are Uniquely High-Value

LiteLLM's purpose is to be a unified gateway to all major LLM providers — which means its database is specifically designed to store all of an organization's AI provider credentials in one place. A single successful extraction of the litellm_credentials table may yield OpenAI organization-level API keys (which grant access to all models and fine-tunes under the organization, with spending limits in the tens of thousands of dollars per month), Anthropic console keys with workspace admin rights, AWS Bedrock IAM credentials with broad cloud service access, Azure OpenAI deployment keys, and credentials for dozens of other specialized AI providers.

Unlike a typical database breach that yields user PII, a LiteLLM credential extraction yields active API access to production AI infrastructure — immediately monetizable through unauthorized model usage, and immediately useful for AI-assisted attacks against the victim's other systems. This follows the March 2026 TeamPCP supply chain attack on LiteLLM, covered in our prior AI infrastructure vulnerability coverage. All vulnerability coverage is tracked on The CyberSignal.

The 36-Hour Exploitation Window and What It Means for AI Infrastructure

Sysdig's documentation of exploitation within 36 hours — with no public proof-of-concept available — reflects the same pattern seen in the Marimo RCE (exploited within 10 hours of disclosure) and Langflow CVE-2026-33017 (exploited within 20 hours). Attackers now routinely monitor security advisory publications and build functional exploits directly from technical descriptions in under two days. For AI infrastructure specifically, this means the standard "assess, test, schedule patch" vulnerability management workflow — which often takes weeks — is structurally incompatible with the actual exploitation timeline for widely used AI tools.

What to do now

Update LiteLLM to the latest release immediately — there is no version pinning workaround. If immediate patching is not possible, apply the interim mitigation: set "disable_error_logs: true" under "general_settings" in your LiteLLM configuration to remove the vulnerable query path. After patching, rotate all API keys stored in your LiteLLM credentials database as a precaution — given the 36-hour exploitation window, treat any unpatched instance as potentially compromised. Audit LiteLLM access logs for anomalous database query patterns. If you are running LiteLLM in a cloud environment, review cloud provider access logs for any unauthorized API usage on keys stored in LiteLLM.
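For instances that cannot be patched immediately, the interim mitigation described above can be sketched as a fragment of the LiteLLM proxy configuration file (a minimal sketch; any other keys in your existing config stay as they are):

```yaml
# LiteLLM proxy config — interim mitigation until the patched release is deployed
general_settings:
  disable_error_logs: true  # removes the vulnerable error-logging query path
```

Treat this strictly as a stopgap: updating to the patched release and rotating every key stored in the credentials database remains the required remediation.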


The CyberSignal Analysis

Signal 01 — AI Infrastructure Is Now a Primary Attack Target

LiteLLM has been targeted twice in six weeks — once via supply chain attack (March, TeamPCP) and once via direct vulnerability exploitation (April, CVE-2026-42208). This is not coincidence. AI infrastructure tools are disproportionately valuable as attack targets because they aggregate credentials across multiple high-value services in one database, run with elevated network access to route traffic to external AI providers, and are often deployed rapidly by engineering teams without the same security review applied to core production services.

Signal 02 — Credential Aggregators Are the New Password Managers for Attackers

LiteLLM, like any AI gateway, is a credential aggregator — its entire purpose is to store and route API keys. The same architectural pattern that makes it useful (one place to manage all AI credentials) makes it an extremely high-value single point of failure. Organizations that have implemented strong secrets management for their core infrastructure but store AI provider keys in a LiteLLM database without equivalent protection have created a new privileged attack surface in their environment.

Signal 03 — The Sub-48-Hour Exploit Development Timeline Requires a Policy Response

The industry assumption underlying most vulnerability management frameworks is that organizations have days to weeks between a vulnerability disclosure and active exploitation. For widely-used developer and AI tools, that assumption is now demonstrably false — exploitation in under 48 hours from disclosure, with no public PoC, is becoming a documented pattern. This requires a corresponding policy shift: critical AI and developer infrastructure vulnerabilities should trigger emergency patching procedures, not standard patch cycle inclusion.


Sources

Primary Research — The Hacker News: LiteLLM CVE-2026-42208 SQL Injection Exploited Within 36 Hours
Research — Sysdig TRT: LiteLLM CVE-2026-42208 — 36-Hour Exploitation Window Analysis
Context — The Register: Ongoing Supply Chain Campaign Targets Security and Dev Tools Including LiteLLM
Related — The CyberSignal: LMDeploy SSRF CVE-2026-33626 Exploited Within 12 Hours