University of California researchers have discovered that some third-party AI large language model (LLM) routers can pose security vulnerabilities that can lead to crypto theft.
A paper measuring malicious intermediary attacks on the LLM supply chain, published on Thursday by the researchers, revealed four attack vectors, including malicious code injection and extraction of credentials.
“26 LLM routers are secretly injecting malicious tool calls and stealing creds,” said the paper’s co-author, Chaofan Shou, on X.
LLM agents increasingly route requests through third-party API intermediaries or routers that aggregate access to providers like OpenAI, Anthropic and Google. However, these routers terminate Internet TLS (Transport…







