AI-enabled scams rose 500% in 2025 as crypto theft goes ‘industrial’
The use of large language models in scams increased fivefold in 2025, according to TRM Labs, as fraudsters leveraged the technology to boost outreach, make scams more convincing and launch them at scale.
The rise comes as $35 billion in cryptocurrency was sent to scammer addresses in 2025, down slightly from $38 billion the previous year, TRM Labs said in its 2026 crypto crime report on Wednesday.
“Large language models (LLMs) enable scams to cross language and cultural contexts with less friction, while AI-generated images, voice cloning, and deepfake videos reduce the cost of creating convincing personas,” the firm said.
In March last year, at least three crypto founders reported foiling an attempt by alleged North Korean…




