Threat Intel · April 21, 2026 · 4 min read · By Forum Desk

Criminals Are Still Skeptical About AI — And That's the Real Story

A new forum-analysis study finds cybercriminals are curious about AI but full of doubts about its effectiveness and its operational-security cost. The diffusion-of-innovation lens reframes the 2026 threat story from 'AI arms race' to 'early adoption, slow uptake.'

  • #threat-intel
  • #ai-security
  • #policy

A new academic study analyzing seven months of underground-forum chatter lands on a finding that cuts against the prevailing threat-intel narrative of the last two years: most cybercriminals are still unconvinced by AI. Bruce Schneier highlighted the paper on April 14, drawing attention to a conclusion that deserves a wider audience than the usual “AI arms race” coverage allows.

What the study actually found

The paper — What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation — applied a diffusion-of-innovation framework to more than 160 cybercrime-forum conversations. The researchers catalogued attempts to misuse commercial models, discussions of purpose-built “criminal” LLMs, and the community’s reactions to both. Schneier’s summary captures the dual finding bluntly: cybercriminals show “growing curiosity about AI’s criminal applications” alongside “doubts and anxieties” about effectiveness, cost, and operational-security impact.

That pairing matters. The threads the researchers sampled are not a triumphant adoption curve. They read closer to an early-1990s Linux mailing list — people asking whether the new thing will actually work for their workflow, whether it will get them caught, and whether the jailbroken API key they are about to pay for is a law-enforcement honeypot. Adoption is beginning. Dominance is not.
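To make the framing concrete, here is a minimal sketch of what a diffusion-stage coding pass over forum posts could look like. The stage labels loosely follow Rogers' diffusion-of-innovation vocabulary, but the keyword lists, function name, and sample posts are illustrative assumptions, not the paper's actual coding scheme.

```python
# Hypothetical sketch: tagging forum posts by diffusion-of-innovation stage.
# Stage labels loosely follow Rogers' adoption model; the keyword lists are
# invented placeholders, NOT the study's real coding scheme.

STAGE_KEYWORDS = {
    "awareness": ["has anyone tried", "what is", "heard about"],
    "evaluation": ["does it work", "worth it", "opsec", "honeypot"],
    "trial": ["tested", "tried it on", "results were"],
    "adoption": ["using it for", "added to my workflow", "switched to"],
    "rejection": ["waste of money", "useless", "went back to"],
}

def tag_post(text: str) -> list[str]:
    """Return every diffusion stage whose keywords appear in the post."""
    lowered = text.lower()
    return [
        stage
        for stage, phrases in STAGE_KEYWORDS.items()
        if any(phrase in lowered for phrase in phrases)
    ] or ["uncategorized"]

posts = [
    "Has anyone tried the new jailbroken model for lure drafting?",
    "Tested it on ten targets, results were mixed. Waste of money imo.",
]
for post in posts:
    print(tag_post(post), "<-", post)
```

Even this toy version shows why the method surfaces ambivalence: a single post can land in both "trial" and "rejection," which is exactly the curiosity-plus-doubt pattern the researchers describe.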

Why the skepticism is the signal

There is a reflex in threat-intel reporting to amplify every AI-enabled criminal proof-of-concept into a capability statement. The study is a useful corrective. The diffusion curve for criminal technology follows roughly the same shape as legitimate technology adoption — slow, skeptical, uneven — which means the defender’s window to get ahead of it is longer than the “AI doomsday” framing suggests.

For practitioners, this reorders the priority stack. The highest-signal category for the next twelve months is not “criminals may someday use AI for X.” It is “which specific, reproducible criminal workflows already show AI savings in forum evidence?” The study’s framing implies those workflows are still narrow: drafting phishing lures in target languages, producing passable scam scripts, and automating reconnaissance against easy-to-parse web surfaces. Those are the areas worth tuning detection engineering against first — not future-looking speculation about fully autonomous attack agents.
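As a thought experiment, that prioritization logic might look something like the sketch below: rank candidate detection work by observed forum evidence and discount pure speculation. The workflow names and counts are invented for illustration; they are not figures from the study.

```python
# Hypothetical sketch: ranking detection-engineering work by evidence,
# not speculation. Workflows and counts are invented for illustration;
# they are not data from the study.

from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    forum_evidence: int   # threads showing working criminal use of AI
    speculation: int      # threads that are hype or "someday" talk

    @property
    def signal(self) -> float:
        """Evidence-weighted score: reward observed use, discount hype."""
        return self.forum_evidence / (1 + self.speculation)

backlog = [
    Workflow("phishing lures in target languages", forum_evidence=40, speculation=10),
    Workflow("scam call/chat scripts", forum_evidence=25, speculation=15),
    Workflow("automated recon of web surfaces", forum_evidence=15, speculation=20),
    Workflow("fully autonomous attack agents", forum_evidence=1, speculation=60),
]

for wf in sorted(backlog, key=lambda w: w.signal, reverse=True):
    print(f"{wf.signal:5.2f}  {wf.name}")
```

The design choice is deliberate: speculation sits in the denominator, so a workflow that generates endless chatter but little demonstrated use sinks to the bottom of the queue.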

What this means

The policy takeaway is subtler and arguably more important. The paper explicitly positions its insights for law enforcement and policymakers, and the early-diffusion framing gives those audiences a credible, evidence-based footing that is not panic-driven.

If criminals are in the “kick-the-tires” phase rather than mid-adoption, regulators and AI platform operators have real leverage to make the adoption curve more expensive: tighter controls on API abuse, upstream detection of custom-model training on stolen data, and meaningful consequences for the resellers of jailbroken access. Threat-intel teams can contribute by feeding back which criminal workflows are not succeeding with AI — that evidence is as useful for platform policy as evidence of the ones that are.
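What "feeding back" could look like in practice is an open question; one lightweight option is a structured negative-finding record, sketched below. Every field name and value here is invented for illustration, not an existing exchange format or anything proposed by the paper.

```python
# Hypothetical sketch: a structured "negative finding" record a threat-intel
# team might share with a platform operator. All field names and values are
# invented for illustration; this is not an existing standard.

import json

negative_finding = {
    "reported": "2026-04-21",
    "workflow": "LLM-assisted malware obfuscation",   # invented example
    "observation": "forum users report the output fails basic AV checks",
    "evidence": {"threads": 7, "distinct_actors": 5},
    "confidence": "medium",
    "implication": "low platform-abuse priority; revisit quarterly",
}

print(json.dumps(negative_finding, indent=2))
```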

In short, the 2026 criminal-AI story is a diffusion story, not a singularity story. Treating it that way, in both reporting and defense, is a more durable posture than chasing every headline about a new underground model.