Hackers Push Back Against AI Posts on Underground Forums

Wired AI

Key Points

  • Study examined 97,895 AI‑related posts on cybercrime forums from late 2022 to 2023.
  • Forum members repeatedly demanded an end to AI‑generated content, calling it "AI shit."
  • Researchers found AI has not lowered the skill barrier for sophisticated attacks.
  • AI mainly boosted automated scams such as SEO fraud, bots and romance scams.
  • Flashpoint noted sophisticated actors can bypass AI guardrails but remain cautious of AI‑laden marketplaces.
  • A minority suggested limited AI assistance for grammar, not full post generation.
  • Proposals for AI‑enhanced cybercrime markets met with strong opposition from forum users.

Researchers tracking conversations on cybercrime message boards have found a growing backlash against generative‑AI content. From late 2022 through 2023, forum members complained that AI‑generated tutorials, bullet‑point explainers and low‑quality posts cluttered their spaces and threatened the credibility of seasoned hackers. The study, led by the University of Edinburgh and supported by Cambridge and Strathclyde, analyzed nearly 98,000 AI‑related threads and documented the community's uneasy relationship with the AI tools its members are beginning to use.

Cybercrime forums that once thrived on human‑to‑human exchange are now wrestling with a new irritant: generative artificial intelligence. A joint study by researchers at the University of Edinburgh, the University of Cambridge and the University of Strathclyde examined almost 98,000 AI‑related posts on underground marketplaces and discussion boards from the launch of ChatGPT in late 2022 through the end of 2023. The analysis revealed a steady stream of complaints from forum members who felt AI‑generated content lowered the quality of discourse and threatened their reputations.

“People don’t like it,” said Ben Collier, a senior lecturer in security research at Edinburgh, referencing the tone of dozens of posts that explicitly demanded an end to AI‑driven explanations. On Hack Forums, a well‑known space for hacking enthusiasts, users posted messages such as “Stop posting AI shit” and lamented that newcomers were using AI to churn out basic tutorials without any effort. The sentiment echoed across multiple Russian‑origin platforms, where seasoned scammers and script‑kiddies alike rely on reputation scores to secure deals and trade stolen data.

Researchers traced the rise of AI chatter to the broader hype surrounding large language models. Early in the ChatGPT rollout, many low‑level actors experimented with AI to automate mundane tasks: drafting phishing scripts, translating social‑engineering messages or generating code snippets. While some praised the efficiency gains, a vocal faction remained skeptical, arguing that reliance on AI diluted the skill‑based culture that underpins these communities.

Flashpoint, a cyber‑intelligence firm, observed a parallel trend. Its analysts noted that more sophisticated threat actors were already circumventing the guardrails built into commercial models, but they remained wary of AI‑laden marketplaces. “There are weaknesses and vulnerabilities, sometimes exposing the underlying infrastructure,” said Ian Gray, Flashpoint’s vice president of intelligence, describing the potential fallout of unchecked AI integration.

The study found that AI’s impact on the underground economy was uneven. It has not dramatically lowered the barrier to entry for complex attacks, nor has it upended established business models. Instead, AI appears to have accelerated already automated schemes, such as SEO fraud, social‑media bots and romance scams. In those niches, AI‑generated content can scale quickly, flooding the internet with low‑cost, high‑volume spam.

Despite the irritation, a minority of forum members expressed a nuanced view. Some suggested an AI assistant could help polish grammar or structure posts, drawing a line at fully automated submissions. “An AI generator for posts would turn this into a clanker forum of AI’s talking to each other,” one user warned, underscoring the desire to preserve human interaction as the core value of these spaces.

Flashpoint also reported discussions about an “AI‑enhanced” cybercrime market, a proposal to streamline the sale of stolen credentials using AI‑driven matchmaking. The idea sparked fierce opposition, with one poster labeling it “a stupid fucking idea to put AI into your market.” The backlash highlights a broader cultural clash: underground actors value the perception of expertise and fear that AI could erode the mystique that fuels their illicit reputations.

Overall, the research paints a picture of a community in transition. While AI tools offer undeniable convenience, the social fabric of hacking forums—built on trust, reputation and a sense of camaraderie—resists wholesale automation. The findings suggest that any future integration of AI into cybercrime ecosystems will have to contend with a wary audience that prefers human ingenuity over machine‑generated shortcuts.

Tags: cybercrime, hacking forums, generative AI, ChatGPT, cybersecurity research, underground markets, AI-generated content, Flashpoint, security researchers, online fraud