Cybercriminals have been trying to integrate artificial intelligence into their illicit activities, but a new study from the University of Edinburgh suggests they are largely coming away empty-handed. The research, which examined more than 100 million posts from underground forums via the CrimeBB database, found that AI has not yet delivered a meaningful boost to hacking or fraud operations.
The study, currently available as a preprint, used both manual analysis and a large language model to sift through discussions on forums where hackers and scammers share techniques. The conclusion: while there is genuine curiosity about AI, the technology has not fundamentally altered how cybercriminals work. “Many of the reviews and discussions describe [AI] tools as not particularly useful,” the authors note.
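The paper does not publish its analysis pipeline, but the basic idea of LLM-assisted triage of forum posts can be sketched in a few lines. The model name, prompt, and label set below are illustrative assumptions rather than the authors' actual setup, and the example assumes the `openai` Python client with an API key in the environment.

```python
# Minimal sketch of LLM-assisted classification of underground-forum posts,
# in the spirit of the study's mixed manual/LLM methodology.
# Model, prompt, and labels are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

LABELS = ["uses_ai_tool", "discusses_ai_tool", "unrelated_to_ai"]

def classify_post(post_text: str) -> str:
    """Ask the model to assign one coarse label to a single forum post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the study does not name one
        messages=[
            {
                "role": "system",
                "content": "Label the forum post with exactly one of: "
                           + ", ".join(LABELS) + ". Reply with the label only.",
            },
            {"role": "user", "content": post_text[:4000]},  # truncate long posts
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip()
    return label if label in LABELS else "unrelated_to_ai"

if __name__ == "__main__":
    print(classify_post("Tried WormGPT for phishing templates, honestly useless."))
```

At the scale of 100 million posts, a pipeline like this would typically be used as a coarse filter, with human analysts reviewing a sample of the model's labels, which is broadly how the study describes combining automated and manual analysis.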
Why AI Fails Hackers
One key finding is that AI coding assistants, such as those offered by OpenAI or Anthropic, are most beneficial to people who already possess strong programming skills. For hackers trying to break into systems or find security vulnerabilities, these tools provide little advantage. As one forum user quoted in the study put it: “You’ve gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it.”
The researchers found “no significant evidence” that AI has helped hackers improve their core activities, whether as a learning aid or in developing more effective malicious tools. Instead, the main impact has been limited to easily automated tasks like creating social media bots, running romance scams, or conducting search engine optimization fraud—where fake websites are pushed up in rankings to generate ad revenue.
Even experienced hackers appear to use chatbots mainly for mundane purposes, such as answering coding questions or generating “cheatsheets.” The study notes that the AI tools actually being used are “mainstream and legitimate products” like Anthropic’s Claude or OpenAI’s Codex, rather than specialized cybercrime models like WormGPT, which was designed to produce malware or phishing emails.
Guardrails Holding Up
A significant portion of the forum posts analyzed involved hackers seeking ways to bypass the safety restrictions built into mainstream AI models. However, the study found that these attempts are largely unsuccessful. Cybercriminals are often forced to fall back on older, lower-quality open-source AI models that are easier to jailbreak but “require significant resources” and are less useful.
The findings suggest that the guardrails implemented by AI companies are holding, at least for now. That picture contrasts with AI's rapid progress in legitimate fields: other European research, such as a study showing AI models matching or beating doctors in complex medical reasoning, underscores how capable the same technology can be when put to lawful use.
The study also underscores a broader point: AI's transformative potential in cybersecurity cuts both ways. While defenders are exploring AI for threat detection, criminals are struggling to weaponize it effectively. This dynamic feeds into a wider European conversation about digital resilience, one that also covers issues such as psychosocial risks facing tech workers.
For now, the University of Edinburgh research offers a cautious reassurance: the hype around AI-powered cybercrime may be overblown. But as the technology evolves, so too will the cat-and-mouse game between security teams and those who seek to exploit it.


