Contrary to the widespread fear that artificial intelligence would instantly supercharge cybercrime, a new study suggests the reality is far more mundane. Research from the University of Edinburgh indicates that cybercriminals are struggling to integrate AI into their operations, finding the technology largely ineffective for sophisticated attacks.
While the digital underworld has expressed keen interest in AI tools, the technology has failed to revolutionize its methods. Instead of creating a new breed of “super-hackers,” AI has primarily served as a minor convenience for routine tasks, leaving complex criminal activities largely unchanged.
The Myth of the AI-Powered Hacker
The findings come from a comprehensive analysis of over 100 million forum posts scraped from underground communities via the CrimeBB database. By combining manual review with Large Language Model (LLM) analysis, researchers sought to determine if AI was enhancing the capabilities of malicious actors.
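The paper's actual pipeline is not reproduced here, but LLM-assisted triage of scraped posts generally looks something like the sketch below. Everything in it is an illustrative assumption rather than a detail from the study: the model name, the label set, and the `classify_post` helper are hypothetical.

```python
# Hypothetical sketch of LLM-assisted triage of scraped forum posts.
# Not the study's actual code; the model choice, labels, and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["ai_tool_review", "jailbreak_attempt", "scam_automation", "unrelated"]

def classify_post(post_text: str) -> str:
    """Ask a chat model to assign one coarse label to a forum post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Label the forum post with exactly one of: " + ", ".join(LABELS)},
            {"role": "user", "content": post_text[:4000]},  # truncate very long posts
        ],
        temperature=0,  # deterministic output simplifies manual spot-checking
    )
    label = response.choices[0].message.content.strip()
    return label if label in LABELS else "unrelated"  # fall back on unexpected output

print(classify_post("Tried WormGPT for phishing mails. Honestly useless."))
```

At the scale of 100+ million posts, labeling like this is typically run on a filtered subset, with humans reviewing samples to validate the labels, which matches the manual-plus-LLM approach the researchers describe.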
The results were stark: the researchers found no significant evidence that hackers have successfully used AI to improve their intrusion techniques, develop better malware, or bypass security measures more effectively.
“Many of the reviews and discussions describe [AI] tools as not particularly useful,” the study notes.
The core issue appears to be a skill gap. AI coding assistants are designed to augment existing programming knowledge, not replace it. For cybercriminals who lack deep technical expertise, AI offers little advantage. As one forum post quoted in the study bluntly stated: “You’ve gotta first learn the ropes of programming by yourself before you can use AI and ACTUALLY benefit from it.”
Where AI Is Actually Being Used
If AI isn’t helping hackers break into systems, what are they doing with it? The study identifies a narrow range of applications where AI has made a tangible, albeit limited, impact:
- Social Media Automation: Creating bots for engagement or spam.
- Romance Scams: Generating convincing but generic dialogue for fraudsters.
- SEO Fraud: Mass-producing low-quality content to manipulate search engine rankings.
- Fake Websites: Creating sites designed to harvest ad revenue through deceptive ranking strategies.
These activities are largely automated and do not require the sophisticated technical prowess that defines high-level cybercrime. For experienced hackers, the primary utility of AI remains trivial: using chatbots to answer basic coding questions or to generate quick-reference “cheatsheets.”
The Failure of Specialized Crime AI
Interestingly, the study found that cybercriminals are largely ignoring AI models specifically designed for illicit purposes, such as WormGPT, which was marketed to help write malware and phishing emails. Instead, they prefer mainstream, legitimate products like Anthropic’s Claude or OpenAI’s Codex.
This preference has created a new bottleneck. Because these legitimate models ship with robust safety guardrails, cybercriminals are constantly seeking ways to bypass them. The research suggests these efforts are largely failing: hackers are finding it difficult to “jailbreak” or override the safety settings of major AI providers.
Consequently, many are forced to pivot to older, open-source models that are easier to manipulate. These alternatives, however, are less capable and often require significant computational resources to run effectively, negating any potential efficiency gains.
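To put the resource point in perspective, here is a minimal sketch of what running an open-weight model locally involves, assuming the Hugging Face `transformers` library; the model name and memory figures are illustrative, not drawn from the study.

```python
# Illustrative only: loading an open-weight chat model locally.
# A ~7B-parameter model in 16-bit precision occupies roughly 14 GB of memory
# before it generates a single token -- the "significant computational
# resources" barrier described above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # assumption: any ~7B open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # ~2 bytes per parameter
    device_map="auto",          # needs `accelerate`; spills to CPU RAM if the GPU is small
)

prompt = "Explain what a Python decorator does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Without a capable GPU, generation falls back to CPU and can take minutes per response, which helps explain the study's finding that these workarounds negate any efficiency gains.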
Guardrails Are Holding
The broader implication of this study is reassuring for the cybersecurity industry. The safety mechanisms implemented by major AI developers are proving effective. Cybercriminals are not easily able to coerce these systems into generating harmful code or bypassing security protocols.
While the allure of AI-driven crime remains a potent narrative, the data suggests that human expertise remains the primary driver of sophisticated cyberattacks. AI, for now, is not a shortcut to success for the digital criminal; it is merely another tool that requires skill to wield effectively.
Conclusion
The integration of AI into cybercrime has stalled due to technical limitations and effective safety guardrails. Rather than empowering hackers, AI has largely been relegated to low-level automation tasks, underscoring that sophisticated cyber threats still depend on human skill rather than artificial assistance.