
List of AI Tools Promoted by Threat Actors in Underground Forums and Their Capabilities

Summary

Analysis of underground advertisements reveals striking commonalities across malicious AI platforms. Most notably, nearly every advertised tool emphasized its ability to support phishing campaigns. This near-universal focus reflects phishing’s continued dominance as the leading attack vector, with AI-generated phishing representing the top enterprise threat of 2025. Security analysts documented a 1,265% surge in phishing attacks driven by generative AI capabilities, with AI-written phishing proving just as effective as human-crafted lures while requiring significantly less time and skill.

Beyond phishing, underground AI tools advertised capabilities spanning malware development, vulnerability research, technical support for code generation, and reconnaissance operations. Several platforms, including WormGPT, FraudGPT, and MalwareGPT, promoted their ability to generate polymorphic malware that constantly changes to evade antivirus detection. This capability represents a significant escalation in threat sophistication, as Google researchers recently identified five new malware families using AI to regenerate their own code and hide from security software.

By 2025, the underground AI marketplace had evolved beyond simple jailbroken models to encompass sophisticated, multi-functional platforms. Xanthorox AI represents this next generation of malicious tools, marketing itself as the “Killer of WormGPT and all EvilGPT variants”. First detected in Q1 2025, Xanthorox distinguishes itself through a modular, self-hosted architecture that operates entirely on private servers rather than relying on public cloud infrastructure. This design drastically reduces detection and traceability risks while offering an all-in-one solution for phishing, social engineering, malware creation, deepfake generation, and vulnerability research.

The low barrier to entry exemplified by tools like Evil-GPT, reportedly advertised for as little as $10 per copy, demonstrates how AI has democratized sophisticated cybercrime capabilities. This accessibility enables financially motivated threat actors with limited technical expertise to conduct operations that previously required years of training. The FBI and multiple cybersecurity agencies have warned that AI greatly increases the speed, scale, and automation of phishing schemes while helping fraudsters craft highly convincing messages tailored to specific recipients.

To read the complete article, see: Cyber Security News

This post is licensed under CC BY 4.0 by the author.