
Foreign threat actors adopting ChatGPT, AI to bolster "old playbook" of attacks, OpenAI finds

Foreign adversaries are building AI into their existing workflows, from crafting phishing campaigns and tweaking malware to generating propaganda and researching ways to automate stages of the cyber kill chain, according to a new report from OpenAI.

Overall, OpenAI said it banned several Russian-speaking criminal accounts that attempted to use its GPT models “to help develop and refine malware, including a remote-access trojan, credential stealers, and features to evade detection,” usage more aligned with refining offensive tooling than with executing attacks.

Besides malware, one case featured Korean operators attempting to use LLMs for command-and-control (C2) development, crafting cryptocurrency-themed phishing content, and experimenting with HTML obfuscation and reCAPTCHA proxying to make phishing login pages more convincing.

To read the complete article, see: Full Article

This post is licensed under CC BY 4.0 by the author.