{"id":14354,"date":"2025-06-30T08:33:32","date_gmt":"2025-06-30T08:33:32","guid":{"rendered":"https:\/\/newestek.com\/?p=14354"},"modified":"2025-06-30T08:33:32","modified_gmt":"2025-06-30T08:33:32","slug":"cybercriminals-take-malicious-ai-to-the-next-level","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14354","title":{"rendered":"Cybercriminals take malicious AI to the next level"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Cybercriminals have begun refining malicious large language models (LLMs) using underground forum posts and breach dumps to tailor AI models for specific fraud schemes, threat intel firm Flashpoint warns.<\/p>\n<p>More specifically, fraudsters are fine-tuning illicit LLMs \u2014 including <a href=\"https:\/\/www.csoonline.com\/article\/4008912\/wormgpt-returns-new-malicious-ai-variants-built-on-grok-and-mixtral-uncovered.html\">WormGPT<\/a> and FraudGPT \u2014 using malicious datasets such as breached credentials, scam scripts, and infostealer logs. 
As adversaries use these models to generate outputs, they gather user feedback to fine-tune responses, creating a loop where offensive capability keeps improving over time.<\/p>\n<p><strong>[ See also: <a href=\"https:\/\/www.csoonline.com\/article\/3819176\/top-5-ways-attackers-use-generative-ai-to-exploit-your-systems.html\">Top 5 ways attackers use generative AI to exploit your systems<\/a> ]<\/strong><\/p>\n<p>\u201cThis trend is particularly concerning because it demonstrates adversaries \u2018closing the loop on model tuning\u2019 \u2014 their offensive capabilities constantly improving over time through real-time feedback and illicit data,\u201d Ian Gray, Flashpoint VP of cyber threat intelligence, tells CSO.<\/p>\n<p>Flashpoint has also observed private chat groups where users submitted failed prompt attempts back to LLM developers, leading to rapid iteration and improved performance within days. In one instance, a user reported formatting issues with a financial fraud prompt, and shortly after, the developer shared an updated version with refined templates, Flashpoint observed.<\/p>\n<p>\u201cThis adaptive and self-improving nature of malicious AI, fueled by compromised data and criminal collaboration, makes it an especially potent and difficult threat to counter,\u201d Gray says.<\/p>\n<p>Cybercriminals are tailoring AI models for specific fraud schemes, including <a href=\"https:\/\/www.csoonline.com\/article\/3850783\/11-ways-cybercriminals-are-making-phishing-more-potent-than-ever.html\">generating phishing emails<\/a> tailored by sector or language, as well as writing fake job posts, invoices, or verification prompts.<\/p>\n<p>\u201cSome vendors even market these tools with tiered pricing, API access, and private key licensing, mirroring the [legitimate] SaaS economy,\u201d Flashpoint researchers found.<\/p>\n<p>\u201cThis specialization leads to potentially greater success rates and automated complex attack stages,\u201d Flashpoint\u2019s Gray tells 
CSO.<\/p>\n<h2 class=\"wp-block-heading\" id=\"deepfake-as-a-service-goes-mainstream\">Deepfake as a service goes mainstream<\/h2>\n<p>Cybercrime vendors are also lowering the barrier for creating synthetic video and voice, with deepfake as a service (DaaS) offerings that include:<\/p>\n<ul class=\"wp-block-list\">\n<li>Custom face generation for dating scams<\/li>\n<li>Audio spoofing for voice verification fraud<\/li>\n<li>On-demand video avatars that lip-sync based on customer-submitted scripts<\/li>\n<\/ul>\n<p>These services are increasingly offered with add-ons such as pre-loaded backstories, matching fake documents, and automated scheduling for calls.<\/p>\n<h2 class=\"wp-block-heading\" id=\"prompt-engineering-as-a-service\">Prompt engineering as a service<\/h2>\n<p>Underground communities have also emerged around the art of crafting jailbreak prompts.<\/p>\n<p>These \u201cbypass builders\u201d specialize in defeating guardrails of mainstream LLMs (e.g., ChatGPT or Gemini) to unlock restricted outputs such as social engineering scripts, step-by-step hacking tutorials, and bank fraud playbooks, including \u201cknow your customer\u201d (KYC) bypass guides.<\/p>\n<p>\u201cThis \u2018prompt engineering as a service\u2019 (PEaaS) lowers the barrier for entry, allowing a wider range of actors to leverage sophisticated AI capabilities through pre-packaged malicious prompts,\u201d Gray warns.<\/p>\n<p>\u201cTogether, these trends create an adaptive threat: tailored models become more potent when refined with illicit data, PEaaS expands the reach of threat actors, and the continuous refinement ensures constant evolution against defenses,\u201d he says.<\/p>\n<h2 class=\"wp-block-heading\" id=\"deep-dive\">Deep dive<\/h2>\n<p>Flashpoint analysts tracked these developments in real time across more than 100,000 illicit sources, monitoring everything from dark web marketplaces and Telegram groups to underground LLM communities.<\/p>\n<p>Between Jan. 
1 and May 30, 2025, the researchers logged more than 2.5 million AI-related posts covering various nefarious tactics, including jailbreak prompts, deepfake service ads, phishing toolkits, and bespoke language models built for fraud and other forms of cybercrime.<\/p>\n<h2 class=\"wp-block-heading\" id=\"underground-llm-tactics-and-strategies\">Underground LLM tactics and strategies<\/h2>\n<p><a href=\"https:\/\/blog.talosintelligence.com\/cybercriminal-abuse-of-large-language-models\/\">Related research from Cisco Talos warns<\/a> that cybercriminals continue to adopt LLMs to streamline their processes, write tools and scripts that can be used to compromise users, and generate content that can more easily bypass defenses.<\/p>\n<p>Talos observed cybercriminals resorting to uncensored LLMs or even custom-built criminal LLMs for illicit purposes.<\/p>\n<p>Advertised features of malicious LLMs suggest that cybercriminals are linking these systems to various external tools to scan sites for vulnerabilities, verify stolen credit card numbers, and perform other malicious actions.<\/p>\n<p>At the same time, adversaries are often jailbreaking legitimate models faster than LLM developers can secure them, Talos warns.<\/p>\n<h2 class=\"wp-block-heading\" id=\"defense-against-the-dark-ai-arts\">Defense against the dark (AI) arts<\/h2>\n<p><a href=\"https:\/\/flashpoint.io\/blog\/ai-threat-intelligence-defenders-guide\/\">Flashpoint\u2019s \u201cAI and Threat Intelligence: The Defenders\u2019 Guide\u201d<\/a> explains that while AI is a double-edged sword in cybersecurity, defenders who thoughtfully integrate AI into their threat intelligence and response workflows can outpace adversaries.<\/p>\n<p>Enterprises need to balance automation with expert analysis, separate hype from reality, and continuously adapt to the rapidly evolving threat landscape.<\/p>\n<p>\u201cDefenders should start by viewing AI as an augmentation of human expertise, not a replacement,\u201d 
Flashpoint\u2019s Gray says. \u201cThis philosophy ensures AI strengthens existing workflows, driving value by reducing noise and accelerating decision-making, rather than creating new blind spots.\u201d<\/p>\n<p>Gray adds: \u201cThe organizing principle should enhance their collection advantage by utilizing AI to derive insights from high-signal data, accelerating discovery, and structuring unstructured content. Ultimately, the aim is to improve efficiency by empowering analysts with tools that assist their judgment, maintain human control, and provide context.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Cybercriminals have begun refining malicious large language models (LLMs) using underground forum posts and breach dumps to tailor AI models for specific fraud schemes, threat intel firm Flashpoint warns. More specifically, fraudsters are fine-tuning illicit LLMs \u2014 including WormGPT and FraudGPT \u2014 using malicious datasets such as breached credentials, scam scripts, and infostealer logs. 
As adversaries use these models to generate outputs, they gather user&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14354\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14354","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14354","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14354"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14354\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14354"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14354"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14354"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}