{"id":15290,"date":"2025-12-09T07:12:17","date_gmt":"2025-12-09T07:12:17","guid":{"rendered":"https:\/\/newestek.com\/?p=15290"},"modified":"2025-12-09T07:12:17","modified_gmt":"2025-12-09T07:12:17","slug":"ignoring-ai-in-the-threat-chain-could-be-a-costly-mistake-experts-warn","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=15290","title":{"rendered":"Ignoring AI in the threat chain could be a costly mistake, experts warn"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>As AI adoption accelerates across enterprises \u2014 and among digital adversaries \u2014 a heated debate has erupted over whether AI\u2019s role in the cyber threat chain should be a top concern for CISOs.<\/p>\n<p>A <a href=\"https:\/\/doublepulsar.com\/cyberslop-meet-the-new-threat-actor-mit-and-safe-security-d250d19d02a4\">vocal handful of experts<\/a>, along with <a href=\"https:\/\/socket.dev\/blog\/security-community-slams-mit-linked-report-claiming-ai-powers-80-of-ransomware\">one cybersecurity vendor<\/a>, insist that warnings about AI-enhanced threats are exaggerated hype pushed by cyber-intel firms and AI companies eager to sell new defensive tools.<\/p>\n<p>\u201cYou have all these people worrying about hypothetical scenarios in which AI just magically bypasses all cybersecurity policies and technologies,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/malwaretech\/\">Marcus Hutchins<\/a>, principal threat researcher at Expel, tells CSO. 
\u201cWhat you actually have is executives moving away from tried and tested cybersecurity policies, tools, and mitigations, and gravitating toward generative AI products that are unproven and most likely aren\u2019t going to work when it actually comes down to it.\u201d<\/p>\n<p>But most frontline practitioners and veteran threat-intel leaders sharply disagree. They argue that AI-assisted threats are not speculative \u2014 <a href=\"https:\/\/www.csoonline.com\/article\/4025139\/novel-malware-from-russias-apt28-prompts-llms-to-create-malicious-windows-commands.html\">they\u2019re already here<\/a> \u2014 and that dismissing them puts organizations at risk as increasingly agile adversaries experiment with AI to speed and scale their attacks.<\/p>\n<p>\u201cWe are absolutely seeing AI used in capabilities that traditional malware doesn\u2019t have,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/stevenstone618\/\">Steve Stone<\/a>, SVP of threat discovery and response at SentinelOne, tells CSO. \u201cWe see AI being used to refine malware much quicker, used as a sidekick to generate code, or deployed for social engineering. 
Across the attack lifecycle, attackers are using AI.\u201d<\/p>\n<p>Two recent research reports underscore the view that AI is a growing \u2014 and potentially more dangerous \u2014 part of the cyberattack cycle, and suggest that CISOs might be running out of time to assess how well they can defend against adversaries who currently hold a significant speed advantage.<\/p>\n<h2 class=\"wp-block-heading\" id=\"evidence-of-ai-usage-in-the-attack-chain-is-mounting\">Evidence of AI usage in the attack chain is mounting<\/h2>\n<p>Although many leading cybersecurity and AI companies, including <a href=\"https:\/\/www.microsoft.com\/en-us\/security\/blog\/2025\/10\/30\/the-5-generative-ai-security-threats-you-need-to-know-about-detailed-in-new-e-book\/\">Microsoft<\/a> and <a href=\"https:\/\/cdn.openai.com\/threat-intelligence-reports\/7d662b68-952f-4dfd-a2f2-fe55b041cc4a\/disrupting-malicious-uses-of-ai-october-2025.pdf?utm_source=chatgpt.com\">OpenAI<\/a>, have issued reports detailing how AI can enhance cyberattacks, the two newest reports go further, suggesting that adversaries are moving beyond AI for simple productivity gains and beginning to integrate it more directly into their operational tooling.<\/p>\n<p>On Nov. 5, Google Threat Intelligence Group (GTIG) <a href=\"https:\/\/cloud.google.com\/blog\/topics\/threat-intelligence\/threat-actor-usage-of-ai-tools\">released a report<\/a> concluding that threat actors have entered a new operational phase of AI abuse, extending beyond the traditional productivity use of AI to create better phishing emails or write code faster, and are using tools that dynamically alter behavior mid-execution.
According to the report, \u201cgovernment-backed threat actors and cyber criminals are integrating and experimenting with AI across the industry throughout the entire attack lifecycle.\u201d<\/p>\n<p>Google identified five recent malware samples that were developed using AI, including the first use of \u201cjust in time\u201d <a href=\"https:\/\/www.csoonline.com\/article\/4085494\/google-researchers-detect-first-operational-use-of-llms-in-active-malware-campaigns.html\">AI in experimental malware families, such as PROMPTFLUX and PROMPTSTEAL<\/a>, that use large language models (LLMs) during execution.<\/p>\n<p>\u201cProductivity tools are probably, in terms of the overall picture, the biggest slice of the pie that we\u2019re seeing today, in terms of how [threat actors] are using LLMs and other gen AI tools for enabling their own capabilities,\u201d <a href=\"https:\/\/www.cyberwarcon.com\/billy-leonard\">Billy Leonard<\/a>, GTIG\u2019s global head of analysis of state-sponsored hacking and threats, tells CSO.<\/p>\n<p>Leonard sees a day coming soon when threat actors engage in prompt injection, where they manipulate an AI\u2019s model input to leak information or generate harmful content. So far, the AI-assisted attacks his group has witnessed don\u2019t reach these highly sophisticated levels.<\/p>\n<p>But, he warns, \u201cwe should expect to start seeing threat actors deploying their own AI agents, which gets us closer to that sort of autonomous system [<a href=\"https:\/\/www.csoonline.com\/article\/4069075\/autonomous-ai-hacking-and-the-future-of-cybersecurity.html\">attacks that some fear<\/a>]. There are a number of open-source tools now for doing AI red teaming and other things. Threat actors are likely using those for non-red teaming purposes. 
Over the next 12 months, we should start to see more of that.\u201d<\/p>\n<p>The Google report <a href=\"https:\/\/bsky.app\/profile\/doublepulsar.com\/post\/3m4vwp4ktqs2g\">came under initial criticism<\/a> from Hutchins and other researchers for fostering needless fear, although Hutchins, for one, later <a href=\"https:\/\/www.linkedin.com\/posts\/malwaretech_after-speaking-to-a-researcher-on-googles-activity-7393059511948918785-jYwJ?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAAh8QsB_UqeQaQ57J4KNhMjors3v6xHoOk\">retracted<\/a> his complaints, an about-face that suggests how uncharted the new AI cyber threat terrain is.<\/p>\n<p>\u201cThe research report we released was used as both the talking point for the AI [cyber threat] is garbage camp as well as the sky is falling AI viewpoint,\u201d Leonard says. \u201cThey both pointed to the same report and the same findings as their justification for their side of the argument. It\u2019s like, alright, you got to pick a side.\u201d<\/p>\n<p>Just a week after GTIG issued its report, on Nov. 13, AI company Anthropic <a href=\"https:\/\/www.anthropic.com\/news\/disrupting-AI-espionage\">issued<\/a> a bombshell report in which it claimed to have discovered the first AI-orchestrated cyber espionage campaign by a <a href=\"https:\/\/www.csoonline.com\/article\/4092571\/ai-controlled-cyber-attack-causes-a-stir.html\">Chinese state-sponsored group that manipulated the company\u2019s Claude Code\u00a0tool<\/a> into trying to infiltrate around 30 global targets, succeeding in a small number of cases.<\/p>\n<p>The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago, according to Anthropic, even though much of the attack involved traditional human intervention at various stages during the process.
Anthropic said it is sharing this case publicly to help \u201cthose in industry, government, and the wider research community strengthen their own cyber defenses.\u201d<\/p>\n<p>Critics of AI-enabled threat reports <a href=\"https:\/\/infosec.exchange\/@GossiTheDog@cyberplace.social\/115547042825824627\">quickly seized<\/a> on Anthropic\u2019s decision not to release indicators of compromise (IOCs), claiming the omission undercuts the value of the research.<\/p>\n<p>But experienced threat leaders say this criticism misunderstands the nature of AI-driven attacks \u2014 and the realities of disclosure.<\/p>\n<p>\u201cResearchers always want to see all the IOCs,\u201d <a href=\"https:\/\/www.linkedin.com\/in\/morgan-adamski-501094240\/\">Morgan Adamski<\/a>, PwC principal and former executive director of US Cyber Command, tells CSO. \u201cBut there might be very specific reasons those weren\u2019t included. Detailing how an adversary actually conducted it could essentially give the playbook to our adversaries.\u201d<\/p>\n<p><a href=\"https:\/\/www.sans.org\/profiles\/rob-lee\">Rob T. Lee<\/a>, chief AI officer at the SANS Institute, is even more blunt. \u201cAnthropic is not a cybersecurity company like Mandiant or Google, so give them a break. And what indicators of compromise are actually going to help defenders? If they were very clear about how they detected this, that\u2019s on their end. So what are they supposed to do \u2014 release IOCs only they can use? It\u2019s ridiculous.\u201d<\/p>\n<p>For its part, Anthropic is playing its cards close to the vest. \u201cReleasing IOCs, prompts, or technical specifics can give threat actors a playbook to use more widely,\u201d the company tells CSO. 
\u201cWe weigh this tradeoff case by case, and in this instance, we are sharing directly with industry and government partners rather than publishing broadly.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"how-cisos-could-cut-through-the-confusion\">How CISOs could cut through the confusion<\/h2>\n<p>The conflicting narratives around AI threats leave many <a href=\"https:\/\/www.csoonline.com\/article\/4011384\/the-cisos-5-step-guide-to-securing-ai-operations.html\">CISOs struggling to reconcile hype with operational reality<\/a>.<\/p>\n<p>Given the emergence of AI-enabled cyber threats amid pushback from some cyber experts who contend these threats are not real, Sophos CEO <a href=\"https:\/\/www.sophos.com\/en-us\/company\/management\/joe-levy\">Joe Levy<\/a> tells CSO that AI is becoming a \u201cRorschach test, meaning that however individuals will choose to look at it, that is the pattern that they will find there.\u201d<\/p>\n<p>However, Levy cautions that leaders need to take a more balanced view of the situation. \u201cThere is indeed novelty in the use of AI and the threat of agentic AI being used in a much more scalable way by attackers than we\u2019ve seen through previous forms of either manual attacks or even automated attacks,\u201d he says. \u201cThat element of it is certainly real. But I don\u2019t think to this point we\u2019ve seen a significant escalation that inhibits our ability to use our current set of defenses to the same level of effectiveness.\u201d<\/p>\n<p>PwC\u2019s Adamski stresses that CISOs should be prepared to turn around new defenses on a dime, given how fast attacks will unfold in the new AI era. \u201cFrom a defensive perspective, it\u2019s going to have to be seconds,\u201d she says.<\/p>\n<p>She also believes it\u2019s important to dispel the notion that AI threats are not real. \u201cThe bottom line is that it is an emerging technology and capability that our adversaries can leverage.
It exists, and we know that there are people out there testing it, deploying it, and quite honestly being successful in its use,\u201d she says.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/clyde-williamson-6211192\/\">Clyde Williamson<\/a>, senior product security architect at Protegrity, agrees that it\u2019s dangerous to assume attackers won\u2019t <a href=\"https:\/\/www.csoonline.com\/article\/4014238\/cybercriminals-take-malicious-ai-to-the-next-level.html\">exploit generative AI and agentic tools<\/a>. \u201cAnybody who has that hacker mindset when presented with an automation tool like what we have now with generative AI and agentic models, it would be ridiculous to assume that they\u2019re not using that to improve their skills,\u201d he tells CSO.<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/jimmymesta\/\">Jimmy Mesta<\/a>, CTO and co-founder of RAD Security, says CISOs should be preparing their boards now for difficult budget decisions. \u201cBoards will have to be presented with the options of being insecure or being secure, what it\u2019s going to cost, and what it\u2019s going to take,\u201d he tells CSO. \u201cCISOs aren\u2019t going to be able to walk in and say we must do everything to 100%. There will be more trade-offs than ever.\u201d<\/p>\n<p>Even as CISOs prepare for the <a href=\"https:\/\/www.csoonline.com\/article\/4075912\/ai-enabled-ransomware-attacks-cisos-top-security-concern-with-good-reason.html\">coming wave of AI-assisted attacks<\/a>, they must maintain focus on cybersecurity fundamentals, <a href=\"https:\/\/www.linkedin.com\/in\/alexandra-rose3\/\">Alexandra Rose<\/a>, global head of government partnerships and director of CTU threat research at Sophos, tells CSO. 
\u201cWe come back to the basics so often because they\u2019re the most effective at stopping what we see \u2014 from every level of sophistication, including threat actors experimenting with AI,\u201d she says.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>As AI adoption accelerates across enterprises \u2014 and among digital adversaries \u2014 a heated debate has erupted over whether AI\u2019s role in the cyber threat chain should be a top concern for CISOs. A vocal handful of experts, along with one cybersecurity vendor, insist that warnings about AI-enhanced threats are exaggerated hype pushed by cyber-intel firms and AI companies eager to sell new defensive tools&#8230;. <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=15290\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15290","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light 
is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15290","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15290"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15290\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15290"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15290"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15290"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}