{"id":14368,"date":"2025-07-01T13:33:53","date_gmt":"2025-07-01T13:33:53","guid":{"rendered":"https:\/\/newestek.com\/?p=14368"},"modified":"2025-07-01T13:33:53","modified_gmt":"2025-07-01T13:33:53","slug":"llms-are-guessing-login-urls-and-its-a-cybersecurity-time-bomb","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14368","title":{"rendered":"LLMs are guessing login URLs, and it\u2019s a cybersecurity time bomb"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Large language models (LLMs) are casually sending users to the wrong web addresses, including unregistered, inactive, and even malicious sites, when asked where to log in for specific branded content.<\/p>\n<p>In a new study from Netcraft, researchers found that when they asked a popular LLM where to log into well-known brands, 34% of the URLs it gave were not owned by those brands. Even worse, one of the links led directly to an active phishing site.<\/p>\n<p>\u201cThe research shows the importance of vigilance against hackers mimicking well-recognized brand URLs to gain access to sensitive information and\/or bank accounts,\u201d said Melinda Marks, senior analyst at Enterprise Strategy Group. 
\u201cCompanies, especially larger, established brands, should protect their reputations by communicating with customers about which URLs to trust for important communications and secure transactions.\u201d<\/p>\n<p>The Netcraft research underlined that nearly 30% of the rogue URLs were unregistered or inactive, making them prime real estate for threat actors looking to set up malicious sites.<\/p>\n<p>The prompts used weren\u2019t obscure; they simply reflected how people naturally ask for help online, Netcraft\u2019s analyst Bilal Rashid noted, adding that the risk is systemic, scalable, and already in the wild.<\/p>\n<h2 class=\"wp-block-heading\" id=\"ai-hallucinated-a-phishing-domain\">AI hallucinated a phishing domain<\/h2>\n<p>Five percent of these URLs led to entirely unrelated businesses and, most unsettling of all, one of them pointed to a phishing domain. Perplexity, the AI-powered search engine, recommended a Google Sites page \u2018hxxps:\/\/sites[.]google[.]com\/view\/wells-fargologins\/home\u2019, posing as the Wells Fargo login page with a convincing clone of the real site. The URL surfaced directly because the AI thought it belonged there, Netcraft researchers noted in a blog post, explaining what happens <a href=\"https:\/\/www.netcraft.com\/blog\/large-language-models-are-falling-for-phishing-scams\">when AI gives you the wrong URL<\/a>.<\/p>\n<p>\u201cThis creates a perfect storm for cybercriminals,\u201d said J Stephen Kowski, Field CTO at SlashNext. \u201cWhen AI models hallucinate URLs pointing to unregistered domains, attackers can simply register those exact domains and wait for victims to arrive.\u201d He likened it to giving attackers a roadmap to future victims. \u201cA single malicious link recommended can compromise thousands of people who would normally be more cautious.\u201d<\/p>\n<p>The Netcraft findings are particularly concerning because national brands, mainly in finance and fintech, were among the hardest hit. 
Credit unions, regional banks, and mid-sized platforms fared worse than global giants. Smaller brands, which are less likely to appear in LLM training data, were hallucinated most often.<\/p>\n<p>\u201cLLMs don\u2019t retrieve information, they generate it,\u201d said Nicole Carignan, Field CISO at Darktrace. \u201cAnd when users treat those outputs as fact, it opens the door for massive exploitation.\u201d She pointed to an underlying structural flaw: models are designed to be helpful, not accurate, and unless AI responses are grounded in validated data, they will continue to invent URLs, often with dangerous consequences.<\/p>\n<p>Researchers pointed out that registering all the hallucinated domains in advance, a seemingly viable solution, will not work: the variations are effectively infinite, and LLMs will keep inventing new ones, leading to <a href=\"https:\/\/www.csoonline.com\/article\/3961304\/ai-hallucinations-lead-to-new-cyber-threat-slopsquatting.html\">slopsquatting attacks<\/a>.<\/p>\n<h2 class=\"wp-block-heading\">GitHub poisoning for AI training<\/h2>\n<p>Not all hallucinated URLs were unintentional. In unrelated research, Netcraft found evidence of attackers deliberately poisoning AI systems by seeding GitHub with <a href=\"https:\/\/www.csoonline.com\/article\/4010125\/github-hit-by-a-sophisticated-malware-campaign-as-banana-squad-mimics-popular-repos.html?utm=hybrid_search\">malicious code repositories<\/a>.<\/p>\n<p>\u201cMultiple fake GitHub accounts shared a project called Moonshot-Volume-Bot, seeded across accounts with rich bios, profile images, social media accounts and credible coding activity,\u201d researchers said. 
\u201cThese weren\u2019t throwaway accounts\u2014they were crafted to be indexed by AI training pipelines.\u201d<\/p>\n<p>The Moonshot project involved a counterfeit Solana blockchain API that rerouted funds directly into an attacker\u2019s wallet.<\/p>\n<p>\u201cThe compromise of data corpuses used in the AI training pipeline underscores a growing AI supply chain risk,\u201d Carignan said. \u201cThis is not just a hallucination, it\u2019s targeted manipulation. Data integrity, sourcing, cleansing, and verification are critical to ensuring the safety of LLM outputs.\u201d<\/p>\n<p>While researchers recommended reactive solutions like monitoring and takedowns to tackle the issue, Gal Moyal, of the CTO office at Noma Security, suggested a proactive approach. \u201cAI Guardrails should validate domain ownership before recommending login,\u201d he said. \u201cYou can\u2019t just let models \u2018guess\u2019 URLs. Every request with a URL needs to be vetted.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Large language models (LLMs) are casually sending users to the wrong web addresses, including unregistered, inactive, and even malicious sites, when asked where to log in for specific branded content. In a new study from Netcraft, researchers found that when they asked a popular LLM where to log into well-known brands, 34% of the URLs it gave were not owned by those brands. 
Even worse,&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14368\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14368","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14368","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14368"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14368\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14368"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14368"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14368"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}