Large language models (LLMs) are casually sending users to the wrong web addresses, including unregistered, inactive, and even malicious sites, when asked where to log in to specific brands.
In a new study from Netcraft, researchers found that when they asked a popular LLM where to log in to well-known brands' websites, 34% of the URLs it suggested were not owned by those brands. Even worse, one of the links led directly to an active phishing site.
“The research shows the importance of vigilance against hackers mimicking well-recognized brand URLs to gain access to sensitive information and/or bank accounts,” said Melinda Marks, senior analyst at Enterprise Strategy Group. “Companies, especially larger, established brands, should protect their reputations by communicating with customers about which URLs to trust for important communications and secure transactions.”
The Netcraft research underlined that nearly 30% of the rogue URLs were unregistered or inactive, making them prime real estate for threat actors looking to set up malicious sites.
The prompts used weren’t obscure; they simply reflected how people naturally ask for help online, Netcraft analyst Bilal Rashid noted, adding that the risk is systemic, scalable, and already in the wild.
AI hallucinated a phishing domain
Five percent of these URLs led to entirely unrelated businesses, and, most unsettling of all, one pointed to a phishing domain. Perplexity, the AI-powered search engine, recommended a Google Sites page, ‘hxxps://sites[.]google[.]com/view/wells-fargologins/home’, hosting a convincing clone of the real Wells Fargo login page. The URL surfaced directly because the AI thought it belonged there, Netcraft researchers noted in a blog post explaining what happens when AI gives you the wrong URL.
“This creates a perfect storm for cybercriminals,” said J Stephen Kowski, Field CTO at SlashNext. “When AI models hallucinate URLs pointing to unregistered domains, attackers can simply register those exact domains and wait for victims to arrive.” He likened it to giving attackers a roadmap to future victims. “A single malicious link recommended can compromise thousands of people who would normally be more cautious.”
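That register-and-wait pattern is straightforward to probe for defensively: before surfacing an LLM-suggested login link, check whether its hostname even resolves, since a domain that returns no DNS answer is exactly the kind of unclaimed real estate an attacker could register later. The sketch below is a minimal illustration of such a filter, not Netcraft's or any vendor's tooling; the example URLs and helper name are hypothetical.

```python
import socket
from urllib.parse import urlparse

def hostname_resolves(url: str) -> bool:
    """Return True if the URL's hostname currently resolves in DNS."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        # getaddrinfo raises socket.gaierror for unregistered or unresolvable names
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

# Hypothetical LLM-suggested login links, not drawn from the study's data set.
suggested_urls = [
    "https://www.example.com/login",
    "https://secure-login-examplebank.com/signin",  # invented domain; may not exist
]

for url in suggested_urls:
    verdict = "resolves" if hostname_resolves(url) else "UNRESOLVED (unregistered? could be claimed by an attacker)"
    print(f"{url}: {verdict}")
```

A DNS lookup alone proves nothing about ownership, of course; it only flags the unregistered case the researchers warn about.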
The findings are particularly concerning because national brands, mainly in finance and fintech, were among the hardest hit. Credit unions, regional banks, and mid-sized platforms fared worse than global giants. Smaller brands, which are less likely to appear in LLM training data, saw the highest rates of hallucinated URLs.
“LLMs don’t retrieve information, they generate it,” said Nicole Carignan, Field CISO at Darktrace. “And when users treat those outputs as fact, it opens the door for massive exploitation.” She pointed to an underlying structural flaw: models are designed to be helpful, not accurate, and unless AI responses are grounded in validated data, they will continue to invent URLs, often with dangerous consequences.
Researchers pointed out that defensively registering hallucinated domains in advance, a seemingly viable fix, will not work: the variations are effectively infinite, and LLMs will keep inventing new ones for attackers to claim, a tactic known as slopsquatting.
GitHub poisoning for AI training
Not every bad URL surfaced by AI is accidental. In separate research, Netcraft found evidence of attackers deliberately poisoning AI systems by seeding GitHub with malicious code repositories.
“Multiple fake GitHub accounts shared a project called Moonshot-Volume-Bot, seeded across accounts with rich bios, profile images, social media accounts and credible coding activity,” researchers said. “These weren’t throwaway accounts—they were crafted to be indexed by AI training pipelines.”
The Moonshot project involved a counterfeit Solana blockchain API that rerouted funds directly into an attacker’s wallet.
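The article does not reproduce the counterfeit API itself, but the defensive takeaway generalizes: code that builds a payment on your behalf should never be trusted to fill in the destination. The sketch below is a hypothetical, library-agnostic check (the transfer structure and addresses are invented) that refuses to sign anything whose recipient has been silently swapped.

```python
# Hypothetical transfer produced by an untrusted third-party "volume bot" library.
# The point of the check: never trust the destination field that library filled in.

EXPECTED_RECIPIENT = "YourOwnWalletAddress1111111111111111111111"  # placeholder, not a real address

def assert_recipient(transfer: dict, expected: str) -> None:
    """Raise before signing/submitting if the transfer's destination was swapped."""
    actual = transfer.get("destination")
    if actual != expected:
        raise ValueError(
            f"Refusing to sign: destination {actual!r} does not match expected recipient {expected!r}"
        )

untrusted_transfer = {
    "destination": "AttackerWalletAddress9999999999999999999999",  # what a poisoned API might insert
    "lamports": 5_000_000,
}
assert_recipient(untrusted_transfer, EXPECTED_RECIPIENT)  # raises ValueError in this example
```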
“The compromise of data corpuses used in the AI training pipeline underscores a growing AI supply chain risk,” Carignan said. “This is not just a hallucination, it’s targeted manipulation. Data integrity, sourcing, cleansing, and verification are critical to ensuring the safety of LLM outputs.”
While researchers recommended reactive measures such as monitoring and takedowns to tackle the issue, Gal Moyal, of the CTO office at Noma Security, suggested a proactive approach. “AI Guardrails should validate domain ownership before recommending login,” he said. “You can’t just let models ‘guess’ URLs. Every request with a URL needs to be vetted.”
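One way to implement the kind of guardrail Moyal describes is an allowlist check: extract the hostname from any URL the model wants to return and pass it through only if it sits on a domain the brand is known to own. A minimal sketch, assuming a hand-maintained brand-to-domain mapping (the mapping and example paths below are illustrative, not an official registry):

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real guardrail would pull this from a verified registry of official brand domains.
OFFICIAL_DOMAINS = {
    "wells fargo": {"wellsfargo.com"},
    "example bank": {"examplebank.com"},
}

def url_is_owned_by(brand: str, url: str) -> bool:
    """Allow a login URL only if its hostname is an official domain (or a subdomain) of the brand."""
    host = (urlparse(url).hostname or "").lower()
    for domain in OFFICIAL_DOMAINS.get(brand.lower(), set()):
        if host == domain or host.endswith("." + domain):
            return True
    return False

# A lookalike page hosted elsewhere fails the check; a page on the brand's own domain passes.
print(url_is_owned_by("Wells Fargo", "https://sites.google.com/view/hypothetical-bank-login/home"))  # False
print(url_is_owned_by("Wells Fargo", "https://www.wellsfargo.com/login"))  # True (illustrative path)
```

The suffix check deliberately requires a dot boundary, so a lookalike hostname such as wellsfargo.com.evil.example would not pass.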