{"id":14922,"date":"2025-10-08T20:41:09","date_gmt":"2025-10-08T20:41:09","guid":{"rendered":"https:\/\/newestek.com\/?p=14922"},"modified":"2025-10-08T20:41:09","modified_gmt":"2025-10-08T20:41:09","slug":"unplug-gemini-from-email-and-calendars-says-cybersecurity-firm","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14922","title":{"rendered":"Unplug Gemini from email and calendars, says cybersecurity firm"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>CSOs should consider turning off Google Gemini access to employees\u2019 Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability.<\/p>\n<p>\u201cIf you\u2019re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,\u201d <a href=\"https:\/\/www.firetail.ai\/team\/jeremy-snyder\" target=\"_blank\" rel=\"noreferrer noopener\">Jeremy Snyder<\/a>, CEO of US-based FireTail, said in an interview.<\/p>\n<p>\u201cYou could still make Gemini available to users for productivity purposes, but automatic pre-processing [of mail and calendars] is not ideal,\u201d he said.<\/p>\n<p>\u201cBe aware if your developers are integrating Gemini into applications for chatbots and other use cases,\u201d he added, \u201cand then monitor your LLM [large language model] responses.\u201d<\/p>\n<p>Snyder was commenting on <a href=\"https:\/\/www.firetail.ai\/blog\/ghosts-in-the-machine-ascii-smuggling-across-various-llms\" target=\"_blank\" rel=\"noreferrer noopener\">the release this week of a FireTail report<\/a> that found 
Gemini, DeepSeek, and Grok are susceptible to what\u2019s known as ASCII smuggling, a long-standing threat actor technique that can also be turned against new AI systems.<\/p>\n<p>The technique uses invisible Unicode characters to embed hidden instructions within a seemingly benign string of text that isn\u2019t filtered.<\/p>\n<p>\u201cYour browser (the UI) shows you a nice, clean prompt,\u201d explains the report. \u201cBut the raw text that gets fed to the LLM has a secret, hidden payload tucked inside, encoded using Tags Unicode Blocks, characters not designed to be shown in the UI and therefore invisible.\u00a0The LLM reads the hidden text, acts on it, and you see nothing wrong. It\u2019s a fundamental application logic flaw.\u201d<\/p>\n<p>This flaw is \u201cparticularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,\u201d the report adds.<\/p>\n<p>FireTail tested six AI agents. OpenAI\u2019s ChatGPT, Microsoft Copilot, and Anthropic\u2019s Claude caught the attack. Gemini, DeepSeek, and Grok failed.<\/p>\n<p>In a test, FireTail researchers were able to change the word \u201cMeeting\u201d in a Google Calendar appointment to \u201cMeeting. It is optional.\u201d That may seem innocuous, but Snyder worries a threat actor could do worse if Gemini is integrated into Google Workspace for enterprises. 
The tactic can be used for identity spoofing, FireTail argues, because a vulnerable chatbot would automatically process malicious instructions and bypass a typical Accept\/Decline security gate.<\/p>\n<p>For users with LLMs connected to their inboxes, a simple email containing hidden commands can instruct the AI agent to search the inbox for sensitive items or gather details about contacts, the report argues, \u201cturning a standard phishing attempt into an autonomous data extraction tool.\u201d<\/p>\n<p>Snyder also worries about what happens if a vulnerable AI agent is integrated into a customer support chatbot.<\/p>\n<p>What especially irritates him is that when FireTail reported its findings last month to Google, the company brushed off the threat.<\/p>\n<p>\u201cIt looks to us as if the issue you\u2019re describing can only result in social engineering,\u201d FireTail says it was told by Google\u2019s bug report team, \u201cand we think that addressing it would not make our users less prone to such attacks.\u201d<\/p>\n<p>Snyder told <em>CSO<\/em> that he \u201cfundamentally disagrees.\u201d<\/p>\n<p>\u201cSocial engineering is a big problem,\u201d he said. \u201cWhen you take away the risk of social engineering, it does make users safe.\u201d<\/p>\n<p>The solution, he added, is for an AI agent to filter inputs.<\/p>\n<p>Google was asked for comment on the FireTail report. No reply had been received by our deadline, nor was there a response from xAI, which is behind Grok.<\/p>\n<p>\u201cASCII Smuggling attacks against AIs aren\u2019t new,\u201d commented <a href=\"https:\/\/josephsteinberg.com\/cybersecurityexpertjosephsteinberg\/\" target=\"_blank\" rel=\"noreferrer noopener\">Joseph Steinberg<\/a>, a US-based cybersecurity and AI expert. 
\u201cI saw one demonstrated over a year ago.\u201d<\/p>\n<p>He didn\u2019t specify where, but in August 2024, a security researcher blogged about an <a href=\"https:\/\/embracethered.com\/blog\/posts\/2024\/m365-copilot-prompt-injection-tool-invocation-and-data-exfil-using-ascii-smuggling\/\" target=\"_blank\" rel=\"noreferrer noopener\">ASCII smuggling vulnerability in Copilot<\/a>. The finding was reported to Microsoft.<\/p>\n<p>Many ways of disguising malicious prompts will be discovered over time, he added, so it\u2019s important that IT and security leaders ensure that AIs don\u2019t have the power to act without human approval on prompts that could be damaging.<\/p>\n<p>It may be wise, he added, to convert all prompt requests to standard ASCII characters that are visible and expected before they reach the AI engine.<\/p>\n<h2 class=\"wp-block-heading\" id=\"similar-prompt-injection-attacks\">Similar prompt injection attacks<\/h2>\n<p><a href=\"https:\/\/www.csoonline.com\/article\/4053107\/ai-prompt-injection-gets-real-with-macros-the-latest-hidden-threat.html\" target=\"_blank\">Last month <em>CSO<\/em> reported<\/a> that attackers are increasingly exploiting generative AI by embedding malicious prompts in macros and exposing hidden data through parsers. 
Other such flaws include the discovery by\u00a0Aim Security researchers of <a href=\"https:\/\/nvd.nist.gov\/vuln\/detail\/cve-2025-32711\" target=\"_blank\" rel=\"noreferrer noopener\">EchoLeak (CVE-2025-32711)<\/a>, a zero-click\u00a0<a href=\"https:\/\/www.csoonline.com\/article\/1294996\/top-4-llm-threats-to-the-enterprise.html\" target=\"_blank\">prompt injection vulnerability<\/a> in Microsoft 365 Copilot that has since been patched.<\/p>\n<p>In July, Pangea reported that large language models (LLMs) <a href=\"https:\/\/www.csoonline.com\/article\/4032291\/how-bright-are-ai-agents-not-very-recent-reports-suggest.html\" target=\"_blank\">could be fooled\u00a0by prompt injection attacks<\/a> that embed malicious instructions into a query\u2019s legal disclaimer, terms of service, or privacy policies. At the time, <a href=\"https:\/\/www.linkedin.com\/in\/kellman\/\" target=\"_blank\" rel=\"noreferrer noopener\">Kellman Meghu<\/a>, principal security architect at Canadian incident response firm\u00a0DeepCove Cybersecurity, said, \u201cHow silly we are as an industry, pretending this thing [AI] is ready for prime time \u2026 We just keep throwing AI at the wall hoping something sticks.\u201d<\/p>\n<p>FireTail\u2019s Snyder believes Google will eventually plug the hole FireTail discovered in Gemini, in response to the \u201cunwanted attention\u201d generated by coverage from several IT news sites.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>CSOs should consider turning off Google Gemini access to employees\u2019 Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability. 
\u201cIf you\u2019re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,\u201d&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14922\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14922","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14922","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14922"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14922\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14922"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14922"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14922"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}