{"id":14644,"date":"2025-08-20T02:57:32","date_gmt":"2025-08-20T02:57:32","guid":{"rendered":"https:\/\/newestek.com\/?p=14644"},"modified":"2025-08-20T02:57:32","modified_gmt":"2025-08-20T02:57:32","slug":"nists-attempts-to-secure-ai-yield-many-questions-no-answers","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14644","title":{"rendered":"NIST\u2019s attempts to secure AI yield many questions, no answers"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>When the US National Institute of Standards and Technology (NIST) late last week published a report on how enterprises can protect themselves from AI systems, it focused on categorizing the problems without suggesting any specific mitigation tactics.\u00a0<\/p>\n<p>For that, the organization turned to the industry and asked for suggestions.<\/p>\n<p>\u201cNIST is interested in feedback on the concept paper and proposed action plan, and invites all interested parties to join the NIST Overlays for Securing AI (#NIST-Overlays-Securing-AI) Slack channel,\u201d <a href=\"https:\/\/csrc.nist.gov\/News\/2025\/control-overlays-for-securing-ai-systems\" target=\"_blank\" rel=\"noreferrer noopener\">the page describing the report<\/a> said. 
\u201cThrough the Slack channel, stakeholders can contribute to the development of these overlays, get updates, engage in facilitated discussions with the NIST principal investigators and other subgroup members, and provide real-time feedback and comments.\u201d<\/p>\n<p>Analysts and security industry advocates see the challenges of AI security controls as extensive, but that\u2019s mostly because enterprises are now using\u2014or fighting\u2014AI in so many different ways.\u00a0<\/p>\n<p>From a technical perspective, NIST said it wants to adapt its existing rules to accommodate AI controls rather than create something new. Specifically, NIST said that it wants to build on top of <a href=\"https:\/\/csrc.nist.gov\/projects\/cprt\/catalog#\/cprt\/framework\/version\/SP_800_53_5_1_1\/home\" target=\"_blank\" rel=\"noreferrer noopener\">NIST Special Publication (SP) 800-53 controls<\/a>. That publication provides NIST\u2019s core cybersecurity protections for traditional defense issues, including access control, awareness and training, auditing, incident response, contingency planning, and risk assessment.<\/p>\n<h2 class=\"wp-block-heading\" id=\"building-on-existing-rules-makes-sense\">Building on existing rules makes sense<\/h2>\n<p>\u201cThe decision to anchor these overlays in SP 800-53 controls demonstrates sophisticated strategic thinking. Organizations already possess institutional knowledge around these frameworks,\u201d said <a href=\"https:\/\/www.prsa.org\/person\/perkins-aaron\" target=\"_blank\" rel=\"noreferrer noopener\">Aaron Perkins<\/a>, founder at Market-Proven AI. \u201cThey understand implementation processes, have established assessment methodologies, and most importantly, their teams know how to work within these structures. 
This familiarity eliminates one of the most significant barriers to effective AI security by removing the learning curve that accompanies entirely new approaches.\u201d<\/p>\n<p>Forrester Senior Analyst <a href=\"https:\/\/www.forrester.com\/analyst-bio\/janet-worthington\/BIO18144\" target=\"_blank\" rel=\"noreferrer noopener\">Janet Worthington<\/a> agreed that leveraging an existing NIST framework makes sense.\u00a0<\/p>\n<p>\u201cOverlays are a natural extension, as many organizations are already familiar with SP 800-53, offering flexibility to tailor security measures to specific AI technologies and use cases while integrating seamlessly with existing NIST frameworks,\u201d Worthington said. \u201cThese overlays are specifically crafted to safeguard the confidentiality, integrity, and availability of critical AI components.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"challenges-to-consider\">Challenges to consider<\/h2>\n<p>The <a href=\"https:\/\/csrc.nist.gov\/csrc\/media\/Projects\/cosais\/documents\/NIST-Overlays-SecuringAI-concept-paper.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">NIST report<\/a> described various categories of AI integration that raise serious cybersecurity considerations, including: using genAI to create new content; fine-tuning predictive AI; using single AI agents as well as multiple agents; and security controls for AI developers.\u00a0<\/p>\n<p>Potentially the most challenging element of securing AI in enterprises is visibility. 
But the visibility problem takes many forms: visibility into what the model makers train on and how the models are coded to make recommendations; how enterprise data fed into the models is used; the <a href=\"https:\/\/www.computerworld.com\/article\/4025938\/time-to-consider-ai-models-that-dont-steal.html\" target=\"_blank\">copyright, patent and other legal protections attached to the data that was used for training<\/a>; how much AI is being used in SaaS apps and cloud deployments; and how employees, contractors and third parties are using genAI globally.<\/p>\n<p>If CISOs don\u2019t have meaningful visibility into all of those issues, the task of securing the information that flows into and out of those models is close to impossible.<\/p>\n<p>Many cybersecurity specialists were not sure how the tidal wave of AI activity, much of it deployed in enterprises with seemingly insufficient due diligence beforehand, could now be properly secured.\u00a0<\/p>\n<h2 class=\"wp-block-heading\" id=\"assume-a-doomsday-scenario\">Assume a doomsday scenario<\/h2>\n<p>\u201cAI was all the hype at <a href=\"https:\/\/www.rsaconference.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">RSA<\/a>, <a href=\"https:\/\/www.blackhat.com\/\" target=\"_blank\" rel=\"noreferrer noopener\">Black Hat<\/a> and <a href=\"https:\/\/defcon.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">DEF CON<\/a>. It was at the beginning and end of every vendor sentence,\u201d said <a href=\"https:\/\/www.linkedin.com\/in\/jeffreyeman\/\" target=\"_blank\" rel=\"noreferrer noopener\">Jeff Mann<\/a>, an industry veteran who today serves as the senior information security consultant at Online Business Systems. \u201cIt was amazing how AI was going to solve all of the problems [and] we were also discovering amazing vulnerabilities.\u201d<\/p>\n<p>Mann also stressed the visibility issues, especially in terms of how AI is deployed company-wide. 
\u201cHave an inventory and know what you are dealing with. But I am not sure it\u2019s even possible to take a complete inventory of what is out there. You have to assume a doomsday scenario.\u201d<\/p>\n<p>Another longtime cybersecurity observer, <a href=\"https:\/\/formergov.com\/directory\/brianlevine\" target=\"_blank\" rel=\"noreferrer noopener\">Brian Levine<\/a>, managing director at Ernst &amp; Young and CEO of FormerGov, a directory of former government and military security experts, sees much of the AI security challenge coming from how extensively it is being used for almost every business function \u2014 and how little it was tested beforehand.<\/p>\n<p>\u201cWe are seeing that AI is becoming ubiquitous, and executives rushed to use it before they fully understood it and could grapple with the security issues,\u201d Levine said. \u201c[AI] is a little bit of a black box and everyone was rushing to incorporate it into everything they were doing. Over time, the more you outsource technology, the more risk you are taking.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"inventory-visibility-priority-one\">Inventory visibility priority one<\/h2>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/zacharylewis1\/\" target=\"_blank\" rel=\"noreferrer noopener\">Zach Lewis<\/a>, the CIO and CISO at the University of Health Sciences and Pharmacy in St. Louis, Missouri, also put visibility at the top of his AI risk list.\u00a0<\/p>\n<p>\u201cYou can\u2019t patch what you don\u2019t know is running. That applies to AI, too. NIST should make AI model inventories step one,\u201d Lewis said. 
\u201cIf companies don\u2019t even know which models employees are using, the rest of the controls don\u2019t matter.\u201d<\/p>\n<p>Often the only safe assumption is that all AI is already poisoned, noted <a href=\"https:\/\/www.linkedin.com\/in\/audian\/\" target=\"_blank\" rel=\"noreferrer noopener\">Audian Paxson<\/a>, principal technical strategist at Ironscales.<\/p>\n<p>\u201cAssume every AI model in your environment will be weaponized. That means implementing adversarial robustness at the model level, essentially teaching your defensive AI to expect lies. Think of it like training a boxer by having them spar with dirty fighters,\u201d Paxson said. \u201cYour models need to learn from poisoned data attempts, prompt injection attacks, and model inversion techniques before they hit production.\u201d<\/p>\n<p>Paxson suggested extending the assume-the-worst thinking to all AI security strategies.\u00a0<\/p>\n<p>\u201cWhen you\u2019re thinking about best practices for securing ML pipelines and training data, start with the assumption your training data is already compromised, because it probably is. Implement differential privacy and regular model health checks, essentially asking your AI if it feels poisoned,\u201d Paxson said. \u201cUse federated learning where possible so sensitive data never centralizes. Most importantly, implement model retirement dates. An AI model trained six months ago is like milk left on the counter. 
It\u2019s probably gone bad.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"carefully-review-existing-security-tools\">Carefully review existing security tools<\/h2>\n<p>Forrester\u2019s Worthington stressed that CISOs need to carefully review all current cybersecurity tools because they may not be especially effective at protecting the enterprise from relatively new AI threats.<\/p>\n<p>\u201cAI agents and agentic systems introduce new risks that traditional security models are ill-equipped to manage,\u201d Worthington said. \u201cWe are seeing growing concerns around the lack of mature detection surfaces, the risk of cascading failures, and the challenge of securing intent rather than just outcomes.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/vincent-berk-a12314b\/\" target=\"_blank\" rel=\"noreferrer noopener\">Vince Berk<\/a>, partner at Apprentis Ventures, was even more skeptical that current standards efforts will be able to make a meaningful difference in protecting companies from AI threats.\u00a0<\/p>\n<p>\u201cIt is fantastic that we have the Institute for Standards to provide us a guidepost to manage our AI risks by. However, standards are typically formed after a large body of experiences have been gathered, and a common approach or vision to a particular area of engineering starts to form. For AI cybersecurity problems, this is very far from the case,\u201d Berk said. \u201cEvery day, new cases are discovered that were unanticipated and raise questions about the utility in a broad sense from AI at all.\u201d<\/p>\n<p>He added that the nature of NIST might not make it the best source for such guidelines. 
\u201cFor now, a better place for these sorts of controls would be <a href=\"https:\/\/www.cisecurity.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">CIS<\/a> or <a href=\"https:\/\/owasp.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">OWASP<\/a>,\u201d Berk said.<\/p>\n<h2 class=\"wp-block-heading\" id=\"what-if-ai-floods-the-comments\">What if AI floods the comments?<\/h2>\n<p><a href=\"https:\/\/www.infotech.com\/profiles\/erik-avakian\" target=\"_blank\" rel=\"noreferrer noopener\">Erik Avakian<\/a>, technical counselor at Info-Tech Research Group, said he applauds NIST\u2019s efforts to reach out for community feedback, but he also cautioned that it might backfire. For example, what if AI agents flood the comments with self-serving suggestions?<\/p>\n<p>Such an AI comment flood could cause a variety of problems, he said, including making the final recommendations \u201cbad guy friendly\u201d or simply \u201cAI poisoning the actual feedback.\u201d If that attack happened, Avakian said, the best response from NIST would be to conduct in-person interviews. \u201cThat would be the only way. Maybe human interviews or regional workshops where they bring people in,\u201d he said.<\/p>\n<p>Avakian called the initial NIST report \u201ccertainly a welcome start,\u201d but warned that \u201cthere are potential risks that the overlays may not go far enough as they relate to emerging attack vectors unique to AI. The [report] addresses fundamentals such as model integrity and access control, but these alone might not dig deep enough into cutting-edge attack vectors.\u201d<\/p>\n<p>\u201cAdvanced threat scenarios could slip through the cracks,\u201d he said. \u201cIn addition, they might not go far enough when it comes to people-related risks such as insider misuse, shadow AI adoption, or common human issues such as errors, omissions, and mistakes. 
Many AI systems and architectures also vary widely, and the overlays could benefit from more granularity to truly fit the diversity we\u2019re seeing across real-world AI deployments.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>When the US National Institute of Standards and Technology (NIST) late last week published a report on how enterprises can protect themselves from AI systems, it focused on categorizing the problems without suggesting any specific mitigation tactics.\u00a0 For that, the organization turned to the industry and asked for suggestions. \u201cNIST is interested in feedback on the concept paper and proposed action plan, and invites all&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14644\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14644","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light 
is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14644","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14644"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14644\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14644"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14644"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14644"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}