{"id":14456,"date":"2025-07-17T07:12:10","date_gmt":"2025-07-17T07:12:10","guid":{"rendered":"https:\/\/newestek.com\/?p=14456"},"modified":"2025-07-17T07:12:10","modified_gmt":"2025-07-17T07:12:10","slug":"how-ai-is-changing-the-grc-strategy","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14456","title":{"rendered":"How AI is changing the GRC strategy"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>As businesses incorporate cybersecurity into governance, risk and compliance (GRC), it is important to revisit existing GRC programs to ensure that the growing use and risks of generative and agentic AI are addressed so businesses continue to meet regulatory requirements.<\/p>\n<p>\u201c[AI] It\u2019s a hugely disruptive technology in that it\u2019s not something you can put into a box and say \u2018well that\u2019s AI\u2019,\u201d says Jamie Norton, member of the ISACA board of directors and CISO with the Australian Securities and Investments Commission (ASIC).<\/p>\n<p>It\u2019s hard to quantify AI risk, but data on how the adoption of AI expands and transforms an organization\u2019s risk surface provides a clue. According to Check Point\u2019s 2025 AI security <a href=\"https:\/\/engage.checkpoint.com\/2025-ai-security-report\">report<\/a>, 1 in every 80 prompts (1.25%) sent to generative AI services from enterprise devices had a high risk of sensitive data leakage.<\/p>\n<p>CISOs face the challenge of keeping pace with business demands for innovation while securing AI deployments with these risks in view. 
\u201cWith their pure security hat on, they\u2019re trying to stop shadow AI from becoming a cultural thing where we can just adopt and use it [without guardrails],\u201d Norton tells CSO.<\/p>\n<h2 class=\"wp-block-heading\" id=\"ai-is-not-a-typical-risk-so-how-do-grc-frameworks-help\">AI is not a typical risk, so how do GRC frameworks help?<\/h2>\n<p>Governance, risk and compliance is a concept that originated with the <a href=\"https:\/\/www.oceg.org\/ideas\/what-is-grc\/\">Open Compliance and Ethics Group (OCEG)<\/a> in the early 2000s as a way to define a set of critical capabilities to address uncertainty, act with integrity, and ensure compliance to support organizational objectives. Since then, GRC has developed from rules and checklists focused on compliance to a broader approach of managing risk. Data protection requirements, the growing regulatory landscape, digital transformation efforts, and board-level focus have driven this shift in GRC.<\/p>\n<p>At the same time, cybersecurity has become a core enterprise risk and CISOs have helped ensure compliance with regulatory requirements and establish effective governance frameworks. Now as AI expands, there\u2019s a need to incorporate this new category of risk into GRC frameworks.<\/p>\n<p>However, industry surveys suggest there\u2019s still a long way to go for the guardrails to catch up with AI. Only 24% of organizations have fully enforced enterprise AI GRC policies, according to the 2025 Lenovo CIO playbook. 
At the same time, AI governance and compliance is the number one priority, the report <a href=\"https:\/\/investor.lenovo.com\/en\/ai\/ai_investments_reports.php\">found<\/a>.<\/p>\n<p>The industry research suggests that CISOs will need to help strengthen AI risk management as a matter of urgency, driven by leadership\u2019s hunger to realize some pay-off without moving the risk dial.<\/p>\n<p>CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. \u201cThey\u2019re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don\u2019t let it be something that kills the business if we do it wrong,\u201d says Marcus.<\/p>\n<p>To support risk-aware adoption of AI, Marcus\u2019 advice is for CISOs to avoid going it alone and to foster broad trust and buy-in to risk management across the organization. \u201cThe really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we\u2019re all in it together and you\u2019re not here to slow them down.\u201d<\/p>\n<p>This approach should help encourage transparency about how and where AI is being used across the organization. 
Cybersecurity leaders must try to gain visibility by establishing an operational security process that captures where AI is currently being used and where requests for new AI are emerging, says Norton.<\/p>\n<p>\u201cEvery single product you\u2019ve got these days has some AI and there\u2019s not one governance forum that\u2019s picking it all up across the spectrum of different forms [of AI],\u201d he says.<\/p>\n<p>Norton suggests CISOs develop strategic and tactical approaches to define the different types of AI tools, capture the relative risks, and balance the potential pay-off in productivity and innovation. Tactical measures such as <a href=\"https:\/\/www.csoonline.com\/article\/575051\/7-countries-unite-to-push-for-secure-by-design-development.html\">secure by design<\/a> processes, IT change processes, shadow AI discovery programs or risk-based AI inventory and classification are practical ways to deal with the smaller AI tools. \u201cWhere you have more day-to-day AI \u2014 that bit of AI sitting in some product or some SaaS platform, which is growing everywhere \u2014 this might be managed through a tactical approach that identifies what [elements] need oversight,\u201d Norton says.<\/p>\n<p>The strategic approach applies to the big AI changes that are coming with major tools such as Microsoft Copilot and ChatGPT. Securing these \u2018big ticket\u2019 AI tools through internal AI oversight forums is somewhat easier than securing the plethora of other tools that are adding AI.<\/p>\n<p>CISOs can then focus their resources on the highest-impact risks in a way that doesn\u2019t create processes that are unwieldy or unworkable. \u201cThe idea is not to bog this down so that it\u2019s almost impossible to get anything, because organizations typically want to move quickly. 
So, it\u2019s more of a relatively lightweight process that applies this consideration [of risk] to either allow AI or be used to prevent it if it\u2019s risky,\u201d Norton says.<\/p>\n<p>Ultimately, the task is for security leaders to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. \u201cA lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,\u201d Norton says. \u201cThese days, it\u2019s no longer about CISOs saying \u2018yes\u2019 or \u2018no\u2019. It\u2019s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"adapting-existing-frameworks-with-ai-risk-controls\">Adapting existing frameworks with AI risk controls<\/h2>\n<p>AI risks include data safety, misuse of AI tools, privacy considerations, shadow AI, bias and ethical considerations, hallucinations and validating results, legal and reputational issues, and model governance to name a few.<\/p>\n<p>AI-related risks should be established as a distinct category within the organization\u2019s risk portfolio by integrating into GRC pillars, says Dan Karpati, VP of AI technologies at Check Point. 
Karpati suggests four pillars:<\/p>\n<ul class=\"wp-block-list\">\n<li>Enterprise risk management defines AI risk appetite and establishes an AI governance committee.<\/li>\n<li>Model risk management covers model drift, bias and adversarial testing.<\/li>\n<li>Operational risk management includes contingency plans for AI failures and human oversight training.<\/li>\n<li>IT risk management includes regular audits, compliance checks for AI systems, governance frameworks and alignment with business objectives.<\/li>\n<\/ul>\n<p>To help map these risks, CISOs can look at the <a href=\"https:\/\/www.nist.gov\/itl\/ai-risk-management-framework\">NIST AI Risk Management Framework<\/a> and other frameworks, such as COSO and COBIT, and apply their core principles \u2014 governance, control, and risk alignment \u2014 to cover AI characteristics such as probabilistic output, data dependency, opacity in decision making, autonomy, and rapid evolution. An emerging benchmark, <a href=\"https:\/\/www.iso.org\/standard\/81230.html\">ISO\/IEC 42001<\/a>, provides a structured framework for AI oversight and assurance that\u2019s intended to embed governance and risk practices across the AI lifecycle.<\/p>\n<p>Adapting these frameworks offers a way to elevate the AI risk discussion, align AI risk appetite with the organization\u2019s overarching risk tolerance, and embed robust AI governance across all business units. \u201cInstead of reinventing the wheel, security leaders can map AI risks to tangible business impacts,\u201d says Karpati.<\/p>\n<p>AI risks can also be mapped to the potential for financial losses from fraud or flawed decision-making, reputational damage from data breaches, biased outcomes or customer dissatisfaction, operational disruption from poor integration with legacy systems and system failures, and legal and regulatory penalties. 
CISOs can utilize frameworks like FAIR (Factor Analysis of Information Risk) to assess the likelihood of an AI-related event, estimate loss in monetary terms, and derive risk exposure metrics. \u201cBy analyzing risks from both qualitative and quantitative perspectives, business leaders can better understand and weigh security risks against financial benchmarks,\u201d says Karpati.<\/p>\n<p>In addition, with emerging regulatory requirements, CISOs will need to monitor draft regulations, track request-for-comment periods, watch for early warnings about new standards, and then prepare for implementation before ratification, says Marcus.<\/p>\n<p>Tapping into industry networks and peers can help CISOs stay abreast of threats and risks as they happen, while reporting functions in GRC platforms monitor any regulatory changes. \u201cIt\u2019s helpful to know what risks are manifesting in the field, what would have protected other organizations, and collectively building key controls and procedures that will make us as an industry more resilient to these types of threats over time,\u201d Marcus says.<\/p>\n<p>Governance is a critical part of the broader GRC framework and CISOs have an important role in setting the organizational rules and principles for how AI is used responsibly.<\/p>\n<h2 class=\"wp-block-heading\" id=\"developing-governance-policies\">Developing governance policies<\/h2>\n<p>In addition to defining risks and managing compliance, CISOs are having to develop new governance policies. \u201cEffective governance needs to include acceptable use policies for AI,\u201d says Marcus. \u201cOne of the early outputs of an assessment process should define the rules of the road for your organization.\u201d<\/p>\n<p>Marcus suggests a stoplight system \u2014 red, yellow, green \u2014 that classifies AI tools for use, or not, within the business. 
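<\/p>
<p>As a minimal sketch of the stoplight idea (the tool names and ratings below are hypothetical illustrations, not AuditBoard\u2019s actual register), the classification can live in a simple lookup that defaults unknown tools to review:<\/p>

```python
# Hypothetical red/yellow/green register for AI tools.
# Tool names and ratings are illustrative only.
REGISTER = {
    'approved-chat-assistant': 'green',   # reviewed and approved for use
    'code-helper-beta': 'yellow',         # needs assessment per use case
    'unvetted-scraper-ai': 'red',         # lacks protections; prohibited
}

def check_tool(name):
    # Tools not yet in the register default to 'yellow':
    # usable only after a review, mirroring assess-before-use.
    return REGISTER.get(name, 'yellow')
```

<p>Defaulting unknown tools to \u2018yellow\u2019 rather than \u2018green\u2019 keeps newly discovered shadow AI inside the review process instead of silently approved.<\/p>
<p>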
It provides clear guidance to employees, gives technically curious staff a safe space to explore, and enables security teams to build detection and enforcement programs. Importantly, it also lets security teams offer a collaborative approach to innovation.<\/p>\n<p>\u2018Green\u2019 tools have been reviewed and approved, \u2018yellow\u2019 tools require additional assessment and specific use cases, and those labeled \u2018red\u2019 lack the necessary protections and are prohibited from employee use.<\/p>\n<p>At AuditBoard, Marcus and the team have developed a standard for AI tool selection that includes protecting proprietary data and retaining ownership of all inputs and outputs, among other things. \u201cAs a business, you can start to develop the standards you care about and use these as a yardstick to measure any new tools or use cases that get presented to you.\u201d<\/p>\n<p>He recommends CISOs and their teams define the guiding principles up front, educate the company about what\u2019s important and help teams self-enforce by filtering out things that don\u2019t meet that standard. \u201cThen by the time [an AI tool] gets to the CISO, people have an understanding of what the expectations are,\u201d Marcus says.<\/p>\n<p>When it comes to specific AI tools and use cases, Marcus and the team have developed \u2018model cards\u2019, one-page documents that outline the AI system architecture, including inputs, outputs, data flows, intended use case, third parties, and what data the system is trained on. \u201cIt allows our risk analysts to evaluate whether that use case violates any privacy laws or requirements, any security best practices and any of the emerging regulatory frameworks that might apply to the business,\u201d he tells CSO.<\/p>\n<p>The process is intended to identify potential risks and communicate them to stakeholders within the organization, including the board. 
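<\/p>
<p>A model card along the lines described above can be sketched as a small structured record; the field names and example values here are illustrative assumptions, not AuditBoard\u2019s actual template:<\/p>

```python
# Hypothetical one-page 'model card' capturing the kinds of fields
# described in the article; structure and names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    name: str
    intended_use: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    data_flows: List[str] = field(default_factory=list)
    third_parties: List[str] = field(default_factory=list)
    training_data_summary: str = ''

# Example card for a hypothetical internal tool.
card = ModelCard(
    name='support-triage-bot',
    intended_use='classify inbound support tickets',
    inputs=['ticket text'],
    outputs=['priority label'],
    third_parties=['hosted-llm-vendor'],
)
```

<p>Keeping cards in a uniform structure like this is what makes it possible to aggregate common risks across dozens of use cases.<\/p>
<p>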
\u201cIf you\u2019ve evaluated dozens of these use cases, you can pick out the common risks and common themes, aggregate those and then come up with strategies to mitigate some of those risks,\u201d he says.<\/p>\n<p>The team can then look at which compensating controls can be applied, how far they extend across different AI tools, and provide this guidance to the executive. \u201cIt shifts the conversation from a more tactical conversation about this one use case or this one risk to more of a strategic plan for dealing with the \u2018AI risks\u2019 in your organization,\u201d Marcus says.<\/p>\n<p>Jamie Norton warns that now that the shiny interface of AI is readily accessible to everyone, security teams need to train their focus on what\u2019s happening under the surface of these tools. Applying strategic risk analysis, utilizing risk management frameworks, monitoring compliance, and developing governance policies can help CISOs guide the organization in its AI journey.<\/p>\n<p>\u201cAs CISOs, we don\u2019t want to get in the way of innovation, but we have to put guardrails around it so that we\u2019re not charging off into the wilderness and our data is leaking out,\u201d says Norton.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>As businesses incorporate cybersecurity into governance, risk and compliance (GRC), it is important to revisit existing GRC programs to ensure that the growing use and risks of generative and agentic AI are addressed so businesses continue to meet regulatory requirements. 
\u201c[AI] It\u2019s a hugely disruptive technology in that it\u2019s not something you can put into a box and say \u2018well that\u2019s AI\u2019,\u201d says Jamie Norton,&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14456\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14456","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14456","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14456"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14456\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14456"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14456"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14456"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}