{"id":14968,"date":"2025-10-16T10:38:28","date_gmt":"2025-10-16T10:38:28","guid":{"rendered":"https:\/\/newestek.com\/?p=14968"},"modified":"2025-10-16T10:38:28","modified_gmt":"2025-10-16T10:38:28","slug":"coming-ai-regulations-have-it-leaders-worried-about-hefty-compliance-fines","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14968","title":{"rendered":"Coming AI regulations have IT leaders worried about hefty compliance fines"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>More than seven in 10 IT leaders are worried about their organizations\u2019 ability to keep up with regulatory requirements as they deploy generative AI, with many concerned about a potential patchwork of regulations on the way.<\/p>\n<p>More than 70% of IT leaders named regulatory compliance as one of their top three challenges related to gen AI deployment, according to a <a href=\"https:\/\/www.gartner.com\/en\/newsroom\/press-releases\/2025-10-06-gartner-predicts-ai-regulatory-violations-will-result-in-a-30-percent-increase-in-legal-disputes-for-tech-companies-by-2028\">recent survey from Gartner<\/a>. 
Less than a quarter of those IT leaders are very confident that their organizations can manage security and governance issues, including regulatory compliance, when using gen AI, the survey says.<\/p>\n<p>IT leaders appear to be worried about complying with a growing number of AI regulations, some of which may conflict with one another, says <a href=\"https:\/\/www.gartner.com\/en\/experts\/lydia-cloughertyjones\">Lydia Clougherty Jones<\/a>, a senior director analyst at Gartner.<\/p>\n<p>\u201cThe number of legal nuances, especially for a global organization, can be overwhelming, because the frameworks that are being announced by the different countries vary widely,\u201d she says.<\/p>\n<p>Gartner predicts that AI regulatory violations will result in a 30% increase in legal disputes for tech companies by 2028. By mid-2026, new categories of illegal AI-informed decision-making will result in more than $10 billion in remediation costs across AI vendors and users, the analyst firm also projects.<\/p>\n<h2 class=\"wp-block-heading\" id=\"just-the-start\">Just the start<\/h2>\n<p>Government efforts to regulate AI are likely in their infancy, with the <a href=\"https:\/\/commission.europa.eu\/news-and-media\/news\/ai-act-enters-force-2024-08-01_en\">EU AI Act<\/a>, which went into effect in August 2024, one of the first major pieces of legislation targeting the use of AI.<\/p>\n<p>While the US Congress has so far taken a hands-off approach, a handful of US states have passed AI regulations, with the 2024 <a href=\"https:\/\/leg.colorado.gov\/sites\/default\/files\/images\/fpf_legislation_policy_brief_the_colorado_ai_act_final.pdf\">Colorado AI Act<\/a> requiring AI users to maintain risk management programs and conduct impact assessments, and requiring both vendors and users to protect consumers from algorithmic discrimination.<\/p>\n<p>Texas has also passed its own AI law, which goes into effect in January 2026. 
The Texas <a href=\"https:\/\/www.dlapiper.com\/en\/insights\/publications\/2025\/06\/texas-adopts-the-responsible-ai-governance-act\">Responsible Artificial Intelligence Governance Act<\/a> (TRAIGA) requires government entities to inform individuals when they are interacting with an AI. The law also prohibits using AI to manipulate human behavior, such as inciting self-harm, or to engage in illegal activities.<\/p>\n<p>The Texas law includes <a href=\"https:\/\/natlawreview.com\/article\/traiga-key-provisions-texas-new-artificial-intelligence-governance-act\">civil penalties<\/a> of up to $200,000 per violation or $40,000 per day for ongoing violations.<\/p>\n<p>Then, in late September, California Governor Gavin Newsom signed the <a href=\"https:\/\/www.gov.ca.gov\/2025\/09\/29\/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry\/\">Transparency in Frontier Artificial Intelligence Act<\/a>, which requires large AI developers to publish descriptions of how they have incorporated national standards, international standards, and industry-consensus best practices into their AI frameworks.<\/p>\n<p>The California law, which also goes into effect in January 2026, mandates that AI companies report critical safety incidents, including cyberattacks, within 15 days, and includes provisions to protect whistleblowers who report violations of the law.<\/p>\n<p>Companies that fail to comply with the disclosure and reporting requirements face fines of up to $1 million per violation.<\/p>\n<p>California IT regulations have an outsize impact on global practices because the state\u2019s population of about 39 million gives it a huge number of potential AI customers protected under the law. California\u2019s population is larger than those of more than 135 countries.<\/p>\n<p>California also is the AI capital of the world, containing the headquarters of <a href=\"https:\/\/www.forbes.com\/lists\/ai50\/\">32 of the top 50<\/a> AI 
companies worldwide, including OpenAI, Databricks, Anthropic, and Perplexity AI. All AI providers doing business in California will be subject to the regulations.<\/p>\n<h2 class=\"wp-block-heading\" id=\"cios-on-the-forefront\">CIOs on the forefront<\/h2>\n<p>With US states and more countries potentially passing AI regulations, CIOs are understandably nervous about compliance as they deploy the technology, says <a href=\"https:\/\/www.linkedin.com\/in\/dhinchcliffe\">Dion Hinchcliffe<\/a>, vice president and practice lead for digital leadership and CIOs at market intelligence firm Futurum Equities.<\/p>\n<p>\u201cThe CIO is on the hook to make it actually work, so they\u2019re the ones really paying very close attention to what is possible,\u201d he says. \u201cThey\u2019re asking, \u2018How accurate are these things? How much can data be trusted?\u2019\u201d<\/p>\n<p>While some AI regulatory and governance compliance solutions exist, some CIOs fear that those tools won\u2019t keep up with the ever-changing regulatory and AI functionality landscape, Hinchcliffe says.<\/p>\n<p>\u201cIt\u2019s not clear that we have tools that will constantly and reliably manage the governance and the regulatory compliance issues, and it\u2019ll maybe get worse, because regulations haven\u2019t even arrived yet,\u201d he says.<\/p>\n<p>AI regulatory compliance will be especially difficult because of the nature of the technology, he adds. \u201cAI is so slippery,\u201d Hinchcliffe says. \u201cThe technology is not deterministic; it\u2019s probabilistic. AI works to solve all these problems that traditionally coded systems can\u2019t because the coders never thought about that scenario.\u201d<\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/tina-joros-0ab53413\/\">Tina Joros<\/a>, chairwoman of the Electronic Health Record Association AI Task Force, also sees concerns over compliance because of a fragmented regulatory landscape. 
The various regulations being passed could widen an already large digital divide between large health systems and their smaller and rural counterparts that are struggling to keep pace with AI adoption, she says.<\/p>\n<p>\u201cThe various laws being enacted by states like California, Colorado, and Texas are creating a regulatory maze that\u2019s challenging for health IT leaders and could have a chilling effect on the future development and use of generative AI,\u201d she adds.<\/p>\n<p>Even bills that don\u2019t make it into law require careful analysis, because they could shape future regulatory expectations, Joros adds.<\/p>\n<p>\u201cConfusion also arises because the relevant definitions included in those laws and regulations, such as \u2018developer,\u2019 \u2018deployer,\u2019 and \u2018high risk,\u2019 are frequently different, resulting in a level of industry uncertainty,\u201d she says. \u201cThis understandably leads many software developers to sometimes pause or second-guess projects, as developers and healthcare providers want to ensure the tools they\u2019re building now are compliant in the future.\u201d<\/p>\n<p><a href=\"https:\/\/contractpodai.com\/news\/leadership-team-expansion-leah-innovation\/\">James Thomas<\/a>, chief AI officer at contract software provider ContractPodAi, agrees that the inconsistency and overlap between AI regulations creates problems.<\/p>\n<p>\u201cFor global enterprises, that fragmentation alone creates operational headaches \u2014 not because they\u2019re unwilling to comply, but because each regulation defines concepts like transparency, usage, explainability, and accountability in slightly different ways,\u201d he says. \u201cWhat works in North America doesn\u2019t always work across the EU.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"look-to-governance-tools\">Look to governance tools<\/h2>\n<p>Thomas recommends that organizations adopt a suite of governance controls and systems as they deploy AI. 
In many cases, a major problem is that AI adoption has been driven by individual employees using personal productivity tools, creating a fragmented deployment approach.<\/p>\n<p>\u201cWhile powerful for specific tasks, these tools were never designed for the complexities of regulated, enterprise-wide deployment,\u201d he says. \u201cThey lack centralized governance, operate in silos, and make it nearly impossible to ensure consistency, track data provenance, or manage risk at scale.\u201d<\/p>\n<p>As IT leaders struggle with regulatory compliance, Gartner also recommends that organizations focus on training AI models to self-correct, create rigorous use-case review procedures, increase model testing and sandboxing, and deploy content moderation techniques such as abuse-reporting buttons and AI warning labels.<\/p>\n<p>IT leaders need to be able to defend their AI results, which requires a deep understanding of how the models work, says Gartner\u2019s Clougherty Jones. In certain risk scenarios, this may mean using an external auditor to test the AI.<\/p>\n<p>\u201cYou have to defend the data, you have to defend the model development, the model behavior, and then you have to defend the output,\u201d she says. \u201cA lot of times we use internal systems to audit output, but if something\u2019s really high risk, why not get a neutral party to be able to audit it? If you\u2019re defending the model and you\u2019re the one who did the testing yourself, that\u2019s defensible only so far.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>More than seven in 10 IT leaders are worried about their organizations\u2019 ability to keep up with regulatory requirements as they deploy generative AI, with many concerned about a potential patchwork of regulations on the way. More than 70% of IT leaders named regulatory compliance as one of their top three challenges related to gen AI deployment, according to a recent survey from Gartner. 
Less&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14968\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14968","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14968","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14968"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14968\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}