{"id":14676,"date":"2025-08-25T15:54:30","date_gmt":"2025-08-25T15:54:30","guid":{"rendered":"https:\/\/newestek.com\/?p=14676"},"modified":"2025-08-25T15:54:30","modified_gmt":"2025-08-25T15:54:30","slug":"need-help-with-ai-safety-stay-ahead-of-risks-with-these-tools-and-frameworks","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14676","title":{"rendered":"Need help with AI safety? Stay ahead of risks with these tools and frameworks"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>The Cloud Security Alliance (CSA) has spent the past 14 years bringing together experts to help make complex technologies like cloud computing and artificial intelligence more manageable.<\/p>\n<p>In late 2023, CSA launched its most ambitious project yet: the <a href=\"https:\/\/cloudsecurityalliance.org\/ai-safety-initiative\">AI Safety Initiative<\/a>. Supported by major players like Amazon, Google, Microsoft, and OpenAI \u2014 along with the Cybersecurity and Infrastructure Security Agency (CISA) and universities \u2014 the AI Safety Initiative gives companies reliable guidance on how to use AI tools safely and responsibly.<\/p>\n<p>The AI Safety Initiative also helps close the divide between fast-moving technology and slower-moving government regulations. 
With practical tools like readiness checklists, hands-on frameworks, and recommendations that evolve alongside new laws, the initiative makes it easier for businesses to roll out AI without getting bogged down by compliance worries.<\/p>\n<h2 class=\"wp-block-heading\">From AI readiness to AI roadmaps<\/h2>\n<p>The AI Safety Initiative\u2019s main priority is sharing practical guardrails for the generative AI of today, while anticipating the needs of more advanced AI systems coming soon (artificial general intelligence and artificial superintelligence).<\/p>\n<p>Its focus areas span everything organizations need to use AI safely, including:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Comprehensive AI readiness checklists<\/strong> for organizations to evaluate how prepared they really are for AI.<\/li>\n<li><strong>Usage guidelines<\/strong> that align with existing security and governance practices.<\/li>\n<li><strong>Strategies for tackling ethical AI risks<\/strong> like bias and transparency.<\/li>\n<li><strong>AI security instructions<\/strong> for using AI safely to strengthen cybersecurity.<\/li>\n<li><strong>Attack resilience guidelines<\/strong> for understanding how AI systems can be penetrated and how to defend them.<\/li>\n<li><strong>Threat intelligence<\/strong> on how bad actors are already using AI.<\/li>\n<li><strong>Future-proof planning<\/strong> with roadmaps for tomorrow\u2019s AI challenges.<\/li>\n<\/ul>\n<p>\u201cWith the AI Safety Initiative, we\u2019re working hard to keep best practices in step with AI\u2019s fast pace \u2014 all while staying true to our roots of offering free tools that help bridge industry and government,\u201d says Illena Armstrong, President of the Cloud Security Alliance.<\/p>\n<h2 class=\"wp-block-heading\">The challenge of securing AI as it constantly changes<\/h2>\n<p>Launching a global AI safety coalition was no small task. 
One of the many challenges for the CSA was laying out a clear and concise vision. \u201cThe key was making sure the roadmap for the initiative could be understood and then fully supported by a diverse group of stakeholders,\u201d says Armstrong.<\/p>\n<p>Another big challenge was AI\u2019s unprecedented pace of evolution. The CSA needed to develop frameworks and tools \u2014 such as its AI Controls Matrix (AICM), with 18 domains and 243 control objectives \u2014 that would be current yet also forward-looking. This required continuous input from CSA\u2019s research working groups and its executive leadership council, which includes cybersecurity executives from Sallie Mae, Procter &amp; Gamble, Microsoft, and Anthropic, to name a few.<\/p>\n<p>Adding to the challenge was the need to adapt guidance for different industries so that recommendations worked equally well for financial services, healthcare, manufacturing, and other sectors. CSA\u2019s CSO Strategic Advisory Council is currently working on these industry-specific AI safety guidelines, says Armstrong.<\/p>\n<p>\u201cYou\u2019re managing tech giants, government agencies, academic researchers, and security professionals,\u201d Armstrong notes. \u201cEveryone has different priorities, but with our solid internal team and the committed experts supporting this effort, we\u2019re staying on top of these challenges.\u201d<\/p>\n<h2 class=\"wp-block-heading\">The impact of turning frameworks into real-world tools<\/h2>\n<p>In a year and a half, the AI Safety Initiative has already produced tangible results. 
More than 20 <a href=\"https:\/\/cloudsecurityalliance.org\/research\/publications\">research publications<\/a> have been released as part of the initiative, thanks to the work of <a href=\"https:\/\/cloudsecurityalliance.org\/research\/working-groups\">CSA\u2019s research working groups<\/a> covering AI governance and compliance, AI technology risk, AI controls, and AI organizational responsibilities.<\/p>\n<p>The initiative\u2019s flagship <a href=\"https:\/\/cloudsecurityalliance.org\/artifacts\/ai-controls-matrix\">AI Controls Matrix (AICM)<\/a>, a vendor-agnostic framework for cloud-based AI systems, has received positive feedback from the experts and industry leaders who have downloaded and applied it, says Armstrong.<\/p>\n<p>Additionally, as part of the initiative, the CSA launched <a href=\"http:\/\/riskrubric.ai\/\">RiskRubric.ai<\/a> in partnership with Harmonic, Noma Security, and Haize Labs. RiskRubric.ai is a scoring system for large language models (LLMs) that rates more than 40 models per month on transparency, reliability, security, privacy, safety, and reputation. Its end goal is to give enterprise leaders the information they need to make more responsible decisions when adopting AI.<\/p>\n<p>Education is another big win for the AI Safety Initiative. Through its <a href=\"https:\/\/cloudsecurityalliance.org\/education\/taise-support\">Trusted AI Safety Expert (TAISE) certificate program<\/a> \u2014 a partnership with Northeastern University \u2014 the CSA is helping to close a major skills gap by teaching professionals to develop, deploy, and govern AI responsibly.<\/p>\n<p><em>For its AI Safety Initiative, the CSA earned a <\/em><a href=\"https:\/\/event.foundryco.com\/cso-conference-awards\/awards\/\"><em>2025 CSO Award<\/em><\/a><em>. 
The award honors security projects that <\/em><a href=\"https:\/\/www.csoonline.com\/article\/570667\/us-cso50-2022-awards-showcase-world-class-security-strategies.html\"><em>demonstrate outstanding thought leadership and business value<\/em><\/a><em>.<\/em><\/p>\n<h2 class=\"wp-block-heading\">With big initiatives, have a unified vision but stay flexible<\/h2>\n<p>In rolling out the AI Safety Initiative, the CSA learned lessons that organizations can apply to any complex project, such as:<\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Start with a unified vision.<\/strong> A clear mission statement and scope help get all stakeholders on the same page.<\/li>\n<li><strong>Bring different voices to the table.<\/strong> Include government, enterprise, nonprofit, academic, and service-provider perspectives to boost credibility.<\/li>\n<li><strong>Stay vendor neutral.<\/strong> Remaining unbiased helps build trust and improves the chance your work is accepted across industries.<\/li>\n<li><strong>Communicate early and often.<\/strong> Sharing progress and success stories is vital to building momentum.<\/li>\n<li><strong>Be ready to adjust.<\/strong> In a field like AI, the ability to pivot in response to new developments is essential.<\/li>\n<\/ul>\n<p>\u201cThe need for flexibility is a common thread,\u201d says Armstrong. \u201cBeing able to adapt quickly as things evolve is fundamental to the AI Safety Initiative\u2019s longevity and success.\u201d<\/p>\n<p><strong>Discover More Insights from Security Leaders<\/strong><br \/>Want to see how top organizations are tackling today\u2019s most complex cybersecurity challenges? Join us at the CSO Conference &amp; Awards, where industry leaders share strategies, tools, and real-world lessons you can apply immediately. 
<a href=\"https:\/\/event.foundryco.com\/cso-conference-awards\/?utm_source=cso.com&amp;utm_medium=blog&amp;utm_campaign=CSO2025_Cloud_Security_Alliance\">Register now to secure your spot.<\/a><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The Cloud Security Alliance (CSA) has spent the past 14 years bringing together experts to help make complex technologies like cloud computing and artificial intelligence more manageable. In late 2023, CSA launched its most ambitious project yet: the AI Safety Initiative. Supported by major players like Amazon, Google, Microsoft, and OpenAI \u2014 along with the Cybersecurity and Infrastructure Security Agency (CISA) and universities \u2014 the&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14676\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14676","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light 
is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14676","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14676"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14676\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14676"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14676"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14676"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}