{"id":14944,"date":"2025-10-13T12:16:51","date_gmt":"2025-10-13T12:16:51","guid":{"rendered":"https:\/\/newestek.com\/?p=14944"},"modified":"2025-10-13T12:16:51","modified_gmt":"2025-10-13T12:16:51","slug":"ai-red-flags-ethics-boards-and-the-real-threat-of-agi-today","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=14944","title":{"rendered":"AI red flags, ethics boards and the real threat of AGI today"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<section class=\"wp-block-bigbite-multi-title\">\n<div class=\"container\"><\/div>\n<\/section>\n<p>Paul Dongha is head of responsible AI and AI strategy at NatWest Group, where he leads the development of frameworks to ensure artificial intelligence is deployed safely, ethically and in line with regulatory expectations.<\/p>\n<p>Previously serving as group head of data and AI ethics at Lloyds Banking Group, <a href=\"https:\/\/champions-speakers.co.uk\/speaker-agent\/paul-dongha\" target=\"_blank\" rel=\"noreferrer noopener\">Paul Dongha<\/a> has been at the forefront of embedding transparency, accountability and trust into enterprise AI systems.<\/p>\n<p>With extensive experience shaping how financial institutions approach emerging technologies, Dongha offers a clear-eyed perspective on both the opportunities and the risks that AI presents for businesses and society.<\/p>\n<p>In this exclusive interview with the Champions Speakers Agency, he discusses the ethical red flags CISOs and boards must monitor, the responsibilities of regulators and the real-world risks that demand attention today.<\/p>\n<h2 class=\"wp-block-heading\" id=\"q-what-ethical-red-flags-should-cisos-and-boards-watch-for-when-deploying-ai-inside-their-organizations\">Q: What ethical red flags should CISOs and boards watch for when deploying AI inside 
their organizations?<\/h2>\n<p><strong>Paul Dongha<\/strong>: \u201cI think some of the standout issues and risks that we have with AI systems that have come to light recently are things like human agency.<\/p>\n<p>\u201cAI systems have the ability to create sophisticated outputs and, to some extent, that takes away from humans their ability to make the right decisions. The loss of human agency is something that we have to be very aware of and that risk has to be mitigated.<\/p>\n<p>\u201cAnother risk is robustness. AI systems have the ability to sometimes give different answers to the same questions, so I think technical robustness \u2014 ensuring AI systems generate the same result for the same question over time \u2014 is something that has to be looked at as well.<\/p>\n<p>\u201cData privacy is another. The ability of AI systems to inadvertently leak confidential or private information about individuals or organizations is something we also have to guard against.<\/p>\n<p>\u201cI think transparency is a really important one. The way a machine learning or an AI system works is nonlinear, so understanding how it arrives at a decision is hard to do. There are techniques that allow us to introspect how an AI system derived a particular answer, but they are only approximations. Transparency of the algorithm, whether it\u2019s machine learning or generative AI, is something that we have to pay close attention to.<\/p>\n<p>\u201cThen there\u2019s bias. We\u2019ve seen bias creep into many systems and that really takes away their ability to support diversity and inclusion. Those biases can be inherent in the data that trains our AI systems or within the system development life cycle. It\u2019s an ongoing area of work and with generative AI, it\u2019s a particular problem because of the vast amount of training data involved.<\/p>\n<p>\u201cAnd finally, accountability. 
Organizations, particularly commercial organizations, need to demonstrate that they\u2019ve got processes in place where, if people need to seek redress for the output of an AI system, they\u2019re able to do so. Firms should take full accountability for how they create systems and how they operate.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"q-should-every-large-enterprise-have-an-ai-ethics-board-and-what-should-its-remit-include\">Q: Should every large enterprise have an AI ethics board \u2014 and what should its remit include?<\/h2>\n<p><strong>Paul Dongha<\/strong>: \u201cWhen it comes to the executives and decision-makers of large corporations, I think there are a few things here.<\/p>\n<p>\u201cFirstly, I believe an ethics board is absolutely mandatory. It should be comprised of senior executives drawn from a diverse background within the organization, where those participants have a real feel for their customers and what their customers want.<\/p>\n<p>\u201cThose members should be trained in ethics, should understand the pitfalls of artificial intelligence and should make decisions around which AI applications are exposed to customers.<\/p>\n<p>\u201cImportantly, those ethics boards shouldn\u2019t rely just on IT systems to answer ethical questions. Ethics boils down to a discussion between different stakeholders. An ethics board is there to debate and to discuss edge cases \u2014 for example, the launch of an application where there may be disagreement over whether it could cause harm or whether it could be a surprise to customers.<\/p>\n<p>\u201cI also believe a chief responsible AI officer should be appointed to the board of every bank \u2014 and arguably every large organization \u2014 to oversee the end-to-end risk management of applications both during build and post-deployment. 
Ethics has to be considered at every stage of development and launch.<\/p>\n<p>\u201cRisk management practices and the audit function should all be folded into the remit of a responsible AI officer to ensure strong oversight.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"q-are-regulators-and-governments-moving-fast-enough-to-keep-ai-risks-under-control\">Q: Are regulators and governments moving fast enough to keep AI risks under control?<\/h2>\n<p><strong>Paul Dongha<\/strong>: \u201cI believe our governments and democratically elected institutions, as well as sectoral regulators, have a huge role to play in this.<\/p>\n<p>\u201cWe as a society elect our governments to look after us. We have a legislative process \u2014 even with something as simple as driving, we have rules to ensure that vehicles are maneuvered correctly. Without those rules, driving would be very dangerous. AI is no different: Legislation and rules around how AI is used and deployed are incredibly important.<\/p>\n<p>\u201cCorporations are accountable to shareholders, so the bottom line is always going to be very important to them. That means it would be unwise to let corporations themselves implement the guardrails around AI. Governments have to be involved in setting what is and isn\u2019t reasonable, what is too high a risk and what is in the public interest.<\/p>\n<p>\u201cTechnology companies need to be part of that conversation, but they should not be leading it. Those conversations must be led by the institutions we elect to look after society.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"q-how-real-is-the-threat-of-artificial-general-intelligence-and-what-risks-demand-our-attention-today\">Q: How real is the threat of artificial general intelligence \u2014 and what risks demand our attention today?<\/h2>\n<p><strong>Paul Dongha<\/strong>: \u201cArtificial general intelligence, which is about AI approaching human-level intelligence, has been the holy grail of AI research for decades. 
We\u2019re not there yet. Many aspects of human intelligence \u2014 social interactions, emotional intelligence, even elements of computer vision \u2014 are things the current generation of AI is simply incapable of.<\/p>\n<p>\u201cThe recent transformer-based technologies look extremely sophisticated, but when you open the hood and examine how they operate, they do not work in the way humans think or behave. I don\u2019t believe we\u2019re anywhere near achieving AGI and in fact the current approaches are unlikely to get us there.<\/p>\n<p>\u201cSo my message is that there\u2019s no need to be worried about any imminent superintelligence or Terminator situation. But we do need to be aware that, in the future, it\u2019s possible. That means we have to guard against it.<\/p>\n<p>\u201cIn the meantime, there are real and pressing risks with today\u2019s generation of AI: weaponization, disinformation and the ability for nefarious states to use generative AI to influence electorates. Even without AGI, current systems have great power \u2014 and in the wrong hands, that power can cause serious harm to society.\u201d<\/p>\n<p><strong>This article is published as part of the Foundry Expert Contributor Network.<br \/><\/strong><a href=\"https:\/\/www.csoonline.com\/expert-contributor-network\/\"><strong>Want to join?<\/strong><\/a><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Paul Dongha is head of responsible AI and AI strategy at NatWest Group, where he leads the development of frameworks to ensure artificial intelligence is deployed safely, ethically and in line with regulatory expectations. Previously serving as group head of data and AI ethics at Lloyds Banking Group, Paul Dongha has been at the forefront of embedding transparency, accountability and trust into enterprise AI systems&#8230;. 
<\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=14944\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14944","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14944","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14944"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/14944\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14944"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14944"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14944"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}