{"id":15929,"date":"2026-03-10T09:41:03","date_gmt":"2026-03-10T09:41:03","guid":{"rendered":"https:\/\/newestek.com\/?p=15929"},"modified":"2026-03-10T09:41:03","modified_gmt":"2026-03-10T09:41:03","slug":"openai-to-acquire-promptfoo-to-strengthen-ai-agent-security-testing","status":"publish","type":"post","link":"https:\/\/newestek.com\/?p=15929","title":{"rendered":"OpenAI to acquire Promptfoo to strengthen AI agent security testing"},"content":{"rendered":"<div>\n<div id=\"remove_no_follow\">\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<p>OpenAI said it plans to acquire AI testing startup Promptfoo, a move aimed at strengthening security checks for AI agents as enterprises prepare to deploy autonomous systems in business workflows.<\/p>\n<p>Promptfoo\u2019s tools allow developers to test LLM applications against adversarial prompts, including prompt injection and jailbreak attempts, and to evaluate whether models follow safety and reliability guidelines.<\/p>\n<p>In a statement, OpenAI said Promptfoo\u2019s technology will be integrated into OpenAI Frontier, its platform for <a href=\"https:\/\/www.computerworld.com\/article\/4135372\/with-frontier-openai-hopes-to-own-the-enterprise-agent-stack.html\">building and operating AI coworkers<\/a>.<\/p>\n<p>OpenAI added that the Promptfoo team has built tools used by more than 25% of Fortune 500 companies, including an open-source command-line interface and library designed to evaluate and red-team large language model applications. 
OpenAI plans to continue developing the open-source project while expanding enterprise capabilities within its Frontier platform.<\/p>\n<p>Analysts say the acquisition marks a broader inflection point in AI agent deployment, with enterprises shifting their focus from raw model capabilities to secure and governed AI systems.<\/p>\n<p>Industry research underscores these concerns. IDC\u2019s 2025 Asia\/Pacific Security Study showed that organizations cite AI-enhanced phishing and impersonation attacks such as deepfakes and voice cloning, AI-powered ransomware, and LLM prompt injection or model manipulation among their top concerns.<\/p>\n<p>Additional risks include automated malware creation using AI, AI-driven business logic attacks and disinformation campaigns, as well as model poisoning during training, said <a href=\"https:\/\/my.idc.com\/getdoc.jsp?containerId=PRF005665\" target=\"_blank\" rel=\"noreferrer noopener\">Sakshi Grover<\/a>, senior research manager for IDC Asia Pacific Cybersecurity Services.<\/p>\n<p>\u201cThese reflect that enterprises view AI not only as a productivity tool but also as an expanding attack surface,\u201d Grover said. \u201cIn this context, the ability to systematically test AI systems for vulnerabilities such as prompt injection, data leakage, and unsafe model behavior becomes essential.\u201d<\/p>\n<h2 class=\"wp-block-heading\" id=\"ai-testing-becomes-baseline\">AI testing becomes baseline<\/h2>\n<p>LLMs introduce <a href=\"https:\/\/www.csoonline.com\/article\/4047974\/agentic-ai-a-cisos-security-nightmare-in-the-making.html\">new types of vulnerabilities<\/a> that traditional application testing tools were not designed to detect. 
Companies moving generative AI projects from pilot stages into production are increasingly forced to consider evaluation and red-teaming tools as a core part of their AI development pipelines.<\/p>\n<p>\u201cRed-teaming, governance, and evaluation tools are becoming the new table stakes,\u201d said <a href=\"https:\/\/www.linkedin.com\/in\/meetneilshah\/\" target=\"_blank\" rel=\"noreferrer noopener\">Neil Shah<\/a>, VP for research at Counterpoint Research. \u201cSecurity must be multi-layered, integrated first at the development stage to simulate vulnerabilities, and second during real-time monitoring and prompt execution.\u201d<\/p>\n<p>Many organizations are now adopting testing practices for AI that mirror traditional application security processes, according to <a href=\"https:\/\/confidis.co\/about\/our-leadership-team\/\">Keith Prabhu<\/a>, founder and CEO of Confidis.<\/p>\n<p>\u201cThis \u2018shift-left\u2019 approach is used extensively today for application security testing,\u201d Prabhu said. \u201cThis tried and tested approach has helped improve the security of the final output. It is logical that AI models and tools will also follow a similar \u2018shift-left\u2019 approach to testing.\u201d<\/p>\n<p>System integrators and managed security service providers are also increasingly incorporating AI testing tools into their service offerings, particularly as organizations begin deploying AI-assisted security operations centers.<\/p>\n<p>\u201cIn autonomous SOC environments, where AI systems may triage alerts, generate responses, or trigger playbooks, continuous evaluation of model behavior is essential to prevent misuse or operational disruption,\u201d Grover said. 
\u201cEnterprises are increasingly embedding AI evaluation platforms into DevSecOps workflows so that models, prompts, and agent behaviors can be tested continuously before and after deployment.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI said it plans to acquire AI testing startup Promptfoo, a move aimed at strengthening security checks for AI agents as enterprises move toward deploying autonomous systems in business workflows. Promptfoo\u2019s tools allow developers to test LLM applications against adversarial prompts, including prompt injection and jailbreak attempts, and to evaluate whether models follow safety and reliability guidelines. In a statement, OpenAI said Promptfoo\u2019s technology will&#8230; <\/p>\n<p class=\"more\"><a class=\"more-link\" href=\"https:\/\/newestek.com\/?p=15929\">Read More<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15929","post","type-post","status-publish","format-standard","hentry","category-uncategorized","is-cat-link-borders-light is-cat-link-rounded"],"_links":{"self":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15929","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=15929"}],"version-history":[{"count":0,"href":"https:\/\/newestek.com\/index.php?rest_route=\/wp\/v2\/posts\/15929\/revisions"}],"wp:attachment":[{"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=15929"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=15929"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newestek.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=15929"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}