AI is making it ever easier for bad actors to launch attacks, and a newly identified open source platform, CyberStrikeAI, seems to be lowering the bar even further.
The platform packages end-to-end attack automation into a single AI-native orchestration engine, and is linked to the threat actor behind the recent campaign that breached hundreds of Fortinet FortiGate firewalls. Its developer is believed to have “some ties” to the Chinese government, according to research from cybersecurity company Team Cymru.
According to its GitHub repository, CyberStrikeAI ships with 100-plus curated tools covering “the whole kill chain.” It comprises an “intelligent” orchestration engine, role-based testing with predefined security roles, a system featuring what it calls specialized testing skills, and “comprehensive” lifecycle management capabilities, the researchers said.
This type of easy-to-use tooling increasingly gives threat actors of all skill levels, including novices, the ability to launch attacks with just a few keystrokes.
“The adoption of CyberStrikeAI is poised to accelerate, representing a concerning evolution in the proliferation of AI-augmented offensive security tools,” Will Thomas, a senior threat intelligence advisor at Team Cymru, warned in a blog post.
Providing end-to-end automation
On its GitHub page, CyberStrikeAI claims it is an “auditable, traceable, and collaborative testing environment for security teams.” It features native Model Context Protocol (MCP) support, so it can easily connect with external data, tools, and systems without requiring separate integrations. It says it supports end-to-end automation, “from conversational commands to vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization.”
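For context on what MCP support implies: MCP is built on JSON-RPC 2.0, and a client invokes a server-side tool with a `tools/call` request. The sketch below shows the rough shape of such a message; the `port_scan` tool name and its argument are hypothetical, and this is a simplified illustration rather than a message captured from CyberStrikeAI.

```python
import json

# Schematic illustration only: the approximate shape of an MCP tools/call
# request (MCP uses JSON-RPC 2.0). The tool name and arguments below are
# invented for illustration and are not taken from CyberStrikeAI.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "port_scan",                    # hypothetical tool name
        "arguments": {"host": "192.0.2.10"},    # documentation-range address
    },
}
print(json.dumps(tool_call, indent=2))
```

The point is that any tool exposed this way becomes callable by an AI orchestration layer with no bespoke integration work, which is what lowers the bar for chaining tools together.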
The GitHub page outlines the product highlights:
- 100-plus prebuilt tool recipes and a human-readable YAML-based extension system;
- Attack-chain graph, risk scoring, and “step-by-step replay”;
- Password-protected web user interfaces (UIs) and audit logs;
- A knowledge base with vector search, hybrid retrieval, and searchable archives;
- Vulnerability management with create, read, update, delete (CRUD) operations, severity tracking, status workflow, and statistics;
- Batch task management that can organize task queues and add and execute multiple tasks sequentially.
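To make the batch task management feature concrete, here is a minimal Python sketch of a sequential task queue of the kind the list describes. All class, method, and task names are hypothetical, invented for illustration, and are not drawn from CyberStrikeAI’s code.

```python
from collections import deque

# Illustrative sketch only: a minimal queue that organizes tasks and runs
# them one at a time, in insertion order. Names are hypothetical, not
# CyberStrikeAI's actual API.
class BatchTaskQueue:
    def __init__(self):
        self._tasks = deque()

    def add(self, name, func, *args):
        """Queue a task (a callable plus its arguments) for later execution."""
        self._tasks.append((name, func, args))

    def run_all(self):
        """Execute every queued task sequentially and collect the results."""
        results = {}
        while self._tasks:
            name, func, args = self._tasks.popleft()
            results[name] = func(*args)
        return results

queue = BatchTaskQueue()
queue.add("scan", lambda host: f"scanned {host}", "192.0.2.10")
queue.add("report", lambda: "report generated")
print(queue.run_all())
```

A real orchestration engine would add error handling, logging, and per-task state, but the core idea, queue multiple tasks and execute them sequentially, is this simple.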
In addition, integrations with the DingTalk and Lark messaging platforms allow users to interact with CyberStrikeAI from their mobile devices.
CyberStrikeAI’s tooling supports a full attack chain, and includes network and vulnerability scanning; web and app testing; password cracking; exploitation and post-exploitation frameworks; container, cloud, and API security; subdomain enumeration (used to map additional attack surface); capture the flag (CTF) utilities; and forensic and binary analysis.
A dashboard helps users quickly understand core features and current state. Basic users can get started with a one-command deployment, while more advanced users can take on more complex tasks. These include predefined role-based testing (pen testing, CTF, web app scanning), custom prompts and tool restrictions, a skills system (with 20-plus skills, including SQL injection and API security) whose skills can be called on demand by AI agents, tool orchestration and extensions, and attack-chain intelligence.
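The combination of predefined roles, tool restrictions, and on-demand skills described above can be illustrated with a short Python sketch. The roles, skill names, and functions here are hypothetical, invented purely for illustration rather than taken from the tool.

```python
# Illustrative sketch only: how role-based tool restrictions of the kind the
# article describes might work. All role and skill names are hypothetical.
SKILLS = {
    "sql_injection_check": lambda target: f"tested {target} for SQL injection",
    "api_security_scan":   lambda target: f"scanned {target} API surface",
    "binary_analysis":     lambda target: f"analyzed binary at {target}",
}

# Each predefined role is limited to a subset of the available skills.
ROLE_PERMISSIONS = {
    "web_app_scanner": {"sql_injection_check", "api_security_scan"},
    "ctf_player":      {"binary_analysis"},
}

def invoke_skill(role, skill, target):
    """Run a skill only if the active role is permitted to use it."""
    if skill not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not use {skill!r}")
    return SKILLS[skill](target)

print(invoke_skill("web_app_scanner", "sql_injection_check", "example.test"))
```

An AI agent sitting on top of such a registry can only call the skills its assigned role permits, which is the kind of guardrail the “role-based testing” feature implies.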
“Making this kind of tooling available as public open source, given its sophistication and the ability to cause real harm, is irresponsible,” said David Shipley of Beauceron Security. “This is a whole new ballgame from past tools that can be used by ethical hackers and security researchers responsibly.”
Prediction: a proliferation of AI-augmented offensive security tools
CyberStrikeAI’s GitHub activity suggests its developer, known as Ed1s0nZ, interacts with Chinese private-sector firms with known ties to the Chinese Ministry of State Security (MSS).
Between January 20 and 26, the Team Cymru researchers observed 21 unique IP addresses running CyberStrikeAI, with servers primarily hosted in China, Singapore, and Hong Kong. This indicates a “sharp increase in operational usage” since the GitHub repository was created in November 2025, Team Cymru’s Thomas noted.
“As adversaries increasingly embrace AI-native orchestration engines, we expect to see a rise in automated, AI-driven targeting of vulnerable edge devices,” including firewalls and VPN appliances, he warned.
In the near future, defenders must prepare for an environment where tools like this, and other “AI-assisted privilege escalation projects,” lower the barrier to entry for complex network exploitation, he cautioned.
Beauceron’s Shipley added: “We truly have opened Pandora’s Box and a lot of organizations are going to be harmed. There’s no way they can keep up with this.”
It’s analogous to going “from muskets to AK-47s,” he noted, warning that knee-jerk reactions from lawmakers will harm even good-faith research efforts. “We’re in a lot of trouble in 2026, and this is only one of the tools hitting the streets.”