AI Pentesting Agents Revolutionise Cybersecurity, Threatening Human Pen Testers

Key Points
- Intruder's AI pentesting agents replicate manual pen testing methodology in minutes
- The company's AI agents work by investigating vulnerability scanner findings
- The penetration testing market is valued at approximately 2.5 to 3 billion dollars
- The AI-native segment is growing faster, with companies like xBow reaching unicorn status
- The economics of manual pentesting are structurally broken, with a global cybersecurity workforce gap of 3.4 million unfilled positions
Intruder, a GCHQ-accelerated UK cybersecurity startup, has launched AI pentesting agents that replicate manual pen testing methodology in minutes. The broader market is racing to automate vulnerability discovery as AI compresses the gap between offence and defence. A manual penetration test costs between 10,000 and 50,000 dollars, takes weeks to schedule, days to execute, and produces a report that is out of date before the ink dries.
Intruder's AI pentesting agents work by investigating vulnerability scanner findings using the same methods a human pen tester would employ. When the scanner flags a potential issue, the AI agent interacts directly with the target system, sending requests, analysing responses, and probing for exposed data to determine whether the finding represents a genuine exploitable flaw or a false positive.
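Intruder has not published its agent internals, but the triage loop described above can be sketched in outline. The following is a minimal, illustrative Python sketch: the `fake_send` stub stands in for a real HTTP client, and the detection heuristics are hypothetical placeholders, not Intruder's actual logic.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    """A vulnerability scanner finding awaiting investigation."""
    url: str
    issue: str  # e.g. "possible directory listing"

def triage(finding: Finding, send: Callable[[str], str]) -> str:
    """Classify a scanner finding by interacting with the target,
    roughly as a human pen tester would: send a request, read the
    response, and decide whether the flaw is genuinely exploitable."""
    body = send(finding.url)
    # Hypothetical heuristic: does the response actually expose the
    # data the scanner suspected, or was the flag a false positive?
    if "Index of /" in body or "password" in body.lower():
        return "exploitable"
    return "false positive"

# Stub target standing in for a live system (hypothetical responses).
def fake_send(url: str) -> str:
    if "backup" in url:
        return "<title>Index of /backup</title>"
    return "<h1>OK</h1>"

print(triage(Finding("https://example.test/backup/", "directory listing"), fake_send))
# → exploitable
```

In a real agent the `send` callable would be a rate-limited HTTP client and the classification step would be far richer, but the shape, investigate then classify, is the same.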
The company's chief executive, Chris Wallis, will present the technology at KnowBe4's KB4-CON conference on 13 May. The pitch is simple: the depth of a manual pentest, available on demand, at a fraction of the cost. The timing is not accidental, as the cybersecurity industry is watching AI transform the attack side of the equation faster than the defence side can adapt.
The penetration testing market is valued at approximately 2.5 to 3 billion dollars and growing at 12 to 16 per cent annually. The AI-native segment is growing faster, with companies like xBow reaching unicorn status and Pentera surpassing 100 million dollars in annual recurring revenue. The economics of manual pentesting are structurally broken, with a global cybersecurity workforce gap of 3.4 million unfilled positions, meaning there are not enough qualified pen testers to meet demand.
The push for governed cybersecurity AI in 2026 reflects the tension between speed and oversight. Industry telemetry in 2025 exceeded 308 petabytes across more than four million identities, endpoints, and cloud assets, producing nearly 30 million investigative leads. No human team can process that volume, but the EU AI Act classifies many security automation tools as high-risk AI systems, imposing requirements for transparency, human oversight, and robustness that autonomous pentesting agents may struggle to meet.
The geopolitics of AI cybersecurity have arrived, with the tools that find vulnerabilities becoming strategic assets, and access to them distributed along lines that favour US technology companies and their chosen partners. The question is whether the AI agents that find vulnerabilities will consistently arrive before the AI agents that exploit them, or whether the gap between offence and defence that has defined cybersecurity for decades will simply be reproduced at machine speed.