A manual pentest can cost $50,000. Intruder built an AI that does it in minutes.
What happened
Intruder, a London cybersecurity company with ties to GCHQ’s Cyber Accelerator, has launched AI-powered penetration testing agents that mimic human pentesters. These agents complete security tests in minutes, replacing traditional manual engagements that cost between $10,000 and $50,000, take weeks to schedule, and take days to run. By automating the methodology of human experts, the AI-driven approach delivers faster, up-to-date vulnerability reports.
Why it matters
Manual pentests have long been expensive and slow, leaving organizations exposed because findings are often outdated by the time reports are delivered. Intruder’s AI pentesters pressure that model by making penetration testing cheaper and near-instant. This shifts incentives toward more frequent testing, reducing the window in which vulnerabilities can be exploited. It undercuts pentest vendors that rely on slow, manual processes and forces businesses to reconsider security budgets and risk cycles. Faster, more affordable tests improve security hygiene and raise the bar for attackers, whose targets now face continuous scrutiny.
What changes in practice
Builders and security engineers can automate routine vulnerability assessments directly into development workflows, speeding fixes before release. Founders can cut pentest costs dramatically and fit security reviews into rapid iteration cycles without lengthy vendor engagements. Buyers of pentest services will demand instant or on-demand scans rather than waiting weeks. This pressures manual pentest providers to adopt automation or lose market share. Security teams can shift focus from scheduling and chasing reports to quickly prioritizing and fixing AI-identified risks.
Smaller businesses especially benefit by gaining professional-grade testing that was previously cost-prohibitive. Investors in security startups should watch companies that automate pentesting, since they stand to disrupt traditional service models and reshape compliance dynamics. Compliance auditors might require more frequent evidence of penetration tests, which AI can easily generate. The risk landscape changes as software moves faster but with ongoing automated security checks embedded in release pipelines.
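The release-pipeline pattern described above can be sketched as a simple gate that blocks a build when automated scan findings exceed a severity threshold. This is a minimal illustration under assumed conventions: the `Finding` structure, severity names, and threshold are hypothetical, not Intruder's actual API or report format.

```python
# Minimal sketch of a CI release gate driven by automated scan findings.
# The findings schema and severity threshold are hypothetical assumptions,
# not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str  # one of "low", "medium", "high", "critical"

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def release_blocked(findings: list[Finding], threshold: str = "high") -> bool:
    """Return True if any finding meets or exceeds the blocking threshold."""
    bar = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f.severity] >= bar for f in findings)

if __name__ == "__main__":
    results = [
        Finding("Outdated TLS configuration", "medium"),
        Finding("SQL injection in login form", "critical"),
    ]
    if release_blocked(results):
        print("Release blocked: resolve high/critical findings first.")
```

In a real pipeline, the findings list would come from the scanner's report and a nonzero exit code would fail the build step; the point is that near-instant results make this kind of per-release gating practical.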
Who should pay attention
Cybersecurity teams and product security engineers face the biggest impact because they will adopt new tools and workflows for AI-driven pentesting. CTOs and founders at startups and SMBs need to reevaluate security budgets to capitalize on lower-cost, faster tests. Manual pentest firms must innovate or risk commoditization. Security-conscious investors should track companies offering AI pentesting to understand evolving competitive and regulatory pressures. Compliance officers will want to track how automated pentest reports fit into audit and risk frameworks.
What to watch next
Watch for adoption signals like pentest vendors integrating AI capabilities, companies shrinking testing cycles, or compliance standards evolving to accept AI-generated reports. Customer case studies showing reduced pentest costs and faster patch timelines will confirm practical benefits. Also track whether attackers shift tactics as AI pentesting eliminates low-hanging vulnerabilities. Finally, regulators’ acceptance of, or pushback on, AI-generated results as proof of security will determine how deeply this technology reshapes the pentest market.
AI Quick Briefs Editorial Desk