AI turns patches into working exploits in 30 minutes, and the 90-day disclosure window is the casualty
What happened
AI language models can now reverse-engineer security patches into working exploits in roughly 30 minutes, exposing the underlying flaws far faster than human analysts can and putting pressure on the traditional 90-day vulnerability disclosure window. A veteran security researcher argues that these capabilities are making the established responsible-disclosure timeline obsolete.
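The core of what the models automate is patch diffing: comparing vulnerable and patched code so that the fix itself points straight at the flaw. A minimal toy sketch of that idea, using Python's standard difflib on two hypothetical code snippets (both snippets are invented for illustration, not drawn from any real patch):

```python
import difflib

# Hypothetical pre-patch function with a missing bounds check (toy example)
pre_patch = """\
def read_packet(buf, length):
    data = buf[:length]
    return data
"""

# Hypothetical post-patch version that adds the check
post_patch = """\
def read_packet(buf, length):
    if length > len(buf):
        raise ValueError("length exceeds buffer")
    data = buf[:length]
    return data
"""

# The unified diff immediately highlights the added check --
# and therefore the bug class the patch was fixing.
diff = list(difflib.unified_diff(
    pre_patch.splitlines(), post_patch.splitlines(),
    fromfile="pre", tofile="post", lineterm=""))
for line in diff:
    print(line)
```

Real patch analysis works on binaries or large codebases rather than two tiny functions, but the principle is the same: the added lines in a security fix are a map to the vulnerability, which is why publishing a patch starts the exploit clock.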
Why it matters
The 90-day disclosure window was designed to give vendors time to patch software before flaws became public knowledge and attackable. With AI compressing exploit development to minutes, attackers can weaponize a fix almost as soon as it ships. That erodes trust in the current system and forces a rethink of how vulnerabilities are managed: organizations can no longer treat a fixed disclosure timeline as a safety buffer, which raises the urgency of proactive detection, faster patch releases, and more rigorous internal security reviews, and leaves companies facing sharply higher exploitation risk.
What to watch next
Security teams should monitor how vendors and regulators respond to this new AI-driven dynamic. Expect pressure to shorten disclosure deadlines or change vulnerability management frameworks to keep pace with AI’s speed. Look for new tools and practices aimed at rapidly validating patches before release and detecting exploit attempts post-patch. Organizations will need to invest more in real-time threat intelligence and automated monitoring to stay ahead. How quickly the industry adapts to this challenge will shape risk and trust around software security in the AI era.
AI Quick Briefs Editorial Desk