Overview
- Escape.tech scanned 5,600 publicly deployed apps built with AI tools and reported over 2,000 high‑impact vulnerabilities plus 400 exposed secrets, estimating roughly one in three shipped with a serious flaw.
- A CodeRabbit analysis of 470 GitHub pull requests found that AI‑authored code introduced about 1.7 times as many issues as human‑written code, with security vulnerabilities up to 2.74 times more common and logic errors 75% more frequent.
- Security leaders warn of more AI‑generated applications entering production in 2026, with some predicting major incidents unless oversight improves.
- Recommended safeguards include mandatory automated security scanning, human review for code handling authentication, payments or personal data, and practices like agentic engineering under structured oversight.
- Investment in AI development platforms remains strong as firms reassess where competitive moats lie, shifting emphasis toward data, distribution and integrations, while a remediation ecosystem grows around services and tools such as Humans Fix AI, Snyk, Semgrep, CodeRabbit and Escape.tech.
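The mandatory‑scanning safeguard described above can be sketched as a CI gate. The following is a minimal, illustrative GitHub Actions workflow using Semgrep's community rule packs; the workflow name, trigger and rule selection are assumptions for the example, not details from the reporting:

```yaml
# Illustrative CI gate: block merges when the scanner reports findings.
name: security-scan
on: [pull_request]

jobs:
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Semgrep with auto-configured rule packs
        run: |
          pip install semgrep
          # --error makes semgrep exit non-zero when findings exist,
          # which fails the job and blocks the pull request.
          semgrep scan --config auto --error
```

The human‑review requirement for authentication, payments or personal‑data code can be layered on top of such a gate, for example by routing those paths to designated reviewers with a CODEOWNERS file.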