The rise of AI-driven coding tools is revolutionizing software development, but new research highlights serious security risks linked to AI-generated code. As AI assistants like GitHub Copilot, ChatGPT, and Claude become more widespread, companies are facing new challenges in ensuring the security of their applications.
A recent report by application security management firm Apiiro, supported by Gartner Research, warns that the rapid adoption of generative AI (GenAI) in coding has created a “security trade-off”: AI tools boost development speed, but they also introduce more coding errors while security teams lack the manpower to review the extra code.
Apiiro’s analysis of millions of lines of code from various industries, including financial services and tech, reveals alarming security flaws in AI-generated code: a threefold increase in repositories containing personally identifiable information (PII) and payment data, a tenfold rise in APIs missing input validation, and a surge in exposed sensitive API endpoints. Each of these flaws gives cybercriminals an opening for a range of attacks.
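To make the input-validation finding concrete, here is a minimal sketch of the pattern at issue. It is not taken from Apiiro’s report; the routes, field names, and the save_payment helper are all hypothetical. The first handler trusts client JSON outright, the kind of shortcut that turns up in rapidly generated code, while the second validates every field before acting on it.

```python
# Hypothetical Flask endpoints illustrating the "missing input validation"
# pattern described above. All names and routes are invented for this sketch.
import re
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

def save_payment(user_id, card_number, amount):
    """Hypothetical persistence helper; a real app would write to a datastore."""
    print(f"saved payment for {user_id}")

# Vulnerable pattern: the handler trusts the client's JSON completely.
@app.route("/v1/payments/<user_id>", methods=["POST"])
def create_payment_unsafe(user_id):
    data = request.get_json(force=True)
    # No type, format, or range checks: malformed or malicious input
    # flows straight into storage and downstream business logic.
    save_payment(user_id, data["card_number"], data["amount"])
    return jsonify({"status": "ok"})

# Hardened pattern: validate every field before acting on it.
CARD_RE = re.compile(r"^\d{13,19}$")

@app.route("/v2/payments/<user_id>", methods=["POST"])
def create_payment_safe(user_id):
    data = request.get_json(silent=True)
    if not isinstance(data, dict):
        abort(400, description="expected a JSON object")
    card = data.get("card_number")
    amount = data.get("amount")
    if not isinstance(card, str) or not CARD_RE.fullmatch(card):
        abort(400, description="invalid card number")
    if not isinstance(amount, (int, float)) or not 0 < amount <= 10_000:
        abort(400, description="amount out of range")
    save_payment(user_id, card, amount)
    return jsonify({"status": "ok"})
```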
The explosive growth of AI tool usage, especially since OpenAI’s ChatGPT launch in late 2022, has intensified these risks. According to Apiiro, the number of pull requests, a key indicator of code creation, surged by 70% since Q3 2022, far outpacing the 30% growth in repositories and the 20% increase in developers. This acceleration in code creation highlights the growing gap between AI-driven development and manual security reviews.
Experts stress that traditional security review processes, which rely on manual oversight, are no longer sufficient in the age of AI coding. As developers race to meet demands for faster code creation, flaws such as exposed APIs and missing authorization checks are becoming more prevalent, leaving businesses exposed to cyberattacks.
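As an illustration of the missing-authorization pattern, the sketch below (again with hypothetical names, assuming a Flask app for consistency with the previous example) contrasts an endpoint anyone can reach with one gated by a simple bearer-token check, exactly the kind of guard that is easy to omit when code is generated in bulk.

```python
# Hypothetical example: an exposed endpoint versus one gated by an
# authorization check. Token handling is deliberately simplified.
from functools import wraps
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
VALID_TOKENS = {"demo-token"}  # stand-in for a real token store or verifier

def require_auth(view):
    """Reject the request before the handler runs unless a valid token is sent."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        header = request.headers.get("Authorization", "")
        token = header.removeprefix("Bearer ").strip()  # Python 3.9+
        if token not in VALID_TOKENS:
            abort(401, description="missing or invalid credentials")
        return view(*args, **kwargs)
    return wrapper

# Exposed pattern: anyone who can reach the URL can read the data.
@app.route("/admin/users")
def list_users_unprotected():
    return jsonify({"users": ["alice", "bob"]})

# Gated pattern: the decorator rejects unauthenticated callers first.
@app.route("/v2/admin/users")
@require_auth
def list_users_protected():
    return jsonify({"users": ["alice", "bob"]})
```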
To mitigate these risks, experts urge businesses to adopt automated security review systems that can keep up with the pace of AI-powered code creation. The challenge lies in balancing innovation with security, ensuring that the benefits of AI tools don’t come at the cost of organizational safety.