AI Coding Surge Overwhelms Security Teams, Creates New Risk

Digital Trends

Key Points

  • AI coding tools can increase monthly code output by up to ten times.
  • A financial services firm generated a backlog of one million unreviewed lines.
  • Shortage of application security engineers leaves firms vulnerable.
  • Developers often run AI tools on personal laptops, risking data exposure.
  • Anthropic, OpenAI and Cursor are adding automated code‑review features.
  • An AI‑written script caused a major Amazon outage, highlighting risks.

AI-powered coding assistants have accelerated software output dramatically, but the speed boost is outpacing security resources. One financial services firm using the Cursor tool saw monthly code production jump from 25,000 to 250,000 lines, leaving a backlog of one million unreviewed lines. Security experts warn that a shortage of application security engineers leaves firms exposed to vulnerabilities, especially as developers download entire codebases onto personal laptops. Companies such as Anthropic, OpenAI and Cursor are now racing to embed automated review features, yet human oversight remains essential.

When a financial services company swapped traditional development practices for the AI coding assistant Cursor, its code output exploded. Monthly lines of code surged from roughly 25,000 to 250,000, a tenfold increase that at first looked like a triumph. Within weeks, however, the firm faced a backlog of about one million lines of code that had never been reviewed for bugs or security flaws.

"The sheer amount of code being delivered, and the increase in vulnerabilities, is something they can’t keep up with," said Joni Klippert, CEO of StackHawk, a security startup that assists the firm. The rapid rise in unvetted code has turned into a systemic risk across Silicon Valley, where many organizations are now generating more software than their security staff can examine.

Application security engineers, the professionals tasked with catching errors in AI-generated code, are in short supply. "There are not enough application security engineers on the planet to satisfy what American companies need," warned Joe Sullivan, an adviser to Costanoa Ventures. The talent gap means that even as code volumes climb, the workforce capable of safeguarding that code remains flat.

Beyond staffing, the way AI tools are deployed creates additional hazards. Developers often run the models on personal laptops rather than secure corporate servers, pulling entire codebases onto devices that can be lost or stolen. A single missing laptop could expose sensitive data alongside the newly generated code.

Recognizing the looming threat, several AI firms have begun to embed code‑review capabilities directly into their platforms. Anthropic, OpenAI and Cursor are each working on automated security checks. Cursor recently acquired a startup specializing in code review to weave those functions into its product suite. "The software development factory kind of broke. We’re trying to rearrange the parts," said Cursor’s head of engineering.

Nevertheless, experts caution that AI‑driven reviewers are not a panacea. Human oversight remains critical before any code reaches production. The stakes were underscored when an AI‑written script caused an Amazon outage, resulting in more than 100,000 lost orders and 1.6 million errors. No company wants a repeat of that scenario, and the industry is still grappling with how to balance speed and security.

Tags: AI coding, software development, application security, code review, cybersecurity, StackHawk, Cursor, OpenAI, Anthropic, venture capital
