Author: Wall Street CN
AI makes it possible for everyone to write code, but no one tells you what to do with the code after it's written.
On April 6, New York Times reporters Mike Isaac and Erin Griffith published an article revealing another side of the widespread adoption of AI programming tools: code overload.
After a financial services company introduced the AI programming tool Cursor, monthly code output jumped from 25,000 lines to 250,000, a tenfold increase, followed by a backlog of one million lines of code awaiting review. "They simply can't keep up with the growth in code delivery and the resulting surge in vulnerabilities," said Joni Klippert, co-founder and CEO of security startup StackHawk.
This is not an isolated case, but a new reality that the entire industry is facing.

The code factory "exploded".
In November of last year, Anthropic and OpenAI upgraded the underlying models of their programming tools, Claude Code and Codex, respectively. Reportedly, this upgrade transformed AI programming agents from "occasionally useful assistants" into "fully automated code generation machines": with minimal human guidance, they can now complete, in very little time, programming tasks that once took weeks.
A Google survey in September 2025 showed that 90% of software developers were already using AI to assist their work, and 71% of programmers were using AI to write code.
The explosion in code output has brought about a thorny problem: who will review it?
Replit President and Head of AI Michele Catasta put it frankly: "Everyone in the company has become a programmer; it's both a blessing and a curse."
In an internal memo this year, Meta's Chief Technology Officer, Andrew Bosworth, wrote: "Projects that used to require hundreds of engineers can now be completed by dozens. Work that used to take months can now be done in days." He added that AI has a "profound impact" on organizations like Meta.
Tido Carriero, Cursor's head of engineering, product, and design, put it more bluntly: "The software development factory has, to some extent, collapsed, and we are trying to reassemble the parts."
Security vulnerabilities: The cost of ignoring them
While the amount of code has increased dramatically, security auditing capabilities have not kept pace.
According to Tencent Technology, in May 2025 Replit employee Matt Palmer scanned 1,645 web applications created on the Vibe Coding platform Lovable. Of these, 170 (approximately 10.3%) had serious security vulnerabilities: anyone could access the user database without logging in and obtain names, emails, financial information, and API keys.
Palantir engineer Daniel Asaria extracted personal debt amounts, home addresses, and sensitive keywords from multiple Lovable applications in just 47 minutes.
Security research firm Escape subsequently conducted a broader scan of over 5,600 Vibe Coding applications, discovering more than 2,000 security vulnerabilities, over 400 exposed keys, and 175 breaches of personal privacy data, including medical records and bank accounts. Most of the creators of these applications lacked any security knowledge.
"The total number of application security engineers worldwide wouldn't meet the needs of American companies," said Joe Sullivan, an advisor at Silicon Valley venture capital firm Costanoa Ventures. He added that the large companies he's spoken to would each be willing to add another 5 to 10 such positions if they could find enough people.
Sullivan also pointed out a more insidious risk: AI programming tools run better on local laptops, leading more and more engineers to download the entire company's codebase to their personal computers. "This is a crazy risk that nobody thought of six months ago, and now they're figuring out how to solve it."
Open source community: "DDoS attacks" from spam pull requests
The impact of AI-generated code is particularly evident in the open-source community.
According to Tencent Technology, cURL founder Daniel Stenberg shut down his six-year-old bug bounty program in January 2026. The reason wasn't budget constraints, but the flood of AI-generated fake vulnerability reports overwhelming the maintenance team. In the three weeks before the shutdown, cURL received 20 submissions, none of which turned out to be genuine vulnerabilities. At the FOSDEM 2026 conference, Stenberg revealed that earlier in 2025 roughly one in six cURL security reports was valid; by the end of the year, the proportion had dropped to one in twenty or even one in thirty. He called the phenomenon a "DDoS attack on open source."
Steve Ruiz, founder of digital whiteboard startup tldraw, told The New York Times that last fall he began noticing a large number of unusual contributors: some would complete all the work only to abandon it at the final step of signing the contributor agreement, some ignored clear instructions, and some submitted batches of spam updates. Judging these to be AI bots, he closed the external contribution channel in January of this year. "The risk to the codebase is extremely high," he said. "This shock could jeopardize the reputation of the team, the community, and the project."
Ghostty creator Mitchell Hashimoto also banned all unapproved AI-generated code contributions in early 2026 and launched the trust-based Vouch system.
Xavier Portilla Edo, head of infrastructure at Voiceflow, offered a quantitative assessment: "Only one in ten AI-generated PRs is reasonable; the other nine are a waste of maintainers' time."
In February 2026, GitHub introduced two new settings that allow repositories to completely disable pull requests or restrict them to only collaborators.
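The report doesn't detail the new settings, but GitHub's longstanding interaction-limits REST API already offers a similar restriction. A minimal sketch (OWNER/REPO and the token are placeholders; this uses the existing API, not the new per-repository pull request switch the article describes):

```shell
# Restrict interactions (issues, PRs, comments) on a repository to
# collaborators only, for one month. Requires a token with repo admin rights.
# OWNER/REPO and GITHUB_TOKEN are placeholders.
curl -X PUT \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/interaction-limits \
  -d '{"limit": "collaborators_only", "expiry": "one_month"}'
```

Valid `limit` values are `existing_users`, `contributors_only`, and `collaborators_only`; the restriction expires automatically after the chosen `expiry`.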
When the platform itself starts offering a "shutdown" switch, the problem is structural. An AI engineer at a major company summarized it for Tencent Technology: "Developers vibe-coding garbage PRs waste open-source maintainers' time; security researchers vibe-generating garbage vulnerability reports waste reviewers' time. It completely disregards other people's time."
Efficiency illusion: It feels faster, but it's actually slower.
Have AI programming tools really improved efficiency? The data provides a surprising answer.
According to Tencent Technology, in a 2025 randomized controlled trial by METR (Model Evaluation &amp; Threat Research), 16 experienced open-source developers completed 246 real-world tasks in large, familiar code repositories, with each task randomly assigned to allow or forbid AI tools. The result: developers using AI tools actually took 19% longer to complete tasks.
More concerning is the cognitive bias: these developers expected the AI to make them 24% faster before the experiment, and still believed they were 20% faster after the experiment.
Meanwhile, a 2025 Stack Overflow developer survey showed that developers' trust in the accuracy of AI dropped from 40% the previous year to 29%, with 46% of developers explicitly stating that they did not trust the accuracy of AI tools.
The explosion in new app releases illustrates the scale of output behind this "efficiency illusion." According to Tencent Technology, citing Sensor Tower data, the number of iOS app releases in the US increased by 56% year-on-year in December 2025 and by 54.8% year-on-year in January 2026, both the fastest growth rates in four years. Appfigures statistics show that 557,000 new apps were submitted to the App Store in 2025, a 24% increase over 2024 and the largest wave of new submissions since 2016.
Apple has removed the Vibe Coding app Anything from the App Store (the app had raised $11 million at a $100 million valuation) and frozen updates to similar tools such as Replit and Vibecode for several months.
Using AI to solve the problems AI created
Faced with code overload, tech companies' answer remains the same: more AI.
Both Anthropic and OpenAI have launched AI-driven code review tools to automatically detect errors. Cursor acquired Graphite, a code review robot startup, last December and integrated its technology into its product to help engineers prioritize the most sensitive code review needs.
Whether this path will be successful remains to be seen.
According to Tencent Technology, Adam Wathan, the creator of Tailwind CSS, revealed in January 2026 that although Tailwind's monthly downloads reached 75 million, document traffic had decreased by about 40% compared to the beginning of 2023, and revenue had declined by nearly 80%. "Documentation is the only channel through which people discover our commercial product; without customers, we cannot sustain the development of the framework."
RedMonk analyst Kate Holterhoff has dubbed this phenomenon "AI Slopageddon." As Tencent Technology put it, the "shit mountain" crisis of AI code, borrowing the Chinese slang for unmaintainable legacy codebases, has only just begun.