AI-Generated ‘Vibe Coding’ Raises New Software Supply‑Chain Security Risks

Key Points
- Developers use AI‑generated “vibe coding” to speed up software creation.
- AI models often train on outdated or insecure code, reintroducing known vulnerabilities.
- Lack of commit histories makes tracing code origins and accountability difficult.
- A survey found that a third of security leaders report over 60% of their code now comes from AI, yet only 18% maintain an approved tool list.
- Security experts warn that vulnerable populations may bear the brunt of new risks.
- Calls for human review, approved tool lists, and stronger governance to mitigate threats.
Developers are increasingly turning to AI‑generated code, dubbed “vibe coding,” to accelerate software creation. While the approach mirrors the efficiency of open‑source reuse, experts warn it introduces opaque code, potential vulnerabilities, and weakened accountability. Security firms highlight that AI models often draw on outdated or insecure codebases, making it hard to trace origins or audit outputs. A recent survey found that a third of security leaders report over 60% of their code now originates from AI, yet fewer than one‑fifth have approved tools for such development. The emerging risk landscape calls for new safeguards and clearer governance.
AI‑driven code becomes the new shortcut
Software engineers are adopting a practice known as “vibe coding,” where large language models (LLMs) generate code snippets that developers can quickly adapt. This mirrors the long‑standing reliance on open‑source libraries, offering speed and reduced effort for building applications.
Security experts raise alarms
Researchers caution that vibe coding adds a layer of opacity to software supply chains. Because AI models are trained on existing code—including legacy, vulnerable, or low‑quality repositories—the same weaknesses can reappear in newly generated code. The lack of transparent commit histories or pull‑request records, which are standard on platforms like GitHub, makes it difficult to trace who contributed what and whether the code has been audited.
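To make the concern concrete, the snippet below sketches the kind of known weakness that could resurface: a classic SQL‑injection pattern common in older repositories, shown alongside the parameterized form a human reviewer would expect. It is an illustrative assumption, not output from any specific model; the Python/sqlite3 setting, function names, and table schema are all hypothetical.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so a payload like "x' OR '1'='1" matches every row in the table.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats the input as data, not as SQL.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "x' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # leaks both rows
    print(find_user_safe(conn, payload))    # returns no rows
```

Run as written, the injection payload dumps the whole table through the unsafe path and nothing through the parameterized one; without a commit history or review trail, it is precisely this kind of regression that becomes hard to catch and attribute.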
Survey reveals widespread AI code adoption
A recent survey of chief information security officers, application security managers, and heads of development found that a third of respondents said more than 60% of their organization’s code was generated by AI. Despite this high adoption rate, only 18% said their organization maintains an approved list of tools for vibe coding, indicating a gap in governance.
Potential impact on vulnerable users
Industry leaders note that while AI‑generated tools can lower barriers for small businesses and underserved populations, the security implications may disproportionately affect those who lack resources to remediate vulnerabilities. The ease of creating functional code could inadvertently expose critical systems to risk.
Calls for new safeguards
Security professionals advocate for stronger lifecycle management, including human review of AI‑generated code, clear accountability mechanisms, and the development of approved toolchains. Without such measures, the software‑supply‑chain security challenges posed by vibe coding could mirror, or even exceed, those already seen with open‑source components.
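What such governance might look like in practice remains an open question. One minimal sketch, under stated assumptions, is a Git commit‑msg hook that enforces an approved tool list whenever a commit declares its generator in an “AI-Tool:” trailer. The trailer name, the allowlist contents, and the policy itself are hypothetical, not an established standard.

```python
#!/usr/bin/env python3
import sys

# Hypothetical team allowlist; in practice this would live in versioned config.
APPROVED_TOOLS = {"copilot", "internal-codegen"}

def check_commit_message(message: str) -> list[str]:
    """Return policy violations found in the message's AI-Tool trailers."""
    violations = []
    for line in message.splitlines():
        if line.lower().startswith("ai-tool:"):
            tool = line.split(":", 1)[1].strip().lower()
            if tool not in APPROVED_TOOLS:
                violations.append(f"'{tool}' is not an approved AI tool")
    return violations

if __name__ == "__main__":
    # Git invokes a commit-msg hook with the path to the message file.
    with open(sys.argv[1], encoding="utf-8") as f:
        problems = check_commit_message(f.read())
    for problem in problems:
        print(f"commit rejected: {problem}", file=sys.stderr)
    sys.exit(1 if problems else 0)
```

Installed as an executable .git/hooks/commit-msg, this would block commits that name an unapproved tool, though any real deployment would need server‑side enforcement as well, since local hooks are easily bypassed.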