Report: Open source licensing conflicts hit an all-time high as organizations struggle to audit AI-generated code for IP risks


AI-generated code introduces considerable risk into the development process. A recent Sonatype report found that AI hallucinated 27% of upgrade recommendations for open source projects, while research from Veracode found that AI introduced security vulnerabilities in 45% of 80 coding tasks across more than 100 different LLMs. Now, new research from Black Duck sheds light on another pressing issue with AI-generated code: IP and licensing risks.

In its 2026 Open Source Security and Risk Analysis (OSSRA) report, the company analyzed 947 commercial codebases and found that two-thirds of them had license conflicts, the highest percentage in the report's history. That figure is up 12% from last year, which is also the largest single-year jump the report has ever recorded.

One of the codebases that Black Duck audited contained 2,675 distinct licensing conflicts, underscoring how complex managing IP has become.

“This rise is partly driven by ‘license laundering,’ where AI assistants generate code snippets derived from copyleft sources (like GPL) without retaining the original license information,” the company explained in a blog post. For example, the report shows that 17% of open source components are entering codebases outside of traditional package managers, through copied-and-pasted snippets, direct vendor inclusions, or AI generation. This presents a challenge, as code that enters this way may be invisible to traditional manifest-based scanning tools.
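As a rough illustration of why manifest-based scanners miss such code, the sketch below (hypothetical file names and license markers; this is not Black Duck's tooling) compares a codebase's declared dependencies against source files carrying copyleft license headers. A pasted or AI-generated snippet with a GPL header, but no manifest entry, is exactly the kind of component a manifest-only scan never sees:

```python
import re

# Hypothetical in-memory repo: path -> file contents.
# A real audit would walk the filesystem instead.
repo = {
    "package.json": '{"dependencies": {"left-pad": "1.3.0"}}',
    "src/app.js": "// in-house code\nconsole.log('hi');",
    # A snippet pasted in (or AI-generated) that carries a GPL
    # header but appears in no manifest:
    "src/vendor/fast_sort.js": (
        "// This program is free software: you can redistribute it\n"
        "// under the terms of the GNU General Public License.\n"
        "function sort(a) { return a.sort(); }"
    ),
}

# Components a manifest-based scanner would see.
declared = set(re.findall(r'"([\w-]+)":\s*"[\d.]+"',
                          repo["package.json"]))

# Copyleft markers a snippet-level scanner looks for.
COPYLEFT = re.compile(r"GNU General Public License|GPL", re.I)

def undeclared_copyleft(repo, declared):
    """Files with copyleft markers not traceable to any declared package."""
    hits = []
    for path, text in repo.items():
        if path == "package.json":
            continue
        if COPYLEFT.search(text) and not any(d in path for d in declared):
            hits.append(path)
    return hits

print(undeclared_copyleft(repo, declared))
# Flags src/vendor/fast_sort.js, which a manifest-only scan would miss.
```

Snippet-level scanners do something far more sophisticated (fingerprinting code against a corpus of open source projects), but the gap is the same: if a component never appears in the manifest, manifest-based tooling has nothing to report.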

This year’s OSSRA report also found that the mean number of vulnerabilities in code has nearly doubled since last year. Eighty-seven percent of the codebases had at least one vulnerability, 78% had high-risk vulnerabilities, and 44% had critical-risk vulnerabilities.

The company explained that it discovered a “zombie component” problem when digging into the research. Ninety-three percent of codebases contained components that hadn’t seen active development in two years, 92% contained components that were at least four years out of date, and only 7% of components in use were upgraded to the latest version.

“These abandoned components are a ticking time bomb. When a vulnerability is discovered in a project that hasn’t been touched in years, there is often no maintainer left to fix it. Organizations are left with difficult choices: fork the project, refactor the application, or accept the risk,” the researchers wrote.

Black Duck concluded that a key takeaway from this year’s report is that there is a growing gap between AI adoption and governance.

“As regulatory pressure mounts from frameworks such as the EU AI Act and Cyber Resilience Act, the ‘ship and forget’ model of software delivery is no longer viable. Organizations must move toward a model of continuous supply chain transparency, where every component, whether human-written, AI-generated, or open source, is accounted for,” Black Duck said.
