AI-generated bug reports flood Open Sauce projects

11 December 2024


Developer Backlash 

A surge of low-quality software vulnerability reports generated by artificial intelligence (AI) has sparked frustration among open-source developers, who warn that the trend wastes valuable time and resources. 

Python Software Foundation security developer-in-residence Seth Larson said that an influx of AI-generated submissions is creating a "new era of slop security reports for open source."

He urged bug hunters to avoid relying on machine learning models to identify vulnerabilities. 

"Recently, I've noticed an uptick in extremely low-quality, spammy, and LLM-hallucinated security reports to open-source projects. These reports appear potentially legitimate at first glance and thus require time to refute," Larson said.

Larson suggested treating such reports as malicious, arguing they detract from essential security work and risk burning out maintainers, many of whom are volunteers. 

The Curl project, which maintains a widely used data transfer tool, continues to grapple with the issue.

Project maintainer Daniel Stenberg said he had a gutsful of this sort of thing after dealing with yet another AI-generated bug report. 

"We receive AI slop like this regularly and at volume. You contribute to [the] unnecessary load of Curl maintainers, and I refuse to take that lightly and I am determined to act swiftly against it. Now and going forward," Stenberg said. 

Stenberg criticised the submitter for failing to disclose that the report was AI-generated and for continuing the discussion with what appeared to be further automated responses. 

Larson called on bug reporters to verify their findings manually and avoid using AI tools that "cannot understand code." He urged platforms that accept vulnerability reports to implement measures to curb automated or abusive submissions. 
