Security News

Open Source Maintainers Demand Ability to Block Copilot-Generated Issues and PRs

Open source maintainers are urging GitHub to let them block Copilot from submitting AI-generated issues and pull requests to their repositories.

Sarah Gooding

May 20, 2025

A fiery discussion has erupted on GitHub's feedback forums over a controversial new feature that enables users to submit AI-generated issues and pull requests via Copilot. In a post titled “Allow us to block Copilot-generated issues (and PRs) from our own repositories,” software developer Andi McClure sparked a wave of support for stronger maintainer controls, garnering over 500 upvotes in less than a day.

AI Submissions Without Disclosure

GitHub’s May 19 announcements introduced public previews for Copilot’s natural-language issue creation and its new coding agent, which can independently modify repositories, submit PRs, and iterate on feedback. The move was framed as a time-saving enhancement, but for many maintainers it landed as a hostile UX change that threatens to flood open source projects with low-quality, AI-generated slop.

McClure contends that AI submissions violate her projects’ code of conduct and increase the burden of moderation:

This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated 'AI' content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).

Adding to the frustration is GitHub’s apparent exemption of Copilot bots from the platform’s existing blocking mechanisms. Attempts to block usernames like copilot or copilot-pull-request-reviewer appear to be ineffective, prompting users to call for a simple opt-out or, at minimum, a toggle to ban AI-generated content at the repository level.

"As I am not the only person on this website with 'AI'-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account," McClure said.

"If we are not granted these tools, and 'AI' junk submissions become a problem, I may be forced to take drastic actions such as closing issues and PRs on my repos entirely, and moving issue hosting to sites such as Codeberg which do not have these maintainer-hostile tools built directly into the website."

Maintainers Seek More Control Over AI Contributions

Participants in the discussion expressed concern that GitHub is pushing Copilot deeper into core workflows without offering adequate ways to opt out. While some developers may welcome AI assistance, others argue that the platform should respect projects that explicitly reject machine-generated content, particularly when these submissions arrive with no visible attribution.

Currently, issues created with Copilot do not reliably indicate their origin in the GitHub UI or API. Maintainers worried that this lack of transparency could allow contributors to bypass repository rules or codes of conduct designed to limit AI use.

"Not only does Github provide site-integrated tools that allow users to submit fake, machine-generated issues and PRs in violation of my project's submission rules, they then cover up the fact the tools were used in the first place, preventing me from enforcing my project's rules manually," McClure said.

"This increases the maintainer load of detecting and removing this content even further. I'm going to have to basically introduce a verbal CAPTCHA."

One commenter noted that without clearer signals or enforcement tools, maintainers may have no choice but to abandon GitHub’s issue tracker entirely.

Several developers referenced alternative platforms such as Codeberg, Forgejo, and Gitea as more maintainer-friendly options, albeit with varying levels of CI/CD support. Others pointed to legal or organizational compliance concerns, especially for teams subject to strict data governance or anti-AI policies.

Growing Institutional Pushback on AI Slop Creeping Into Maintainer Workflows

Outside GitHub, some open source and industry projects are already taking steps to curb the growing presence of AI-generated contributions. Following a frustrating incident with an AI-generated fake vulnerability report, the curl project authored new documentation requiring contributors to disclose any use of AI tools when submitting security reports or pull requests, warning that AI-generated slop wastes valuable triage time. Inspired by curl's move, Swisscom recently updated its bug bounty guidelines to mandate full disclosure of any AI assistance, reinforcing the expectation that all claims be validated before submission.

"There is already a huge problem with low-effort AI-hallucinated issues that plague FOSS projects, that are costing the maintainers a lot of time to filter through to the point of burnout," Mikuláš Hrdlička commented on the GitHub discussion. "This will only make it worse."

The concerns raised in the GitHub discussion echo a growing pattern observed across open source and security workflows: the gradual creep of AI-generated “slop” into previously human-vetted communication channels.

While Copilot’s new issue and PR generation features are framed as productivity boosters, many developers see them as amplifying a problem that's already underway. Maintainers are being forced to sort through AI-authored content that often lacks accuracy, context, or meaningful intent. These low-effort, high-volume submissions increase review overhead and erode trust in contributor interactions, especially when AI involvement is not disclosed.

The pushback isn’t limited to Copilot’s role in code contributions. It also touches on its expanding presence in coordination layers, where vague reports, hallucinated bugs, or bot-generated walls of text can drain attention from legitimate contributors.

"I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web such as Kubernetes and Mongo is an extremely tone-deaf move at this early-stage of AI," @tukkek commented. "People report that the AI-generated section of Google searches hallucinate and provide false, misleading, and potentially-dangerous information daily. The technology simply isn't ready despite Microsoft's 80B$ investment on it this year and this will back-fire either badly or very badly."

Calls for stronger tooling, such as opt-out toggles, bot blocking, or visible attribution of AI involvement, stem from a desire to mitigate this trend before it scales further. This GitHub discussion is not just a feature request, but a flashpoint in the ongoing debate over how AI should interact with collaborative development ecosystems.
