I wrote this because I made this mistake myself.
The other day, I was attempting to burn through my remaining Claude Code session limit before it reset. I was feeling productive, maybe a little too productive. So I found an open source journalism project I genuinely admire, saw some open issues, and thought I could help. I ran some tests on the code and did my best to verify that the changes were relevant and accurate. But I opened several pull requests across multiple repos in the span of about an hour. All AI-assisted. And that was the problem.
It doesn’t matter that my code was good (I think). The maintainers had no way to know that. To a small team receiving multiple AI-authored PRs from a stranger in rapid succession, the pattern looked like the start of a flood — the kind of flood they’d been reading about other projects drowning in. They had no reason to assume good faith from someone they’d never seen before. They had every reason to be concerned.
A maintainer from the project emailed me. They were gracious and patient about it — far more than they needed to be.
They explained that as a small team, they couldn’t review back-to-back AI-authored pull requests, especially several in one hour. They asked me to pick a single issue, make sure it followed best practices and passed tests in my local dev environment, and then let them know when it was ready for review. No anger. No public shaming. Just a clear, professional request to slow down and do it right.
In my case, the code itself was fine (I think). This was a false positive on quality. But it was a true positive on the pattern — and if they hadn’t said something, I probably would have kept going, submitting PRs on every open issue I felt comfortable tackling. That’s the thing about enthusiasm combined with powerful tools: it doesn’t feel like a flood when you’re the one sending it.
On top of that, even though I did my best to verify what I was submitting, I’m a beginner. There’s an old distinction between “known unknowns” and “unknown unknowns” — the things you know you don’t know versus the things you don’t even know to look for.
As an early-stage contributor, I had plenty of both. There are edge cases, architectural decisions, project-specific conventions, and backward compatibility concerns that an experienced contributor would catch but that I’d walk right past. I didn’t even know what questions to ask, let alone the answers. Following what you think is proper procedure isn’t the same as actually knowing what proper procedure is for a given project.
Every codebase has its own norms, and you can’t learn them from the outside.
That’s why, especially as a beginner, it’s worth going the extra mile before you even think about contributing: actually use the app or project you want to help with. Read through the codebase. Explore the existing issues and past pull requests to understand how the community works. And reach out to the maintainers first — ask if they’re open to AI-assisted contributions, ask if there are norms or practices you should know about, and ask which issues would be most helpful to tackle. A five-minute conversation can save everyone hours of wasted work.
And here’s the uncomfortable truth that goes beyond etiquette: even if you follow every best practice on this page, the maintainer may still not want your code. When AI makes writing code trivial, the code itself stops being the valuable part of a contribution.
Nikita Roy, a data scientist, Knight Fellow at ICFJ, and founder of Newsroom Robots, put it bluntly when I told her about my experience: “AI-generated PRs are putting real strain on maintainers right now, even well-intentioned ones, and it’s a big issue in tech circles. So even with following best practices, I don’t believe that’s necessarily the solution.”
Nikita pointed me to Steve Ruiz’s blog post about shutting down external PRs on tldraw, where he asked: “If writing the code is the easy part, why would I want someone else to write it?” The answer might be that the most valuable thing you can contribute isn’t code — it’s bug reports, documentation, testing, design feedback, or a well-written issue that helps the maintainer understand a problem they haven’t seen yet.
And my situation is still the mild version.
I at least took the time to verify what I was submitting. The problem is made far worse by people who don’t — who point an AI at a repo, generate a patch, and submit it without reading, testing, or understanding any of it. Maintainers can’t tell the difference at a glance between a well-tested AI-assisted PR and a completely untested one. The volume and the pattern look the same from their side.
I got lucky. I got a kind email from a patient person. Many open source maintainers aren’t in a position to be that generous. They’re unpaid volunteers maintaining projects that millions of people depend on, and they’re being hit with a flood of AI-generated contributions from strangers who never bothered to check their work.
Some maintainers have shut down their bug bounty programs. Others have closed their projects to outside contributions entirely. A few have started keeping public lists of repeat offenders. My experience was mild compared to what many of them deal with every day.
Using AI coding agents means you’ll be able to generate code faster than you ever could before. That power comes with a responsibility: as Simon Willison put it, your job is to deliver code you have proven to work.
Just because you can generate a pull request in five minutes doesn’t mean you should.
This post was originally published as part of the course materials for “Advanced prompt engineering for journalists,” a forthcoming MOOC from the Knight Center for Journalism in the Americas at UT Austin.
Read the full guide, list of case studies, and other course resources here.
