devtake.dev

RPCS3's maintainers will ban contributors who submit undisclosed AI pull requests

The PS3 emulator project posted on X on May 10, citing 'AI slop' that has been clogging review. The hard line: ban-on-sight if you don't disclose.

Soren Vanek · 4 min read · 3 sources
RPCS3 project logo on a solid black background, from the official rpcs3.net press graphic
Image: rpcs3.net

The maintainers of RPCS3, the open-source PlayStation 3 emulator, posted on X on May 10 telling contributors to stop submitting AI-generated “slop” pull requests. The new policy is a ban-on-sight for anyone who submits AI-written code without disclosing it.

This isn’t a one-off frustration. RPCS3 joins a growing list of well-resourced open-source projects pushing back on the same noise pattern: curl, the Linux kernel security teams, and Mozilla have each flagged AI-generated submissions as a maintainer-time tax. The difference at RPCS3 is the explicit ban. “You can’t possibly handwrite the type of shit AI slop we have been seeing,” one maintainer wrote in response to a contributor who asked how they would know.

What we know

The X post and the maintainer responses are the primary source. The team framed the policy in two parts: stop sending AI-generated PRs, and disclose if you used AI as a tool. The combination matters because RPCS3 isn’t taking a blanket position against AI assistance, only against shipping unreviewed model output as a contribution.

The reason emulation makes this worse than average is precision. RPCS3 reverse-engineers the PS3's hardware, including the Cell processor's SPU pipeline, the RSX graphics pipeline, and the hypervisor layer. A function that looks plausible to a model but mishandles a specific opcode crashes the entire emulator the moment a real PS3 game touches that code path. Kotaku's summary calls it bluntly: when you're translating PS3 hardware to PC, “there is no room for ‘vibes.’ One hallucinated function can crash the entire system or create impossible-to-trace bugs.”

The maintainers were also explicit about how they spot the pattern. The visible markers aren’t just code shape; they include suspicious commit messages, generic variable names that don’t fit the project’s conventions, and changes that touch wide surface areas without addressing any tracked issue. Kotaku characterized the project’s tone in the X replies as “less civil but far more entertaining” than the original announcement, with maintainers telling commenters defending the practice to “kick rocks.”
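The markers the maintainers describe are mechanical enough to sketch as a triage heuristic. This is an illustrative Python sketch, not RPCS3's actual tooling; the name list, commit-message pattern, and thresholds are invented for the example:

```python
# Hypothetical triage heuristic (not RPCS3 tooling): flag PRs whose identifiers
# skew generic, whose commit messages look like boilerplate, and whose diffs
# touch many files without referencing a tracked issue.
import re

GENERIC_NAMES = {"data", "result", "temp", "value", "item", "output", "input_data"}
BOILERPLATE_MSG = re.compile(r"^(Update|Fix|Improve|Enhance) .*\.$")  # suspiciously tidy

def generic_name_ratio(identifiers):
    """Fraction of identifiers drawn from the generic-name set."""
    if not identifiers:
        return 0.0
    hits = sum(1 for name in identifiers if name.lower() in GENERIC_NAMES)
    return hits / len(identifiers)

def flag_pr(identifiers, commit_msg, files_touched, linked_issue):
    """Return the list of reasons a PR looks machine-generated; empty means clean."""
    reasons = []
    if generic_name_ratio(identifiers) > 0.5:
        reasons.append("generic identifiers")
    if BOILERPLATE_MSG.match(commit_msg):
        reasons.append("boilerplate commit message")
    if files_touched > 10 and linked_issue is None:
        reasons.append("wide diff with no tracked issue")
    return reasons
```

None of these signals is conclusive on its own; the point is that the pattern the maintainers describe is a conjunction of weak signals, which is why they say the slop is unmistakable in aggregate.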

What we don’t know

RPCS3 hasn’t published a formal CONTRIBUTING.md update yet. The policy is currently a social post and a maintainer norm. Whether the project will tighten its .github/PULL_REQUEST_TEMPLATE.md to add an explicit AI-disclosure checkbox (the way curl did) or simply enforce by closing PRs and banning is unclear from the announcement alone.
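If the project does go the template route, the curl-style disclosure checkbox is a small change. A hypothetical sketch of what such a `.github/PULL_REQUEST_TEMPLATE.md` addition could look like (this is not an actual RPCS3 file):

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md (hypothetical fragment) -->
## AI disclosure
- [ ] No AI tools were used to produce this change
- [ ] AI tools were used; they are listed below and I have reviewed every line

Tools used (if any):
```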

The volume of inbound AI PRs that triggered the post is also undisclosed. The maintainers describe a sustained pattern, not a single bad week, but they didn’t give a count. The open PR queue at the time of writing shows a mix of legitimate engineering work and apparent test-suite churn, without an obvious AI-generated cohort separable by metadata alone.

The third unknown is enforcement reach. RPCS3 maintainers control commit rights on their repo; they can close PRs and block contributors. They can’t block the same contributor from forking and continuing to file PRs from a new account. The honor-system part of the policy (“disclose if you used AI”) only works if contributors comply.

Source attribution

Kotaku was the first mainstream outlet to surface the X post, and Newsy Today picked it up shortly after. Slashdot’s writeup on May 11 was where the policy framing as “ban on undisclosed AI” crystallized in the open-source community discussion. The original X post and reply thread from the RPCS3 account are the primary source.

What this means for you

If you contribute to an open-source project, the disclosure norm is becoming load-bearing. The expectation isn’t that you can’t use AI tools; it’s that you tell maintainers what you used and what you reviewed. The cost of a maintainer’s time triaging a hallucinated PR is non-trivial, and projects without RPCS3’s bandwidth will simply auto-close anything that looks suspicious. Disclose, review, take responsibility for the final code.

If you maintain a project, the practical question is what to put in your CONTRIBUTING.md. The shapes that work are an explicit disclosure clause, a PR template checkbox, and a willingness to close on sight. curl’s policy is the most-cited reference. RPCS3’s is more aggressive because the precision floor in their codebase is higher. Either path is defensible; the failure mode is letting the queue fill up without a stated rule.
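A minimal disclosure clause covering those shapes might read as follows. This is a hypothetical `CONTRIBUTING.md` fragment, not text RPCS3 or curl has published:

```markdown
## AI-assisted contributions (hypothetical example)
- Disclose any AI tooling used to produce the change in the PR description.
- You are responsible for every line you submit; "the model wrote it" is not a review.
- Undisclosed AI-generated PRs will be closed on sight and may result in a ban.
```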

If you’re using AI to “give back to open source,” recalibrate. Most projects don’t want code; they want patches that solve a specific bug or feature request the maintainers already triaged. Shipping a 500-line model-generated rewrite of a file you don’t understand is the opposite of help. The maintainers’ patience has a floor, and RPCS3 just told everyone where it is.
