Diff: text/3950-ai-contribution-policy.md (213 additions, 0 deletions)
@kennytm (Apr 21, 2026):

Context: https://github.com/rust-lang/rfcs/pull/3951#issuecomment-4286674950

Do we assume this RFC 3950 is already in effect? 🤔

Additionally, assuming this RFC is eventually merged, can it be applied retroactively to all open PRs, issues, etc.?


Reply:

Most people would understand a new policy to apply prospectively, unless clearly stated otherwise.

- Feature Name: N/A
- Start Date: 2026-03-13
- RFC PR: [rust-lang/rfcs#3950](https://github.com/rust-lang/rfcs/pull/3950)
- Issue: N/A

## Summary
[summary]: #summary

We adopt a Rust Project contribution policy for AI-generated work. This applies to all Project spaces.

## Motivation

In the Rust Project, we've seen an increase in unwanted and unhelpful contributions where contributors used generative AI. These are frustrating and costly to reviewers in the Project. We need to find ways to reduce the incidence of these and to lower the cost of handling them.
@alice-i-cecile (Apr 21, 2026):

Will this policy meaningfully accomplish its goals? High-quality LLM-generated work, including from trusted contributors, still requires careful review. With a de facto endorsement of LLM-generated contributions from trusted contributors, I worry that this will worsen review shortages on net.



We hope that by stating our expectations clearly, fewer contributors will send us unhelpful things and more contributors will send us helpful ones. We hope that this policy will make decisions and communication less costly for reviewers and moderators.

## Policy design approach

People in the Rust Project have diverse — and in some cases, strongly opposed — views on generative AI and on its use. To address the problem in front of us, this policy describes only those items on which Project members agree.

## Normative sections

[Normative sections]: #normative-sections

These sections are normative:

- [Contribution policy for AI-generated work]
- [Definitions, questions, and answers]
- [Normative sections]

Other sections are not normative.

## Contribution policy for AI-generated work
@alice-i-cecile (Apr 21, 2026):

These all seem like good rules to follow, but are all nearly impossible for reviewers or moderators to enforce.

"is prohibited" feels like the wrong framing here as a result. "Do not X" is much more compatible with rules of this nature. If you would like to pursue a permissive, anti-slop AI policy, a "we will teach people to contribute effectively and pro-socially using these tools" is a much better fit than "we will ban you based on our vibes of your initial submission".

Obviously contributors who are learning-resistant or aggressively spamming will need moderation, but that was true well before LLMs.



[Contribution policy for AI-generated work]: #contribution-policy-for-ai-generated-work

In all Rust Project spaces:
@alice-i-cecile (Apr 21, 2026):

IMO it's important, for the avoidance of doubt, to explicitly say "AI-generated contributions that follow this guidance are allowed by default." I believe that's the intended effect of this policy, but you really have to read between the lines to get there.



- Submitting AI-generated work when you weren't in the loop is prohibited.
- Submitting AI-generated work when you haven't checked it with care is prohibited.
- Submitting AI-generated work when you don't have reason to believe you understand it is prohibited.
- Submitting AI-generated work when you can't explain it to a reviewer is prohibited.
- Feeding reviewer questions into an AI tool and proxying the output directly back is prohibited.

## Definitions, questions, and answers
@xtqqczze (Apr 21, 2026):

I think Q&A shouldn't be a normative section of the policy.


@traviscross (RFC author):

Thanks for raising this item. There are two sections that have answers. One is focused on the rationale of the policy itself and is not normative.

The other contains definitions of the terms used in the policy items and specific guidance on how the policy items are to be interpreted. The policy comprises these definitions and this guidance, so this section is normative.

If there are specific items that you do not believe should be normative, I'd be curious to hear which and the reasons for that.

Reply:

In the current revision, the section "Definitions, questions, and answers" is marked as normative. This conflates distinct types of material that should be treated differently.

@apiraino (Apr 22, 2026):

I also raised this point (see comment), and here is Travis's answer. I am not sure that adding more meta-content to the Q&A helps much.

The additions that you @traviscross drafted afterwards are good and make some fundamental points clear. As a reader I would just expect a policy to give me the "READ THIS PART!!111!!ONE!" more prominently :)


[Definitions, questions, and answers]: #definitions-questions-and-answers

### What is AI-generated work?

Work is AI-generated when agentic or generative machine-learning tools are used to directly create the work.

### What's it mean to be in the loop?

To be in the loop means to be part of the discussion — to be an integral part of the creative back and forth. You were in the loop if you were there, engaged, and contributing meaningfully when the creation happened.
@programmerjake (Apr 17, 2026):

Does this unintentionally prohibit things that were created by someone other than the contributor, where that someone else used AI tools but actually verified the output? Maybe we should add something saying that this should not be interpreted as prohibiting contributing code that you weren't directly involved in creating because some other person responsibly created it, perhaps with AI.


Reply:

Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR? Surely they have a better understanding of the code and would be better placed to respond to feedback?

@traviscross (Apr 17, 2026):

> does this unintentionally prohibit things that were created by someone other than the contributor and that someone else used AI tools but actually verified the output? Maybe we should add something saying that this should not be interpreted as prohibiting contributing code that you weren't directly involved in creating because some other person responsibly created it maybe with AI.

That's a good point. In earlier proposals, I had been carefully trying to avoid tripping over disallowing things otherwise allowed by the DCO. Tripped over that here. Thanks for catching this.

I'm not immediately certain how to draft around this. How could one be sure, when one's contribution contains (appropriately-licensed) open source code written by a third party acting at arms-length that the third-party author created the code in a way that complies with the policy? Maybe we'll have to exempt that but include the arms-length restriction so that it's not just an easy loophole. But that's getting a bit legalistic. Will think about this. Let me know if you have ideas.

@programmerjake (Apr 17, 2026):

> Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR? Surely they have a better understanding of the code and would be better placed to respond to feedback?

Well, you might be contributing code you copied from somewhere because the other person wrote it a long time ago and/or isn't interested in contributing it themselves. A good example is when there's some handy method in a library that you need, but you don't want a whole new dependency just for that method, e.g., promoting itertools methods to std.

Reply:

> Maybe we'll have to exempt that but include the arms-length restriction so that it's not just an easy loophole

To exploit that loophole would require multiple people collaborating, which seems rare enough in combination with fully AI-generated code that maybe we can just leave the loophole and do something if/when it becomes a problem?

Maybe just add a sentence like: "This does not mean you can't contribute code written by other people if it has the proper open-source licenses."

Reply:

> Contributing someone else's code would seem weird to me. Why isn't that other person the one submitting the PR?

This is fairly common when dealing with abandoned, salvaged PRs.


### What's it mean to check something with care?

To check something with care means to treat its correctness as important to you. It means to assume that you're the last line of defense and that nobody else will catch your mistakes. It means to give it your full attention — the way you would pack a parachute that you're about to wear.

### What's it mean to have reason to believe you understand something?

To understand something means that you have a correct mental model of what that thing is, what its purpose is, what it's doing, and how it works. This is more than we expect. You're allowed to be wrong.

But you must have *reason* to believe that you understand it. You must have put in the work to have a mental model and a personal theory of why that model is correct.

It's not enough to just have heard a theory. If you can close your eyes and map out both the thing and why it's correct — in a way that you believe and would bet on — then you have reason to believe you understand it.

### What's it mean to be able to explain something to a reviewer?

Reviewers need to build a mental model of their own. They may want to know about yours in order to help them. You need to be able to articulate your mental model and the reasons you believe that model to be correct.

### What's it mean to proxy output directly back to a reviewer?

Reviewers want to have a discussion with you, not with a tool. They want to probe your mental model. When a reviewer asks you questions, we need the answers to come from you. If they come from a tool instead, then you're just a proxy.

### Does this policy ban vibecoding?

This policy bans vibecoding. Andrej Karpathy, who originated the term, [described](https://x.com/karpathy/status/1886192184808149383) *vibecoding* as:

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists... I "Accept All" always[;] I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment [—] usually that fixes it. The code grows beyond my usual comprehension... Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away... [I]t's not really coding — I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

If you didn't read the diffs, then you can't have checked the work with care, you can't have reason to believe you understand it, and you're not in a position to explain it to a reviewer without feeding the questions to the tool and proxying the output back. If it's grown beyond your comprehension, then even reading the diffs won't help — you don't understand it, won't be able to explain it, and can't say you've checked it with care.

Violating even one of these policy items is enough to violate the policy.

<div id="does-this-policy-ban-slop"></div>

### Does this policy ban AI slop?

This policy goes further than banning AI slop. *AI slop* is unwanted, **low-quality**, AI-generated work. This policy does not consider the quality of the work. High-quality AI-generated work is still prohibited if it fails any item in the policy — e.g., because it was *vibecoded*. If you weren't in the loop, didn't check the work with care, don't have reason to believe you understand it, or can't explain it to a reviewer, then the contribution is prohibited — regardless of the quality of the work.

### Does this policy ban fully automated AI-generated contributions?

This policy bans fully automated AI-generated contributions. These are the worst of the unwanted contributions that have come our way, and each item in the policy independently bans these.

If you created the work in a fully automated way, then you weren't in the loop, you can't have checked it with care, you can't have reason to believe you understand it, and you're not in a position to explain it to a reviewer without feeding the questions to the tool and proxying the output back.

Violating even one of these policy items is enough to violate the policy.

### When contributions appear to fall short of this policy, what do reviewers do?

Reviewers may reject any contribution that falls short of this policy without detailed explanation. Simply link to the policy or paste this template:

> On initial review, unfortunately, this contribution appears to be an AI-generated work that falls short of one or more of our policies:
>
> - Submitting AI-generated work when you weren't in the loop is prohibited.
> - Submitting AI-generated work when you haven't checked it with care is prohibited.
> - Submitting AI-generated work when you don't have reason to believe you understand it is prohibited.
> - Submitting AI-generated work when you can't explain it to a reviewer is prohibited.
> - Feeding reviewer questions into an AI tool and proxying the output directly back is prohibited.
>
> For details, see [RFC 3950](https://github.com/rust-lang/rfcs/pull/3950).
>
> We will not be reviewing this work further.
>
> While we trust that you intended to be helpful in making this contribution, these contributions do not help us. Reviewing contributions requires a lot of time and energy. Contributions such as this do not deliver enough value to justify that cost.
>
> We know this may be disappointing to hear. We're sorry about that. It pains us to reject contributions and potentially turn away well-meaning contributors. For next steps you can take, please see:
>
> - *[What should I do if my contribution was rejected under this policy?](https://github.com/rust-lang/rfcs/blob/TC/ai-contribution-policy/text/3950-ai-contribution-policy.md#what-should-i-do-if-my-contribution-was-rejected-under-this-policy)*

### Should reviewers investigate to determine if AI tools were used?

There's no need to investigate to determine if AI tools were used. If the contribution seems on its face to fall short, then just reject the contribution, link to the policy or paste the template, and, at your discretion, notify the moderators.

### What should I do if my contribution was rejected under this policy?

If your contribution was rejected under this policy, first, step back and honestly evaluate whether your contribution did in fact fall short. We appreciate people who are honest with themselves about this. If your contribution failed even one of the policy items above — in letter or spirit — then it fell short of this policy.

If your contribution fell short, reflect on what you could do better. We need contributors who put heart into their contributions — not just point a tool at our repositories. If you do want to contribute, then put a lot of care and attention into your next contribution. If you've already been banned, then reach out to the moderation team and talk about what you've learned and why you want to contribute.

If you're sure that your contribution didn't fall short but you're a new contributor, see the next item. As a new contributor, it's difficult to use these tools in a way that won't appear to reviewers as falling short. We encourage you to try again without using generative AI tools, especially for assisting in creation (rather than learning).

In other cases, please understand that we will sometimes make mistakes. Explain concisely why you believe the contribution to be correct and compatible with this policy; someone will have a look.

### As a new contributor, is it OK to use AI tools?

This policy does not prohibit anyone from using AI tools. But as a new contributor, it's a good practice to first contribute without using generative AI tools, especially for assisting in creation (rather than learning). Using these tools correctly is difficult without a firm baseline understanding. Without this understanding, it's easy to use these tools in a way that will fall short (or appear to reviewers as falling short) of this policy.

### What if I follow the policy but my work sounds like the output of an LLM?

This policy does not prohibit work — that otherwise complies with the policy — from merely *sounding like* the output of an LLM. But keep in mind that we want to hear from you, not from a tool, so we encourage you to speak in your own voice. A contribution that sounds like it came from an LLM will, in practice, have a higher risk of being rejected — as a false positive — by a reviewer, even if it complies with this policy.

### What happens to me if my contributions are rejected under this policy?

If your contributions are rejected under this policy and reported to the moderators, the moderators will decide on appropriate next steps that could be as severe as banning you from the Project and all of its spaces. The moderators will consider the details of each situation when deciding on these next steps. While this RFC defines what is prohibited, it leaves the handling of violations fully to the discretion of the moderators.

### Does this apply to PRs, issues, proposals, comments, etc.?

This policy applies to pull requests, issues, proposals in all forms, comments in all places, and all other means of contributing to the Rust Project.

### By not banning use of AI tools, does this RFC endorse them?

By not banning use of AI tools, this RFC does not endorse their use. People in the Project have diverse views on generative AI and on its use. This RFC takes no position — positive or negative — on the use of these tools beyond forbidding those things the policy prohibits.
@kennytm (Apr 17, 2026):

Suggested change: replace the first sentence,

> By not banning use of AI tools, this RFC does not endorse their use.

with:

> Although use of AI tools is not banned, this RFC does not endorse their use.

(The rest of the paragraph is unchanged.)



### Is this the final policy for contributions or for AI-assisted contributions?

This policy is intended to solve the problem in front of us. The world is moving quickly at the moment, and Project members are continuing to explore, investigate, learn, and discuss. Other policies may be adopted later, and this RFC intends to be easy for other policies — of any nature — to build on.

### Does this policy require disclosure of the use of generative AI tools?

This policy does not require disclosure of the use of generative AI tools. This is a complex question on which Project members have diverse views and where members are continuing to explore, investigate, learn, and discuss. Later policies may further address this.
@juntyr (Apr 17, 2026):

I believe that disclosure of authorship should be required (which can go beyond AI, e.g. to acknowledge co-authors). When reviewing student work, I have found it very helpful to have a clear statement of whether AI was involved or not, since it reduces the guessing game in many cases. If someone falsely declares to have not used AI, it can also simplify moderation choices. While I would understand if a specific policy on disclosure is postponed so that the larger policy can be agreed upon more quickly, I do think disclosure should follow soon after.


@xtqqczze (Apr 17, 2026):

Authorship is something that applies to a person, not to tools; an LLM can generate text, but it isn't an author.


### Can teams adopt other policies?

This RFC adopts a policy for shared Project spaces and a baseline policy for all team spaces. It does not restrict any team from adopting policies for its own spaces that add prohibitions.

At the same time, there is a cost to having different policies across the Project: it risks surprise and confusion for contributors. By adopting a policy that represents those items on which we have wide agreement and that addresses the concrete problems we're seeing across the Project, we hope to create less need for custom policies and more certainty for contributors.

### What about public communications?

This RFC does not have any policy items focused on the public communications of the Project. But proposals for Project communications are contributions and must follow this policy. Later policies may further address this.

### Does this policy make a distinction between new and existing contributors?

New and existing contributors are treated in the same way under this policy. All contributors — including all Project members — may only make contributions that are compatible with this policy.

At the same time, new contributors face additional challenges in using generative AI tools to produce contributions that reviewers will recognize as compatible with this policy. It's a good practice for new contributors to first work without using generative AI tools, especially for assisting in creation (rather than learning), to build the baseline understanding required.

## Other questions and answers

### Does accepting AI-generated work risk our ability to redistribute Rust?

What about the copyright situation? Since this policy does not ban AI-generated work, does that risk our ability to redistribute Rust under our license? Niko Matsakis [reports](https://nikomatsakis.github.io/rust-project-perspectives-on-ai/feb27-summary.html#the-legality-of-ai-usage):

> On this topic, the Rust Project Directors consulted the Rust Foundation's legal counsel and they did not have significant concerns about Rust accepting LLM-generated code from a legal perspective. Some courts have found that AI-generated code is not subject to copyright and it's expected that others will follow suit. Any human-contributed original expression would be owned by the human author, but if that author is the contributor (or the modifications are licensed under an open source license), the situation is no different from any human-origin contribution. However, this does not present a legal obstacle to us redistributing the code, because, as this code is not copyrighted, it can be freely redistributed. Further, while it is possible for LLMs to generate code (especially small portions) that is identical to code in the training data, outstanding litigation has not revealed that this is a significant issue, and often such portions are too small or contain such limited originality that they may not qualify for copyright protection.

### Is requiring that contributors take care an acceptable policy item?

To take care is to give something your full attention and treat its correctness as important to you. That's a meaningful distinction. As reviewers, we can tell when someone has taken care and when the person has not — there are many signs of this.
@alice-i-cecile (Apr 21, 2026):

As a seasoned reviewer, I am very skeptical of the claim that reviewers can reliably tell when people have or have not taken care, especially in the context of LLM-assisted work.



At the same time, taking care is just one requirement of the policy. If a contribution is prohibited by any item in the policy, then it's prohibited by the policy. A contribution may be rejected under this policy even if we cannot tell whether the person took care.

### Is requiring that contributors have reason to believe they understand an acceptable policy item?

Even the best contributors may sometimes misunderstand their own contributions. We do not require that people actually understand the things they submit. But we expect contributors to have *good reason* to expect that they understand what they're submitting to us. This is reasonable to ask, and it's a prerequisite for a contributor being able to explain the contribution to a reviewer and have a productive conversation.

At the same time, having reason to believe that one understands the contribution is just one requirement of the policy. If a contribution is prohibited by any item in the policy, then it's prohibited by the policy. A contribution may be rejected under this policy even if we cannot tell whether the person had good reason for that belief.

### Should the policy require care and attention proportional to that required of reviewers?

An earlier version of the draft that became this RFC stated:

> Submitting AI-generated work without exercising care and attention proportional to what you're asking of reviewers is prohibited.

Is that needed? In drafting this RFC, it came to feel redundant. In explaining what it means to check work carefully, we say that this means to check something with care, to treat its correctness as important to you, and to give it your full attention. That's exactly what it means to exercise care and attention proportional to what's being asked of a reviewer.

## Acknowledgments

Thanks to Jieyou Xu for fruitful collaboration on earlier policy drafts. Thanks to Niko Matsakis, Eric Huss, Tyler Mandry, Oliver Scherer, Jakub Beránek, Rémy Rakic, Pete LeVasseur, Eric Holk, Yosh Wuyts, David Wood, Jack Huey, Jacob Finkelman, and many others for thoughtful discussion.

All views and errors remain those of the author alone.