Contributors mailing list archives

contributors@odoo-community.org

Re: Reciprocity in PR opening vs reviews; banning contributors

by MoaHub, Graeme Gellatly - 27/01/2026 05:35:59
This might sound like a crazy way to handle it, but how about solving it by setting a PR limit per repo? A PSC could set a policy that, at any given time, only 10 open PRs are allowed and no PR can be older than, say, 4 months. Once the limit is hit, no new PR is accepted until the backlog is resolved.
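To make that concrete, here is a rough sketch of how such a check could look against the plain GitHub REST API. The limit values, the function name and the requests-based helper are just illustrative assumptions, not an existing OCA tool:

from datetime import datetime, timezone

import requests

MAX_OPEN_PRS = 10    # hypothetical policy: at most 10 open PRs per repo
MAX_AGE_DAYS = 120   # roughly the "4 months" mentioned above


def limit_reached(owner: str, repo: str, token: str) -> bool:
    """Return True when the repo already has too many or too old open PRs."""
    # Pagination is left out for brevity; repos with >100 open PRs need it.
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        params={"state": "open", "per_page": 100},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    pulls = resp.json()
    if not pulls:
        return False
    now = datetime.now(timezone.utc)
    oldest_days = max(
        (now - datetime.fromisoformat(p["created_at"].replace("Z", "+00:00"))).days
        for p in pulls
    )
    return len(pulls) >= MAX_OPEN_PRS or oldest_days > MAX_AGE_DAYS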

On Tue, Jan 27, 2026 at 3:47 PM Raphaël Valyi <notifications@odoo-community.org> wrote:
Hello,

I think Enric's metrics could be a good basis. But sadly they are also easy to game: some organizations already have juniors approve everything they can so they can brag about how well ranked in the OCA they are (yup), so any metric will be cheated... That's why it's important that we also have a human-based process to ban people who are obviously trying to cheat the metrics.

That said, I would suggest changing Enric's metrics. Yes, we want to value reviews, because we need people to review instead of only expecting others to spend time reviewing their own stuff for free.
But what about no longer counting positive reviews once there are more than, say, 5 positive reviews for 1 negative? Say we agree it is sane that at least 20% of reviews should detect problems rather than just leniently approve the company's PRs. People would still be able to approve more than 80% of the PRs they review, but they would not receive more points for it. And if we see juniors inventing non-existent problems to boost their negative-review quota, we can probably catch them easily.
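Just to illustrate the idea, a tiny sketch of how such a capped score could be computed. The function name and the handling of reviewers with zero negative reviews are my own assumptions:

POSITIVE_PER_NEGATIVE_CAP = 5   # the 5-positives-per-1-negative ratio from above


def review_score(positive: int, negative: int) -> int:
    """Count every negative review, but positives only up to the 5:1 cap."""
    # How to treat reviewers with zero negative reviews is a policy choice;
    # here they simply stop earning points for further approvals.
    counted_positive = min(positive, negative * POSITIVE_PER_NEGATIVE_CAP)
    return counted_positive + negative


# 40 blanket approvals and 2 change requests only score 10 + 2 = 12,
# so rubber-stamping stops paying off past the cap.
print(review_score(positive=40, negative=2))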

I don't know, any metric easily becomes a tyranny, but at some point we will have to play cat and mouse with these KPIs, because it will be the only way to scale beyond the usual 50 historical committers. And yes, as Holger explained, generative AI is going to put a huge burden on the already overworked real OCA contributors very quickly.


On Mon, Jan 26, 2026 at 9:32 PM Tom Blauwendraat <notifications@odoo-community.org> wrote:

For me it would be too much at once, and a bit of a blunt instrument.

It could also be gamed by someone giving out two blind LGTM reviews in the repo for each PR he/she wants to open.

What about an organisation-wide karma rating based on Enric's contribution statistics, perhaps adding more metrics where needed, which the oca-git-bot can then read and use to apply labels to PRs that people can choose to filter on? There are also some new tools out there that can detect AI contributions, so if someone does a lot of those, that could also negatively impact the karma score. Going below a certain karma rating could then lead to auto-closing or banning, but that's something we could phase in gently.
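As a rough sketch, the labelling part could be as simple as the snippet below. The threshold, the label name and the karma lookup are placeholders, and this is not what oca-git-bot does today:

import requests

KARMA_THRESHOLD = 50                      # hypothetical cut-off
LOW_KARMA_LABEL = "needs-extra-scrutiny"  # hypothetical label name


def label_by_karma(owner, repo, pr_number, author, karma_by_login, token):
    """Put a warning label on PRs whose author falls below the karma threshold."""
    if karma_by_login.get(author, 0) >= KARMA_THRESHOLD:
        return
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/issues/{pr_number}/labels",
        json={"labels": [LOW_KARMA_LABEL]},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()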

-Tom

On 1/26/26 6:12 PM, Holger Brunn wrote:
Thanks for your points.

> some of my colleagues who entered academia have lamented a similar sort of
> tendency in graduate students to favor the creation and publication of novel
> work for their theses, rather than review or reproduce the work of their peers

and wouldn't it have saved us several replication crises if you simply didn't
get to publish original work without attaching two replication papers?
What I want to do here is shift incentives. We have to recognize that many
companies use publishing to the OCA as a way of externalizing the costs of QA
and maintenance, and as a free stamp of quality.
The individual employee then has the problem that they can use work time for
publishing code, but not for reviewing. Tying the two together hopefully
changes that.

> And an option less intrusive but maybe more effective as setting a priority
> on a rating of the contributor?

I'd like that too, but we're constrained by the possibilities GitHub offers,
and that's simply not in the cards.
If too many people see auto-closing as too much of a problem (obviously I
don't; we really need to lessen the cognitive load on maintainers), I could
imagine a label "reviewing contributor" that is set on PRs of people who do
enough reviews, so that other reviewers can focus on those PRs. But that won't
be as powerful for the incentive shift I talk about above.

> Can we give a way to appeal as well as a way to see what a PR reviewed line
> count is for the repo?

The action I've linked posts a message containing the line counts. It would be
easy to add that to the closing message too.
Appeals go the same way as in other cases like stale auto-closing: ask the
maintainers to reopen the PR or apply a label that exempts it from
auto-closing.

There is a class of janitorial PRs that should be exempted anyway, like fixing
CI or updating from copier, but I wouldn't want to trust a stochastic parrot
with that decision. And given that those PRs tend to be merged fast, they
don't strain your "line budget" for long. Note that I propose to count only
*open* PRs; once a PR is merged, it's off the ledger.
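For the record, a minimal sketch of what counting only open PRs means in practice. The budget value and the requests-based calls are illustrative assumptions, not the linked action itself:

import requests

LINE_BUDGET = 2000   # hypothetical per-contributor budget of open, unreviewed lines


def open_line_count(owner: str, repo: str, author: str, token: str) -> int:
    """Sum added+deleted lines over the author's *open* PRs only."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        params={"state": "open", "per_page": 100},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    total = 0
    for pr in resp.json():
        if pr["user"]["login"] != author:
            continue
        # The list endpoint doesn't include additions/deletions, so fetch each PR.
        detail = requests.get(pr["url"], headers=headers, timeout=30)
        detail.raise_for_status()
        data = detail.json()
        total += data["additions"] + data["deletions"]
    # Merged PRs never show up in the open list, so they're off the ledger.
    return total


def over_budget(owner: str, repo: str, author: str, token: str) -> bool:
    return open_line_count(owner, repo, author, token) > LINE_BUDGET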




-- 
Your partner for the hard Odoo problems
https://hunki-enterprises.com




--
Raphaël Valyi
Founder and consultant

_______________________________________________
Mailing-List: https://odoo-community.org/groups/contributors-15
Post to: mailto:contributors@odoo-community.org
Unsubscribe: https://odoo-community.org/groups?unsubscribe
