[QGIS-Developer] Floating an idea: ban AI-based contributions from non-core developers?

Even Rouault even.rouault at spatialys.com
Tue Mar 31 18:28:47 PDT 2026


Nyall,

As often with that topic, I'm undecided about the best course of action.

For the sake of the discussion, let me play devil's advocate a bit so 
we have counter points to consider:

- restricting AI tool use to core contributors will make the process of 
becoming a core contributor harder. Core contributors would be able to 
improve their capabilities further (a questionable claim in terms of 
quality, but in quantity and speed, definitely), which will make it 
harder to grow new core contributors. There's a high chance the new 
generation learning to code today will not be able to produce any 
working code without such tools (just as I'm 100% dependent on Valgrind 
to write working non-trivial C++ code).

- besides the issue with newcomers to the project, we most certainly 
have regular, experienced contributors who don't have core contributor 
status, probably because nobody thought of proposing them (by the way, 
I've no idea how to determine who is a core contributor and who 
isn't... is there a public list somewhere?). Why would they be 
discriminated against further?

- I would say that we should restrict your proposal even further: "only 
core contributors are allowed to use AI tools, and only in the areas 
where they (feel they) are experts", possibly relaxed with "or for 
contributions involving non-production code (e.g. CI scripts (*), 
etc.)". If I use an AI tool in a part of QGIS I've never touched, 
there's a high risk I will produce low quality code with it.

- There's an increased risk that non-core contributors would still use 
AI tools, but without telling us. Naive ones will be easily caught; 
smarter ones will fly under our detection radar. But is there a 
difference between good/imperfect/bad code written with or without AI 
assistance that still passes CI and human review...? At the level of an 
individual PR, I'd say none. The issue is more about the increased 
volume of bad contributions produced with AI help that can saturate our 
review bandwidth.

- Side point: I'm wondering if the nature of the tool makes a 
difference. I haven't personally used AI tools that can operate on a 
whole code base (Claude Code and the like), only chat tools that can 
work on/produce limited code fragments. I'd suspect the former are the 
ones where you can vibe code an entire feature, whereas with the chatty 
ones you need to iterate much more and thus hopefully keep a more 
critical eye. On the other hand, maybe tools that operate at the whole 
code base level can have a better global view... Likely neither 
approach is fundamentally better than the other; they just have 
different drawbacks.

To me it looks like we are caught in an arms race we haven't chosen to 
be part of but can't easily escape. So, half joking/half serious, let's 
use AI tools to detect bad AI output?!? (ignoring who used the tool). I 
suspect that AI companies would love such an outcome... Or maybe, until 
the AI industry collapses entirely, let's temporarily go back to 
sending patches on 3.5 inch floppy disks (1.44 MB ones only, not the 
extended 2.88 MB ones) through (postal) mail.

At the same time as I'm writing this, I'm caught in a situation where 
I'm questioning the need for a GDAL PR whose quality isn't necessarily 
bad (I haven't done the in-depth analysis), but which is likely not 
strictly needed (premature optimization/complication), and would most 
certainly not have been submitted at all if AI didn't exist. From my 
experience with recent AI-assisted PRs to GDAL, that's actually the 
main problem: too much code being written in too short a period of 
time, which will make us totally dependent on AI tools to be able to 
contribute further.

So, all in all, I'm not opposed to your proposal, but we need to be 
careful how we phrase it so as not to scare away non-core contributors 
or increase unwanted discrimination.

Even

(*) Because just this afternoon I played with Gemini chat to come up 
with some Python clang AST code for custom checkers that verify 
project-specific code rules (like pairing Reference() in a constructor 
with Release() in the destructor). The result is... well... typical AI. 
It mostly sort of works after a couple of iterations, but is definitely 
not of the quality that clang-tidy or similar serious tools would 
expect. But good enough for the purpose it was created for. Or at least 
I was tricked into believing it was good enough...
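For readers curious what such a project-specific rule checker looks like, here is a minimal, hypothetical sketch of the same Reference()/Release() pairing rule. It deliberately uses naive regex matching on out-of-line C++ definitions instead of the clang AST (so it only handles flat, single-brace function bodies), and all class and member names are invented for illustration:

```python
import re

def check_ref_release_pairing(source: str) -> list[str]:
    """Toy checker: every class whose constructor calls Reference()
    must have a destructor that calls Release(). A naive regex-based
    stand-in for a real clang-AST walk; assumes out-of-line
    definitions with flat, single-level brace bodies."""
    # Match definitions like  Foo::Foo(...) { ... }  and  Foo::~Foo() { ... }
    pattern = re.compile(
        r'(?P<cls>\w+)::(?P<dtor>~?)(?P=cls)\s*\([^)]*\)\s*\{(?P<body>[^{}]*)\}',
        re.S)
    ctor_refs, dtor_rels = set(), set()
    for m in pattern.finditer(source):
        cls, body = m.group('cls'), m.group('body')
        if m.group('dtor'):
            if 'Release(' in body:
                dtor_rels.add(cls)
        elif 'Reference(' in body:
            ctor_refs.add(cls)
    return [f"{cls}: constructor calls Reference() "
            f"but destructor lacks Release()"
            for cls in sorted(ctor_refs - dtor_rels)]

cpp = """
Layer::Layer() { m_pool->Reference(); }
Layer::~Layer() { m_pool->Release(); }
Cache::Cache() { m_ds->Reference(); }
Cache::~Cache() { /* forgot */ }
"""
print(check_ref_release_pairing(cpp))
# → ['Cache: constructor calls Reference() but destructor lacks Release()']
```

A real version would walk the clang AST (e.g. via the clang.cindex Python bindings) so that inline definitions, conditionals, and nested braces are handled correctly; that robustness gap is exactly the part where the AI-generated attempt fell short of clang-tidy quality.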

-- 
http://www.spatialys.com
My software is free, but my time generally not.
