Full disclosure: in my opinion, beyond any efficiency considerations, I see LLMs such as Claude as an instrument of mass stealing, domination and white supremacism that should be stopped at all costs, and I’m getting worried to see more and more FOSS projects such as Firefox, systemd, vim or even the kernel actively embrace LLM-generated contributions to their codebase. So I would be in favor of GNOME urgently issuing a statement banning any LLM-generated contributions, not only for technical and legal reasons (as written down in the Loupe guidelines) but for ethical ones. I’m not sure how to approach the case of LLM-generated code that may already be present in the GNOME components’ codebase.
Sadly we don’t have one. As you said yourself, individual projects’ policies are all we have. Last year there was an RFC proposal; maybe when it matures someone will use it to propose such a policy.
Maybe I’ve been living under a rock, but do you have a link that explains this, or can you provide more details?
By “mass stealing”, I suppose you refer to copyright violations, or just pumping data from everywhere. By “domination”, maybe domination by GAFAM? As for “white supremacism”, I understand it as “there is no Claude, just someone else’s computer”, which can provide answers that are biased and push for extremism everywhere in the world (?).
That being said, I could copy the contribution guidelines about LLMs from Loupe and libadwaita and apply them to the gedit projects as well.
I admit that this can be a very polarizing subject, and I certainly don’t want to see this thread become too heated, so I’ll refrain from writing too much about it here and rather point the curious to more comprehensive resources. But to answer your question: basically, I’ll quote this post that I tend to agree with (and which sounds a lot like a point Benjamin Bayart made about social networks and GAFAM):
What are the bad sides of AI? Ecological catastrophe, social catastrophe and economic catastrophe are known, but most importantly it is fundamentally a project of domination. AI doesn’t work without a majority of people being used, being exploited, being dependent on a few people, no matter the regulation or the technical progress that can be made. There is no AI without you not deciding how money should be spent, how land should be used, how people should live or work. There is no AI in a democratic world, only in an authoritarian world (…) To me, if someone considers the good sides of AI, it means its bad sides aren’t bad enough.
I’ve collected some older resources in French back when I was in uni, but for more primary sources, I can suggest the following: