Wikipedia:Village pump (policy)/Good faith and AI-generated comments
Should [[WP:Demonstrate good faith]] include mention of AI-generated comments?
{{closed rfc top|1=The result of this discussion is there is no consensus to amend WP:DGF at this time. This has been open for ~145 days and after reviewing everyone's comments I'm not seeing consensus forming. Support and opposition are both substantial and grounded in policy, and neither side clearly outweighs the other in strength of argument. A substantial number of editors support adding language that AI-generated comments "run counter to demonstrating good faith", while an equally substantial number oppose, citing AGF's intent-based focus and existing conduct policies (WP:DISRUPTIVE, WP:CIR, WP:IDHT) as sufficient for dealing with problematic AI use. Because the arguments on both sides are policy-grounded and balanced, the discussion does not establish the clear consensus required for a guideline change. Editors are still fully responsible for any text they post, AI-assisted or not.
As always I welcome feedback on my talk page. If you believe I've gotten this wrong, that's OK, I'm human (that really sounds like something an LLM would say), and we can talk it out. I don't know everything and there are people in this discussion who have much more experience than I do. If you really believe this close was the wrong choice you can always challenge the close; I won't take it personally. Dr vulpes (Talk) 05:22, 27 May 2025 (UTC)
}}
Using AI to write your comments in a discussion makes it difficult for others to assume that you are discussing in good faith, rather than trying to use AI to argue someone into exhaustion (see example of someone using AI in their replies [https://en.wikipedia.org/w/index.php?title=Wikipedia%3AArticles_for_deletion%2FCriticism_of_fascism&diff=1264697141&oldid=1264682530 "Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people"]). More fundamentally, WP:AGF can't apply to the AI itself as AI lacks intentionality, and it is difficult for editors to assess how much of an AI-generated comment reflects the training of the AI vs. the actual thoughts of the editor.
Should WP:DGF be amended to state that using AI to generate your replies in a discussion runs counter to demonstrating good faith? Photos of Japan (talk) 00:23, 2 January 2025 (UTC)
- 100% Yes and sooner rather than later. Machine-generated material is not produced in "good faith" — it inherently wastes the time of actual editors. Having to sort through machine-generated responses is unacceptable and a growing problem. Passing off a machine-generated response as your own, no matter how obvious or not it is, is not acceptable. :bloodofox: (talk) 00:39, 2 January 2025 (UTC)
- :Update: I have recently had to respond to not one but two different instances of AI-generated slop on our articles and related talk pages (Talk:Herla#AI_Generated_Content & Talk:Böksta_Runestone#Clarification_on_Image_Caption_–_%22Possibly_Depicting%22_vs._%22Showing%22). This is a complete waste of my time and the time of any other involved human. I find it outright insulting. We need something done to either stop this or at least reduce it and provide consequences when it happens. Some kind of disclaimer about not posting AI-generated nonsense to Wikipedia, shown before allowing posting, would also help. I did not sign up to Wikipedia around 20 years ago to sort through someone's prompt-generated garbage trained on who knows what (in fact, often trained on Wikipedia itself!). :bloodofox: (talk) 05:27, 1 March 2025 (UTC)
- ::Agreed. It's only going to become more common, so it's pressingly important that we figure out how to manage it until we have the guidance and the tools to deal with it properly. Technology is moving fast, and unless we clarify what the new rules are, people will just assume the WP community is prepared, or worse, take advantage. I believe keeping good faith is part of what makes this place special, and the users are a part of that, but keeping that principle should not come at the cost of driving away everyone who wants to improve the project but isn't sure how to deal with, or identify, bots. Thanks and cheers. DN (talk) 06:15, 1 March 2025 (UTC)
- :::I don't think that putting a sentence in a "backstage" page is going to do any good. If nothing else, the odds are stacked against it. About 1,000 registered editors (and who knows how many IPs) make their first edit each day. That page gets about 150 page views per day. Even if we assume, to simplify the math, that every single page view reads the whole thing and every single page view is a newly registered editor, that would still leave 85% of new editors not seeing this rule.
- :::Putting this sentence there, and then expecting it to change people's behavior, is like drawing chalk lines across an unlit rural road and then wondering why the trucks didn't yield to you in the middle of the night.
- :::Your [https://en.wikipedia.org/w/index.php?title=Herla&diff=prev&oldid=1277645011 first example] is irrelevant to this discussion, because it was article content and not discussion. Your second example has other problems: You are assuming it's AI-generated, but the comment accurately quotes the Swedish-language Wikipedia, which is not a typical ability of AI tools, and accurately describes the edits that the new editor made, which is also not a typical ability of AI tools. There are also misspellings, typos, and awkwardnesses ("storys", "sometimes seen as kind of as gods(Skaði)") that AI tools wouldn't produce, but non-native English speakers would. Ergo, I doubt that this is "AI slop" or "AI-generated crap".
- :::More importantly for that second example, the editor says that the uncited caption is phrased overly strongly ("showing a Norse god") and recommends softening it to "possibly depicting a Norse god". You have demanded a source for the softer claim, but you aren't demanding a source for the existing claim. You should be treating the old, uncited caption as having been formally WP:CHALLENGED. Instead, one of you has reverted everything and declared that nobody should be talking to this editor because it's all just AI garbage.
- :::Frankly, I think you're wrong in your guess that AI was used to generate these comments, and I think you and @Skyerise have been rude, and I think the other editor is actually correct that if we don't have a source saying that this old image definitely does show a Norse god on skis, then Wikipedia shouldn't have an uncited caption making that claim. We do have a policy about this, and the policy is that anyone wanting to keep old uncited claims has to add a source for it.
- :::Perhaps the question the rest of us should be asking is: Do we want to have a policy that says editors get to ignore core content policies like WP:CHALLENGE if they instead claim that the editors telling them that they're wrong are using AI? WhatamIdoing (talk) 20:54, 2 March 2025 (UTC)
- ::::This is a funny response: You'd immediately know it was AI-generated slop, with at most mild adjustment, if you were at all familiar with the material. And then there are the usual LLM-generated text give-aways: the structure of the response and the lack of references or citations (the fear among LLM-makers of lawsuits is omnipresent, after all). And that's another big problem: Prompt-generated text can be convincing to non-experts like yourself. The text caption matter is a red herring: it is clearly covered by WP:PROVEIT (and I've removed it).
- ::::That said, while you wrote this wall of text defending the generative-AI users on this site who waste my time and the time of other editors, you could have been proposing solutions to a rising tide of AI-spew that those of us here contributing to articles increasingly need to mop up. :bloodofox: (talk) 02:27, 3 March 2025 (UTC)
- No. As with all the other concurrent discussions (how many times do we actually need to discuss the exact same FUD and scaremongering?) the problem is not AI, but rather inappropriate use of AI. What we need to do is to (better) explain what we actually want to see in discussions, not vaguely defined bans of swathes of technology that, used properly, can aid communication. Thryduulf (talk) 01:23, 2 January 2025 (UTC)
- :Note that this topic is discussing using AI to generate replies, as opposed to using it as an aid (e.g. asking it to edit for grammar, or conciseness). As the above concurrent discussion demonstrates, users are already using AI to generate their replies in AfD, so it isn't scaremongering but an actual issue.
- :WP:DGF also does not ban anything ("Showing good faith is not required"), but offers general advice on demonstrating good faith. So it seems like the most relevant place to include mention of the community's concerns regarding AI-generated comments, without outright banning anything. Photos of Japan (talk) 01:32, 2 January 2025 (UTC)
- ::And as pointed out, multiple times in those discussions, different people understand different things from the phrase "AI-generated". The community's concern is not AI-generated comments, but comments that do not clearly and constructively contribute to a discussion - some such comments are AI-generated, some are not. This proposal, just like all the other related ones, would cause actual harm when editors falsely accuse others of using AI (and this will happen). Thryduulf (talk) 02:34, 2 January 2025 (UTC)
- :::Nobody signed up to argue with bots here. If you're pasting someone else's comment into a prompt, asking the chatbot to argue against that comment, and just posting the output here, that's a real problem and absolutely should not be acceptable. :bloodofox: (talk) 03:31, 2 January 2025 (UTC)
- ::::Thank you for the assumption of bad faith and demonstrating one of my points about the harm caused. Nobody is forcing you to engage with bad-faith comments, but whether something is or is not bad faith needs to be determined by its content not by its method of generation. Simply using an AI demonstrates neither good faith nor bad faith. Thryduulf (talk) 04:36, 2 January 2025 (UTC)
- ::::I don't see why we have any particular reason to suspect a respected and trustworthy editor of using AI. Cremastra (u — c) 14:31, 2 January 2025 (UTC)
- :::::I don't think the replies above look like anyone is automating this discussion using a chat bot. However let's maybe ease off on the "FUD" talk. Frankly when people start trying to sell magic beans, doubt is a good thing. Meanwhile it remains very uncertain what functionality these chat bots will actually be able to develop. And as for fear, I don't think those of us who dislike chatbots being used on Wikipedia are afraid of them. They're just kind of disruptive. Simonm223 (talk) 21:00, 3 March 2025 (UTC)
- :::I'm one of those people who clarified the difference between AI-generated vs. edited, and such a difference could be made explicit with a note. Editors are already accusing others of using AI. Could you clarify how you think addressing AI in WP:DGF would cause actual harm? Photos of Japan (talk) 04:29, 2 January 2025 (UTC)
- ::::By encouraging editors to accuse others of using AI, by encouraging editors to dismiss or ignore comments because they suspect that they are AI-generated rather than engaging with them. @Bloodofox has already encouraged others to ignore my arguments in this discussion because they suspect I might be using an LLM and/or be a bot (for the record I'm neither). Thryduulf (talk) 04:33, 2 January 2025 (UTC)
- :::::I think {{u|bloodofox}}'s comment was about "you" in the rhetorical sense, not "you" as in Thryduulf. jlwoodwa (talk) 11:06, 2 January 2025 (UTC)
- ::::Given your relentlessly pro-AI comments here, it seems that you'd be A-OK with just chatting with a group of chatbots here — or leaving the discussion to them. However, most of us clearly are not. In fact, I would immediately tell someone to get lost were it confirmed that indeed that is what is happening. I'm a human being and find the notion of wasting my time with chatbots on Wikipedia to be incredibly insulting and offensive. :bloodofox: (talk) 04:38, 2 January 2025 (UTC)
- :::::My comments are neither pro-AI nor anti-AI, indeed it seems that you have not understood pretty much anything I'm saying. Thryduulf (talk) 04:43, 2 January 2025 (UTC)
- ::::::Funny, you've done nothing here but argue for more generative AI on the site and now you seem to be arguing to let chatbots run rampant on it while mocking anyone who doesn't want to interface with chatbots on Wikipedia. Hey, why not just sell the site to Meta, am I right? :bloodofox: (talk) 04:53, 2 January 2025 (UTC)
- :::::::I haven't been arguing for more generative AI on the site. I've been arguing against banning it on the grounds that such a ban would be unclear, unenforceable, wouldn't solve any problems (largely because whether something is AI or not is completely irrelevant to the matter at hand) but would instead cause harm. Some of the issues identified are actual problems, but AI is not the cause of them and banning AI won't fix them.
- :::::::I'm not mocking anybody, nor am I advocating to {{tpq|let chatbots run rampant}}. I'm utterly confused why you think I might advocate for selling Wikipedia to Meta (or anyone else for that matter)? Are you actually reading anything I'm writing? You clearly are not understanding it. Thryduulf (talk) 05:01, 2 January 2025 (UTC)
- :::::::::So now we're in 'everyone else is the problem, not me!' territory? Perhaps try communicating in a different way, because your responses here are looking very much like the typical AI apologetics one can encounter on just about any contemporary LinkedIn thread from your typical FAANG employee. :bloodofox: (talk) 05:13, 2 January 2025 (UTC)
- :::::::::No, this is not a {{tpq|everyone else is the problem, not me}} issue because most other people appear to be able to understand my arguments and respond to them appropriately. Not everybody agrees with them, but that's not an issue.
- :::::::::I'm not familiar with Linkedin threads (I don't use that platform) nor what a "FAANG employee" is (I've literally never heard the term before now) so I have no idea whether your characterisation is a compliment or a personal attack, but given your comments towards me and others you disagree with elsewhere I suspect it's closer to the latter.
- :::::::::AI is a tool. Just like any other tool it can be used in good faith or in bad faith, it can be used well and it can be used badly, it can be used in appropriate situations and it can be used in inappropriate situations, the results of using the tool can be good and the results of using the tool can be bad. Banning the tool inevitably bans the good results as well as the bad results but doesn't address the reasons why the results were good or bad and so does not resolve the actual issue that led to the bad outcomes. Thryduulf (talk) 12:09, 2 January 2025 (UTC)
- ::::::::::In the context of generating comments to other users though, AI is much easier to use for bad faith than for good faith. LLMs don't understand Wikipedia's policies and norms, and so are hard to utilize to generate posts that productively address them. By contrast, bad actors can easily use LLMs to make low quality posts to waste people's time or wear them down.
- ::::::::::In the context of generating images, or text for articles, it's easy to see how the vast majority of users using AI for those purposes are acting in good faith, as these are generally constructive tasks. Most people making bad-faith changes to articles are either obvious vandals, who won't bother to use AI because they'll be reverted soon anyway, or POV-pushers trying to be subtle, in which case they tend to want to carefully write their own text into the article.
- ::::::::::It's true that AI "is just a tool", but when that tool is much easier to use for bad faith purposes (in the context of discussions) then it raises suspicions about why people are using it. Photos of Japan (talk) 22:44, 2 January 2025 (UTC)
- :::::::::::{{tq|LLMs don't understand Wikipedia's policies and norms}} They're not designed to "understand" them since the policies and norms were designed for human cognition. The fact that AI is used rampantly by people acting in bad faith on Wikipedia does not inherently condemn the AI. To me, it shows that it's too easy for vandals to access and do damage on Wikipedia. Unfortunately, the type of vetting required to prevent that at the source would also potentially require eliminating IP-editing, which won't happen. Duly signed, ⛵ WaltClipper -(talk) 14:33, 15 January 2025 (UTC)
- ::::::You mentioned "FUD". That acronym, "fear, uncertainty and doubt," is used in precisely two contexts: pro-AI propagandizing and persuading people who hold memecoin crypto to continue holding it. Since this discussion is not about memecoin crypto, that would suggest you are using it in a pro-AI context. I will note, fear, uncertainty and doubt is not my problem with AI. Rather it's anger, aesthetic disgust, and feeling disrespected when somebody makes me talk to their chatbot. Simonm223 (talk) 14:15, 14 January 2025 (UTC)
- :::::::{{tpq|That acronym, "fear, uncertainty and doubt," is used in precisely two contexts}} is factually incorrect.
- :::::::FUD predates AI by many decades (indeed, if you'd bothered to read the fear, uncertainty and doubt article you'd learn that the concept was first recorded in 1693, that the exact formulation dates from at least the 1920s, and that its use in technology contexts originated in 1975 in connection with mainframe computer systems). The claim that its use, even in just AI contexts, is limited to pro-AI advocacy is ludicrous (even ignoring things like Roko's basilisk); examples can be found in these sprawling discussions from those opposing AI use on Wikipedia. Thryduulf (talk) 14:52, 14 January 2025 (UTC)
- Not really – I agree with Thryduulf's arguments on this one. Using AI to help tweak or summarize or "enhance" replies is of course not bad faith – the person is trying hard. Maybe English is their second language. Even for replies 100% AI-generated the user may be an ESL speaker struggling to remember the right words (I always forget 90% of my French vocabulary when writing anything in French, for example). In this case, I don't think we should make a blanket assumption that using AI to generate comments is not showing good faith. Cremastra (u — c) 02:35, 2 January 2025 (UTC)
- :To do that properly, we do need to make a blanket assumption that those using AI will self-edit the AI outputs. In this age of state-actor-level propaganda and professional bad actors, the onus has to be on the editor to show "good faith" and a commitment to accuracy insofar as is possible. An Old History Geek (talk) 06:29, 21 March 2025 (UTC)
- Yes because generating walls of text is not good faith. People "touching up" their comments is also bad (for starters, if you lack the English competency to write your statements in the first place, you probably lack the competency to tell if your meaning has been preserved or not). Exactly what AGF should say needs work, but something needs to be said, and DGF is a good place to do it. XOR'easter (talk) 02:56, 2 January 2025 (UTC)
- :Not all walls of text are generated by AI, and not all AI-generated comments are walls of text. Not everybody who uses AI to touch up their comments lacks the competencies you describe, and not everybody who does lack those competencies uses AI. It is not always possible to tell which comments have been generated by AI and which have not. This proposal is not particularly relevant to the problems you describe. Thryduulf (talk) 03:01, 2 January 2025 (UTC)
- ::Someone has to ask: Are you generating all of these pro-AI arguments using ChatGPT? It'd explain a lot. If so, I'll happily ignore any and all of your contributions, and I'd advise anyone else to do the same. We're not here to be flooded with LLM-derived responses. :bloodofox: (talk) 03:27, 2 January 2025 (UTC)
- :::That you can't tell whether my comments are AI-generated or not is one of the fundamental problems with these proposals. For the record they aren't, nor are they pro-AI - they're simply anti throwing out babies with bathwater. Thryduulf (talk) 04:25, 2 January 2025 (UTC)
- ::::I'd say it also illustrates the serious danger: We can no longer be sure that we're even talking to other people here, which is probably the most notable shift in the history of Wikipedia. :bloodofox: (talk) 04:34, 2 January 2025 (UTC)
- :::::How is that a "serious danger"? If a comment makes a good point, why does it matter whether it was AI-generated or not? If it doesn't make a good point, why does it matter if it was AI-generated or not? How will these proposals resolve that "danger"? How will they be enforceable? Thryduulf (talk) 04:39, 2 January 2025 (UTC)
- ::::::Wikipedia is made for people, by people, and I, like most people, will be incredibly offended to find that we're just playing some kind of LLM pong with a chatbot of your choice. You can't be serious. :bloodofox: (talk) 04:40, 2 January 2025 (UTC)
- :::::::You are entitled to that philosophy, but that doesn't actually answer any of my questions. Thryduulf (talk) 04:45, 2 January 2025 (UTC)
- ::::::"why does it matter if it was AI generated or not?"
- ::::::Because it takes little effort to post a lengthy, low quality AI-generated post, and a lot of effort for human editors to write up replies debunking them.
- ::::::"How will they be enforceable? "
- ::::::WP:DGF isn't meant to be enforced. It's meant to explain to people how they can demonstrate good faith. Posting replies to people (who took the time to write them) that are obviously AI-generated harms the ability of those people to assume good faith. Photos of Japan (talk) 05:16, 2 January 2025 (UTC)
:The linked "example of someone using AI in their replies" appears – to me – to be a non-AI-generated comment. I think I preferred the allegedly AI-generated comments from that user (example). The AI was at least superficially polite. WhatamIdoing (talk) 04:27, 2 January 2025 (UTC)
::Obviously the person screaming in all caps that they use AI because they don't want to waste their time arguing is not using AI for that comment. Their first post calls for the article to be deleted for not "[https://en.wikipedia.org/w/index.php?title=Wikipedia%3AArticles_for_deletion%2FCriticism_of_fascism&diff=1264473251&oldid=1263617341 offering new insights or advancing scholarly understanding]" and "merely" reiterating what other sources have written.
::Yes, after a human had wasted their time explaining all the things wrong with its first post, the bot was able to write a second post which looks ok. Except it only superficially looks ok: it doesn't actually accurately describe the articles. Photos of Japan (talk) 04:59, 2 January 2025 (UTC)
:::Multiple humans have demonstrated in these discussions that humans are equally capable of writing posts which superficially look OK but don't actually accurately relate to anything they are responding to. Thryduulf (talk) 05:03, 2 January 2025 (UTC)
::::But I can assume that everyone here is acting in good faith. I can't assume good faith in the globally-locked sock puppet spamming AfD discussions with low effort posts, whose bot is just saying whatever it can to argue for the deletion of political pages the editor doesn't like. Photos of Japan (talk) 05:09, 2 January 2025 (UTC)
:::::True, but I think that has more to do with the "globally-locked sock puppet spamming AfD discussions" part than with the "some of it might be [AI-generated]" part. WhatamIdoing (talk) 07:54, 2 January 2025 (UTC)
::::::All of which was discovered because of my suspicions about their inhuman and meaningless replies. "Reiteration isn't the problem; redundancy is," maybe sounds pithy in a vacuum, but this was written in reply to me stating that we aren't supposed to be doing OR but reiterating what the sources say.
::::::"Your criticism feels overly prescriptive, as though you're evaluating this as an academic essay" also sounds good, until you realize that the bot is actually criticizing its own original post.
::::::The fact that my suspicions about their good faith were ultimately validated only makes it even harder for me to assume good faith in users who sound like ChatGPT. Photos of Japan (talk) 08:33, 2 January 2025 (UTC)
:::::::I wonder if we need some other language here. I can understand feeling like this is a bad interaction. There's no sense that the person cares; there's no feeling like this is a true interaction. A contract lawyer would say that there's no meeting of the minds, and there can't be, because there's no mind in the AI, and the human copying from the AI doesn't seem to be interested in engaging their brain.
:::::::But... do you actually think they're doing this for the purpose of intentionally harming Wikipedia? Or could this be explained by other motivations? Never attribute to malice that which can be adequately explained by stupidity – or to anxiety, insecurity (will they hate me if I get my grammar wrong?), incompetence, negligence, or any number of other "understandable" (but still something WP:SHUN- and even block-worthy) reasons. WhatamIdoing (talk) 08:49, 2 January 2025 (UTC)
::::::::The user's talk page has a header at the top asking people not to template them because it is "impersonal and disrespectful", instead requesting "please take a moment to write a comment below in your own words".
::::::::Does this look like acting in good faith to you? Requesting other people write personalized responses to them while they respond with an LLM? Because it looks to me like they are trying to waste other people's time. Photos of Japan (talk) 09:35, 2 January 2025 (UTC)
:::::::::Wikipedia:Assume good faith means that you assume people aren't deliberately screwing up on purpose. Humans are self-contradictory creatures. I generally do assume that someone who is being hypocritical hasn't noticed their contradictions yet. WhatamIdoing (talk) 07:54, 3 January 2025 (UTC)
::::::::::"Being hypocritical" in the abstract isn't the problem, it's the fact that asking people to put effort into their comments, while putting in minimal effort into your own comments appears bad faith, especially when said person says they don't want to waste time writing comments to stupid people. The fact you are arguing AGF for this person is both astounding and disappointing. Photos of Japan (talk) 16:08, 3 January 2025 (UTC)
:::::::::::It feels like there is a lack of reciprocity in the interaction, even leaving aside the concern that the account is a block-evading sock.
:::::::::::But I wonder if you have read AGF recently. The first sentence is "Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Wikipedia, even when their actions are harmful."
:::::::::::So we've got some of this (e.g., harmful actions). But do you really believe this person woke up in the morning and decided "My main goal for today is to deliberately hurt Wikipedia. I might not be successful, but I sure am going to try hard to reach my goal"? WhatamIdoing (talk) 23:17, 4 January 2025 (UTC)
::::::::::::Trying to hurt Wikipedia doesn't mean they have to literally think "I am trying to hurt Wikipedia", it can mean a range of things, such as "I am trying to troll Wikipedians". A person who thinks a cabal of editors is guarding an article page, and that they need to harass them off the site, may think they are improving Wikipedia, but at the least I wouldn't say that they are acting in good faith. Photos of Japan (talk) 23:27, 4 January 2025 (UTC)
:::::::::::::Sure, I'd count that as a case of "trying to hurt Wikipedia-the-community". WhatamIdoing (talk) 06:10, 5 January 2025 (UTC)
- The issues with AI in discussions are not related to good faith, which is narrowly defined in terms of intent. CMD (talk) 04:45, 2 January 2025 (UTC)
- :In my mind, they are related inasmuch as it is much more difficult for me to ascertain good faith if the words are eminently not written by the person I am speaking to in large part, but instead generated based on an unknown prompt in what is likely a small fraction of the expected time. To be frank, in many situations it is difficult to avoid the conclusion that the disparity in effort is being leveraged in something less than good faith. Remsense ‥ 论 05:02, 2 January 2025 (UTC)
- ::Assume good faith, don't ascertain! LLM use can be deeply unhelpful for discussions and the potential for misuse is large, but in the most recent discussion I've been involved with where I observed an LLM post responded to by another LLM post, I believe both users were doing this in good faith. CMD (talk) 05:07, 2 January 2025 (UTC)
- :::All I mean to say is it should be licit that unhelpful LLM use should be something that can be mentioned like any other unhelpful rhetorical pattern. Remsense ‥ 论 05:09, 2 January 2025 (UTC)
- ::::Sure, but WP:DGF doesn't mention any unhelpful rhetorical patterns. CMD (talk) 05:32, 2 January 2025 (UTC)
- ::::The fact that everyone (myself included) defending "LLM use" says "use" rather than "generated" is a pretty clear sign that no one really wants to communicate with someone using "LLM generated" comments. We can argue about bans (not being proposed here), how to know if someone is using an LLM, the nuances of "LLM use", etc., but at the very least we should be able to agree that there are concerns with LLM-generated replies, and if we can agree that there are concerns then we should be able to agree that somewhere in policy we should be able to find a place to express those concerns. Photos of Japan (talk) 05:38, 2 January 2025 (UTC)
- :::::...or they could be saying "use" because "using LLMs" is shorter and more colloquial than "generating text with LLMs"? Gnomingstuff (talk) 06:19, 2 January 2025 (UTC)
- ::::::Seems unlikely when people justify their use for editing (which I also support), and not for generating replies on their behalf. Photos of Japan (talk) 06:23, 2 January 2025 (UTC)
- :::::::This is just semantics.
- :::::::For instance, I am OK with someone using a LLM to post a productive comment on a talk page. I am also OK with someone generating a reply with a LLM that is a productive comment to post to a talk page. I am not OK with someone generating text with an LLM to include in an article, and also not OK with someone using a LLM to contribute to an article.
- :::::::The only difference between these four sentences is that two of them are more annoying to type than the other two. Gnomingstuff (talk) 08:08, 2 January 2025 (UTC)
- ::::::::Most people already assume good faith in those making productive contributions. In situations where good faith is more difficult to assume, would you trust someone who uses an LLM to generate all of their comments as much as someone who doesn't? Photos of Japan (talk) 09:11, 2 January 2025 (UTC)
- :::::::::Given that LLM-use is completely irrelevant to the faith in which a user contributes, yes. Of course what amount that actually is may be anywhere between completely and none. Thryduulf (talk) 11:59, 2 January 2025 (UTC)
- ::::::::::LLM-use is relevant as it allows bad faith users to disrupt the encyclopedia with minimal effort. Such a user [https://en.wikipedia.org/w/index.php?title=Wikipedia%3AVillage_pump_%28policy%29&diff=1266895762&oldid=1266895487 posted in this thread earlier], as well as started [https://en.wikipedia.org/w/index.php?title=Wikipedia%3AVillage_pump_%28policy%29&diff=1266895278&oldid=1266895131 a disruptive thread here] and [https://en.wikipedia.org/w/index.php?title=Wikipedia%3AVillage_pump_%28policy%29&diff=1266895487&oldid=1266895278 posted here], all using AI. I had previously been involved in a debate with another sock puppet of theirs, but at that time they didn't use AI. Now it seems they are switching to using an LLM just to troll with minimal effort. Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- :::::::::::LLMs are a tool that can be used by good and bad faith users alike. Using an LLM tells you nothing about whether a user is contributing in good or bad faith. If somebody is trolling they can be, and should be, blocked for trolling regardless of the specifics of how they are trolling. Thryduulf (talk) 21:56, 2 January 2025 (UTC)
- ::::::::::::A can of spray paint, a kitchen knife, etc., are tools that can be used for good or bad, but if you bring them some place where they have few good uses and many bad uses then people will be suspicious about why you brought them. You can't just assume that a tool in any context is equally harmless. Using AI to generate replies to other editors is more suspicious than using it to generate a picture exemplifying a fashion style, or a description of a physics concept. Photos of Japan (talk) 23:09, 2 January 2025 (UTC)
- :::::::::I wouldn't trust anything factual the person would have to say, but I wouldn't assume they were malicious, which is the entire point of WP:AGF. Gnomingstuff (talk) 16:47, 2 January 2025 (UTC)
- ::::::::::WP:AGF is not a death pact though. At times you should be suspicious. Do you think that if a user, who you already have suspicions of, is also using an LLM to generate their comments, that that doesn't have any effect on those suspicions? Photos of Japan (talk) 21:44, 2 January 2025 (UTC)
- :::::::::::So… If you suspect that someone is not arguing in good faith… just stop engaging them. If they are creating walls of text but not making policy based arguments, they can be ignored. Resist the urge to respond to every comment… it isn’t necessary to “have the last word”. Blueboar (talk) 21:57, 2 January 2025 (UTC)
- ::::::::::::As the person just banned at ANI for persistently using LLMs to communicate demonstrates, you can't "just stop engaging them". When they [https://en.wikipedia.org/w/index.php?title=Talk:2025_New_Orleans_truck_attack&diff=prev&oldid=1266898382 propose changes to an article and say they will implement them if no one replies] then somebody has to engage them in some way. It's not about trying to "have the last word", this is a collaborative project, it generally requires engaging with others to some degree. When someone like the person I linked to above (now banned sock), spams low quality comments across dozens of AfDs, then they are going to waste people's time, and telling others to just not engage with them is dismissive of that. Photos of Japan (talk) 22:57, 2 January 2025 (UTC)
- :::::::::::::That they've been banned for disruption indicates we can do everything we need to do to deal with bad faith users of LLMs without assuming that everyone using an LLM is doing so in bad faith. Thryduulf (talk) 00:33, 3 January 2025 (UTC)
- ::::::::::::::I don't believe we should assume everyone using an LLM is doing so in bad faith, so I'm glad you think my comment indicates what I believe. Photos of Japan (talk) 01:09, 3 January 2025 (UTC)
- No -- whatever you think of LLMs, the reason they are so popular is that the people who use them earnestly believe they are useful. Claiming otherwise is divorced from reality. Even people who add hallucinated bullshit to articles are usually well-intentioned (if wrong). Gnomingstuff (talk) 06:17, 2 January 2025 (UTC)
- Comment I have no opinion on this matter, however, note that we are currently dealing with a real-world application of this at ANI and there's a generalized state of confusion in how to address it. Chetsford (talk) 08:54, 2 January 2025 (UTC)
- Yes I find it incredibly rude for someone to procedurally generate text and then expect others to engage with it as if they were actually saying something themselves. Simonm223 (talk) 14:34, 2 January 2025 (UTC)
- Yes, mention that use of an LLM should be disclosed and that failure to do so is like not telling someone you are taping the call. Selfstudier (talk) 14:43, 2 January 2025 (UTC)
- :I could support general advice that if you're using machine translation or an LLM to help you write your comments, it can be helpful to mention this in the message. The tone to take, though, should be "so people won't be mad at you if it screwed up the comment" instead of "because you're an immoral and possibly criminal person if you do this". WhatamIdoing (talk) 07:57, 3 January 2025 (UTC)
- ::It's rarely productive to get mad at someone on Wikipedia for any reason, but if someone uses an LLM and it screws up their comment they don't get any pass just because the LLM screwed up and not them. You are fully responsible for any LLM content you sign your name under. -- LWG talk 05:19, 1 February 2025 (UTC)
- No. When someone publishes something under their own name, they are incorporating it as their own statement. Plagiarism from an AI or elsewhere is irrelevant to whether they are engaging in good faith. lethargilistic (talk) 17:29, 2 January 2025 (UTC)
- Comment LLMs know a few tricks about logical fallacies and some general ways of arguing (rhetoric), but they are incredibly dumb at understanding the rules of Wikipedia. You can usually tell this because it looks like incredibly slick and professional prose, but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia. I would indef such users for lacking WP:CIR. tgeorgescu (talk) 17:39, 2 January 2025 (UTC)
- :That guideline states "Sanctions such as blocks and bans are always considered a last resort where all other avenues of correcting problems have been tried and have failed." Gnomingstuff (talk) 19:44, 2 January 2025 (UTC)
- :: WP:CIR isn't a guideline, but an essay. Relevantly though it is being cited at this very moment in an ANI thread concerning a user who can't/won't communicate without an LLM. Photos of Japan (talk) 20:49, 2 January 2025 (UTC)
- :::I blocked that user as NOTHERE a few minutes ago after seeing them (using ChatGPT) make suggestions for text to live pagespace while their previous bad behaviors were under discussion. AGF is not a suicide pact. BusterD (talk) 20:56, 2 January 2025 (UTC)
- :{{tq|... but somehow it cannot get even the simplest points about the policies and guidelines of Wikipedia|q=yes}}: That problem existed with some humans even prior to LLMs. —Bagumba (talk) 02:53, 20 January 2025 (UTC)
- No - Not a good or bad faith issue. PackMecEng (talk) 21:02, 2 January 2025 (UTC)
- Yes Using a third-party service to contribute to Wikipedia on your behalf is clearly bad faith, analogous to paying someone to write your article. Zaathras (talk) 14:39, 3 January 2025 (UTC)
- :It's a stretch to say that a newbie writing a comment using AI is automatically acting in bad faith and not here to build an encyclopedia. PackMecEng (talk) 16:55, 3 January 2025 (UTC)
- ::That's true, but this and other comments here show that not a few editors perceive it as bad-faith, rude, etc. I take that as an indication that we should tell people to avoid doing this when they have enough CLUE to read WP:AGF and are making an effort to show they're acting in good faith. Daß Wölf 23:06, 9 January 2025 (UTC)
- Comment Large language model AIs like ChatGPT are in their infancy. The culture hasn't finished its initial reaction to them yet. I suggest that any proposal made here have an automatic expiration/required rediscussion date two years after closing. Darkfrog24 (talk) 22:42, 3 January 2025 (UTC)
- No – It is a matter of how you use AI. I use Google translate to add trans-title parameters to citations, but I am careful to check for Google's output making for good English as well as reflecting the foreign title when it is a language I somewhat understand. I like to think that I am careful, and I do not pretend to be fluent in a language I am not familiar with, although I usually don't announce the source of such a translation. If an editor uses AI profligately and without understanding the material generated, then that is the sin; not AI itself. Dhtwiki (talk) 05:04, 5 January 2025 (UTC)
- :There's a legal phrase, "when the exception swallows the rule", and I think we might be headed there with the recent LLM/AI discussions.
- :We start off by saying "Let's completely ban it!" Then in discussion we add "Oh, except for this very reasonable thing... and that reasonable thing... and nobody actually meant this other reasonable thing..."
- :The end result is that it's "completely banned" ...except for an apparent majority of uses. WhatamIdoing (talk) 06:34, 5 January 2025 (UTC)
- ::Do you want us to reply to you, because you are a human? Or are you just posting the output of an LLM without bothering to read anything yourself? DS (talk) 06:08, 7 January 2025 (UTC)
- :::Most likely you would reply because someone posted a valid comment and you are assuming they are acting in good faith and taking responsibility for what they post. To assume otherwise is kind of weird and not in line with general Wikipedia values. PackMecEng (talk) 15:19, 8 January 2025 (UTC)
- No The OP seems to misunderstand WP:DGF which is not aimed at weak editors but instead exhorts stronger editors to lead by example. That section already seems to overload the primary point of WP:AGF and adding mention of AI would be quite inappropriate per WP:CREEP. Andrew🐉(talk) 23:11, 5 January 2025 (UTC)
- No. Reading the current text of the section, adding text about AI would feel out-of-place for what the section is about. —pythoncoder (talk | contribs) 05:56, 8 January 2025 (UTC)
- No, this is not about good faith. Adumbrativus (talk) 11:14, 9 January 2025 (UTC)
- Yes. AI use is not a demonstration of bad faith (in any case not every new good-faith editor is familiar with our AI policies), but it is equally not a "demonstration of good faith", which is what the WP:DGF section is about.{{pb}}It seems some editors are missing the point and !voting as if every edit is either a demonstration of good faith or bad faith. Most interactions are neutral and so is most AI use, but I find it hard to imagine a situation where AI use would point away from unfamiliarity and incompetence (in the CIR sense), and it often (unintentionally) leads to a presumption of laziness and open disinterest. It makes perfect sense to recommend against it. Daß Wölf 22:56, 9 January 2025 (UTC)
- :Indeed, most kinds of actions don't inherently demonstrate good or bad faith. The circumspect and neutral observation that {{tq|AI use is not a demonstration of bad faith... but it is equally not a "demonstration of good faith"}} does not justify a proposal to one-sidedly say just half. And among all the actions that don't necessarily demonstrate good faith (and don't necessarily demonstrate bad faith either), it is not the purpose of "demonstrate good faith", or the broader guideline, to single out one kind of action to especially mention negatively. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- Yes. Per Dass Wolf, though I would say passing off a completely AI-generated comment as your own anywhere is inherently bad-faith and one doesn't need to know Wiki policies to understand that. JoelleJay (talk) 23:30, 9 January 2025 (UTC)
- Yes. Sure, LLMs may have utility somewhere, and it might be a crutch for people unfamiliar with English, but as I've said above in the other AI RfC, that's a competence issue. This is about comments eating up editor time, energy, about LLMs easily being used to ram through changes and poke at editors in good standing. I don't see a case wherein a prospective editor's command of policy and language is good enough to discuss with other editors while being bad enough to require LLM use. Iseult Δx talk to me 01:26, 10 January 2025 (UTC)
- :Good faith is separate from competence. Trying to do good is separate from having skills and knowledge to achieve good results. Adumbrativus (talk) 04:40, 13 January 2025 (UTC)
- No - anyone using a washing machine to wash their clothes must be evil and inherently lazy. They cannot be trusted. ... Oh, sorry, wrong century. Regards, --Goldsztajn (talk) 01:31, 10 January 2025 (UTC)
- :Using a washing machine still results in washed clothes. Using LLMs results in communication failures because the LLM-using party isn't fully engaging. Hydrangeans (she/her | talk | edits) 04:50, 27 January 2025 (UTC)
- ::And before there's a reply of 'the washing machine-using party isn't fully engaging in washing clothes'—washing clothes is a material process. The clothes get washed whether or not you pay attention to the suds and water. Communication is a social process. Users can't come to a meeting of the minds if some of the users outsource the 'thinking' to word salad-generators that can't think. Hydrangeans (she/her | talk | edits) 05:00, 27 January 2025 (UTC)
- No - As long as a person understands (and knows) what they are talking about, we shouldn't discriminate against folks using generative AI tech for grammar fixes or minor flow improvements. Yes, AI can create walls of text, and make arguments not grounded in policy, but we could do that even without resorting to generative AI. Sohom (talk) 11:24, 13 January 2025 (UTC)
- :To expand on my point above. Completely AI generated comments (or articles) are obviously bad, but {{tq|using AI}} should be thrown into the same cross-hairs as completely AI generated comments. Sohom (talk) 11:35, 13 January 2025 (UTC)
- ::@Sohom Datta You mean shouldn't be thrown? I think that would make more sense given the context of your original !vote. Duly signed, ⛵ WaltClipper -(talk) 14:08, 14 January 2025 (UTC)
- No. Don't make any changes. It's not a good faith/bad faith issue. The 'yes' arguments are most unconvincing with very bizarre analogies to make their point. Here, I can make one too: "Don't edit with AI; you wouldn't shoot your neighbor's dog with a BB-gun, would you?" Duly signed, ⛵ WaltClipper -(talk) 14:43, 13 January 2025 (UTC)
- Yes. If I plug another user's comments into an LLM and ask it to generate a response, I am not participating in the project in good faith. By failing to meaningfully engage with the other user by reading their comments and making an effort to articulate myself, I'm treating the other user's time and energy frivolously. We should advise users that refraining from using LLMs is an important step toward demonstrating good faith. Hydrangeans (she/her | talk | edits) 04:55, 27 January 2025 (UTC)
- Yes per Hydrangeans among others. Good faith editing requires engaging collaboratively with your human faculties. Posting an AI comment, on the other hand, strikes me as deeply unfair to those of us who try to engage substantively when there is disagreement. Let's not forget that editor time and energy and enthusiasm are our most important resources. If AI is not meaningfully contributing to our discussions (and I think there is good reason to believe it is not) then it is wasting these limited resources. I would therefore argue that using it is full-on WP:DISRUPTIVE if done persistently enough –– on par with e.g. WP:IDHT or WP:POINT –– but at the very least demonstrates an unwillingness to display good faith engagement. That should be codified in the guideline. Generalrelative (talk) 04:59, 28 January 2025 (UTC)
- I appreciate your concern about the use of AI in discussions. It is important to be mindful of how AI is used, and to ensure that it is used in a way that is respectful of others.
I don't think that WP:DGF should be amended to specifically mention AI. However, I do think that it is important to be aware of the potential for AI to be used in a way that is not in good faith.
When using AI, it is important to be transparent about it. Let others know that you are using AI, and explain how you are using it. This will help to build trust and ensure that others understand that you are not trying to deceive them.{{pb}}It is also important to be mindful of the limitations of AI. AI is not a perfect tool, and it can sometimes generate biased or inaccurate results. Be sure to review and edit any AI-generated content before you post it.{{pb}}Finally, it is important to remember that AI is just a tool. It is up to you to use it in a way that is respectful and ethical. It's easy to detect for most, can be pointed out as needed. No need to add an extra policy. JayCubby
- Questions: While I would agree that AI may be used as a tool for good, such as leveling the field for those with certain disabilities, might it just as easily be used as a tool for disruption? What evidence exists that shows whether or not AI may be used to circumvent certain processes and requirements that make Wiki a positive collaboration of new ideas as opposed to a toxic competition of trite but effective logical fallacies? Cheers. DN (talk) 05:39, 27 January 2025 (UTC)
- :AI can be used to engage positively; it can also be used to engage negatively. Simply using AI is therefore not, in and of itself, an indication of good or bad faith. Anyone using AI to circumvent processes and requirements should be dealt with in the exact same way they would be if they circumvented those processes and requirements using any other means. Users who are not circumventing processes and requirements should not be sanctioned or discriminated against as though they were. Using a tool that others could theoretically use to cause harm or engage in bad faith does not mean that they are causing harm or engaging in bad faith. Thryduulf (talk) 08:05, 27 January 2025 (UTC)
- ::Well said. Thanks. DN (talk) 08:12, 27 January 2025 (UTC)
- :::As {{u|Hydrangeans}} explains above, an auto-answer tool means that the person is not engaging with the discussion. They either cannot or will not think about what others have written, and they are unable or unwilling to reply themselves. I can chat to an app if I want to spend time talking to a chatbot. Johnuniq (talk) 22:49, 27 January 2025 (UTC)
- ::::And as I and others have repeatedly explained, that is completely irrelevant to this discussion. You can use AI in multiple different ways, some of which are productive contributions to Wikipedia, some of which are not. If someone is disruptively not engaging with discussion then they can already be sanctioned for doing so, what tools they are or are not using to do so could not be less relevant. Thryduulf (talk) 02:51, 28 January 2025 (UTC)
- ::This implies a discussion that is entirely between AI chatbots deserves the same attention and thought needed to close it, and can effect a consensus just as well, as one between humans, so long as its arguments are superficially reasonable and not disruptive. It implies that editors should expect and be comfortable with arguing with AI when they enter a discussion, and that they should not expect to engage with anyone who can actually comprehend them... JoelleJay (talk) 01:00, 28 January 2025 (UTC)
- :::That's a straw man argument, and if you've been following the discussion you should already know that. My comment implied absolutely none of what you claim it does. If you are not prepared to discuss what has actually been written then I am not going to waste more of my time replying to you in detail. Thryduulf (talk) 02:54, 28 January 2025 (UTC)
- ::::It's not a strawman; it's an example that demonstrates, acutely, the flaws in your premise. Hydrangeans (she/her | talk | edits) 03:11, 28 January 2025 (UTC)
- :::::If you think that demonstrates a flaw in the premise then you haven't understood the premise at all. Thryduulf (talk) 03:14, 28 January 2025 (UTC)
- ::::::I disagree. If you think it doesn't demonstrate a flaw, then you haven't understood the implications of your own position or the purpose of discussion on Wikipedia talk pages. Hydrangeans (she/her | talk | edits) 03:17, 28 January 2025 (UTC)
- :::::::I refuse to waste any more of my time on you. Thryduulf (talk) 04:31, 28 January 2025 (UTC)
- ::::::::Both of the above users are correct. If we have to treat AI-generated posts in good faith the same as human posts, then a conversation of posts between users that is entirely generated by AI would have to be read by a closing admin and their consensus respected provided it didn't overtly defy policy. Photos of Japan (talk) 04:37, 28 January 2025 (UTC)
- :::::::::You too have completely misunderstood. If someone is contributing in good faith, we treat their comments as having been left in good faith regardless of how they made them. If someone is contributing in bad faith we treat their comments as having been left in bad faith regardless of how they made them. Simply using AI is not an indication of whether someone is contributing in good or bad faith (it could be either). Thryduulf (talk) 00:17, 29 January 2025 (UTC)
- ::::::::::But we can't tell if the bot is acting in good or bad faith, because the bot lacks agency, which is the problem with comments that are generated by AI rather than merely assisted by AI. Photos of Japan (talk) 00:31, 29 January 2025 (UTC)
- :::::::::::{{tpq|But we can't tell if the bot is acting in good or bad faith, because the bot lacks agency}} exactly. It is the operator who acts in good or bad faith, and simply using a bot is not evidence of good faith or bad faith. What determines good or bad faith is the content not the method. Thryduulf (talk) 11:56, 29 January 2025 (UTC)
- ::::::::::::But if the bot operator isn't generating their own comments, then their faith doesn't matter; the bot's does. Just like if I hired someone to edit Wikipedia for me, what would matter is their faith. Photos of Japan (talk) 14:59, 30 January 2025 (UTC)
- :::::::::::::A bot and AI can both be used in good faith and in bad faith. You can only tell which by looking at the contributions in their context, which is exactly the same as contributions made without the use of either. Thryduulf (talk) 23:12, 30 January 2025 (UTC)
- ::::::::::::::Not to go off topic, but do you object to any requirements on users for disclosure of use of AI generated responses and comments etc...? DN (talk) 02:07, 31 January 2025 (UTC)
- :::::::::::::::I'm not in favour of completely unenforceable requirements that would bring no benefits. Thryduulf (talk) 11:38, 31 January 2025 (UTC)
- ::::::::::Is it a demonstration of good faith to copy someone else's (let's say public domain and relevant) argument wholesale and paste it in a discussion with no attribution, as if it were your original thoughts?
Or how about passing off a novel mathematical proof generated by AI as if you wrote it by yourself? JoelleJay (talk) 02:51, 29 January 2025 (UTC)
- :::::::::::Specific examples of good or bad faith contributions are not relevant to this discussion. If you do not understand why this is then you haven't understood the basic premise of this discussion. Thryduulf (talk) 12:00, 29 January 2025 (UTC)
- ::::::::::::If other actions where someone is deceptively appropriating, word-for-word, an entire argument they did not write, are intuitively "not good faith", then why would it be any different in this scenario? JoelleJay (talk) 16:57, 1 February 2025 (UTC)
- :::::::::::::This discussion is explicitly about whether use of AI should be regarded as an indicator of bad faith. Someone {{tpq|deceptively appropriating, word-for-word, an entire argument they did not write}} is not editing in good faith. It is completely irrelevant whether they do this using AI or not. Nobody is disputing that some uses of AI are bad faith - specific examples are neither relevant nor useful. For simply using AI to be regarded as an indicator of bad faith, all uses of AI would have to be in bad faith, which they are not (as multiple people have repeatedly explained).
- :::::::::::::Everybody agrees that some people who edit using mobile phones do so in bad faith, but we don't regard simply using a mobile phone as evidence of editing in bad faith because some people who edit using mobile phones do so in good faith. Listing specific examples of bad faith use of mobile phones is completely irrelevant to a discussion about that. Replace "mobile phones" with "AI" and absolutely nothing changes. Thryduulf (talk) 18:18, 1 February 2025 (UTC)
- ::::::::::::::Except the mobile phone user is actually doing the writing. Hydrangeans (she/her | talk | edits) 19:39, 1 February 2025 (UTC)
- :::::::::::::::I know I must be sounding like a stuck record at this point, but there are only so many ways you can describe completely irrelevant things as completely irrelevant before that happens. The AI system is incapable of having faith, good or bad, in the same way that a mobile phone is incapable of having faith, good or bad. The faith comes from the person using the tool not from the tool itself. That faith can be either good or bad, but the tool someone uses does not and cannot tell you anything about that. Thryduulf (talk) 20:07, 1 February 2025 (UTC)
- ::That is a really good summary of the situation. Using a widely available and powerful tool does not mean you are acting in bad faith, it is all in how it is used. PackMecEng (talk) 02:00, 28 January 2025 (UTC)
- :::A tool merely being widely available and powerful doesn't mean it's suited to the purpose of participating in discussions on Wikipedia. By way of analogy, Infowars is/was widely available and powerful, in the sense of the influence it exercised over certain Internet audiences, but its very character as a disinformation platform makes it unsuitable for citation on Wikipedia. LLMs are widely available and might be considered 'powerful' in the sense that they can manage a raw output of vaguely plausible-sounding text, but their very character as text prediction models—rather than actual, deliberated communication—makes them unsuitable mechanisms for participating in Wikipedia discussions. Hydrangeans (she/her | talk | edits) 03:16, 28 January 2025 (UTC)
- ::::Even if we assume your premise is true, that does not indicate that someone using an LLM (LLMs come in a wide range of abilities and are only a subset of AI) is contributing in either good or bad faith. It is completely irrelevant to the faith in which they are contributing. Thryduulf (talk) 04:30, 28 January 2025 (UTC)
- ::::But this isn't about whether you think it's a useful tool or not. This is about whether someone who uses one is automatically acting in bad faith. We can argue the merits and benefits of AI all day, and they certainly have their place, but nothing you said struck at the point of this discussion. PackMecEng (talk) 13:59, 28 January 2025 (UTC)
- Yes. To echo someone here, no one signed up here to argue with bad AI chatbots. If you're a non-native speaker running your posts through ChatGPT for spelling and grammar, that's one thing, but wasting time bickering with AI slop is an insult. Hydronym89 (talk) 16:33, 28 January 2025 (UTC)
- :Your comment provides good examples of using AI in good and bad faith, thus demonstrating that simply using AI is not an indication of either. Thryduulf (talk) 00:18, 29 January 2025 (UTC)
- ::Is that a fair comparison? I disagree that it is. Spelling and grammar checking doesn't seem to be what we are talking about.
- ::The importance of the context in which it is used is, I think, the part that may be perceived as falling through the cracks in relation to AGF or DGF, but I agree there is a legitimate concern about AI being used to game the system to achieve goals that are inconsistent with being WP:HERE.
- ::I think we all agree that time is a valuable commodity that should be respected, though not at the expense of others'. Using a bot to fix grammar and punctuation is acceptable because it typically saves more time than it costs. Using AI to enable endless debates, even if both opponents are using it, seems like an awful waste of space, let alone the time it would cost the admins who need to sort through it all. DN (talk) 01:16, 29 January 2025 (UTC)
- :::Engaging in endless debates that waste the time of other editors is disruptive, but this is completely irrelevant to this discussion for two reasons. Firstly, someone engaging in this behaviour may be doing so in either good or bad faith: someone intentionally doing so is almost certainly WP:NOTHERE, and we regularly deal with such people. Other people sincerely believe that their arguments are improving Wikipedia and/or that the people they are arguing with are trying to harm it. This doesn't make it less disruptive but equally doesn't mean they are contributing in bad faith.
- :::Secondly, this behaviour is completely independent of whether someone is using AI or not: some people engaging in this behaviour are using AI, some are not. Some people who use AI engage in this behaviour, some do not.
- :::For the perfect illustration of this see the people in this discussion who are making extensive arguments in good faith, without using AI, while having not understood the premise of the discussion - despite this being explained to them multiple times. Thryduulf (talk) 12:13, 29 January 2025 (UTC)
- ::::Would you agree that using something like grammar and spellcheck is not the same as using AI (without informing other users) to produce comments and responses? DN (talk) 22:04, 29 January 2025 (UTC)
- :::::They are different uses of AI, but that's not relevant because neither use is, in and of itself, evidence of the faith in which the user is contributing. Thryduulf (talk) 22:14, 29 January 2025 (UTC)
- ::::::You are conflating "evidence" with "proof". Using AI to entirely generate your comments is not "proof" of bad faith, but it definitely provides less "evidence" of good faith than writing out a comment yourself. Photos of Japan (talk) 03:02, 30 January 2025 (UTC)
- :::::::No, it provides no evidence of good or bad faith at all. Thryduulf (talk) 12:54, 30 January 2025 (UTC)
- ::::::::Does the absence of AI's ability to demonstrate good/bad faith absolve the user of responsibility to some degree in that regard? DN (talk) 23:21, 6 February 2025 (UTC)
- ::::::::::I'm not quite sure I understand what you are asking, but you are always responsible for everything you post, regardless of how or why you posted it or what tools you did or did not use to write it. This means that someone using AI (in any form) to write a post should be treated and responded to identically to how they should be treated and responded to if they had made an identical post without using AI. Thryduulf (talk) 04:10, 7 February 2025 (UTC)
- No per WP:CREEP. After reading the current version of the section, it doesn't seem like the right place to say anything about AI. -- King of ♥ ♦ ♣ ♠ 01:05, 29 January 2025 (UTC)
- Yes, with caveats. This discussion seems to be spiraling into a discussion of several separate issues. I agree with Remsense and Simonm223 and others that using an LLM to generate your reply to a discussion is inappropriate on Wikipedia. Wikipedia runs on consensus, which requires communication between humans to arrive at a shared understanding. Putting in the effort to fully understand and respond to the other parties is an essential part of good-faith engagement in the consensus process. If I hired a human ghost writer to use my Wiki account to argue for my desired changes on a wiki article, that would be completely inappropriate, and using an AI to replace that hypothetical ghost writer doesn't make it any more acceptable. With that said, I understand this discussion to be about how to encourage editors to demonstrate good faith. Many of the people here on both sides seem to think we are discussing banning or encouraging LLM use, which is a different conversation. In the context of this discussion, demonstrating good faith means disclosing LLM use and never using LLMs to generate replies to any contentious discussion. This is a subset of "articulating your honest motives" (since we can't trust the AI to accurately convey your motives behind your advocacy) and "avoidance of gaming the system" (since using an LLM in a contentious discussion opens up the concern that you might simply be using minimal effort to waste the time of those who disagree with you and win by exhaustion). I think it is appropriate to mention the pitfalls of LLM use in WP:DGF, though I do not at this time support an outright ban on its use. -- LWG talk 05:19, 1 February 2025 (UTC)
- :What they said. LLM comment generation is already happening, and I just came here from a deletion discussion whose nomination was written with AI. It made no sense. A policy explicitly banning the wholesale generation of comments with LLMs is necessary soon and would be very useful. Mrfoogles (talk) 17:22, 22 March 2025 (UTC)
- ::Why? Deletion nominations that make no sense are not exclusive to AI and can be dealt with under existing policies. Thryduulf (talk) 17:30, 22 March 2025 (UTC)
- No. For the same reason I oppose blanket statements about bans on using AI elsewhere: it is not only a huge overreach but fundamentally impossible to enforce. I've seen a lot of talk around testing student work to see if it is AI, but that is impossible to do reliably. When movable type and the printing press began replacing scribes, the handwriting of scribes began to look like that of a printing press. As AI becomes more prominent, I imagine human writing will begin to look more AI-generated. People who use AI for things like helping them translate their native writing into English should not be punished if something leaks through that makes the use obvious. Like anywhere else on the Internet, I foresee any strict rules against the use of AI quickly being used in bad faith in heated arguments to accuse others of being a bot.{{pb}}GeogSage (⚔Chat?⚔) 19:12, 2 February 2025 (UTC)
- Hesitantly support. I agree that generative AI and LLMs cause a lot of problems on Wikipedia, and should not be allowed. However, I think that a blanket ban could have a negative impact on both accessibility and the community as a whole. Some people might be using LLMs to help with grammar or spelling, and I'd consider that a net positive because it encourages people with English as a second language to edit Wikipedia, which brings diverse perspectives we wouldn't otherwise have. The other issue is that it might encourage people to go on "AI witch hunts", for lack of a better term. Nobody likes being accused of being an LLM, and it negatively impacts the sense of community we have. If there is also a policy against accusing people of using an LLM without evidence, I would likely agree without any issue. Mgjertson (talk) 15:53, 6 February 2025 (UTC)
- :We do have a policy against accusing people of using an LLM without evidence: WP:AGF. I don't think we should ban the use of LLMs, but because using an LLM to write your comments can make it harder for others to AGF, LLMs should be used with caution and their use should be disclosed. LLMs should never be used to gain the upper hand in a contentious discussion. -- LWG talk 21:17, 6 February 2025 (UTC)
- ::@LWG {{tpq|We do have a policy against accusing people of using an LLM without evidence: WP:AGF}} this proposal would effectively remove that. Thryduulf (talk) 21:42, 6 February 2025 (UTC)
- :::The only "evidence" required at the moment is "my personal belief". WhatamIdoing (talk) 22:33, 6 February 2025 (UTC)
- ::::This may be interpreted as a good example for both sides of the argument. Editors always have needed, and always will need, to make personal judgements that affect how they participate in a topic or the project, if at all. If we want to maintain faith in the project and encourage participation, the core principles and policies should at least appear strong enough to adapt to AI. Some people don't mind it, others see it as spam, because it's subjective and based on "personal belief". DN (talk) 20:13, 4 March 2025 (UTC)
- Yes. Keep AI out of Wikipedia. Simple as that. Thehistorianisaac (talk) 06:54, 20 March 2025 (UTC)
- :@Thehistorianisaac, you are arguing that LLM use in discussions should not be discouraged in the AGF guidance? JoelleJay (talk) 03:16, 22 March 2025 (UTC)
- :What I mean is: using AI is obviously not good faith, and use of AI (outside of some exceptions, such as examples on AI-related articles) should be prevented. As for "good faith", if they are willing to use AI, that means that they DO NOT care about guidelines and are just trying to manipulate things. Thehistorianisaac (talk) 05:30, 22 March 2025 (UTC)
- ::Do you have any evidence at all that everybody who is willing to use AI {{tpq|Do[es] not care about guidelines}} and/or {{tpq|are just trying to manipulate things}}? There is a ton of evidence presented in this thread to the contrary. If we are going to assume bad faith of contributors we need some actual evidence that doing so is justified. Thryduulf (talk) 09:50, 22 March 2025 (UTC)
- :::I would argue that, logically speaking, if someone uses AI to respond (e.g. to being notified they violated the rules), then they are showing they do not care about the rules. However, "good faith" is very hard to apply morally to AI, though I think we can agree that they likely do not fully understand Wikipedia policies. On the same topic, AI may generate inaccurate info, which is also unsourced, in talk pages. Thehistorianisaac (talk) 10:54, 22 March 2025 (UTC)
- ::::{{tpq|logically speaking, if someone uses AI to respond (e.g. to being notified they violated the rules), then they are showing they do not care about the rules}} the only way that logic holds is if the rule they have been notified they have violated is one that explicitly prohibits using AI. Obviously there are some people who don't care about the rules, and it is possible that some of them (although unlikely the majority) will use AI, but not everyone who uses AI doesn't care about the rules. Someone not understanding the rules is completely independent of and irrelevant to whether they are here in good or bad faith.
- ::::Whether AI is accurate or inaccurate is also irrelevant to good or bad faith (and remember that "may generate inaccurate info" is not the same as "everything they generate is inaccurate"). Someone intentionally posting things they know to be inaccurate is (except when explicitly marked as such as part of e.g. a discussion about what is and isn't accurate) an example of bad faith, but not everybody who posts such material is using AI and not everybody using AI is posting such material. tl;dr using AI is not evidence of either good or bad faith. Thryduulf (talk) 12:13, 22 March 2025 (UTC)
- :::::Additionally, there is the problem that, well, the responses will likely also be AI-generated. Obviously I'm going off topic here, but I believe that good faith does not apply, as there is a chance the responder is not human at all. And even if the poster is human, it should be treated the same way as, e.g., a classmate who generated his essay with AI and pasted it straight in: that counts as academic dishonesty in most cases, and I would agree with the same argument being applied to Wikipedia, including talk pages. Thehistorianisaac (talk) 12:34, 22 March 2025 (UTC)
- ::::::Not all uses of AI are generating some text and pasting it straight in without any thought (see the many examples provided elsewhere in this discussion). {{tpq|there is a chance the responder is not human at all.}} There is always a human in the loop somewhere, but even if there weren't, "a chance" doesn't justify assuming bad faith of everybody who is suspected of using a certain tool. There is a chance, indeed a much greater chance, that someone editing using a mobile phone is an undisclosed paid editor whose goal is directly and explicitly contrary to Wikipedia's values, but that is not a reason to assume bad faith of everyone editing using a mobile phone, or even to say that use of a mobile phone is an indicator of bad faith. Thryduulf (talk) 13:06, 22 March 2025 (UTC)
- :::::::I would argue that AI is different from, say, using a mobile phone. AI generates the text, while a phone is just a way to access a place to write the text. Thehistorianisaac (talk) 13:23, 22 March 2025 (UTC)
- ::::::::Your opinion doesn't actually negate any of my points. Even if some people using AI use it to generate all their text and then paste it into Wikipedia without looking, that doesn't mean everybody using AI is doing it that way or that people who are doing it that way are doing so in bad faith. Thryduulf (talk) 13:45, 22 March 2025 (UTC)
[tangent] If any of the people who have used LLMs/AI tools would be willing to do me a favor, please see the request at Wikipedia talk:Large language models/Archive 7#For an LLM tester. I think this (splitting a very long page – not an article – by date) is something that will be faster and more accurately done by a script than by a human. WhatamIdoing (talk) 18:25, 29 January 2025 (UTC)
- Yes. The purpose of a discussion forum is for editors to engage with each other; fully AI-generated responses serve no purpose but to flood the zone and waste people's time, meaning they are, by definition, bad faith. Obviously this does not apply to light editing, but that's not what we're actually discussing; this is about fully AI-generated material, not about people using grammar and spellchecking software to clean up their own words. No one has come up with even the slightest rationale for why anyone would do so in good faith - all they've provided is a vague "but it might be useful to someone somewhere, hypothetically" - which is, in fact, false, as their total inability to articulate any such case shows. And the fact that some people are determined to defend it regardless shows why we do in fact need a specific policy making clear that it is inappropriate. --Aquillion (talk) 19:08, 2 February 2025 (UTC)
- No - AI is simply a tool, whether it's to spellcheck or fully generate a comment. Labeling all AI use as bad faith editing is assuming bad faith. ミラー強斗武 (StG88ぬ会話) 07:02, 3 February 2025 (UTC)
- Yes, unless the user makes it clear that they are using AI to interact with other editors, per DGF, at least until new policies and guidelines for protecting our human community are in place. Wikipedia's core principles were originally designed around aspects of human nature, experiences and interactions. It was designed for people to collaborate with other people, at a time before AI was so readily available. In its current state, I don't see any comments explaining how Wikipedia is prepared to handle this tool, which likely hasn't realized its full potential yet. I might agree that whether or not a person chooses to use AI isn't an initial sign of good or bad faith, but that is irrelevant to the core issue of the question as it relates to Wikipedia's current ability to interpret and manage a potentially subversive tool. The sooner the better, before its use, for better or worse, sways the community's appetite one way or the other. Cheers. DN (talk) 01:01, 7 February 2025 (UTC)
- No - A carefully curated, reviewed-by-the-poster, AI-generated statement is not a problem. The AI is being used as a tool to organize thoughts, and just because the exact wording came from an AI does not mean it does not contribute usefully to the discussion. The issue is not the use of the AI; the issue is non-useful content or discussion, which, yes, can easily happen if the AI statement is not carefully curated and reviewed by the poster. But that's not the fault of the AI, that's the fault of the human operating the AI... and nothing has changed from our normal policy. This reply is not written by AI, but if it had been, it wouldn't have changed the points raised as relevant. And if irrelevant statements are made... heck, humans do that all the time too! Said comments should be dealt with the same way we deal with humans who spout nonsense. Fieari (talk) 06:23, 13 February 2025 (UTC)
- No - Outside of a few editors here, I feel like most of the responses on both sides are missing what WP:DGF is about. First off, it is a positive rule about what editors should do. It is also a short rule. Expanding on this is unlikely to improve the rule. Additionally, beginning to talk about things an editor should not do because they imply a departure from good faith opens the door to many other things that are not the best editing but are also not really what DGF is about. WP needs better guidelines on AI, but this guideline does not need to be modified to encompass AI. — Preceding unsigned comment added by Czarking0 (talk • contribs) 07:30, 16 February 2025 (UTC)
- Yes Wikipedia was designed for humans. Until our structures are changed to accommodate AI, there need to be reasonable safety measures to prevent abuse of a system that was designed for humans only. AI can impact every area of Wikipedia with great potential for distortion and abuse. This proposal is reasonable and needed. -- GreenC 19:51, 17 February 2025 (UTC)
- Yes, but possibly with included clarification of the distinction between AI-generated replies and the use of AI as a tool for spellcheck or translation. Someone who just asks an AI to spit out a list of talking points or generate an entire argument to support their predetermined position is not acting in good faith or seriously engaging in the discussion. I also think it is better to be cautious with this and amend the rules later if needed, than the reverse. Vsst (talk) 06:22, 24 February 2025 (UTC)
- No-ish While I can see the possible issues with someone saying to ChatGPT "here's what I want to say, please give me a response that's as convincing as possible", I definitely don't think that this is a clear sign of bad faith. It is not likely to be productive, since ChatGPT isn't likely to be able to make a good policy-based argument here, but I could absolutely see someone doing this who genuinely wants to improve the encyclopedia and thinks the changes they are having ChatGPT argue for are really good changes. {{pb}}Which is to say, if we ban AI-generated comments, we definitely shouldn't ban them as a WP:AGF issue. Someone AI-generating their comments doesn't mean they're not acting in good faith. "Good faith" is a very low standard and just means they're not actively WP:NOTHERE, which is why it's what everyone is supposed to assume as a baseline. It's very common to think that someone is acting in good faith but that they're wrong, their arguments are bad, and the changes they want would be harmful. Loki (talk) 22:56, 28 February 2025 (UTC)
- Weak support. But I'd prefer it to be added in a way that more broadly states that your opinions should be carefully considered, should be based on the responses/views of others (if there are any), and that you should understand your view well enough to be able to debate it civilly with others if they respond with questions or concerns. This isn't limited to LLM-generated comments - it would also cover things like deliberately misusing essays, etc. I disagree with others that it's not a clear sign of bad faith. If someone is unable to articulate their view on a topic themselves based on policies, guidelines, or properly used essays, then they are not acting in good faith by contributing to the discussion. This would also not make it "default" bad faith to use an LLM for spell check, or grammar, or similar - because the person using the LLM in that manner would, by definition, be articulating their ideas to the LLM and having them edited in minor ways. -bɜ:ʳkənhɪmez | me | talk to me! 21:03, 2 March 2025 (UTC)
- :@Berchanhimez, this proposal is to add a line to Wikipedia:Assume good faith. At some level, it's a proposal to say "You are a bad person if you use AI on talk pages". I wonder if you'd be more satisfied with a line in Wikipedia:Talk page guidelines that says something like "AI chat bots can be helpful with cleaning up grammar and spelling errors, but don't use them to completely generate a comment on a talk page." WhatamIdoing (talk) 21:11, 2 March 2025 (UTC)
- ::Well, yes, it would be better to do that. But in any case, I don't think people should have to assume good faith of someone who is using an LLM to generate their ideas in the first place. -bɜ:ʳkənhɪmez | me | talk to me! 22:01, 2 March 2025 (UTC)
- :::Why? What is it about LLMs that means we need to throw out one of the most fundamental principles (arguably the most fundamental principle) of interaction? How will this assumption of bad faith regarding something that cannot be proven either way improve the project? Thryduulf (talk) 23:39, 2 March 2025 (UTC)
- :I don't support any of that, because my support for AGF in general is because what it says is relatively weak. I don't think that the average editor (which includes IP editors) has a fully thought out policy-based justification for the average edit, and I don't think that's a reasonable expectation. I do think that it's reasonable to expect that editors think their edit is good and will make the wiki better, which is what AGF actually means. Loki (talk) 23:57, 2 March 2025 (UTC)
- ::I don't necessarily mean that an editor has to be aware of all policies. But they should at least understand their position well enough to be able to understand objections to it that may be based on policies they were unaware of, and be willing to constructively discuss. A user who is using an LLM to generate their arguments, whether the LLM generates "proper" arguments or not, is not going to be able to understand the criticism of their arguments, because they don't understand their arguments in the first place. -bɜ:ʳkənhɪmez | me | talk to me! 20:08, 3 March 2025 (UTC)
- :::{{tpq|A user that is using a LLM to generate their arguments [...] is not going to be able to understand the criticism of their arguments, because they don't understand their arguments in the first place.}} I don't think this is universally true. It is going to be true for some people but some people are going to understand the arguments (e.g. because they've read and understood the LLM output before posting it, blended the LLM output with their own words, been very careful with their prompting, or some combination). Whichever it is, it gives absolutely no indication of the faith in which the user is contributing. Thryduulf (talk) 20:30, 3 March 2025 (UTC)
- :@Berchanhimez, I wonder if you'd take a look at Talk:Böksta Runestone#Clarification on Image Caption – "Possibly Depicting" vs. "Showing". It appears to be a case of a non-native English speaker deciding to "use a LLM for spell check, or grammar, or similar" and getting insulted by editors for using an LLM to translate his own words – and the accusations keep coming, even after he's told them that he's stopped using an LLM for translation and is writing everything himself. WhatamIdoing (talk) 19:58, 3 March 2025 (UTC)
- ::The accusations of LLM use there seem to be based on the format the user chose to present their arguments in, which is comparable to an LLM (breaking points out into individual sections with titles, for example). I would agree with you that, if an LLM was used there, it's not to the point of generating the arguments involved, which should be fine. I believe that editors responding as if it was definitively LLM-generated when it is at best unclear can be dealt with through normal civility policies. -bɜ:ʳkənhɪmez | me | talk to me! 20:05, 3 March 2025 (UTC)
- :::The user admitted his response was generated from an LLM, although {{user|WhatamIdoing}} attempted to convince us otherwise. None of us signed up here to attempt to interface with someone's LLM outputs. :bloodofox: (talk) 20:08, 3 March 2025 (UTC)
- ::::No, that is not what happened. WAID said it wasn't obvious to her that the initial message was LLM-generated - that's not the same as trying to convince you (or anyone else) that it is or is not. Then later the other editor said they had stopped using LLMs, but you refused to believe them. Thryduulf (talk) 20:33, 3 March 2025 (UTC)
- :::::You're wrong: "I don't think this editor is using WP:LLM tools", "Have you ever seen a chatbot correctly..." — etc. All attempts to convince us that the user wasn't using generative AI. However, as two users, including myself, pointed out, it was obviously LLM-produced text, which the editor admitted. I'm not here to play games with prompt outputs and I'm certainly not litigating the matter with editors like yourself who openly claim that anyone who refuses to interact with prompt outputs is a victim of anti-AI propaganda. :bloodofox: (talk) 20:39, 3 March 2025 (UTC)
- ::::::What is it with proponents of policies that compels them to tell falsehoods? Anybody can read what was on that page and see that it does not accord with what you are saying here. You are refusing to engage with someone because they previously used a technology you dislike, regardless of everything else, even though they are no longer using that technology, while asserting that it is everybody else who is contributing in bad faith. Thryduulf (talk) 20:49, 3 March 2025 (UTC)
- :::::::Translation: "Those darn users who refuse to sift through my prompt-generated AI spew. Darn their FUD!" :bloodofox: (talk) 21:24, 3 March 2025 (UTC)
- ::::::::Looks like they used it for translation and grammar/spellchecking, both recently affirmed as okay to do. I mean, your comments and actions are kind of the exact reason we should not carve out an exception to AGF. PackMecEng (talk) 22:43, 3 March 2025 (UTC)
- :::::::::I would agree that one editor's actions and words shouldn't encapsulate, or be used to diagnose, the entirety of the issue, so it seems best to avoid repeating or following that train of thought. Cheers. DN (talk) 20:43, 4 March 2025 (UTC)
- ::::::I took the invitation to read the page for myself, and I think that accusing people of telling falsehoods is inappropriate here, since what happened on that page is pretty much as described by bloodofox. From what I read, bloodofox could have and should have been more civil, but they have not told falsehoods. With that said, bloodofox, I also would suggest that we WP:DROPTHESTICK in that specific case, as the user who was correctly accused of using an LLM has since made what I would consider a very honest, good-faith response in which they clarified their use of an LLM and agreed to your request to refrain from using it in that way in the future. When someone receives your criticism humbly and agrees to change their behavior accordingly, the correct response is to welcome and encourage them, not continue to condemn them. -- LWG talk 22:57, 3 March 2025 (UTC)
- :::::::I think that Bloodofox is making a material misstatement when he says {{!xt|his response was generated from an LLM}}. The user says that the first two (=not all) comments were LLM translated.
- :::::::Haven't we had multiple conversations in which everyone says that of course LLMs shouldn't create your list of reasons, but it's fine for non-native English speakers to use LLMs to translate their own original thoughts?
- :::::::And yet here we are, with an editor explicitly saying that he used ChatGPT only to get his English grammar correct, and we have an editor insisting that this is "AI generated". WhatamIdoing (talk) 17:35, 4 March 2025 (UTC)
- ::::::::I don't speak for anyone else here, but even if it's only being used as a translator, it would seem relevant to disclose that information from the start, if for no other reason than transparency, in order to avoid miscommunication if the AI makes a mistake. It could be something as simple as checking a box on the signature control panel. Cheers. DN (talk) 19:33, 4 March 2025 (UTC)
- :::::::::Such a requirement would have to come with a way to make people aware of it before they post for the first time, and would need to come with some assurance that other commenters will not fly off the handle at you for being honest (both for saying you've used AI when you have, and for saying you have not used AI when you haven't) - something we currently cannot give. Thryduulf (talk) 19:49, 4 March 2025 (UTC)
- ::::::::::Are you specifically referring to IP accounts? DN (talk) 19:54, 4 March 2025 (UTC)
- :::::::::::No, and I don't know why I would be. For any such rule or guidance to be of any benefit whatsoever it needs to apply equally to all editors. Thryduulf (talk) 20:08, 4 March 2025 (UTC)
- ::::::::::::Your first sentence was a bit confusing. DN (talk) 20:10, 4 March 2025 (UTC)
- :::::::::::::If you want people to type "I used an LLM to translate this" (or "I used Google Translate to translate this") in their comments, then you have to tell them to type that before they post the comment. We don't want:
- :::::::::::::* A: Posts hand-written but machine translated comment.
- :::::::::::::* B: Pitches a fit because the comment uses The Cognitive Style of PowerPoint, which they believe is proof of using ChatGPT.
- :::::::::::::* A: Why didn't anyone tell me that machine translation is banned?
- :::::::::::::* B: You were just supposed to magically know that you have to type the secret code 'This is 100% original text from me, and I used ChatGPT to correct my grammar errors' when you post anything that has ever been touched by an LLM.
- :::::::::::::It does not matter whether User 'A' (or 'B') is logged in. What matters is whether we blame them later for doing something most people think is reasonable, but that we never told them is a problem. WhatamIdoing (talk) 21:07, 4 March 2025 (UTC)
- ::::::::::::::@WhatamIdoing This is not nearly so bad as you make it out to be. The answer is to insert a line into a guidelines saying that AI-generated text and translation must be disclosed and then for B to not pitch a fit. This is how literally all our guidelines work (except very, very serious ones). Cremastra (talk) 21:18, 4 March 2025 (UTC)
- :::::::::::::::"Insert a line into a guideline", when we know that Wikipedia:Nobody reads the directions? No. That's a path towards A not having any clue about the desired behavior, and B showing up to pitch a fit about how you are Violating the Holy Guideline™ 😱.
- :::::::::::::::If you want newcomers to do this, you have to put the message where they will see it. That means in the user interface.
- :::::::::::::::An abuse filter that says newcomer + more than 100 words = warn about undisclosed AI use might work most of the time.
- :::::::::::::::A line in the editing environment would probably work approximately as well as the one that says "Content that violates any copyrights will be deleted. Encyclopedic content must be verifiable through citations to reliable sources." (See Wikipedia:Copyright problems and related pages if you want to estimate how well that does/n't work.) WhatamIdoing (talk) 22:58, 4 March 2025 (UTC)
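For illustration, here is a minimal sketch of the kind of abuse filter described just above, in the MediaWiki AbuseFilter rule language. The variables are real AbuseFilter variables, but the thresholds and the filter itself are illustrative assumptions, not an existing filter on any wiki:
<pre>
/* Hypothetical filter: warn newcomers posting long talk-page comments
   about undisclosed AI use. All thresholds are illustrative only. */
user_editcount < 100 &     /* a newcomer, judged by edit count */
page_namespace % 2 == 1 &  /* talk namespaces have odd numbers */
edit_delta > 600           /* very roughly 100 words of added text */
</pre>
Such a filter's action would be set to "warn", which shows a configurable notice once before the edit is saved; as the comment above concedes, a crude length heuristic like this would only catch the intended cases "most of the time".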
- No - The proposal here is not "should we stop people from using AI to generate comments" but "should we modify WP:DGF to address use of AI to generate comments". Those are not the same. There's a very easy answer to the latter: no, that's a section best stated simply, without overcomplicating it with specific cases. It's the other, underlying question that seems to be the basis for much of the drama and ad hominems in this thread (the second time in as many days I've seen ad hominems from the same people on the same subject). The problem is figuring out where to draw a line amid discussions where so many of the loudest participants seem determined to avoid nuance. Nobody wants users to tell an LLM "give me an argument to keep this article using Wikipedia policies" with no additional information and then copy/paste the output. The problem is, it's hard to figure out where to draw a line that doesn't also preclude or stigmatize, say, using an LLM as a tool to help them overcome interpersonal/communicative differences, ranging from straight translation and fixing typos to softening language or checking the logical consistency of the user's own arguments. With all of those, there are valid and inappropriate uses, and I have yet to see a proposal that adequately takes both into consideration. That doesn't even get into the difficulty of detecting/proving this behavior, meaning the only people who will be punished are the newbies who don't realize how reactive some part of this community is on LLM issues. We fundamentally need loosey-goosey language around this, acknowledging that its uses are extremely varied.
Speaking as someone who frequently opposes these blanket rules but is also not a fan of people copy/pasting ChatGPT content into talk pages, here's some draft text I could probably get behind (though for WP:TPYES or WP:SEMIAUTO rather than WP:AGF): "LLMs are powerful tools that can assist editing and communication on Wikipedia, but use caution when using them on talk pages. As a collaborative project, talk pages are where contributors converse with one another to determine how to improve an article. Avoid copy-pasting LLM-generated text without justification, and use caution when relying on an LLM to generate ideas, arguments, or evidence in talk page discussions. A pattern of overreliance on LLM-generated content may be viewed as disruptive to the deliberative process." FWIW. — Rhododendrites talk \\ 18:24, 4 March 2025 (UTC)
- :That proposed text is the most constructive proposal by far that I've seen around LLMs, and I could support something like that as guidance somewhere, perhaps with the addition of something about the benefits of concision. As you say, WP:AGF is definitely not the right place for it, and I'm not certain WP:SEMIAUTO is either. WP:TPYES is the best of the locations you mention, but maybe we should have a central single page of guidance (not hard and fast rules) about the interaction of LLMs and Wikipedia? Thryduulf (talk) 19:44, 4 March 2025 (UTC)
- ::A central page might be a good idea, but it would be a consensus-building challenge if these discussions are any indication. I know what I suggested above doesn't go far enough for some folks, but maybe it could work as a starting point to move from [no rules at all] to [some rough guidance]. — Rhododendrites talk \\ 19:52, 4 March 2025 (UTC)
- :I also could get behind this proposal, though I would like to also see something to the effect of "to avoid misunderstanding, it is best to disclose LLM use up front and ideally share the prompts used to generate any LLM text you contribute to a discussion." -- LWG talk 19:58, 4 March 2025 (UTC)
- :That's not bad, but to the extent I have a concern with LLMs it's very similar to Berchan's, which is to say that I'm not really convinced that someone who has asked an LLM to generate arguments can really respond to counterarguments in the way we'd expect, and I'd ideally like any guideline about it to mention that specifically.
- :Or I guess more broadly, I think that any guideline about this shouldn't just say that editors should "use caution" with LLMs, it should tell them what to be cautious about. Loki (talk) 20:00, 4 March 2025 (UTC)
- No. I don't think it would be possible to adequately express the objection, and we would be left with an even bigger problem than the one we had been trying to resolve. I suppose I'm invoking WP:CREEP in a sense. Apart from expressing the general sentiment that everyone should assume and demonstrate good faith, it is not an entity like vandalism (for example) that can easily be identified. We need to treat each case on an individual basis and, perhaps by consensus, decide if that specific action was done in bad faith and if this other specific action was justified. There are so many scare stories about AI, and how it will nuke sites like Wikipedia, that it is easy to assume bad faith where it is involved. As someone said above, AI itself is not the problem—its application is the problem. As with vandalism, that can only be policed on a case-by-case or person-by-person basis. Spartathenian (talk) 11:55, 13 March 2025 (UTC)
- Yes: Much of Wikipedia communication is based on trust, and a key aspect of good faith is asserting things that you claim are the truth to the best of your ability. If you're using chatbots to answer the key points in questions and make arguments for you, you cannot possibly be asserting the truth to the best of your ability, because you have outsourced that determination to a third party that can do no more than offer an educated guess. Responses can't possibly be in good faith when using LLMs, because an LLM lacks the inherent ability to act in good or bad faith. All uses of AI that substantively make the arguments for an editor are basically meatpuppeting, but for predictive algorithms rather than people. This is true even if the LLM is 100% correct in the argument. If someone poses as a doctor when they are not, them making accurate diagnoses doesn't provide a good-faith basis for their actions. Even if the community prefers a solution that is less stringent on LLM use than I would be, undisclosed LLM edits ought to be forbidden. CoffeeCrumbs (talk) 17:01, 17 March 2025 (UTC)
- I think such advice would fit better in WP:CIVIL. Most people who use LLMs to talk to us presumably do so because they feel it will enhance what they have to say in some way, i.e. in good faith. What needs to be explained is that others find not expressing yourself in your own words annoying and impolite. – Joe (talk) 17:48, 17 March 2025 (UTC)
- While I am adamantly against allowing LLM written material in article space, I am much more open to allowing it on talk pages. Some editors have difficulty expressing what they are trying to say in talk page discussion, and I could see them legitimately turning to LLMs to formulate their comments. I assume (in good faith) that, before posting, they read the text that the LLM has generated for them, and that they think that the generated text accurately expresses their views on what is being discussed. Or to put it another way: they have taken ownership of what the LLM has generated. Blueboar (talk) 13:52, 22 March 2025 (UTC)
- Oppose as written. Not all AI output is bad faith. Spelling and grammar checking, even if done by AI, is certainly not; machine translation of good text is also generally good-faith. Animal lover |666| 07:46, 26 March 2025 (UTC)
- No way in hell should we be assuming bad faith about AI tools any more than script tools we use. I could maybe see possibly writing some prescriptive advice about how AI tools are imperfect just like the scripts we use, but you are ultimately responsible for reviewing your edits for accuracy or errors and you may be blocked for repeated misuse of the tools. Huggums537voted! (sign🖋️|📞talk) 22:56, 4 April 2025 (UTC)
- If someone dumps a long, obviously AI-generated comment onto a talk page, simply feed that comment back into whatever AI is at hand and have it generate an appropriate response. Maybe at some point the WMF could build some functionality into the reply tool to make the process more streamlined. -- LCU ActivelyDisinterested «@» °∆t° 20:51, 5 April 2025 (UTC)
- :[https://en.wikipedia.org/w/index.php?title=Talk%3ACOVID-19_lab_leak_theory&curid=66692278&diff=1286479362#Most_scientists_believe_the_pandemic_is_a_natural_event? SOMETHING LIKE THIS]?
- :I just saw this user place a wall of text using AI to accuse another editor of logical fallacies and berate them on a CTOP article... This is becoming extremely common IMO. Cheers DN (talk) 04:33, 20 April 2025 (UTC)
- ::That's the kind of thing. Editors should put as much effort into replying to it as was used to generate it. Simply put it into whichever AI you have handy and use it to generate a response; that way you only expend an amount of time equal to the value of the original comment. With the WMF's help the process could be streamlined to the click of a button. -- LCU ActivelyDisinterested «@» °∆t° 20:16, 27 April 2025 (UTC)
{{closed rfc bottom}}