Syncing user scripts from an external Git repository to Wikipedia
Hi all,
There are some common problems when developing user scripts:
While local development usually happens in a version control system (usually Git, often with continuous integration provided by sites like GitHub or Wikimedia GitLab), publishing a new version of a user script still requires a manual on-wiki edit to the user script page, which can be tedious.
Updates to user scripts are restricted to their owners. This creates a large bottleneck for projects maintained by multiple people, and is especially problematic when a script owner leaves Wikipedia or goes on an extended wikibreak.
There are some existing workarounds:
1. Store a BotPassword/OAuth token of the owner account somewhere, and use it to make an edit whenever new code needs to be deployed (per CI results, manual approval, etc.).
2. Use a reverse proxy so that the remote script hosted on Wikimedia GitLab is loaded directly (see wikitech:Tool:Gitlab-content).
However, option 1 feels unwieldy to me and suffers from the amount of engineering and linking effort required, while option 2 can have issues with caching per the maintainer, and is not as good as hosting the script on-wiki.
My proposal for how to resolve the problems above involves hosting an interface admin bot, and allowing user script authors to opt in to syncing their user script from a Git repository to Wikipedia using webhooks.
Any script wishing to be synced by the bot needs to be edited on-wiki (to serve as authorization) to have the following header at the top of the file:
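As an illustration only (the exact header syntax would be settled at the BRFA; the bot name wikilink, the key=value fields, and the repository, branch, and path values here are all made up), such a header, together with a sketch of how the bot could parse it, might look like this:

```javascript
// Hypothetical opt-in header placed on the first line of the on-wiki script.
// The wikilink-style bot name and the key=value fields are assumptions:
//
//   // [[User:DeadbeefBot II]]: sync repo=example/my-script branch=main path=dist/script.js
//
// A sketch of how the bot could extract the repo, branch, and file name:
function parseSyncHeader(source) {
  const firstLine = source.split('\n', 1)[0];
  const m = firstLine.match(
    /^\/\/ \[\[User:DeadbeefBot II\]\]: sync repo=(\S+) branch=(\S+) path=(\S+)$/
  );
  // No header (or a malformed one) means the page has not opted in.
  return m ? { repo: m[1], branch: m[2], path: m[3] } : null;
}
```

Because the header lives on-wiki, adding it is the authorization step, and removing it (by anyone who can edit the page) immediately stops the syncing.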
Running this bot requires community discussion and approval. I'd like to see whether the community is willing to adopt this.
What are some benefits of this proposal?
Auditability. If this scheme were adopted, there would be an easy way to know whether a script is being automatically synced, and an easy way to get the list of all scripts being synced. Every edit summary links to the Git commit that produced that edit.
Ease of use. It is very easy to set up a sync for a user script (just insert a header into the file and configure webhooks), and the scheme is flexible, as the format above allows the branch and file name to be configured. It also removes the need for individual script developers to create BotPasswords or OAuth tokens.
Efficiency. Only webhooks trigger syncs. There is no unnecessary periodic sync to schedule, nor do CI jobs need to run each time the script is deployed.
What are some drawbacks of this proposal?
Security. Even though there are already ways to allow someone else or an automated process to edit your user script, as described above, allowing this bot makes it slightly easier, which could be seen as a security issue. My personal opinion is that this shouldn't matter much as long as you trust the developers of all the user scripts you use. This bot is aimed primarily at user scripts.
Centralization of trust. The bot having interface administrator rights requires the bot to be trusted to not go rogue. I have created a new bot account (User:DeadbeefBot II) to have separate credentials, and it will have 2FA enrolled, and the code will be open source and hosted on Toolforge.
What are some alternatives?
We can do nothing. This remains a pain point for user script developers, as syncing is hard to set up: either careful CI configuration or a less reliable reverse proxy would be required.
We can create a centralized external service (suggested by {{u|BryanDavis}} on Discord) that stores OAuth tokens and which project files are synced with which titles. There would be a web interface allowing developers to enter in their information to start automating syncs. However, this may not be as auditable as edits would go through the bot owners' accounts and not a bot account. This is less easy to use as an owner-only OAuth token would need to be generated for each sync task.
Feel free to leave a comment on how you think about this proposal. I'd also be happy to answer any questions or respond to potential concerns. beef [talk] 12:03, 23 May 2025 (UTC)
Note: This discussion is for the task of the BRFA that I opened some time ago. beef [talk] 12:16, 23 May 2025 (UTC)
:Am I reading this correctly that one of the methods you are proposing is to ask other users to give you their (bot)passwords? That is a horrible idea. — xaosfluxTalk 12:25, 23 May 2025 (UTC)
::Yep. It will probably be stored in Toolforge's ToolsDB, though. Preferably it would be an OAuth token that is limited to editing only the specific user script.
::I personally prefer having a single bot handle it. beef [talk] 12:30, 23 May 2025 (UTC)
:::We explicitly tell our users to never share their authentication secrets with others, I can't possibly support processes that go against that. — xaosfluxTalk 14:52, 23 May 2025 (UTC)
::::If the bot receives community approval, then we won't need one that collects OAuth tokens. But according to WP:BOTMULTIOP it might be preferred to use OAuth instead of having a bot?
::::A different question would be whether we should require all commits to be associated with a Wikipedia username. I personally don't see a need, but WP:BOTMULTIOP and the community might think otherwise. beef [talk] 15:01, 23 May 2025 (UTC)
:::::I think single bot with interface administrator is the way to go. –Novem Linguae (talk) 15:08, 23 May 2025 (UTC)
::::::Much more so this way, making on-wiki edits by impersonating other users has a whole host of problems. — xaosfluxTalk 15:10, 23 May 2025 (UTC)
:::::::I don't have a preference to either approach, but let's not confuse things here. No one's asking for passwords to be shared. OAuth tokens are not the same as passwords. Every time you make an edit through an OAuth tool (like Refill), you are sharing your OAuth tokens. This is very normal, and safe because OAuth-based edits are tagged and can be traced back to the application that did it. (Worth noting that owner-only OAuth credentials don't have such protections and indeed should not be shared.) – SD0001 (talk) 15:38, 23 May 2025 (UTC)
:::::::: This. I'm concerned that having people upload a BotPassword or owner-only OAuth token was even considered, when a "normal" OAuth token is so much more obviously the way to go for that option. Anomie⚔ 13:03, 24 May 2025 (UTC)
:::::::::Ah, yeah, that would be fine. I guess I wasn't thinking much about having a non-owner-only OAuth application. dbeef [talk] 10:17, 2 June 2025 (UTC)
:I might just be a Luddite here, but I don't think using GitHub for on-wiki scripts is a good idea to begin with. First, I feel that the git model works when there is a "canonical" version of the source code (the main branch, say), that people can branch off of, re-merge into, etc. But the problem here is that a git repo for a MW user script can *never* be the canonical source code; the canonical source code is inherently what's on-wiki, since that's what affects users. There is an inherent disconnect between what's on-wiki and what's elsewhere, and the more we try to pretend that GitHub is the source of truth for a script, the bigger the problems with that disconnect will be. Personally, I've seen many problems caused by the confusion generated just when projects use git branches other than "main" for their canonical code; here, the canon isn't even on git at all. How would this bot handle changes made on-wiki that aren't in git (if it would handle those at all)?
:Second, this doesn't solve the problem of "inactive maintainer makes it difficult to push changes to production", since a repo maintainer can disappear just as easily as a mediawiki user; it just adds an ability to diffuse it a little bit by adding multiple maintainers, at the cost of this inherent disconnect.
:Third, and easiest to overcome, how does this bot handle attribution of authorship? Writ Keeper⚇♔ 13:36, 23 May 2025 (UTC)
::{{tq|source of truth}} is a vague and subjective term. I would personally call the latest version the source of truth, which of course lives on GitHub. Wikipedia hosts the published version, which may not be from the default branch on GitHub (dev branch for development, as the latest source of truth, main branch for the published version).
::But that's of course a personal preference. There are many, many people out there who use Git for version control and for development of user scripts. You may be fine with using MediaWiki as version control and primarily updating code on-wiki, but some of us have different workflows. It might be helpful to write unit tests and force them to pass before deployment. It might be helpful to use a preferred language that transpiles to JavaScript instead of writing JavaScript directly. Having this benefits those use cases.
::It does solve the problem by allowing additional maintainers to be added. There's no native MediaWiki support for adding collaborators to a user script, so this can help with that, in addition to the benefits of a Git workflow.
::Attribution is given by using the commit author's name in the edit summary. I'm sure user script developers can include a license header and all that to deal with the licensing part.
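::As a sketch of what that attribution could look like (the summary format and field names here are assumptions for illustration, not the bot's actual output):

```javascript
// Hypothetical edit summary for a sync edit, linking the on-wiki change
// back to the Git commit and crediting the commit author.
function buildEditSummary(commit) {
  const shortSha = commit.sha.slice(0, 7);
  return `Syncing from ${commit.repoUrl}/commit/${shortSha} ` +
         `(${commit.authorName}): ${commit.message}`;
}
```

Linking the commit URL in every summary is what makes each synced edit traceable from the page history back to the exact change and its author.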
::I think this should happen, and I think it will happen even if there is no community support for the bot to run; it will just involve the proposed Toolforge service that collects OAuth credentials instead. I sure hope that the bot proposal passes, but I'm fine with writing the extra code for the alternative too. I also want to think about whether I have enough energy to keep justifying why I think this would be a good bot task, when all the negative feedback I get is from people who won't use it. Automatic syncing has already been happening in one form or another. And personally, I want to be able to use TypeScript to write my next sophisticated user script project, and I want to add collaborators. beef [talk] 14:42, 23 May 2025 (UTC)
:::So would this bot only be used for edits in userspace? Or also for gadgets in the MediaWiki namespace? Polygnotus (talk) 14:52, 23 May 2025 (UTC)
::::I would want to get approval for only userspace edits first. Extending it to gadgets is an even bigger stretch and less likely to get approved. beef [talk] 14:53, 23 May 2025 (UTC)
:::{{tq|I also want to think about whether I have enough energy to keep justifying for why I think this would be a good bot task, when all the negative feedback I get are from people who won't use it}}: None of this happens in a vacuum. I commented on this because I've *already* had people complaining that I didn't submit a pull request on some GitHub repo when I responded to an intadmin edit request and implemented the change on-wiki--despite the fact that the GitHub repo was already several onwiki edits out of date before I made the change. We already have a process for multiple maintainers and code change requests; it's the intadmin edit request template. It's sub-optimal, for sure, but the solution to a sub-optimal process is not to create an entirely separate process to run in parallel. If development happens on GitHub, it doesn't affect anything unless it gets replicated onwiki. If development happens onwiki, it affects everyone regardless of what GitHub says. That's why I call the onwiki version the canonical source of truth--because that's the one that matters. I could see the benefit here if the bot also worked in reverse--if it were set up to automatically keep the main branch of the git repo in sync with the onwiki script. But as it is, I feel this will add more headache than it's worth. Sorry if that's tiring for you. Writ Keeper⚇♔ 15:03, 23 May 2025 (UTC)
::::If there is a critical fix, you can remove the header and the bot will stop syncing. That is by design. You can then ping the maintainers to incorporate the fix. I personally wouldn't mind giving committer access to my user scripts to every interface admin on this site.
::::A two-way sync involves storing authentication to the Git repo, and yeah, harder to implement. Everyone that uses this sync scheme will have all development activity on GitHub, with potentially occasional bug reporting happening at the talk page, so I don't see that much point in programming the sync the other way. beef [talk] 15:16, 23 May 2025 (UTC)
:::::{{tq|Everyone that uses this sync scheme will have all development activity on GitHub}}{{fake citation needed}} My whole point is that hasn't been my experience so far. Maybe I just caught an unusual case. Writ Keeper⚇♔ 15:25, 23 May 2025 (UTC)
::::::If someone does choose to sync from Git to Wikipedia, then they must use the Git repo as their primary place for development. I cannot think of any case where people would have an onwiki version that is more up-to-date than the Git version, given that the idea of having it sync is based on the assumption that Git is used as the most up-to-date place. beef [talk] 03:29, 24 May 2025 (UTC)
::::{{tq|1=We already have a process for multiple maintainers and code change requests; it's the intadmin edit request template.}} This seems like wishful thinking. It's just not true. I'm reminded of a [https://en.wikipedia.org/w/index.php?title=User_talk:Evad37/rater.js&oldid=922411885#Interface-protected_edit_request_on_21_October_2019 time when a heavily used script broke] and multiple interface admins refused to apply an unambiguous 1-line bug fix. {{pb}}At best, edit requests get accepted for bug fixes, not for anything else. – SD0001 (talk) 16:26, 23 May 2025 (UTC)
::That's true of almost all kinds of software on GitHub. By your logic, the canonical version of, say, [https://github.com/wikimedia/mediawiki mediawiki] itself is what actually runs on the production machines, not what's on GitHub. Similarly, for a library the canon would be what's released to npm/pypi, etc. {{tq|How would this bot handle changes made on-wiki that aren't in git (if it would handle those at all)?}} That's like asking: if a wikimedia sysadmin shells into a production host and edits the code there, how is it reflected back to gerrit? It isn't. That might sound non-ideal, but it isn't unprecedented. Already, most big gadgets including Twinkle, afc-helper, and xfdcloser are developed externally and deployed to wikis via automated scripts. Manual edits on-wiki aren't allowed as they'll end up overwritten.{{pb}}{{tq|Second, ...}} It does solve that problem – a git repo can have multiple maintainers to avoid bus factor, unlike a user script which can only be edited by its single userspace owner (technically interface admins can edit as well, but on this project, we appear to have adopted a mentality that doing so is unethical or immoral). {{pb}}Having said that, I personally don't use GitHub or Gitlab for any of my user scripts. But I respect the wishes of those who choose to do so. – SD0001 (talk) 15:05, 23 May 2025 (UTC)
:::I would argue there is a substantial difference between someone SSHing into a production host to make manual changes and the process of talk-page-int-admin-edit request, and the difference is that the latter *is* a process. But also, yes, to an extent I *would* argue that, from a holistic perspective, the code that is active in production and that users are seeing, interacting with, and using *is* the canonical version, and that what is in a code repo, main, develop, or otherwise, is only important to the extent that it reflects what's on the production machine. The reader or normal editor using a website feature doesn't care what's in the repo, they care what they're using, and they're going to be frustrated if that feature suddenly disappears, regardless of whether that's the fault of some bot overwriting code or some dev not committing their changes to the off-site repo or what have you. Writ Keeper⚇♔ 15:32, 23 May 2025 (UTC)
::::If I have to choose between two processes that can't co-exist, I'll choose the one that offers more benefits. A git-based workflow enables unit testing, transpilation, linting and better collaboration. It offers a change review interface that allows for placing comments on specific lines. As for talk page requests, refer to my comment above about how useful they are. – SD0001 (talk) 12:41, 24 May 2025 (UTC)
::There's pros and cons. I talk about it in my essay User:Novem Linguae/Essays/Pros and cons of moving a gadget to a repo. Popular, complex gadgets are often the use case that benefits the most from a github repo. A github repo enables automated tests (CI), a ticket system, and a PR system, among other things. These benefits are well worth the slight downside of having to keep things in sync (deploying). And in fact this proposed bot is trying to fix this pain point of deploying/syncing. –Novem Linguae (talk) 15:16, 23 May 2025 (UTC)
:@0xDeadbeef Don't know if you missed it in the Tech News above, but wikitech:Tool:Gitlab-content describes a new reverse proxy that allows user scripts to directly run code from gitlab. --Ahecht (TALK PAGE) 15:06, 23 May 2025 (UTC)
::@Ahecht They mentioned Gitlab-content above. Search for {{tq|remote script hosted on Wikimedia GitLab}} Polygnotus (talk) 15:07, 23 May 2025 (UTC)
::I have talked to BDavis on Discord and he said he thinks having it synced to an on-wiki page is better than a reverse proxy. It's in the thread under the #technical channel on Discord. I originally thought that gitlab-content was going to be the ultimate solution but apparently not. And I had already written some code for this thing to happen, so I figured why not propose it. beef [talk] 15:09, 23 May 2025 (UTC)
An alternative that doesn't require any advanced permissions or raise impersonation issues is for the bot to just sync to itself. It could sync from anywhere upstream to a page like User:Botname/sync/xxxx/scriptyyy.js. Then, any interested user could just import that script. — xaosfluxTalk 15:16, 23 May 2025 (UTC)
:For gadgets, we already have a manual process - a bot that opens an edit request when an upstream repo wants to be loaded to the on-wiki one. That does allow us to ensure that changes are only made when we want them, and allows for local code review. For userscripts, users that want to do what this thread is about are already going to have to just trust the bot directly regardless. — xaosfluxTalk 15:22, 23 May 2025 (UTC)
:That might be fine, but to me it is less preferable than the main proposal, because then it would be harder to know who is maintaining which script. (I guess that wouldn't be the case if the {{code|xxxx}} refers to the user who asked for the script.) I'm also slightly lazy about adding a new proxy-script-creation system on top of this.
:A slight concern would be that the name could shift the responsibility of trust and maintaining the script to the bot instead of the actual maintainer. beef [talk] 15:24, 23 May 2025 (UTC)
::This would absolutely require that anyone's space that you were publishing to trusted the bot. By publishing a revision you would be responsible for the revision you publish. — xaosfluxTalk 15:53, 23 May 2025 (UTC)
:::The problem with this alternative approach is that it is just hard to manage.
:::If I make a user script, it should be my own. Under a bot's userspace, you'd need a separate process for requesting creation and deletion.
:::Also, this makes it harder for pre-existing scripts to be synced. People already using and developing a script at an existing location couldn't choose to adopt a Git sync. And it makes it much harder to disable syncing (compared to editing a page in your own userspace to remove the header). beef [talk] 03:32, 24 May 2025 (UTC)
Support. Deploying gadgets such as Twinkle and AFCH (using fragile and bespoke deploy scripts that have a lot of manual steps), and my user scripts (which I edit in VS Code then copy paste to onwiki) is a pain and not a good use of my time. Let's automate this. –Novem Linguae (talk) 15:24, 23 May 2025 (UTC)
I know this is not going to happen, but I consider it unfortunate that we have to do all these hacks. A more reasonable approach would be if there were a spot on Gerrit where script authors could put their gadget scripts (with CR expectations being similar to on-wiki rather than normal Gerrit) and have them deployed with normal MediaWiki deployments. I guess there are all sorts of political issues preventing that, but it seems like it would be the best approach for everyone. Gadgets deserve to be first-class citizens in the Wikimedia code ecosystem. Bawolff (talk) 18:03, 23 May 2025 (UTC)
:We're a top-10 website in the world, I wouldn't call it "political" that we could be hesitant about loading executable code from an external commercial platform in to our system without our review. — xaosfluxTalk 23:47, 23 May 2025 (UTC)
::If the community wants to restrict the sync to only Wikimedia GitLab, there wouldn't be any objections on my part, though I don't see why we can't do GitHub as well. beef [talk] 03:37, 24 May 2025 (UTC)
::::To clarify, I'm just saying that, in an ideal world, gadgets would be deployed as part of MediaWiki (i.e. they would ride the deployment train). It's weird that this stuff is being layered on top. I understand that there are political and historical reasons why this is not the case, but ideally gadgets would be treated the same as any other site JavaScript. Alas, that is not the world we are living in. Bawolff (talk) 23:55, 25 May 2025 (UTC)
::::The train is slow and mediawiki developers have been known to argue with communities about practices. — xaosfluxTalk 00:46, 26 May 2025 (UTC)
::::Well, if gadgets rode the deployment train, they wouldn't exactly be gadgets, would they? They would be indistinguishable from JavaScript loaded by extensions. The point of gadgets was for them to be fully under community control. I think it's intentional they're managed on-wiki, although admittedly at that time JS development tended to be lightweight and the drawbacks of wiki-based editing may not have been a big deal. Making gadgets be part of an extension feels akin to making Community Configuration controlled via ops/mediawiki-config. – SD0001 (talk) 06:17, 30 May 2025 (UTC)
:There was at least one hackathon project in the past that proposed something like this, but I don't think it ever went anywhere. @Legoktm and I think either @Krinkle or @Catrope (I can't remember which unfortunately) worked on the idea of making a single extension to host the code for multiple gadgets during the Mexico City Wikimania hackathon. Oh my, that was 10 years ago now. Today I assume one of the main blockers to this idea would be finding a Foundation engineering team to claim ownership/sponsorship of the extension. -- BryanDavis (talk) 19:51, 29 May 2025 (UTC)
The only concern I have is that you should require that the existing interface administrators be given write access to the repository on request. Otherwise, this falls into the ballpark of me not personally seeing the value or using this myself, but if other people think it's useful, then more power to them. * Pppery *it has begun... 17:37, 25 May 2025 (UTC)
:It's not something I can require, because it involves people who are not me. IAs can disable the sync by removing the sync line. I personally would give access to my repos to IAs upon request, but that's just me. dbeef [talk] 10:19, 2 June 2025 (UTC)
I'm highly supportive. I hope the default for devs of major scripts will become deployments from GitHub (the current ad hoc system is honestly pretty wild). Best, KevinL (aka L235·t·c) 23:49, 27 May 2025 (UTC)
Edit Source (MiniEdit)
Currently, you can only edit the entire article.
I suggest making a mechanism that tracks individual paragraphs and displays a pencil "Edit source" icon on the right.
You can see something similar in many webmail clients (e.g. gmail.com).
Of course, the question immediately becomes what to consider a paragraph. This is not so important; several paragraphs could be combined into one larger block.
It's better than editing the entire article across 4 screens. Seregadu (talk) 04:40, 29 May 2025 (UTC)
::Yes, that's exactly what I'm talking about. It often happens that the user does not see some scripts on the page. I've disabled everything I can. And on Chromium 136, I don't see this pencil. Like 99% of Wiki users Seregadu (talk) 04:56, 29 May 2025 (UTC)
:::I'm not exactly sure what issue you're having, but this is a script you need to add! I don't think you've done that, at least not to User:Seregadu/common.js, which is where you would only have to copy one line to enable MiniEdit. If you need more help, don't hesitate to ask. Remsense ‥ 论 04:59, 29 May 2025 (UTC)
::::I'm going to try adding a script now, but why not do it for everyone?
::::This script only works on the user's side and does not create load on the wiki. Thanks for the script! Seregadu (talk) 05:01, 29 May 2025 (UTC)
:::::Glad if I could help! Honestly, it's always worth considering that most people aren't "power users" like you and I, and maybe you can imagine little symbols showing up all the time being confusing or stressful for someone's grandma or a young child. Remsense ‥ 论 05:03, 29 May 2025 (UTC)
::::::This script edits only level 1. Very sad. Most discussions have 4 pages that are far from level 1. Seregadu (talk) 05:10, 29 May 2025 (UTC)
:::::Because the mechanism used to edit something is the section, not the paragraph. When you click an edit button next to a paragraph and get the whole section, your user will be like "what happened?!"
:::::And, to be honest, a button every paragraph would be a lot of clutter. Izno (talk) 05:10, 29 May 2025 (UTC)
::::::No! Exactly every paragraph ! After all, the pencil is already there and it works well. In this conversation, I can edit only the first 5 lines. Let's wait for your opinion when this conversation grows to 4 screens. Seregadu (talk) 05:14, 29 May 2025 (UTC)
:::::::I routinely edit pages much longer than that. Editing with paragraphs wouldn't be useful. Izno (talk) 05:41, 29 May 2025 (UTC)
:::::::What do you mean by 4 screens? And I definitely would not find this useful. Doug Wellertalk 06:45, 29 May 2025 (UTC)
A second problem, which I found just now! And where does the community see this? I'm not just adding empty lines, I'm testing the script. I see that it requires updating the browser cache after each text change. That's not normal.
It's as if adding text removes the script from the browser cache. Seregadu (talk) 05:20, 29 May 2025 (UTC)
:Yes, only 1 level. This limitation of an otherwise good script makes it useless for serious, lengthy discussions. Seregadu (talk) 05:26, 29 May 2025 (UTC)
::Discussions usually shouldn't be edited, the tool is for editing articles. CMD (talk) 06:07, 29 May 2025 (UTC)
::BTW the correct way to say "level 1" is "namespace 0" or just "article". See [https://www.mediawiki.org/wiki/Extension_default_namespaces List of namespaces]. Talk pages are namespace 1. -- GreenC 15:58, 29 May 2025 (UTC)
:I finally tried this script now; I wanted to edit my message. The script prompts me to edit the entire header of this page, not my message. This script doesn't work for me. Neither in namespace 0, nor 1, nor 2. Seregadu (talk) 19:10, 29 May 2025 (UTC)
:I couldn't find a link to my common.js page, or a link to all the scripts useful to the user. You don't expect me to write them myself, do you? The obvious place, "Special pages", has nothing.
:I think Wikipedia should structure useful links for the user inside their profile. Seregadu (talk) 10:25, 30 May 2025 (UTC)
::*Preferences → Appearance, specifically, it's the "{{int:prefs-custom-js}}" link just under "{{int:prefs-common-config}}"
::So, it's already "inside his profile". --Redrose64 🌹 (talk) 10:16, 31 May 2025 (UTC)
:::Yes, I wasn't paying attention. I searched in the top menu and in the side menu, but not in my profile. I was wrong. Yes, the script works for editing articles, but not discussions. And that's good too. Although it's strange for a wiki to invent different text formats.
:::But you still haven't answered the question: why doesn't a simple user, even without knowledge of JS, see a link to a library of useful scripts or styles? It's a pity if such a library exists but there is no link to it. Seregadu (talk) 16:51, 2 June 2025 (UTC)
Quarry (quarry.wmcloud.org) not working
{{tracked|T395680}}
@Liz I doubt this is the best place to ask, but Quarry is not working at all. Pages take 5 minutes to load and it is impossible to submit a query. I am posting this on the pump in case anybody knows what is causing this or how to fix it... -1ctinus📝🗨 00:42, 30 May 2025 (UTC)
:I noticed Quarry goes down every few days. If it happens again, you can use the alternative https://superset.wmcloud.org/sqllab/. – DreamRimmer■ 12:50, 31 May 2025 (UTC)
Editing references direct from the reflist
Sometime fairly recently a change was made to VE that allows you to double-click on a reference in the {{t|reflist}} and edit it directly, as opposed to having to go track it down in the body of the article. I just want to say that this is wonderful, and a huge timesaver, and thank you to whoever made this happen. RoySmith(talk) 00:57, 30 May 2025 (UTC)
:Oh yeah. Is there a userscript for non-VE? -- GreenC 04:15, 30 May 2025 (UTC)
:::Oh nice. I looked at Factotum; the complexity of its options and the number of things it takes over are kind of overwhelming, so I have yet to try it. I just installed ReferenceEditor and it's great, except it is only able to edit a small proportion of citations for some reason. I can understand certain cases, but it is unable to edit some perfectly formed, idiomatic CS1|2 citations. Maybe I need to spend time with Factotum to see what it can do. -- GreenC 15:44, 30 May 2025 (UTC)
::::I tried Factotum. It works better, though I wish it had a popup edit window like ReferenceEditor, but it's still a big help with citation maintenance. -- GreenC 16:00, 30 May 2025 (UTC)
:Thanks for the thanks, I've passed it along to the team. It's rare and appreciated. Digging… that was phab:T54750 by the Editing team and specifically Esanders in gerrit:c/mediawiki/extensions/Cite/+/903311 (and the mountain of prior code/collaboration that it all requires!). HTH. Quiddity (WMF) (talk) 04:53, 30 May 2025 (UTC)
::Thanks, I've requested mass message rights.♦ Dr. Blofeld 11:24, 31 May 2025 (UTC)
:::Is there a way to find a list of the most active editors (who've made the most substantial expansions) to science, technology, engineering, maths, medicine and business articles, and to geography and city/village/region articles, in recent years, including good and featured article contributors etc.? I've been looking through the Science project members and it's difficult to find active editors! ♦ Dr. Blofeld 13:27, 31 May 2025 (UTC)
::::: The database does record the project assessment and association of all pages (mw:Extension:PageAssessments#Database tables). So it's possible to get all pages tagged with the project, get for each page the number of edits for each editor, and then sum up the counts to get the editors with the most edits on that project in a given timeframe. We're going to add similar information (though from the "what are this user's projects" side rather than "what are this project's users") to XTools soon (we're doing a lot of stuff these days, so the change won't go live for a while). Probably this would be a slow query and should be done by batches (such as: first 100 pages, 101-200, and so on). — Alien 3 3 3 14:50, 31 May 2025 (UTC)
::::::@Alien333 Interesting, is there something I can read about the improvements to XTools? Currently it's technically possible, but it would require so many API calls that it would be a bad idea. Polygnotus (talk) 14:55, 31 May 2025 (UTC)
::::::: A list of everything that's happening/planned is at phab:tag/xtools. Feel free to drop a task if you've got a suggestion. Stuff that's done and will 100% be in the next update is in the "Pending deployment" column. Changes that still need review are at [https://github.com/x-tools/xtools/pulls].
::::::: It's perfectly doable in reasonable time, just not through the api. The go-to solution for such mass queries to the database that still can finish in reasonable time is [https://quarry.wmcloud.org/ quarry]. — Alien 3 3 3 15:03, 31 May 2025 (UTC)
:::: @Dr. Blofeld: well, I couldn't help myself fidgeting with the idea. Turns out the query takes a few minutes in the end.
:::: The MySQL optimiser is a bit dumb, so it can't be one query: first you have to go to a fork of [https://quarry.wmcloud.org/query/94186], change the project name on line 5, start it, wait a few minutes, then you get a comma-separated string of user IDs. Then go to a fork of [https://quarry.wmcloud.org/history/94187/1012895/982195], replace line 4 with what you got in the previous step, and poof, you get the list of the 100 most active users in the given wikiproject, with those with the most edits first.
:::: It's a bit of a mess, but it's prob still much faster than doing it by hand. — Alien 3 3 3 20:09, 31 May 2025 (UTC)
:::::Thanks both! Is there a way, Alien, that you could copy it into a wiki list? ♦ Dr. Blofeld 09:33, 1 June 2025 (UTC)
:::::: Quarry has a "download data" button at the right that lets you download the CSV of the result; as there's only one value per row here, it gives the names one per line. — Alien 3 3 3 09:48, 1 June 2025 (UTC)
::::: I don't see why you don't just join actor (or actor_revision, which is a little faster since you're already joining revision anyway). Also, you don't need to go through the page table at all, since page_assessments.pa_page_id is already a page id and that's all you're using it for; the revision_userindex view is usually a pessimization unless you already have a set of actor ids you're looking for; you don't need to select COUNT(*) just so that you can order by it; and you're aware that you're throwing away the ordering in that second query, right? quarry:query/94218 does it in one step; quarry:history/94218/1013390/982681 for a version showing the edit counts. —Cryptic 21:13, 1 June 2025 (UTC)
:::::: I wasn't joining on actor because the MySQL optimiser is dumb and last time I checked it didn't use the index when doing the join, which meant it scanned the whole actor table and took ages. Maybe related to your other points, though.
:::::: You're 100% right on the join on page, and the other stuff you said; and no I'd forgotten that the second query threw the ordering away.
:::::: I'm a bit rusty at SQL :). — Alien 3 3 3 05:33, 2 June 2025 (UTC)
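For readers following this thread, the approach discussed (pages assessed by a WikiProject, joined to their revisions and editors) can be sketched as a single query. This is an untested sketch: the table and column names follow the documented replica schema (page_assessments, page_assessments_projects, revision, actor), and a real query should bind the project name as a parameter rather than interpolate it into the string.

```python
# Hypothetical sketch of the "most active editors in a WikiProject" query
# described above, expressed as a SQL string builder. Table/column names
# follow mw:Extension:PageAssessments#Database tables and the core
# MediaWiki schema; this has not been run against a live replica.

def top_editors_query(project_name: str, limit: int = 100) -> str:
    """Build SQL for the `limit` most active editors in a WikiProject."""
    # Note: in real code, pass project_name as a bound query parameter,
    # not via string formatting, to avoid SQL injection.
    return f"""
SELECT actor_name, COUNT(*) AS edits
FROM page_assessments
JOIN page_assessments_projects ON pa_project_id = pap_project_id
JOIN revision ON rev_page = pa_page_id
JOIN actor ON actor_id = rev_actor
WHERE pap_project_title = '{project_name}'
GROUP BY actor_name
ORDER BY edits DESC
LIMIT {limit};
""".strip()

print(top_editors_query("Science"))
```

Pasting a query like this into a fork of the Quarry queries linked above (adjusted as needed) should give the same result in one step rather than two.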
I can’t Log In!
So, unfortunately, I was logged out of my account, and whenever I try to log in, the following text message appears: “There seems to be a problem with your login session; this action has been canceled as a precaution against session hijacking.” It also further mentions that it may be due to my cookie settings. Well, I can’t access that due to this exact problem. If anyone could help me, I’d be very thankful. BTW, my account is “Long-live-ALOPUS”. This may have something to do with my account completing one year, but I’m able to log in on other devices, just not my iPad. Could it be a problem from my side? I don’t think I forgot my password. Please help. 2405:201:550B:B035:B588:DBDC:3F72:E094 (talk) 11:21, 31 May 2025 (UTC)
:Can you access https://auth.wikimedia.org? It redirects to https://www.wikimedia.org/. If the redirect works then try deleting your cookies on the iPad. See [https://support.apple.com/en-us/105082]. If you don't want to delete data for all websites then try wikimedia.org and wikipedia.org. PrimeHunter (talk) 12:47, 31 May 2025 (UTC)
::Yes, I’m even signed in. The problem is only in the English Wikipedia. I’ll try deleting the cookies. Thank you for your help! 😄 2405:201:550B:B035:FC82:3345:E73B:F763 (talk) 14:13, 31 May 2025 (UTC)
::I deleted the website data for Wikipedia on my iPad, but it still hasn’t worked... what should I do now?! 😞 2405:201:550B:B035:FC82:3345:E73B:F763 (talk) 14:28, 31 May 2025 (UTC)
::::::Like, this is me on another iPad, but it’s not my main device. Long-live-ALOPUS (talk) 04:14, 1 June 2025 (UTC)
:::::::Try opening an incognito window (that's on Chrome; I think Safari and Firefox call it private browsing) and try to login there. If that works, that's a pretty good indication that you've still got some stale cookies that need removing. RoySmith(talk) 18:04, 1 June 2025 (UTC)
::::::::I use an old iPad (the first generation of the iPad Air), so it doesn’t have that feature. 2405:201:550B:B035:CD9E:1317:5009:A39B (talk) 07:07, 2 June 2025 (UTC)
:::::::::That's a 12-year-old machine (from when it was introduced). The newest version of iOS it should support is iOS 12. iOS 12 comes with Safari 12, which most definitely has "Private browsing". It is not unlikely that there is some sort of incompatibility between iOS 12 devices and the recent changes to the login methodology, as it was likely never tested. Have you tested other language Wikipedias? What about https://en.wikivoyage.org ? —TheDJ (talk • contribs) 09:27, 2 June 2025 (UTC)
::::::::::Well, I don’t have that private browsing feature; I think there’s a content filter, that’s why. Also, yes, I’m able to log in to my Arabic and Hindi Wikipedia accounts (which are the same name as my English one), but not Wikivoyage. Also, I’m able to log in from other, non-permanent devices, so this is a problem in my iPad. 2405:201:550B:B035:CD9E:1317:5009:A39B (talk) 12:57, 2 June 2025 (UTC)
Unwanted box
For some reason I'm now seeing a box at the top of every article page with Article Links Tools and Include URLs. All I did was update my common.js to allow mass messages [https://en.wikipedia.org/w/index.php?title=User%3ADr._Blofeld%2Fcommon.js&diff=1293183948&oldid=1182781227 here] ♦ Dr. Blofeld 11:24, 31 May 2025 (UTC)
::Ah OK, thanks! ♦ Dr. Blofeld 12:38, 31 May 2025 (UTC)
Category not retained in draft with AFC submission template
Hello, I'm noticing an issue with Draft:Baba Mosque where manually added categories (such as Category:AfC draft submissions) are not retained or do not appear in the rendered page after saving, especially when the template is used.
2] Add a category like (Category:AfC draft submissions)
3] Save the page — the category doesn’t appear
Is this suppression intentional due to the template? Or is there a technical issue at play? Thanks! JesusisGreat7☾⋆ | Ping Me 11:50, 31 May 2025 (UTC)
:This is a tracking category that is added automatically by the {{tl|AfC submission}} template and does not render as text. – DreamRimmer■ 12:45, 31 May 2025 (UTC)
::The draft uses {{tl|Draft categories}} which deliberately only displays the categories at the location without actually adding the page to the categories. Don't change this. The categories would be added if they were outside {{tl|Draft categories}} but don't do that. AfC categories should not be added manually. I have added {{tl|AfC submission}} instead.[https://en.wikipedia.org/w/index.php?title=Draft:Baba_Mosque&diff=prev&oldid=1293220665] PrimeHunter (talk) 13:01, 31 May 2025 (UTC)
Request: example of markup for tickable checkboxes
(Context) I would like to add a section to an article talk page which contains a list of checkboxes which I can tick and then save the section. Short of using 'pre' tag with '[ ]' and '[X]', is there a civilized way to do it? Gryllida (talk, e-mail) 12:50, 31 May 2025 (UTC)
:{{ping|Gryllida}} You could use {{tl|Checkbox 2 (simple)}} or another template linked there. PrimeHunter (talk) 13:10, 31 May 2025 (UTC)
: I just use
:* {{tlg|unchecked box|code=y|_show_result=y}}
:* {{tlg|checked box|code=y|_show_result=y}}
: and don't really need to bother with a parameterized template whose name or parameters I can't remember. Mathglot (talk) 19:07, 1 June 2025 (UTC)
Gadget to make delete button more accessible?
Is there some gadget that would modify the Vector legacy (2010) skin and move the "delete" button out of the "more" panel to make it more accessible? Maybe move it that way only if a delete template is on the page?
Note: I know that I am not an admin on Wikipedia. I have admin rights on another, much smaller, MediaWiki wiki, where there is a backlog of many pages to be deleted. Currently I need to click to open a page, click to view the history and maybe investigate, click to unroll the panel, click delete, and confirm the deletion. I would gladly simplify this process, as I will do it about 1,200 times or more. Mateusz Konieczny (talk) 14:25, 31 May 2025 (UTC)
:{{ping|Mateusz Konieczny}} Some of our deletion templates make a delete link which is only visible to administrators and has a prefilled reason. If you post a link to a page with a deletion template at your wiki then we can maybe help more. PrimeHunter (talk) 17:38, 31 May 2025 (UTC)
::{{ping|PrimeHunter}} https://wiki.openstreetmap.org/wiki/Template:Delete Mateusz Konieczny (talk) 21:08, 1 June 2025 (UTC)
::::{{ping|PrimeHunter}} Thanks, I got it working! Mateusz Konieczny (talk) 00:55, 2 June 2025 (UTC)
: Mateusz, after you post the link, if the button has a CSS class defined for it, you may be able to move it yourself, using custom code at your common.css page. If not, then probably a User script would do it. Mathglot (talk) 18:59, 1 June 2025 (UTC)
Finding raw text CN tags
Quite a few articles contain a {{code|[citation needed]}} tag, usually added via visual edit. These should be converted into standard cn tags. I fixed one at Special:Diff/1293275217. And [https://en.wikipedia.org/w/index.php?search=insource%3A%22%5B%5BWikipedia%3ACitation+needed%22&title=Special%3ASearch&ns0=1&searchToken=86opyhchckutlk02egbfjhm1p this insource search] finds them. —CX Zoom[he/him] (let's talk • {C•X}) 20:34, 31 May 2025 (UTC)
:Try insource:"Citation needed" insource:/\[\[Wikipedia:Citation needed/i . The latter is a regex search. Ponor (talk) 20:40, 31 May 2025 (UTC)
::The Special:Search filters the articles correctly with this search term. But, WP:JWB keeps adding every article with a cn tag in it, probably a bug with JWB? —CX Zoom[he/him](let's talk • {C•X}) 20:47, 31 May 2025 (UTC)
:::@CX Zoom: Did you limit the JWB search to main(space) only? By default, it includes all namespaces. Ponor (talk) 20:52, 31 May 2025 (UTC)
::::Limiting by namespace works. Thank you! —CX Zoom[he/him](let's talk • {C•X}) 20:58, 31 May 2025 (UTC)
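For anyone doing this cleanup outside JWB, the conversion itself is a single regex substitution. A hedged sketch in Python: the pattern below assumes the raw tag is a plain wikilink to Wikipedia:Citation needed (the form the insource searches above target); the exact raw text left behind by visual edits may vary, so check matches before saving.

```python
import re

# Hypothetical cleanup sketch: convert raw wikilinks to the Citation
# needed page, e.g. "[[Wikipedia:Citation needed|citation needed]]",
# into the standard {{citation needed}} template. The optional group
# handles links with or without a pipe label.
RAW_CN = re.compile(r"\[\[Wikipedia:Citation needed(?:\|[^\]]*)?\]\]",
                    re.IGNORECASE)

def fix_raw_cn(wikitext: str) -> str:
    """Replace raw CN wikilinks with the {{citation needed}} template."""
    return RAW_CN.sub("{{citation needed}}", wikitext)

print(fix_raw_cn("Some claim.[[Wikipedia:Citation needed|citation needed]]"))
# → Some claim.{{citation needed}}
```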
How to pass an article into Python code in AWB
So, I have written a Python code that takes a file, does some operations on the text, and replaces the old text with new text. Now, Wikipedia:AutoWikiBrowser/User manual#Tools allows external scripts, but I don't understand how to pass the article through the Python code. What additional code is needed for it? —CX Zoom[he/him](let's talk • {C•X}) 22:50, 31 May 2025 (UTC)
:(When you do, I suggest showing the code that you used to do so, or at the very least whether you use pywikibot or handjammed things.) Izno (talk) 20:39, 1 June 2025 (UTC)
:CX Zoom, to my understanding, you have a Python script read the content from a file, and then write the changed content back to the file. So you could set the "Program or script" field to the Python executable, pass the path to the Python script as an argument, and then have the script do something like: {{#tag:syntaxhighlight|with open(filename, "r") as file:{{Truenewline}} content = file.read(){{Truenewline}}with open(filename, "w") as file:{{Truenewline}} file.write(perform_changes(content)){{Truenewline}}|lang="python"}}— Qwerfjkltalk 11:04, 2 June 2025 (UTC)
::@Qwerfjkl: The structure of the script is similar. I understood the "Program or script" field also. But I don't understand the "Arguments/Parameters" field. Do we enter the same value in both fields? —CX Zoom[he/him](let's talk • {C•X}) 18:22, 2 June 2025 (UTC)
:::I am only guessing (and you need to ask this at AWB), the Arguments/Parameters would be something to specify the name of the file to be processed (variable "filename" in above code). If any other arguments were required by the Python program, they would also be given, similar to how you would run a program from a command line. Johnuniq (talk) 05:43, 3 June 2025 (UTC)
:::CX Zoom, as I said, you can put the path to the python executable (python.exe) in the Program or script field, and the path to the python script under "Arguments/Parameters". Equivalent to running python script.py in the terminal. — Qwerfjkltalk 15:11, 3 June 2025 (UTC)
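Putting the answers above together, a complete external script might look like the sketch below. This is an illustration, not AWB's documented interface: perform_changes is a placeholder for your own text operations, and the filename is taken from the first command-line argument (the value supplied via the Arguments/Parameters field).

```python
# Sketch of an AWB-style external script: read the article text from the
# file passed as an argument, transform it, and write it back so the
# calling program picks up the result.
import sys

def perform_changes(text: str) -> str:
    # Placeholder transformation: strip trailing whitespace on each line.
    return "\n".join(line.rstrip() for line in text.splitlines())

def main(filename: str) -> None:
    with open(filename, "r", encoding="utf-8") as f:
        content = f.read()
    with open(filename, "w", encoding="utf-8") as f:
        f.write(perform_changes(content))

if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

With this saved as, say, script.py, the "Program or script" field would point at python.exe and the Arguments/Parameters field would hold the script path plus the article file path.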
::::Thank you very much everyone. Issue resolved now. —CX Zoom[he/him](let's talk • {C•X}) 22:06, 3 June 2025 (UTC)
Is "Related changes" working properly? (example: Category:Use Malaysian English)
:Category:Use Malaysian English transcludes {{tl|Parent monthly clean-up category}}. That template was modified on 31 May 2025, but when I click on "Related changes" in the sidebar of :Category:Use Malaysian English, the resulting page says No changes during the given period match these criteria. I have been having a feeling that "Related changes" has not been working properly for a few months, but this is the first time that I have been able to find a concrete example. Am I misunderstanding what "Related changes" is supposed to show? I use it to try to figure out why a page that has not been modified in a while is suddenly showing a change of some kind (e.g. a new category or syntax error). – Jonesey95 (talk) 14:27, 2 June 2025 (UTC)
:{{ping|Jonesey95}} Related changes doesn't show changes to pages which are transcluded. It only shows changes to pages which are linked on the page or have a link to the page. See more at Help:Related changes. PrimeHunter (talk) 19:51, 2 June 2025 (UTC)
::That is a helpful link. I see an explicit statement there: {{tq|Changes to transcluded pages like templates are not listed, unless there is also a link to or from the page.}} Maybe it has just been coincidence that clicking on "Related changes" has often worked for me in these situations. I guess my question is, then, if a page that has not been modified in a while is suddenly showing a change of some kind (e.g. a new category or syntax error), what is a good way to figure out what has caused the change? I seem to remember a script that sorted "Pages transcluded onto the current version of this page" by modified date, which would probably work, but I found it difficult to live with because if I was looking for a specific template, I could never find the template in the long list because it was not alphabetized. – Jonesey95 (talk) 20:06, 2 June 2025 (UTC)
:::The script is User:Anomie/previewtemplatelastmod but I also found it difficult to live with. I gave up using it because both the order and added information was unwanted most of the time and made it harder to find templates of interest. {{ping|Anomie}} It's a great script when I do want the changes it makes. I would love to reinstall it if I had to click something on an edit page to activate it. PrimeHunter (talk) 21:17, 2 June 2025 (UTC)
::I don't think this is true in the "to" direction. If you do Related changes on a template and select the option to show changes to pages linked to the selected page, all the pages that include that template are usually included. Bawolff (talk) 00:06, 4 June 2025 (UTC)
:: At a quick glance at the code, it looks like it normally does check both pages linked and pages transcluded. But category pages are special in that the normal use (like Special:RecentChangesLinked/Category:Use Malaysian English) ignores all links from the category page, instead it gives you changes for the members of the category. Anomie⚔ 01:10, 4 June 2025 (UTC)
Feel free to leave the results in my user space, if you're not able to edit a protected template on that project. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 17:35, 2 June 2025 (UTC)
:Answered over there. – Jonesey95 (talk) 18:06, 2 June 2025 (UTC)
Simple summaries: editor survey and 2-week mobile study
Hi everyone! I'm writing on behalf of the Web Team. Over the past year, the team has been exploring ways to make the wikis more accessible to readers globally through different projects around content discovery. One of the ideas we’ve been discussing is the presentation of machine-generated, but editor moderated, simple summaries for readers. These summaries take existing Wikipedia text, and simplify it for interested readers. Readers will show interest by opting into the feature and clicking to open the summary on pages where it is available. As part of our exploration into this idea, in the next two weeks we will be launching:
1. An editor survey on English, Spanish, French, and Japanese Wikipedias. This survey will ask editors about their preferences for generating, editing, and moderating summaries, as well as their thoughts on the project overall. We will use the data from this survey to propose the initial moderation workflows for a future version of a summary feature.
2. A two-week experiment on the mobile website. This experiment will allow a small set (10%) of readers to opt into and open pre-generated summaries on a set of articles for two weeks. After two weeks, we will turn the experiment off and use the data collected to determine whether users are interested in summaries and open them frequently, as well as whether summaries aid the overall experience.
After the completion of these two steps, we’ll be publishing our results on the project page and reaching out to discuss whether to proceed with building this feature and provide some options for its associated workflows for editors. You are welcome to leave questions around the project here or on the project talk page. EBlackorby-WMF (talk) 18:20, 2 June 2025 (UTC)
:Yuck. Also, this should probably be at VPPR or VPWMF. Cremastra (u — c) 21:58, 2 June 2025 (UTC)
:@EBlackorby-WMF But seriously. I'm grinning with horror. Just because Google has rolled out its AI summaries doesn't mean we need to one-up them.
:I sincerely beg you not to test this, on mobile or anywhere else. This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word "machine-generated" is used instead.
:You also say this has been "discussed" which is thoroughly laughable as the "discussion" you link to has exactly one participant, the original poster, who is another WMF employee. Cremastra (u — c) 22:04, 2 June 2025 (UTC)
What a coincidence! I had just read this article (https://www.theverge.com/news/676933/gmail-ai-summaries-workspace-android-ios) a day ago and wondered if there would be a similar feature on Wikipedia. As long as this machine/AI-generated summary feature is opt-in, I don't see any downsides to having it available for interested readers. The attention spans of the younger generations are shrinking, and some would rather read a short summary of the World War II article than a 13,033-word long article; this feature would be useful and beneficial for them. Some1 (talk) 22:43, 2 June 2025 (UTC)
:We can read the lead, which is a convenient, short summary written by real people. Cremastra (u — c) 22:45, 2 June 2025 (UTC)
::Have you seen our leads lately? Izno (talk) 22:49, 2 June 2025 (UTC)
:::All right, they're a reasonably short summary. In any case, even in articles with longer leads like Romeo and Juliet it is possible to skim over or ignore the parts that disinterest me and still extract valuable information. Cremastra (u — c) 22:51, 2 June 2025 (UTC)
::::[[File:Dopamine simple Summary.png]] The lead of Romeo and Juliet isn't as long as the lead of World War II, which I'd linked. It seems like these AI-generated simple summaries are ~5 sentences long, which is much shorter (and more digestible) than the average leads of (non-stub) articles. Some1 (talk) 02:28, 3 June 2025 (UTC)
:::::Also, concerns about this feature should focus on the "This summary has not been checked for verifiable accuracy" part, not because "it's AI". Some1 (talk) 02:39, 3 June 2025 (UTC)
::::::The first paragraph is generally the digestible summary of the summary. This is enforced technologically in mobile views, which is where most of the views of the above-maligned younger generations are going to be coming from, as only the first paragraph is shown before the infobox. For World War II, that is six sentences. CMD (talk) 04:58, 3 June 2025 (UTC)
:::::::The first paragraph of Donald Trump is two sentences long. I want to see how "Aya" summarizes that Trump article and other contentious articles such as Gaza genocide, Zionism, Race and intelligence, etc. Some1 (talk) 03:00, 4 June 2025 (UTC)
::::::::@Some1 I only have experience with the beautifully named c4ai-aya-expanse-32b but I'd describe it as almost "on the level of OpenAI". Which is where a lot of companies are at. It refuses to tell me that I am wrong, even when I am intentionally saying stuff that is clearly incorrect. It says things like {{tq|You're right; the poem I generated does indeed draw inspiration from the well-known and beloved "duck poetry" genre}}. Polygnotus (talk) 03:10, 4 June 2025 (UTC)
::::::::{{u|Some1}}, summarizing contentious articles and how that will almost certainly be exploited and weaponized by partisan actors is something that interests me too. For interest, see [https://en.wikipedia.org/wiki/Talk:International_recognition_of_Palestine#Extended-confirmed-protected_edit_request_on_30_May_2025 this recent query from a reader], where they appear to either a) assume that we control Google knowledge panels and/or b) that we should amend the wording to ensure the part they regard as important fits into the summary. It is the first time that I have seen something like this. Sean.hoyland (talk) 07:54, 4 June 2025 (UTC)
:::::::::Oh we get those every now and then, not just for CTOPs. CMD (talk) 08:07, 4 June 2025 (UTC)
:@EBlackorby-WMF Hi! As you can tell, your proposal does not align with what the community actually wants.
:As you may or may not be aware, the WMF and the Wikipedia community have a very difficult and tense relationship.
:It sounds like you guys already did a lot of work, without getting input from the community.
:You link to this with the text {{tq|we’ve been discussing}} but that must've been an internal WMF discussion because no one responded to that post.
:Perhaps the Movement Communications team forgot to actually communicate with the movement? {{smiley|13}}
:I recommend stopping, and in the future asking for feedback at a far far earlier stage (but of course I know you won't).
:There are many people here who are happy to help you by saying why we dislike certain ideas. But you need to involve those people at an early stage (during brainstorming), otherwise it is difficult to change course and you've wasted a lot of time and energy.
:The WMF as a whole makes this same mistake over and over and over again. If you want to hear all the downsides and problems with a proposal, you can ask me on my talkpage. Polygnotus (talk) 05:15, 3 June 2025 (UTC)
::How can you tell that from 5 people responding ? Have you run your own research into this ? —TheDJ (talk • contribs) 14:01, 3 June 2025 (UTC)
:::@TheDJ Let's not argue for the sake of arguing. It might confuse them. This isn't a [https://hannahmontana.sourceforge.net/index.html distro]-war. Polygnotus (talk) 15:30, 3 June 2025 (UTC)
::Did a lot of work even go into this? It takes all of 5 minutes to set up the world's 327482th ChatGPT wrapper claiming to be a brand new feature. It feels slapdash and easy for them to throw up and it's such an insult to the community. 🌸wasianpower🌸 (talk • contribs) 04:20, 4 June 2025 (UTC)
The replies above may be missing the point. Imagine you were a WMF staffer. Would you want to fix boring bugs or would you want to work on AI for your CV? Johnuniq (talk) 05:49, 3 June 2025 (UTC)
: I'd want to fix boring bugs, particularly in the Action API. 😀 Anomie⚔ 11:22, 3 June 2025 (UTC)
::{{tq|Imagine you were a WMF staffer. Would you want to fix boring bugs}}? Hell yes I would. Have you seen my contributions here on en.WP? All I do is fix boring bugs so that our readers and editors get the pages, links, text, and images that they expect, with a minimum of errors. The WMF needs more gnome developers. – Jonesey95 (talk) 12:56, 3 June 2025 (UTC)
:::WMF, hire Anomie and Jonesey95! Polygnotus (talk) 15:29, 3 June 2025 (UTC)
:::: {{re|Polygnotus}} The joke in my reply is that I did work for the WMF from 2012 to 2020, and fixing boring bugs and working on the Action API are things I did. I also worked on Scribunto, TemplateSandbox, TemplateStyles, Special:ApiSandbox, and a bunch of other stuff. But eventually I wound up with a manager who seemed primarily interested in trying to climb the management ladder (yay kiss up kick down), and in general office politics became a bigger and bigger requirement, and eventually they fired me, no real reason given. Even if they'd be willing to rehire me, I hear the office politics have continued to get worse since so I still might not fit in anymore. Anomie⚔ 00:03, 4 June 2025 (UTC)
:::::@Anomie I've heard that WMF CEO is a decently paid position. Maybe you have some spare time? Polygnotus (talk) 03:49, 4 June 2025 (UTC)
::::::I don't think I'd do very well at that job. My skills are elsewhere. Anomie⚔ 11:49, 4 June 2025 (UTC)
:::::::@Anomie Then you'll fit right in. Polygnotus (talk) 18:29, 4 June 2025 (UTC)
::: I would want to fix longstanding bugs too, and while as a volunteer I'm probably not as dedicated as Jonesey95, I actually did apply for a job with the Wikimedia Foundation and got an interview but was turned down. * Pppery *it has begun... 15:31, 3 June 2025 (UTC)
::::@Pppery Why were you turned down? I imagine you have asked that question. Also by whom? Polygnotus (talk) 15:34, 3 June 2025 (UTC)
::::: "While we were impressed with your qualifications and experience, we have decided to proceed with other candidates". No other detail, and I didn't ask. I don't think "by whom" is a useful question to ask; the email I got was from a member of the recruiting team but I'm sure she wasn't the one actually making the decision. * Pppery *it has begun... 15:37, 3 June 2025 (UTC)
:Cui bono? Is there some financial incentive to make this decision which, from our point of view, doesn’t make any sense? I mean for the foundation as a whole, not any individual working there. I’m trying to understand where the pressure to do this is coming from. 3df (talk) 02:45, 4 June 2025 (UTC)
::@3df, believe it or not, there are people out there who think this kind of thing is useful, and a great many of them work in development. No ulterior or secret motive is necessary to explain it. -- asilvering (talk) 02:47, 4 June 2025 (UTC)
:::@Asilvering Agreed. Although it would be very very beneficial for the AI company to have people correcting the output of their AI for free. They normally have to hire people to do that job. Polygnotus (talk) 02:50, 4 June 2025 (UTC)
Nope, I don't want an additional floating window of content for editors to argue over. Not helpful or better than a simple article lead. ꧁Zanahary꧂ 06:36, 4 June 2025 (UTC)
{{clear}}
Keep AI out of Wikipedia. That is all. WMF staffers looking to pad their resumes with AI-related projects need to be looking for new employers. Carrite (talk) 16:01, 3 June 2025 (UTC)
I'm glad that the WMF is thinking about a solution to a key problem on Wikipedia: most of our technical articles are way too difficult. My experience with AI summaries on Wikiwand is that they are useful, but too often produce misinformation not present in the article they "summarise". Any information shown to readers should be greenlit by editors in advance, for each individual article. Maybe we can use it as inspiration for writing articles appropriate for our broad audience. —Femke 🐦 (talk) 16:30, 3 June 2025 (UTC)
:But 99% of the AI consumers knowingly interact with is trained on Wikipedia, so they don't need wikipedia.org for that. So the WMF is proposing making a shittier version of something that already exists. Polygnotus (talk) 16:49, 3 June 2025 (UTC)
::It would be good if we had our own version of it, where we could control what is shown and how it is shown, instead of having a for-profit company modify our content as they please with no way for anyone to do anything about it, and no free and open alternative. Matma Rextalk 17:39, 3 June 2025 (UTC)
:::That appears to be based on a bunch of incorrect assumptions. It is not like a nuke, we don't need to have it just because others do.
:::{{tq|we could control what is shown and how it is shown}} Being able to set a system prompt is not control; you'd have to train your own model, which means either copyright violations on a massive scale or training a model exclusively on Wikipedia data, meaning it would be completely inferior to what is available.
:::{{tq|instead of having a for-profit company modify our content as they please with no way for anyone to do anything about it}} Have you read WP:REUSE? This is what you signed up for.
:::{{tq|and no free and open alternative}} What are you talking about? Anyone can download ollama. https://ollama.com/ The WMF does not have the money and brainpower required to develop a serious alternative to the existing models, and if they try that is a clear indication that they don't understand their role. But the screenshot says that the name of the model is Aya. Aya is a family of models by Cohere Labs. https://cohere.com/research/aya Which is a for profit company. Polygnotus (talk) 18:10, 3 June 2025 (UTC)
::::@Polygnotus Your comparison to nuclear bombs seems out of proportion.
::::{{tq|Being able to set a system prompt is not control}} I don't mean a system prompt, I mean the user interface around the summary (see the mockup earlier in the thread, with ample warnings and notes, and compare it to the summary in Google or whatever else) and I mean the editorial control to hide or override these summaries if they're egregiously wrong, which I hope will be available if this experiment becomes a real feature.
::::{{tq|Have you read WP:REUSE}} I think it's a bit rude of you to imply I don't know what I'm talking about. Anyway, have you seen how the content is actually presented by the various LLM companies? They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it.
::::{{tq|What are you talking about? Anyone can download ollama}} Famously, everyone on the planet has a computer able to run large language models, and will not mind waiting several seconds or minutes for the results to come out. Oh, wait. Local models are only a viable alternative for a small group of people.
::::I don't think you replied to what I said, only to things you imagined I said. I'm happy to argue for a bit, but please slow down. Matma Rextalk 21:01, 3 June 2025 (UTC)
:::::{{tq|They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it.}} DuckDuckGo does these days. Izno (talk) 21:06, 3 June 2025 (UTC)
:::::{{tq|I mean the user interface around the summary (see the mockup earlier in the thread, with ample warnings and notes, and compare it to the summary in Google or whatever else) and I mean the editorial control to hide or override these summaries if they're egregiously wrong, which I hope will be available if this experiment becomes a real feature.}} People do not read banners and warnings, see Banner blindness. You can never make a banner big enough to force people to read it. {{tq|override these summaries if they're egregiously wrong}} Even the example they provided is already egregiously wrong; of course they will be. Having humans override them after the fact is not a reasonable solution to a giant problem.
:::::I don't think WP:REUSE is a very popular page, and there are tons of people who don't realize that basically anyone can copy anything from Wikipedia, and no one does anything about it, even if they do not follow the terms of the license.
:::::{{tq|have you seen how the content is actually presented by the various LLM companies? They don't generally cite content as they should (and not just from Wikipedia), and as far as I can tell no one yet has managed to force them to do it.}} Yes, I have, which is why my opinion is what it is.
:::::{{Tq|Local models are only a viable alternative for a small group of people.}} agreed. You talked about {{tq| no free and open alternative}} which is why I mentioned Ollama.
:::::{{Tq|please slow down}} I mean if they really do this I think we've lost the war and I'll just leave Wikipedia. Or set up an alternative and then leave. Polygnotus (talk) 21:13, 3 June 2025 (UTC)
::::::What war? Who's against whom? And what does MAD has to do with this discussion? Do you think we're building Skynet here or something? I am baffled and at a loss as to how to reply to this. Matma Rextalk 21:18, 3 June 2025 (UTC)
:::::::You wrote {{Tq|what does MAD has to do with this discussion?}} in response to me writing {{Tq|The nuke thing is a famous example of game theory, see Mutually Assured Destruction for more.}} which was my response to {{tq|Your comparison to nuclear bombs seems out of proportion}} in response to me writing {{tq|It is not like a nuke, we don't need to have it just because others do.}}
:::::::See how meta-conversations are near impossible on Wikipedia (and real life)? Polygnotus (talk) 21:24, 3 June 2025 (UTC)
::One of the reasons many prefer chatGPT to Wikipedia is that too large a share of our technical articles are way way too difficult for the intended audience. And we need those readers, so they can become future editors. Ideally, we would fix this ourselves, but my impression is that we usually make articles more difficult, not easier, when they go through GAN and FAC. As a second-best solution, we might try this as long as we have good safeguards in place. —Femke 🐦 (talk) 18:32, 3 June 2025 (UTC)
:::@Femke You seem to ignore [https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#c-Polygnotus-20250603181000-Matma_Rex-20250603173900 this comment] where I explained that the WMF can't compete with AI companies whose core business is to develop AI models, the fact that a model trained exclusively on Wikipedia data would be far inferior to a model trained on a far far larger dataset, and the fact that they are using Aya. {{tq| as long as we have good safeguards in place}} What do you mean? Polygnotus (talk) 18:37, 3 June 2025 (UTC)
::::As in: moderation before something is put to readers, rather than after the fact. Which would in practice restrict the feature to high-priority technical articles, given that we have limited editor time for this. I don't know enough about the specifics of Aya to comment intelligently there. —Femke 🐦 (talk) 18:42, 3 June 2025 (UTC)
:::::@Femke I think you know that is not what the WMF is proposing. So your comments make no sense. {{tq|we might try this}} They are not proposing that {{tq|we}} try anything. They are proposing giving the most important screen real estate we have (the WP:LEAD) of every article to a for-profit company. Polygnotus (talk) 18:45, 3 June 2025 (UTC)
::::::In the comment above, they say that the moderator workflow is still to be determined. You're probably right they don't have a 'check first' workflow in mind, but if there is consensus to implement this (and it seems from this discussion so far that there probably isn't), I imagine the community would only be okay with this with extremely strong moderation in place. Like, the CMD example below is something that needs to be avoided at all costs.
::::::Perhaps, it's time to start a Wikiproject and some type of contest to fix the problem identified and ensure we write articles that people can actually understand. My Challenges seem not to work as an encouragement. —Femke 🐦 (talk) 18:52, 3 June 2025 (UTC)
:::::::You think people are lining up to check the work of an AI model? Especially when summarizing complicated technical topics most people don't even understand? Polygnotus (talk) 18:58, 3 June 2025 (UTC)
:::::::::: Just to be clear, I oppose the test of unverified AI summaries that is slated to roll out next week to 10% of mobile editors (phab:T393940). The foundation cannot dictate content. —Femke 🐦 (talk) 06:49, 4 June 2025 (UTC)
A truly ghastly idea. In other words: Yuck. Since all WMF proposals steamroller on despite what the actual community says, I hope I will at least see the survey and that—unlike some WMF surveys—it includes one or more options to answer "NO". Yngvadottir (talk) 17:02, 3 June 2025 (UTC)
:It sure looks like they are planning to ask casual readers who use the mobile app. And if you ask them, their answer would probably be "yes". But that doesn't mean that it is a good idea. And introducing AI summaries would probably lead to a fork and an exodus. I would honestly be shocked if AI is not the final straw in the relationship between the WMF and the community. Polygnotus (talk) 17:17, 3 June 2025 (UTC)
:::The [https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq survey] linked below was much as I expected: almost nowhere to express disapproval of the whole putrid idea. Bonus: video without closed captions. Of text. On YouTube. Classy and inclusive. Yngvadottir (talk) 23:24, 3 June 2025 (UTC)
::::@Yngvadottir I really enjoyed the question about {{tq|An AI-powered dashboard that shows the quality and metrics for summaries for my wiki}}.
::::What is an AI-powered dashboard??? This was clearly written by someone who is unfamiliar with the capabilities and limitations of AI technology.
::::You'd expect something like "powered by Kibana".
::::Also {{tq|my wiki}} shows that they want to dump this trash on multiple wikis, despite the fact that everyone who speaks both English and another language knows how much worse the experience is in languages other than English. Polygnotus (talk) 01:57, 4 June 2025 (UTC)
::::Yea, I saw that too. The survey that I did had nowhere for me to say that it was a bad thing. In fact, there were even worse ideas listed there, such as the suggestion that edits to summaries might not even be listed anywhere, unlike edits to normal pages. 2007GabrielT (talk) 12:17, 4 June 2025 (UTC)
:Laudable goal, but if it is to go through, it should be only if established editors, i.e. extended confirmed editors, decide if the generated summary can supercede the current lead, or decide that the generated content requires modifications before using. – robertsky (talk) 19:03, 3 June 2025 (UTC)
::@Robertsky {{tq|if the generated summary can supercede the current lead}} That is not what they are proposing at all... {{tq|if established editors, i.e. extended confirmed editors, decide}} that is also not what they are proposing {{tq|decide that the generated content requires modifications before using}} that is also not what they are proposing. Polygnotus (talk) 19:06, 3 June 2025 (UTC)
:::@Polygnotus, The lead is supposed to be the summary of the article. Why have another machine generated summary if the lead is doing the job? {{tq|editor moderated}} is what they are proposing, and they asked for editors' {{tq|preferences for generating, editing, and moderating summaries}}. So I am suggesting as such. – robertsky (talk) 19:17, 3 June 2025 (UTC)
::::@Robertsky {{tq|Why have another machine generated summary if the lead is doing the job?}} Are you asking me that? That is the WMF's proposal, and I am saying it is a bad idea...
::::Look at the screenshot. It shows both the current lead and the AI summary that contains multiple errors.
::::[https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(technical)#c-Polygnotus-20250603185800-Femke-20250603185200 You think people are lining up to check the work of an AI model? Especially when summarizing complicated technical topics most people don't even understand?]
::::My brother in Zeus, Cohere Labs is worth [https://www.reuters.com/technology/ai-startup-cohere-seeks-5-bln-valuation-latest-fundraising-source-says-2024-03-21/ billions]. Do you want Wikipedia volunteers to work for them for free??? You do realize that AI companies hire people to do the work you seem to think should be done by unpaid volunteers?
::::https://time.com/6247678/openai-chatgpt-kenya-workers/ {{Tq|Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic}} Polygnotus (talk) 19:19, 3 June 2025 (UTC)
::::::Praise be to Zeus! Polygnotus (talk) 19:30, 3 June 2025 (UTC)
:{{od|:::::}} A note that the WMF has begun requesting responses to surveys via the QuickSurveys extension, so some of you (like me) may get a prompt inviting you to the survey if it is enabled. Some of the questions... aren't great if I'm honest. – Isochrone (talk) 20:45, 3 June 2025 (UTC)
::@Isochrone How can we opt in? Can we get some screenshots? Polygnotus (talk) 20:49, 3 June 2025 (UTC)
::Since the WMF is willing to be this sneaky, I don't think we should feel guilty if we fill in the survey a couple hundred times. Polygnotus (talk) 21:04, 3 June 2025 (UTC)
:::Whilst I am not against sharing the survey, let's not intentionally skew the results :) – Isochrone (talk) 21:05, 3 June 2025 (UTC)
::::Let's intentionally skew the results! The WMF intentionally skewed it by picking who to show it to; the community should skew the results to tell the WMF to stop trying to put AI in Wikipedia! Polygnotus (talk) 21:14, 3 June 2025 (UTC)
:::::Yes, and then the WMF will just throw out all of the survey data because it has been openly rigged. ⫷doozy (talk▮contribs)⫸ 23:16, 3 June 2025 (UTC)
::::::@Doozy Exactly! That is what we want! Polygnotus (talk) 23:17, 3 June 2025 (UTC)
:::::::And then the legitimate consensus among the people who took the survey will probably be thrown out, requiring us to rely even further on RfCs and other community headaches. ⫷doozy (talk▮contribs)⫸ 23:20, 3 June 2025 (UTC)
::::::::Perfect! We can use the existing Wikipedia ways to get consensus, like RfCs and talkpages and all that. Polygnotus (talk) 23:21, 3 June 2025 (UTC)
:::::::::Point conceded. I had no idea where you were going with this initially. ⫷doozy (talk▮contribs)⫸ 23:23, 3 June 2025 (UTC)
::::::::::@Doozy I am very confusing (sorry about that). Polygnotus (talk) 23:25, 3 June 2025 (UTC)
If this were to actually happen, some or many readers would just glance at the summary instead of reading the article. Since the summary will form the glancers' entire impression of the subject, it needs to be extremely accurate. I suspect it is often not. Even if editor moderation helps this problem, you may as well just go to Simple English Wikipedia and get the same thing but actually made by humans. ⫷doozy (talk▮contribs)⫸ 20:54, 3 June 2025 (UTC)
Haven’t we been getting good press for being a more reliable alternative to AI summaries in search engines? If they’re getting the wrong answers, let’s not copy their homework. 3df (talk) 21:16, 3 June 2025 (UTC)
Oppose. We already have summaries of our encyclopedia articles: the lead sections of our encyclopedia articles are the summaries of the article. Also, Wikipedia is already a massive collection of great summaries, because writing an encyclopedia (tertiary source) is the skill of summarizing secondary sources such as newspapers and books. Also, our leads (summaries) are so good that Google and other search engines use them in their knowledge panels. Wikipedia and AI are in the same business (summarizing) and we humans at Wikipedia are better at it than AI. I see little good that can come from mixing in hallucinated AI summaries next to our high quality summaries, when we can just have our high quality summaries by themselves. –Novem Linguae (talk) 22:12, 3 June 2025 (UTC)
::Thanks! So a planned deployment to 1% of readers on Monday 9 June, and then expanding to 10% of readers on Tuesday 10 June, and a possible further expansion on Monday 16 June. CMD (talk) 04:35, 4 June 2025 (UTC)
::@Novem Linguae Do you happen to have access to that stuff hosted on Google?
::Do they really think a CTR>4% is a meaningful metric to call this a success? Polygnotus (talk) 04:38, 4 June 2025 (UTC)
:::I do not have access. It's probably WMF-only. –Novem Linguae (talk) 04:50, 4 June 2025 (UTC)
Are we for real?? Just last week I noted to staff that any rollout of LLM content would be unacceptable to the community, and now we're suggesting integrating it into a core aspect of content? Leads are already a shortened version of a page. The best leads have been carefully crafted by dozens of editors and represent some of the best content in the world. Using LLMs to generate what is effectively a worse lead will only amplify problems like at Sambhaji (angry readers read a lead but not the body). Devs, please realize that we are not going to accept integrated LLMs, and stop wasting time on such boondoggles. I will strongly oppose the integration of this feature into EnWP or any other Wiki. Our reputation is on the line here. CaptainEekEdits Ho Cap'n!⚓ 23:09, 3 June 2025 (UTC)
:Oh, and to be clear: I absolutely, categorically, totally oppose the 2-week testing period. I support removing it via site CSS, or just blacking out the site. Or better yet, the Foundation can back down from this foolish course of action. CaptainEekEdits Ho Cap'n!⚓ 23:22, 3 June 2025 (UTC)
::Thanks @CaptainEek, it's really heartening to see an arb saying this. -- asilvering (talk) 02:19, 4 June 2025 (UTC)
:::An arb's view (whether for or against) has no more weight than that of any other single editor. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:10, 4 June 2025 (UTC)
I feel like people seriously underestimate the brand risk this sort of thing has. Wikipedia's brand is reliability, traceability of changes and "anyone can fix it". AI is the opposite of these things. Public sentiment (among some corners) is reacting against the AI trend quite harshly. Projects like this threaten the brand good-will that Wikipedia has based on its unique strengths. The current climate gives Wikipedia a unique opportunity to be the answer to the AI-ification of everything. Why are we throwing that away? Bawolff (talk) 00:17, 4 June 2025 (UTC)
I am going to join the chorus of editors saying that this is a uniquely bad idea. But I'll focus on one thing from the survey: generic readers should not be the ones who decide whether the AI summary is "helpful". If I come to Wikipedia because I want to know who founded the Roman Empire, and AI tells me it was Tiberius, I would be eternally grateful... until I find out that it was Augustus. I don't doubt that this was created with the best of intentions, but I really hope that the WMF spends its resources elsewhere (for example, fixing the remaining mobile communication bugs). HouseBlaster (talk • he/they) 01:10, 4 June 2025 (UTC)
The use of AI today comes with a caveat that the "AI can be wrong" and "AI should not be trusted 100%". I am not saying that Wikipedia is 100% right, but Wikipedia should remain more reliable than whatever Grok or ChatGPT can write up. If we use AI, we would go down to their level of reliability, which I don't think is the best way to go. We have seen many people, both with good and bad intentions, criticize the reliability of Wikipedia. Going down the AI route will only reduce our "perceived reliability" even more. ✠ SunDawn ✠Hail Me! 01:20, 4 June 2025 (UTC)
::My reply distracts from the AI, but I am surprised that no one has pointed this out. LightNightLights (talk • contribs) 18:08, 4 June 2025 (UTC)
Copying my comment from Village pump (WMF), I believe it is better here:
:I feel like Simple Article Summaries (SAS) are contrary to a lot of things readers want in an encyclopedia. Readers come to the site trusting that we can give them all the information they want, while (crucially!) substantiating everything we say with sourcing and adhering to NPOV. Other readers may feel differently than I did when I decided to join this community, but without these two things, Wikipedia would be just another site.
:I've experimented with using AI on an encyclopedia. I've had it review my writing. I've asked it to write, with the intention to find shortcomings in my own ideas (if I forgot to say something). Just today, I dealt with a user who has made over a thousand edits who cited sources that have never existed, at what appears to be the direction of a LLM. There is absolutely no evidence I've seen, either lived or in my line of work at an AI company, which would lead me to believe that an LLM can stick to the facts. Even the output in your survey is fraught with hallucinations.
:Likewise, using LLMs in my line of work, I've noticed the personality fluctuate in dramatic ways with model updates. I've tried my very hardest to correct it with a custom prompt, instructing it to use prose and maintain a neutral, skeptical perspective, but even this has not worked. There is absolutely no evidence I've seen, either lived or in my line of work at an AI company, which would lead me to believe an LLM can write neutrally. The most obvious example is WP:NOTCENSORED, whereas LLMs very much are censored.
:Yes, human editors can introduce reliability and NPOV issues. But as a collective mass, it evens out into a beautiful corpus. With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away. Scaledish! Talkish? [https://xtools.wmflabs.org/ec/en.wikipedia.org/ScaledishStatish]. 01:44, 4 June 2025 (UTC)
:Every article on Wikipedia has a WP:LEDE which summarizes the article. We do not need another summary to summarize that summary above the summary. This is a solution looking for a problem. I won't necessarily disagree that some article ledes are overly technical, but the solution to that is to address it among the communities of editors which oversee those articles. Furthermore, if a possible Wikipedia reader is daunted by the language in a lede and is too incurious or thoughtful to try and understand the language presented, they can always use an AI tool on their own time to summarize it. Adding our own AI summary only serves as a barrier to understanding, and is not an improvement in any way. I cannot state strongly enough how much I oppose any implementation of generative AI on Wikipedia. A Cynical Idealist (talk) 02:01, 4 June 2025 (UTC)
:Oh god no. What a horrible way to damage Wikipedia's brand image right when we need to be emphasizing it the most. Think of the signal you'd be sending to everyone on the internet who is currently thinking "at least Wikipedia is still written by humans". -- asilvering (talk) 02:12, 4 June 2025 (UTC)
Count me as a "yuck". I've been experimenting quite a lot with useful applications of LLMs in my professional work, and naturally also tried some Wikipedia-themed experiments. I've even specifically tried to generate leads for my GAs. The AI-generated leads are terrible and I have never used a word of one! Even if they were of acceptable quality, I agree with others that any use of AI to generate reader-facing content will torpedo Wikipedia's current reputation as the last bastion of actual facts on the internet. Leads are already supposed to provide an accessible, brief overview of the article. If the WMF feels they too rarely achieve that goal, they could fund an editathon with prizes like the World Destubathon. ~ L 🌸 (talk) 02:27, 4 June 2025 (UTC)
:Oh, thinking of things we already have: the [https://simple.wikipedia.org/wiki/Main_Page Simple English Wikipedia] also exists! I've edited there a little and it is definitely a project that could use more love. ~ L 🌸 (talk) 02:37, 4 June 2025 (UTC)
:Another crucial consideration: this cannot be treated as "just" a test. Wikipedia will become the "main character" of social media for a day, as a villain. Thousands of people will hear about the feature, not examine it for themselves, conclude that Wikipedia doesn't care about fact-checking any more, and never learn that the test ended. Anything that is implemented as user-facing on English Wikipedia must be treated as a fully complete feature that en-wiki fully endorses. (Also, apologies for so many instances of "staircase wit" -- it just keeps seeming worse the more I think about it.) ~ L 🌸 (talk) 02:46, 4 June 2025 (UTC)
::Good points, and agree with the ending parenthetical. CMD (talk) 02:52, 4 June 2025 (UTC)
Going to expand on my early "yuck" a bit, since I'm a bit dumbfounded nobody's brought this up yet. The limiting factor on the English Wikipedia - which is the Foundation's primary revenue source - is the time of good-faith volunteer editors. This is extremely reminiscent of short descriptions: generate simplified summaries of articles by some automated off-wiki process, find out they're inevitably terrible, foist them off on the "infinite supply of volunteers" to fix. This was, I'm not even sure when, sometime before March 2017 at the very least, and we're only now starting to catch up. At least then, someone, somewhere, thought the descriptions would be usable as-is. You don't have that excuse this time, and we're not going to put up with it again. —Cryptic 03:14, 4 June 2025 (UTC)
:Gross. If this happens on Wikipedia, I'm done. What makes Wikipedia unique in 2025 is we don't accept unedited AI garbage. If a non-wiki-editable process is responsible for showing content on Wikipedia, you have successfully killed the website. It's "Wiki"pedia for goodness' sake. JackFromWisconsin (talk | contribs) 03:34, 4 June 2025 (UTC)
::@JackFromWisconsin, fwiw, they do appear to be planning on allowing editors to edit it. In my opinion, this is even worse, because then they're generating a huge and completely unnecessary cleanup problem for volunteers to handle, but there you are. -- asilvering (talk) 03:36, 4 June 2025 (UTC)
::::I saw that on the survey. Just a completely unnecessary use of AI. Our backlogs are long enough as is. We don't need a "check AI summaries" backlog too. If a new gadget were to be developed to show human-made summaries, that'd be okay. (But arguably unnecessary, that's what the LEAD is for!) But any moment AI gets involved, it will completely ruin the process. There are only so many volunteer hours. JackFromWisconsin (talk | contribs) 03:40, 4 June 2025 (UTC)
::::@JackFromWisconsin Think of the poor multi-billion dollar AI company who now has to pay up to 2 dollars an hour for people to correct the output of their model! They really deserve our help, for free. Polygnotus (talk) 03:42, 4 June 2025 (UTC)
:::One of the options on the survey said that these edits might not even be listed (so no edit history). Do we want it to be easier for people to vandalize something already likely to be a false AI hallucination? 2007GabrielT (talk) 12:21, 4 June 2025 (UTC)
I'm curious about what {{u|Thryduulf}} and {{u|WhatamIdoing}}'s thoughts are on all this. They've been advocates for AI in past Village Pump discussions, so it'd be interesting to hear from a different perspective. Some1 (talk) 03:54, 4 June 2025 (UTC)
:I don't advocate for AI. I advocate against creating unenforceable policies and guidelines about AI, and I advocate against editors making unproveable accusations that others are wrongly using AI, especially when those people are not native English speakers.
:For myself, I'd never enable something like this. In fact, I turn it off at every opportunity. If anyone reading this happens to have a contact at DuckDuckGo, please tell them that I'd like a way to turn off their AI that persists even in a private/incognito window. WhatamIdoing (talk) 04:30, 4 June 2025 (UTC)
:@Some1, just as you did with WAID, you have misstated my position: I do not advocate for AI, I advocate against policies and guidelines related to AI that are based on hyperbole, misinformation and assumptions rather than facts and evidence. I advocate against allowing (let alone encouraging) personal attacks on editors based on unprovable (and frequently incorrect) assumptions.
:As for the actual subject of this discussion, the idea of a non-technical ~5 sentence summary of a long article is a good one, iff that summary is always both accurate and neutral. Based on the evidence presented in this discussion, the current implementation of this idea does not consistently meet those requirements. Given that some of our articles are tagged as having disputed accuracy and/or disputed neutrality, I am not sure that it is possible to meet both requirements. Thryduulf (talk) 10:35, 4 June 2025 (UTC)
I received the invitation to take the survey and was quite displeased when I saw what it was about. I have major concerns regarding the reliability of LLM-generated summaries, the extra workload maintaining their accuracy would create for human editors, and the emissions/environmental impact of LLMs. I don't think any of the potential benefits outweigh the costs here and would urge any WMF staff reading this to reconsider this idea. Ethmostigmus 🌿 (talk | contribs) 04:09, 4 June 2025 (UTC)
:The view of the Wikimedia Foundation on Wikipedia is evident in the framing of this survey. The first two suggestions for adding "simple summaries"—indeed, an obfuscation of the significant role that artificial intelligence wields in this process—suppose that administrators should be responsible for managing simple summaries. At best, that is a dangerously negligent perspective; at worst, it is woefully scornful of a community that has persisted for nearly a quarter century without the idea that administrators should supersede content. What has been proposed is a box that is invisible to many editors who have not opted to use Vector 2022 and which appears to be difficult to edit. elijahpepe@wikipedia (he/him) 04:39, 4 June 2025 (UTC)
::Good point, I also found the proposals that admins could be responsible for the AI tasks to be alarming and almost shocking. "Admin effort" is one of our most precious resources, and it shows a real lack of understanding about en-wiki's needs and priorities to propose inventing a new task for admins to deal with, let alone one so spirit-killing as AI cleanup. ~ L 🌸 (talk) 04:55, 4 June 2025 (UTC)
:I got so many thoughts about this, and they're all "fuck LLMs, turn back right now". LilianaUwU(talk / contributions) 04:31, 4 June 2025 (UTC)
:Done editing if this is enacted. Best space on the Internet and of course you have to fuck it up Toffeenix (talk) 05:35, 4 June 2025 (UTC)
I just got the survey, responded to it (I hate the idea with a burning passion and made sure to share my opinion in the survey) and immediately searched for the discussion because I knew there'd be one. There are so many problems with this whole idea, and they all boil down to "AI that we as editors of the website can't directly control". Of course, directly controlling an AI to output a specific summary defeats the purpose of using an AI. Also, from MOS:INTRO: {{tq|The lead section should briefly summarize the most important points covered in an article, in such a way that it can stand on its own as a concise version of the article.}} So we already have a "simple summary", and it's one that we can directly edit, thus maintaining both the quality of the encyclopedia and the trust those who use Wikipedia have in us. Why do we need a second one?
: Side note: When I did the survey, the order of responses, which ranged from "Good" on the left to "Bad" on the right for the first few pages, got switched so "Bad" was on the left and "Good" was on the right on the last page. I'm glad I happened to glance (without even really meaning to) at those responses or I wouldn't have even noticed the (accidental?) switcheroo.—Gestrid (talk) 05:44, 4 June 2025 (UTC)
LLMs are [https://pivot-to-ai.com/2024/09/04/dont-use-ai-to-summarize-documents-its-worse-than-humans-in-every-way/ notoriously bad] at summarising. Abusing vulnerable people (mobile visitors) as guinea pigs for an experiment which is unlikely to collect any useful information is, at best, rude. The WMF should stick to its better ideas about LLMs, which are about using them for anything but prose generation. For example, instead of presenting a summary of the article as if it were true, you could pick some sentences from the article and quote them verbatim, possibly ask some questions about them. Nemo 06:08, 4 June 2025 (UTC)
'''Oppose''' in the strongest of all possible ways. Are y'all (by that, I mean WMF) trying to kill Wikipedia? Because this is a good step in that direction. We're trying to keep AI out of Wikipedia, not have the powers that be force it on us and tell us we like it. - The BushrangerOne ping only 06:05, 4 June 2025 (UTC)
:I can't help feeling that what this would create is a duplicate lead that editors would not just have to verify but also moderate and curate. The lead of some articles is the result of potentially long-winded discussion, and this risks requiring all that work again. These would also require oversight to ensure they are not targeted by vandals or POV pushers, as they would be highly visible. The leads of many articles should be shorter and less cluttered, but something that would in effect duplicate the lead (and the work required to maintain the lead) doesn't seem like the greatest idea. -- LCU ActivelyDisinterested«@» °∆t° 06:21, 4 June 2025 (UTC)
::The mass addition of unverified AI text by any editor would also seem against current policy. -- LCU ActivelyDisinterested«@» °∆t° 06:37, 4 June 2025 (UTC)
:::These kind of discussions are common inside and outside of contentious topic areas. Even seemingly simple articles may have had weeks of discussion on how to correctly summarise the article in the lead. -- LCU ActivelyDisinterested«@» °∆t° 10:15, 4 June 2025 (UTC)
Absolutely not. All of the work that countless people put into these articles would be thrown away in favour of LLMs we don't even really understand. Besides that, we have simple for a reason. Weirdguyz (talk) 06:30, 4 June 2025 (UTC)
Absolutely not. This duplicates the existing summary function of our article leads. If readers find that an article lead poorly summarizes its content, that is one of many avenues by which we hope to recruit them into editors improving the content they care about. As HouseBlaster notes, when other LLMs ask for thumbs up/down feedback, they are assessing whether you liked the response, not whether you double-checked its accuracy. To implement that here will train models to reinforce reader preconceptions, rather than accurately summarize our articles. As CMD demonstrates below, even the dopamine example demonstrates hallucinations that will severely diminish our reliability, so even a mobile trial is untenable. ViridianPenguin🐧 (💬) 06:52, 4 June 2025 (UTC)
:To reiterate {{u|A Cynical Idealist}}'s criticism over on Meta, Simple Article Summaries#Userlytics experiment presenting the flawed dopamine summary to eight subjects is laughably insufficient as evidence to proceed with showing AI summaries to ~10K of our readers, and no explanation is given to ignore that a whopping quarter of respondents dislike the feature. ViridianPenguin🐧 (💬) 07:08, 4 June 2025 (UTC)
Are you serious? We're trying to discourage AI use, not promote it. I think we are operating at cross-purposes. LizRead!Talk! 07:06, 4 June 2025 (UTC)
Not a good idea. Pavlor (talk) 07:15, 4 June 2025 (UTC)
Very bad idea. First, LLMs have inherent reliability issues (as demonstrated above, the summary in the demo video used to promote the idea is wrong). The effort to correct summaries would need people willing to check the article in detail; it would swallow up editor time that could better be spent on improving article leads. Second, it would do a lot of damage to Wikipedia's brand. On social media, people are praising the existence of WP as something that has not turned into AI slop. Even if AI were only used for small summaries, it would raise suspicions and upset people. OsFish (talk) 07:26, 4 June 2025 (UTC)
Strongest possible oppose. We do not want, need, or wish for AI on this project. At all. Ever. Stifle (talk) 07:51, 4 June 2025 (UTC)
So in a world where Wikipedia is one of the only respected bastions against hallucinated bullshit information by LLMs, our response is to implement them onto the project??? Not a hope in hell. ser!(chat to me - see my edits) 07:53, 4 June 2025 (UTC)
Absolutely not for obvious reasons that have already been written about above. JavaHurricane 08:00, 4 June 2025 (UTC)
I have my doubts that trying to duplicate Google's work (or "work") or the efforts editors writing leads engage in is worthwhile. Having AIs that can read an article and write a WP:LEAD compliant lead would be something actually useful, as it can be fairly difficult for people. Or AIs that you feed sources to and write or expand articles with them, that would be even better. One should avoid the tendency to develop AIs in imitation/inspiration of existing efforts, because chances are you are duplicating existing work/reinventing the wheel for no good reason. Jo-Jo Eumerus (talk) 08:34, 4 June 2025 (UTC)
In my opinion, this would be a gross overstep of the bounds we typically expect the Foundation to adhere to. We are not paid, obviously, but if this is implemented over the objection of the community, the Foundation would effectively be signalling intent for Wikipedia to become one more publication that has fired their editors, in part or in whole, in order to put out AI slop. Alpha3031 (t • c) 10:49, 4 June 2025 (UTC)
:Oppose - the lead exists for a reason. AI summaries are extremely unreliable (as this thread constantly mentions), and it would ruin Wikipedia's reliability. VERIFIABILITY is one of our core policies - AI just hallucinates, leaves stuff out, and makes shit up. This is proof that the WMF does not care about the community of volunteers that work this website. PhoenixCaelestis (Talk · Contributions) 11:25, 4 June 2025 (UTC)
Strong oppose, including the trial. This is utter disrespect to both readers and editors, and dangerous to the point where we risk losing plenty of both. AI generated stuff is not only inaccurate and unreliable, it is also cheap and therefore almost worthless (since you can get it everywhere nowadays). Even the trial is highly problematic as it sends the wrong signal to the users (namely that Wikipedia became enshittified), and can therefore irreversibly damage Wikipedia's reputation. Let's keep Wikipedia AI-free – that's a strength, not a weakness. --Jens Lallensack (talk) 11:37, 4 June 2025 (UTC)
:Oppose - Making AI generated summaries is exactly none of our business. Google are already on top of that with their AI Overview nonsense -- let's leave that stuff to them. DJ-Aomand (talk) 11:55, 4 June 2025 (UTC)
:I don't understand the point. A piece of text is generated, which in best-case scenarios is duplicating already existing text, then editors review it and edit for quality? If they're going to be manually edited afterwards anyway, what's the purpose of automatically generating them? 207.11.240.2 (talk) 12:22, 4 June 2025 (UTC)
:I'm very uncomfortable with both this proposal and with the direction of travel it suggests. This is either going to wastefully duplicate what's already there, or require scarce editor time to fix it up. Even if this is not affecting the underlying article content, it is generative AI material being presented to a reader under our name, and that is a line we should not cross. Andrew Gray (talk) 12:53, 4 June 2025 (UTC)
:Oppose any movement towards the integration of generative AI on Wikipedia. This is a volunteer project based on community effort, always has been. We're the last best place on the internet because we're built on love, not automation. Don't ruin our reputation by falling victim to the craze for dumbing down everything to the lowest possible attention span - and then add insult to injury by doing it with generative AI. ♠PMC♠ (talk) 14:02, 4 June 2025 (UTC)
Volunteers have spent tremendous amounts of time both crafting the lead sections of articles, and trying to keep chatbot-generated junk out of articles in general. Now, here's this proposal, to flip a middle finger at both efforts, and say "Hey, we'll put some chatbot-generated junk above the actual leads the volunteers worked so hard on building consensus for!". I do not believe I have the words to express how disgusted I am at this proposal. SeraphimbladeTalk to me 14:07, 4 June 2025 (UTC)
Very strongly opposed to this. LLMs are nothing but unethical, mindless plagiarism-machines built with the stolen work of actual humans. Keep this techbro bubble out of Wikipedia. -- kazerniel (talk | contribs) 15:31, 4 June 2025 (UTC)
:What the hell? No, absolutely not. Not in any form or shape. Not on any device. Not on any version. I don't even know where to begin with everything that is wrong with this mindless PR hype stunt. — HELLKNOWZ∣TALK 15:59, 4 June 2025 (UTC)
No, no, no This will destroy whatever reputation for accuracy we currently have. People aren't going to read past the AI fluff to see what we really meant. --SarekOfVulcan (talk) 17:14, 4 June 2025 (UTC)
Validation issues An experiment seems reasonable as our articles can often seem impenetrable. And it's good to develop expertise and experience with AI tech as it is becoming omnipresent. But the idea of human validation may be difficult. One issue is that our articles may change frequently and so the summary and validation would need to be redone frequently too. Another issue is that every page has a disclaimer saying that our content is not guaranteed to be valid and so an assertion of validity would be sending a mixed message. Andrew🐉(talk) 18:17, 4 June 2025 (UTC)
:To say publicly what I wrote in the survey: from a structural standpoint, an AI-generated summary seems pointless. The one paragraph summary of the article is the lead paragraph. The multi-paragraph summary is the entire lead. If AI can write a better one-paragraph summary than the lead paragraph, then that paragraph should just be the lead paragraph. Why have two summaries at the top of an article, especially if one is worse than the other?
:A better deployment for AI here would be to have AI write a summary of the article and then compare its summary to the existing lead, and post the differences/analysis on the article's talk page, essentially an AI edit suggestion. Human editors can then review the suggestion and implement it (or not) as appropriate. AI could do this not just for the lead but for entire articles. Of course, this hinges on an AI that has sufficient accuracy to the point where its suggestions are actually useful.
:A next-level deployment would be for AI to read the sources of the article, summarize those, and then compare its summary to the written article, and post suggestions for changes on the talk page.
:A next-next-level deployment would be for AI to suggest new sources for the articles, or to read and summarize sources that aren't in the article, and then post edit suggestions to the talk page. AFAIK AI isn't ready for this level yet, but the first two suggestions above could probably be done today with reasonable accuracy using large and small language models.
:I hope the WMF keeps developing, experimenting, testing, and iterating on these approaches. Levivich (talk) 19:17, 4 June 2025 (UTC)
::+1. AI that helps editors address problems with Wikipedia is welcome (if it works). We already use it, in various places, to varying degrees of success, and to relatively little pushback. -- asilvering (talk) 19:25, 4 June 2025 (UTC)
::I would not be in support of large AI-suggested edits. To properly evaluate what an AI writes takes a lot of brainpower. In my opinion it takes the same or more brainpower as just reading the sources/article yourself and writing it without AI. I think an AI that posts suggested edits on talk pages would just distract editors or get ignored, and reminds me of Microsoft trying to have AI write patches and post them on GitHub. There is [https://www.reddit.com/r/ExperiencedDevs/comments/1krttqo/my_new_hobby_watching_ai_slowly_drive_microsoft/ a thread on Reddit called "My new hobby: watching AI slowly drive Microsoft employees insane"] making fun of how terrible AI has been at writing software patches (7,000 upvotes in r/ExperiencedDevs). It would be sad if Wikipedia got into a similar situation. –Novem Linguae (talk) 20:08, 4 June 2025 (UTC)
:::I think {{tq|Of course, this hinges on an AI that has sufficient accuracy to the point where its suggestions are actually useful.}} covers that already. That's a big ask. -- asilvering (talk) 20:13, 4 June 2025 (UTC)
::::The problem is that many people do think that LLM is useful and it can be good for certain low-hanging fruit but it can also be dangerously wrong with no way to know. Andre🚐 21:42, 4 June 2025 (UTC)
{{od|2}}
Well, it's easy enough to test, so I asked AI to summarize a Wikipedia article in one paragraph and then to compare its summary with Wikipedia's summary, and to tell me the differences between the two summaries. I did this for Today's Featured Article, 1880 Republican National Convention, using the TFA blurb as the example of Wikipedia's one-paragraph summary (although one could just as easily use the lead paragraph, or entire lead). I figure TFA is a good proxy for a top-quality one-paragraph article summary written by Wikipedia volunteers. I asked ChatGPT and Gemini to "summarize https://en.wikipedia.org/wiki/1880_Republican_National_Convention in one paragraph", and then to "compare these two paragraphs and tell me what's different between them". Here are their responses:
{{cot|TFA one-paragraph summary}}
The 1880 Republican National Convention (June 2 to 8, 1880, Chicago) resulted in the nominations of James A. Garfield of Ohio and Chester A. Arthur of New York as the Republican Party candidates for U.S. president and vice president. Leaders in the early balloting at the convention, which deadlocked for several days, were former president Ulysses S. Grant, Senator James G. Blaine of Maine, and John Sherman, the treasury secretary. After the 35th ballot, Blaine and Sherman switched their support to a new "dark horse", Garfield, a congressman from Ohio, who had nominated Sherman. On the 36th ballot, Garfield won the nomination. After Arthur won the vice presidential nomination on the first ballot, the longest-ever Republican National Convention adjourned. The Garfield–Arthur Republican ticket narrowly defeated Democrats Winfield Scott Hancock and William H. English in the 1880 presidential election.
{{cob}}
{{cot|ChatGPT's one-paragraph summary}}
The 1880 Republican National Convention, held from June 2 to 8 in Chicago, was marked by intense factionalism and a prolonged deadlock. The leading contenders were former President Ulysses S. Grant, backed by the Stalwart faction favoring patronage; Senator James G. Blaine, supported by the Half-Breeds advocating civil service reform; and Treasury Secretary John Sherman, representing a more moderate stance. After 35 ballots failed to produce a nominee, a compromise emerged when Blaine and Sherman shifted their support to James A. Garfield, a congressman from Ohio not initially in the running. Garfield secured the nomination on the 36th ballot. To balance the ticket and appease the Stalwarts, Chester A. Arthur of New York was nominated for vice president. The Garfield-Arthur ticket narrowly defeated Democrats Winfield Scott Hancock and William Hayden English in the 1880 presidential election. Tragically, Garfield was assassinated in 1881, leading to Arthur's succession to the presidency.
{{cob}}
{{cot|ChatGPT's comparison of its summary ("Paragraph 1") with the TFA blurb ("Paragraph 2")}}
Here are the key differences between the two paragraphs:
Tone and Style:
Paragraph 1 is more narrative and descriptive, using phrases like “marked by intense factionalism” and “a compromise emerged.”
Paragraph 2 is more concise and encyclopedic, structured more like a historical summary.
Emphasis on Factions:
Paragraph 1 details the factions backing each candidate (Stalwarts, Half-Breeds, etc.).
Paragraph 2 omits faction names and focuses on the individuals.
Garfield’s Role:
Paragraph 1 mentions Garfield was “not initially in the running.”
Paragraph 2 notes Garfield “had nominated Sherman,” subtly emphasizing his presence at the convention before his surprise nomination.
Vice Presidential Nomination:
Paragraph 1 explains Arthur’s nomination as a move to “appease the Stalwarts.”
Paragraph 2 simply states Arthur won on the first ballot, omitting strategic context.
Convention Description:
Paragraph 1 highlights the prolonged deadlock with rich detail.
Paragraph 2 mentions it was the “longest-ever Republican National Convention” but without elaboration.
Garfield’s Assassination:
Paragraph 1 includes it and its consequence (Arthur’s presidency).
Paragraph 2 omits it entirely.
In summary, Paragraph 1 is more detailed and interpretive; Paragraph 2 is more formal and fact-focused.
{{cob}}
{{cot|Gemini's one-paragraph summary}}
The 1880 Republican National Convention, held in Chicago, Illinois from June 2 to June 8, 1880, saw the nomination of James A. Garfield of Ohio for president and Chester A. Arthur of New York for vice president. Despite Ulysses S. Grant, James G. Blaine, and John Sherman being strong contenders, Garfield ultimately secured the nomination after multiple ballots, with Arthur as his running mate. The Garfield–Arthur ticket went on to narrowly win the 1880 presidential election against Democrats Winfield Scott Hancock and William Hayden English.
{{cob}}
{{cot|Gemini's comparison of its summary ("Paragraph 1") with the TFA blurb ("Paragraph 2")}}
The two paragraphs describe the 1880 Republican National Convention, but Paragraph 2 provides more specific details and additional information compared to Paragraph 1.
Here are the key differences:
Format of Dates and Location: Paragraph 1 states "held in Chicago, Illinois from June 2 to June 8, 1880," while Paragraph 2 uses a more concise parenthetical format: "(June 2 to 8, 1880, Chicago)".
Detailed Nomination Process:
Paragraph 1 generally states, "Garfield ultimately secured the nomination after multiple ballots, with Arthur as his running mate."
Paragraph 2 offers much more detail, mentioning "Leaders in the early balloting at the convention, which deadlocked for several days," and specifies the "35th ballot," how "Blaine and Sherman switched their support to a new 'dark horse', Garfield," and that Garfield "had nominated Sherman." It also mentions "On the 36th ballot, Garfield won the nomination."
Arthur's Vice Presidential Nomination:
Paragraph 1 briefly notes "with Arthur as his running mate."
Paragraph 2 adds that "After Arthur won the vice presidential nomination on the first ballot."
Convention Duration/Length: Paragraph 2 explicitly states that "the longest-ever Republican National Convention adjourned," a detail not present in Paragraph 1.
Full Name of Democratic Vice Presidential Candidate: Paragraph 1 refers to "William Hayden English," while Paragraph 2 shortens it to "William H. English."
{{cob}}
Helpful? I think so. It would probably be better if the prompt were improved to instruct it to write an encyclopedia-style summary, or even a Wikipedia-style lead. Novem's probably right that posting these on talk pages, particularly in volume, could be overwhelming, but it would be easy to create a toolforge-hosted python script that uses the AI APIs to perform these two steps, and thus generate "AI suggestions for improvement" on-demand for any given article... as long as somebody pays the API credits. But WMF has hundreds of millions of dollars and could afford to foot that bill, at least up to some reasonable cap. Levivich (talk) 21:51, 4 June 2025 (UTC)
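The two-step workflow described above (summarize, then diff against the existing lead) could be sketched as follows. This is only an illustration: `ask_llm` and `suggest_lead_improvements` are hypothetical names, and the LLM call is abstracted behind a caller-supplied function rather than any real API endpoint.

```python
def suggest_lead_improvements(article_text: str, current_lead: str, ask_llm) -> tuple[str, str]:
    """Sketch of the two-step idea: (1) have the model summarize the article,
    (2) have it compare its own summary against the existing lead or blurb.

    `ask_llm` is any callable taking a prompt string and returning a response
    string -- e.g. a wrapper around whichever AI API is chosen (hypothetical).
    """
    # Step 1: generate an independent one-paragraph summary of the article.
    summary = ask_llm(
        "Write a one-paragraph, encyclopedia-style summary of the "
        "following article:\n\n" + article_text
    )
    # Step 2: ask the model to diff its summary against the human-written lead.
    comparison = ask_llm(
        "Compare these two summaries and list the substantive differences "
        "between them:\n\nSummary A:\n" + summary
        + "\n\nSummary B:\n" + current_lead
    )
    return summary, comparison
```

The `comparison` string is what would be surfaced on demand as the "AI suggestions for improvement"; human editors would still decide whether any listed difference is actually an improvement.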
:If we're just going to ask the AI to write in the style of a Wikipedia lead, what's the use in having the AI? Wikipedia-style leads are already present. The Morrison Man (talk) 22:05, 4 June 2025 (UTC)
::@The Morrison Man: The use of the AI is that the AI may have good suggestions for improvement. For example, ChatGPT's comparison above identifies the following possibilities for improving the TFA blurb:
::* name the factions backing each candidate (Stalwarts, Half-Breeds, etc.)
::* explicitly state that Garfield was not initially in the running
::* explicitly state that Arthur's nomination was a move to appease the Stalwarts
::* mention that Garfield was assassinated the following year, making Arthur president
::Editors can debate whether any of these changes are, in fact, improvements to the TFA blurb, but it seems to me to be a pretty helpful list of things to consider when thinking about how to improve the TFA blurb. And it took less than one minute -- way less time than it would have taken me to read the article and the TFA blurb and figure out suggestions for improvement. That's what AI can do, today. Levivich (talk) 22:26, 4 June 2025 (UTC)
:::Reading the lead would've gotten you all this information on-site... The Morrison Man (talk) 22:30, 4 June 2025 (UTC)
::::Well first, that's flat not true. The lead doesn't have the third or fourth bullet points. Second, none of it is in the lead paragraph. Third, duh, reading the 4-paragraph lead would give more info than the 1-paragraph TFA blurb, and reading the article would certainly have all that info, but this demonstration is about improving the 1-paragraph TFA blurb, so pointing out that some of this info is in the lead isn't really relevant. Levivich (talk) 22:39, 4 June 2025 (UTC)
You can count me retired if this goes through in any way, shape or form. No ifs or buts. ~~ AirshipJungleman29 (talk) 19:50, 4 June 2025 (UTC)
Absolutely not. Completely unnecessary and harmful to Wikipedia's reputation. ARandomName123 (talk)Ping me! 20:30, 4 June 2025 (UTC)
:I also tested it with a few articles I know well, and if asked to keep the summary short it generally did well. Longer "summaries" meant more hallucinations, though. Cremastra (u — c) 21:55, 4 June 2025 (UTC)
:{{tq|as long as somebody pays the API credits.}} look at :wikitech:Machine Learning/LiftWing. They don't need to pay for API credits; they spend a significant amount of money to run various models themselves. Polygnotus (talk) 22:00, 4 June 2025 (UTC)
:An informal test does not prove the absence of hallucinations. They are insidious and they may appear in unpredictable places. Andre🚐 22:40, 4 June 2025 (UTC)
This is a bad idea. While AI-generated summaries of Wikipedia content are potentially useful and, given our free licensing, ultimately inevitable, there is no reason for the Foundation to be generating or hosting them. The primary role of the Foundation should be assisting volunteers in the creation and curation of content. Since it's free, delivery in digestible packets mostly takes care of itself. Eluchil404 (talk) 00:05, 5 June 2025 (UTC)
:I see below that the WMF has already decided to pause the idea. However, I really think it should just be scrapped altogether rather than "reformatted", and I believe that any attempts by the WMF to push through with it in any capacity, despite community opposition, will not end well. Generative AI is already very fond of hallucinating, even when trained on proper data, and promoting technology that encourages such things would only hurt the project's reputation. I get the idea, I understand that there are good intentions behind it, and I do think that AI has its use cases (not on-wiki though). But now is not the right time, and until AI gets past its hallucinations, I don't think it ever will. Narutolovehinata5 (talk · contributions) 01:59, 5 June 2025 (UTC)
I'm not sure what I could say that hasn't been already said above, but I strongly oppose this as well. Relativity ⚡️ 02:35, 5 June 2025 (UTC)
== Reply WMF ==
Hey everyone! This is Olga, the product manager who is working on the summary feature at WMF. I just wanted to let you all know that we’re following the conversation here closely and will be getting back to you with some thoughts and next steps we can discuss later today. OVasileva (WMF) (talk) 07:37, 4 June 2025 (UTC)
:@OVasileva (WMF) If you ever want some help weeding out bad ideas, please ask me on my talkpage. I am happy to help and good at finding potential problems/downsides. Polygnotus (talk) 07:46, 4 June 2025 (UTC)
::To weed out bad ideas would be to not do this at all, ever, under any circumstances. 2007GabrielT (talk) 12:25, 4 June 2025 (UTC)
:Also, can you please give us access to those 3 documents hosted on Google? Thanks! Polygnotus (talk) 07:59, 4 June 2025 (UTC)
:{{tqb|Dopamine is a neurotransmitter, a chemical messenger that carries signals between brain cells. It plays a vital role in several brain functions, including emotion, motivation, and movement. When we experience something enjoyable or receive a reward, our brain releases dopamine, creating a sense of pleasure and reinforcement. This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts. Dopamine imbalance has been associated with various disorders, such as depression and Parkinson's disease, highlighting its importance in maintaining overall brain health and function.}}
The first sentence is in the article. However, the second sentence mentions "emotion", a word that, while in a couple of reference titles, isn't in the article at all. The third sentence says "creating a sense of pleasure", but the article says "In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience", a contradiction. "This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts". Where is this even from? Focus isn't mentioned in the article at all, nor is influencing thoughts. As for the final sentence, depression is mentioned a single time in the article in what is almost an extended aside, and any summary would surely have picked some of the examples of disorders prominent enough to be actually in the lead.{{pb}}So that's one of five sentences supported by the article. Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread LLM. What it definitely doesn't seem to be doing is taking existing article text and simplifying it. CMD (talk) 18:43, 3 June 2025 (UTC)
:As someone who has tested a lot of AI models; no AI technology that is currently available to the public is reliably able to make an accurate summary of a complicated article. We may get there at some point, but we aren't there yet. Polygnotus (talk) 18:47, 3 June 2025 (UTC)
::CMD makes some good points but maybe the WMF is not using a good AI. I tried asking Gemini 2.5 Pro to summarise the article "in one paragraph using English suitable for a general readership." The result was as follows:{{tqb|Dopamine is a chemical messenger that plays several vital roles in the body. In the brain, it acts as a neurotransmitter, sending signals between nerve cells, and is particularly known for its role in the brain's reward system, with levels increasing in anticipation of rewards. Many addictive drugs affect dopamine pathways. Beyond the brain, dopamine also functions as a local messenger. Imbalances in the dopamine system are linked to several significant nervous system diseases, such as Parkinson's disease and schizophrenia, and many medications for these conditions work by influencing dopamine's effects.}}This seems a reasonable summary as all the points it makes appear in the article's lead and so there's no hallucination. Note that Gemini lists its sources and it only lists the Wikipedia article so it presumably was just working from that. The language is still not easy as you have to understand concepts like "pathways" but it seems reasonably free of the technical jargon which makes the article's lead quite difficult. Andrew🐉(talk) 18:39, 4 June 2025 (UTC)
:::@Andrew Davidson Yeah but now do it a thousand times. Or ten thousand. The hallucinations will creep in. {{tq|Note that Gemini lists its sources and it only lists the Wikipedia article so it presumably was just working from that.}} That is not how that works. {{tq|The language is still not easy as you have to understand concepts like "pathways" but it seems reasonably free of the technical jargon which makes the article's lead quite difficult.}} If the problem is that the leads of the articles are difficult to understand, one solution could be to direct people to simple.wiki. Another idea is to set up a taskforce/wikiproject whatever. Another idea is to use available readability tools (via some API):
:::Average Reading Level Consensus
:::Automated Readability Index
:::Flesch Reading Ease
:::Gunning Fog Index
:::Flesch-Kincaid Grade Level
:::Coleman-Liau Readability Index
:::SMOG Index
:::Original Linsear Write Formula
:::Linsear Write Grade Level Formula
:::FORCAST Readability Formula
:::Combine that with the pageview data ([https://pageviews.wmcloud.org/topviews/?project=en.wikipedia.org&platform=all-access&date=last-month&excludes pageviews.wmcloud] or [https://dumps.wikimedia.org/other/pageviews/ the dump]) and then check which are the hardest and try to improve those. There are thousands of ways to deal with this perceived problem ethically and uncontroversially. Polygnotus (talk) 18:51, 4 June 2025 (UTC)
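Several of the formulas listed above are simple enough to compute without any API. For instance, Flesch Reading Ease is 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A rough sketch follows, assuming a naive vowel-group syllable counter (real readability tools use dictionary lookups and better heuristics, so treat the scores as approximate):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of vowels; at least one per word.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text
    (90-100 ~ very easy, below 30 ~ very difficult)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
```

Run over article leads and joined with pageview counts, a script like this could rank the most-viewed, hardest-to-read leads for human attention.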
::::Running things ten thousand times would be a significant experiment and that's what the WMF are proposing. The results are unlikely to be perfect but the starting point here is that the current human-generated article leads are far from perfect. It would be good to gather statistics on just how bad the current situation is using readability tools and other consistency checks. We'd then have a baseline for assessing potential improvements. Andrew🐉(talk) 20:09, 4 June 2025 (UTC)
:::{{tqq|maybe the WMF is not using a good AI}} I share this concern as well. The WMF is using Aya, and while I understand the choice of using an open-source multilingual LLM, I question whether Aya's accuracy is good enough, and whether it's better than ChatGPT, Gemini, or other LLMs. If the accuracy is worse, why would we use it? Why re-create what is already available? The only way this makes sense is if the LLM used produced a better summary than what Google is already showing users with Gemini. I question whether any LLM on the market today has sufficient accuracy. Levivich (talk) 19:09, 4 June 2025 (UTC)
::::@Levivich Aya is slightly worse than ChatGPT in my testing. You can test it yourself, for free. Needs an account tho https://dashboard.cohere.com/playground/chat
::::{{tq| I question whether any LLM on the market today has sufficient accuracy.}} No need to question that, there is no sufficiently accurate AI for this task. Polygnotus (talk) 19:13, 4 June 2025 (UTC)
:::{{tq|1=maybe the WMF is not using a good AI}} is not really the problem with hallucination. LLMs are inherently stochastic and will hallucinate sometimes if you run the model enough times. Just because you got better results with a test run of one model doesn't imply you'll always get that. Dan Leonard (talk • contribs) 19:16, 4 June 2025 (UTC)
:::Just asked Gemini to provide a medium-length paragraph summary of Genghis Khan, which I wrote. I count two inaccuracies in 162 words. Not a good look for a "good AI". ~~ AirshipJungleman29 (talk) 20:05, 4 June 2025 (UTC)
:I don’t like that it’s not in wikivoice, either. Or that it’s loaded with AI words to watch. Seriously disheartening, and disrespects all of the amazing work that AICLEAN users have been doing to keep these issues out of articles. This is exactly the kind of text we are searching for to remove.
:How do we clean this? Do we need to plan ahead with templates like, “the above AI summary contains the following errors”? What if these summaries appear on top of meticulously planned CTOP leads like Donald Trump, which are built on dozens of established and coded consensuses, down to the precise words chosen? 3df (talk) 01:15, 4 June 2025 (UTC)
::I had to draftify a pretty ridiculous percentage of new articles as AI-generated articles during the recent WP:NPP backlog drive. To have this now shoved onto Wikipedia anyway is incredibly disheartening, and if it went through, I'd genuinely consider quitting editing altogether. 🌸wasianpower🌸 (talk • contribs) 04:18, 4 June 2025 (UTC)
{{tqqi|A two-week experiment on the mobile website}} seems to be the most immediate hazard; such an experiment would harm readers and negatively affect our reputation as a fairly reliable, non-AI source of information. Instead of freaking out, we should come up with some plan to persuade the WMF that this is not a good idea and to stop them from rolling this out at any level.
Should the Wikipedia community do something to prevent or protest this "experiment", and if yes, what can/should we do? Cremastra (u — c) 21:25, 3 June 2025 (UTC)
:@Cremastra We should blast this survey link to everyone and anyone, and have them fill it out. Start an RFC with it. Spread it on Discord and IRC and post it on Village Pumps et cetera.
:https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq Polygnotus (talk) 21:28, 3 June 2025 (UTC)
::I already filled out the survey through the usual method. People are welcome to fill out the survey but I don't think we should submit multiple responses each. Something like an open letter to the WMF would be more effective than screwing around with their data. Also, if in reality the survey is an overwhelming "no", intentionally skewing the results would compromise their legitimacy. Cremastra (u — c) 21:30, 3 June 2025 (UTC)
:::@Cremastra The legitimacy the survey had was already zero, because they are intentionally choosing not to actually ask the community about it. Because we don't use surveys on Wikipedia, we use talkpages and RfCs and Village Pump discussions and the like. So the fact that they are intentionally evading our consensus building mechanisms makes that survey null and void already. Polygnotus (talk) 21:33, 3 June 2025 (UTC)
::::@Polygnotus Are the survey results even posted publicly? If not this seems like a "yes we care about your feedback" followed by throwing it in the trash bin. Scaledish! Talkish? [https://xtools.wmflabs.org/ec/en.wikipedia.org/ScaledishStatish]. 01:56, 4 June 2025 (UTC)
:::::@Scaledish No, the survey results are hidden. So unless you hack their account or the Qualtrics database you have to trust them when they report the results. But the fact that they use an external survey service instead of the normal ways to get consensus on Wikipedia, and that I had to search through their JavaScript to find the link, shows that they did not want us to voice an opinion and did not want me to share this link... Polygnotus (talk) 02:00, 4 June 2025 (UTC)
::::::@Polygnotus Thank you for finding the link. I tried for a good 10 minutes to be presented with the survey that is being given to editors and I was never given it. A/Bing that survey is gross. Scaledish! Talkish? [https://xtools.wmflabs.org/ec/en.wikipedia.org/ScaledishStatish]. 02:02, 4 June 2025 (UTC)
:::::::If they try this again, here in https://en.wikipedia.org/w/load.php?modules=ext.quicksurveys.lib&debug=true around line 611 it says surveyData and there is the configuration:
:::::::https://wikimedia.qualtrics.com/jfe/form/SV_1XiNLmcNJxPeMqq Polygnotus (talk) 02:05, 4 June 2025 (UTC)
:::::::@Scaledish Yeah if this survey was above board and an honest way to gauge consensus, why hide the link? Why not invite everyone to voice their opinion? I am no conspiracy theorist, but this seems fishy. Polygnotus (talk) 02:06, 4 June 2025 (UTC)
:We could always cban a few WMF people for WP:IDHT in regard to the insertion of unreliable content. Just spitballing. Thebiguglyalien (talk) 🛸 21:34, 3 June 2025 (UTC)
::I mean, there's nothing wrong with that policy-wise, if they did actually insist on it, but it might be a tad extreme. Cremastra (u — c) 21:37, 3 June 2025 (UTC)
:::Yeah but now we can negotiate downward. Thebiguglyalien (talk) 🛸 21:39, 3 June 2025 (UTC)
::If some random user implemented this{{snd}}adding an AI summary to every article{{snd}}after this discussion made it clear there was no consensus to do that, that user would be cbanned even if the summaries were accurate. 3df (talk) 23:27, 3 June 2025 (UTC)
:In the world of community-WMF squabbling, our standard playbook includes an open letter (e.g. WP:OPENLETTER2024), an RfC with community consensus against whatever the WMF wants to do (e.g. WP:FR2022RFC) or in theory some kind of drastic protest like a unilateral blackout (proposed in 2024) or an editor strike. My preference in this case is an RfC to stop the silliness. If the WMF then explicitly overrides what is very clear community consensus, we're in new territory, but I think they're unlikely to go that far. Cremastra (u — c) 21:36, 3 June 2025 (UTC)
::@Cremastra Maybe you can start an RfC on a very visible place? Something like:
::The WMF has started a survey to ask if we want to put an AI summary in every article's lead section.
::Unsurprisingly, even the example they gave in their screenshot [https://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)&oldid=1293819617#The_Dopamine_summary contains hallucinated AI nonsense].
::Please voice your opinions! Polygnotus (talk) 21:39, 3 June 2025 (UTC)
:::I took the survey. Its questions are confusing, and watch out for the last question: the good-bad, agree-disagree direction for the response buttons is REVERSED. Sloppy survey design. – Jonesey95 (talk) 21:40, 3 June 2025 (UTC)
::::I just hit this survey in the wild so to speak, so I did fill it out due to seeing it there. That last question switcheroo totally threw me, I don't think those results will be usable. CMD (talk) 02:54, 4 June 2025 (UTC)
::::I noticed that too. I'm not convinced it wasn't on purpose. In any case, I wouldn't trust the results of that last part. DJ-Aomand (talk) 11:39, 4 June 2025 (UTC)
::::Sloppy, or intended to produce more pleasing results? The Morrison Man (talk) 12:55, 4 June 2025 (UTC)
:::As I said at the top, I think our immediate concern should be the actual proposed experimentation, not the survey.
:::I was thinking something along the lines of
:::The WMF has proposed testing AI-generated summaries appended in front of article leads ([https://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(technical)&oldid=1293819617#The_Dopamine_summary example]). Does the community approve of this use of AI, or is this inappropriate and contrary to Wikipedia's mission? Cremastra (u — c) 21:42, 3 June 2025 (UTC)
::::They will use the survey as a weapon and pretend it gives them free rein to do whatever they want. A lot of people here will simply leave the second they see such an implementation of AI on a Wikipedia page, because that goes against everything we stand for. Getting those people back will be near impossible. Polygnotus (talk) 21:44, 3 June 2025 (UTC)
:::::If the WMF feels like acting with impunity, they'll do so. There has been little to no response from the WMF on this page, which suggests to me they're just going to roll ahead with their fingers in their ears. Which as thebiguglyalien points out above, may remind you of a certain guideline. Cremastra (u — c) 21:46, 3 June 2025 (UTC)
::::::I am certain {{ping|EBlackorby-WMF}} is not doing this because they are evil, I honestly believe these are good-faith people who do not understand what they are saying, and what the consequences of their words are.
::::::If I say things like {{tq|They are proposing giving the most important screen real estate we have (the WP:LEAD) of every article to a for-profit company.}} they haven't looked at it that way, because that is not how they think.
::::::I do not think they should be banned/blocked, I think they should be educated. But we must stop them from doing more damage, one way or the other. Polygnotus (talk) 21:51, 3 June 2025 (UTC)
:::::::No one here thinks the WMF or any of their employees are "evil"; that is a ludicrous word to be using. If the WMF would respond to the feedback on this page (which is overwhelmingly against the proposal), it would reassure me and many others. The present state of silence is what worries me. Cremastra (u — c) 21:53, 3 June 2025 (UTC)
::::::::{{tq|No one here thinks the WMF or any of their employees are "evil"}} hahahhaha Polygnotus (talk) 21:54, 3 June 2025 (UTC)
::::::::Yes, some people here honestly think the WMF is evil. Seriously. I even had to defend them in the context of the ANI vs WMF court case thing. They were falsely accusing the WMF of throwing those editors under the bus and abandoning them. Complete nonsense of course. But yeah some people do harbor an irrational hatred against the WMF. Polygnotus (talk) 21:56, 3 June 2025 (UTC)
::::::Y'all, please take a look at Special:Log/newusers/EBlackorby-WMF and do the math. She's been around for three (3) weeks. She very likely has no input into the design of any of this. You could make her job easier by just filling out the survey and encouraging everyone else to do the same.
::::::That said, we need to keep in mind that "what readers think" and "what readers want" has very little overlap with what editors want. For example: We write complex texts that take half an hour or more to read; readers usually spend less than 10 seconds on the page. We provide dozens or even hundreds of inline citations; readers don't click on any of them for 299 out of 300 page views, and on the 1/300th case, they only click through to one (1) source. We usually have zero or one images in an article; readers would like a dozen or more. We (well, some of us) worry about Wikipedia's reputation; a surprising percentage of readers don't actually remember that they're reading Wikipedia. In other words, it's entirely possible that many readers would be completely happy with this, even though the core community will hate it. WhatamIdoing (talk) 04:41, 4 June 2025 (UTC)
:::::::@WhatamIdoing {{tq|You could make her job easier by just filling out the survey and encouraging everyone else to do the same.}} If they wanted us to fill in the survey, why didn't they post the link?
:::::::{{Tq|it's entirely possible that many readers would be completely happy with this}} Good news for them, most search engines already include AI fluff that you explicitly have to opt-out of, so they can get their AI hallucination fix on any topic faster (and more conveniently) than they can reach Wikipedia. Polygnotus (talk) 04:45, 4 June 2025 (UTC)
::::::::And perhaps one based solely on the Wikipedia article, instead of Wikipedia plus who knows what else, would actually be an improvement for these readers. It doesn't interest me, but I'm not going to tell other people what they're allowed to read. WhatamIdoing (talk) 05:00, 4 June 2025 (UTC)
::@Izno Not really, since that is about AI development, something the WMF is incapable of doing. Polygnotus (talk) 21:42, 3 June 2025 (UTC)
:::@Polygnotus, Matma said it nicely earlier. Let me say it a little less nicely: Tone it down, now. You are being needlessly antagonistic and on top of that bludgeoning this discussion. Find something else to do for a while. Izno (talk) 21:55, 3 June 2025 (UTC)
::::@Izno That is indeed not very nice, and rather antagonistic. Polygnotus (talk) 22:00, 3 June 2025 (UTC)
::I was under the impression that discussion was broader and of the type that spends three months hammering out a wording. This is focused on a quick response to a specific issue. Cremastra (u — c) 21:43, 3 June 2025 (UTC)
:::Yes, I agree that's the impression, but I don't think that you can demonstrate consensus to do anything about this discussion without showing consensus in that discussion, without your own separate RFC. Izno (talk) 21:57, 3 June 2025 (UTC)
::::I think this is right, and I have now !voted in the Cent RfC accordingly. CMD (talk) 02:08, 4 June 2025 (UTC)
::Even though (as mentioned above) that discussion is about AI development as a whole, a few WMF employees actually discuss Simple Summaries in a bit of detail over there, so it may be worth reading through. —Gestrid (talk) 06:26, 4 June 2025 (UTC)
:Can we use site CSS to suppress it? Nardog (talk) 22:33, 3 June 2025 (UTC)
::If we can't, we will have to add a note that is displayed on every single article that tells readers to ignore the summary (and perhaps hide that note on desktop). —Kusma (talk) 10:58, 4 June 2025 (UTC)
I am just about the least qualified editor here, but I'd think spreading the survey and participating in the current AI development RfC should come before anything drastic. ⫷doozy (talk▮contribs)⫸ 21:52, 3 June 2025 (UTC)
I suggest starting an RfC at the VPProposals page with a simple question ("Should English Wikipedia articles offer AI-generated summaries?" or something like that) and a link to the :mediawikiwiki:Reading/Web/Content Discovery Experiments/Simple Article Summaries project page. Keep it simple. I predict that 99% of the users will !vote to oppose the feature, but at least with an RfC, the WMF will know where the "community" stands on this specific project. Some1 (talk) 22:49, 3 June 2025 (UTC)
:Yes, this. Cremastra (u — c) 23:06, 3 June 2025 (UTC)
I would presume interface admins could remove the extension once the WMF forces it onto us, right? LilianaUwU(talk / contributions) 05:42, 4 June 2025 (UTC)
:It could probably be suppressed with site CSS. 3df (talk) 07:46, 4 June 2025 (UTC)
:Interface administrators have access to gadgets, user scripts, and sitewide JavaScript and CSS, not extension installation and configuration. Extension installation and configuration is done by WMF folks using a different process (patches and deploys of the operations/mediawiki-config repo in Gerrit). –Novem Linguae (talk) 07:58, 4 June 2025 (UTC)
:: User:Novem Linguae: are you saying that there is nothing we can technically do if this gets deployed to stop or hide the AI summaries? —Femke 🐦 (talk) 12:00, 4 June 2025 (UTC)
::: Likely they could add CSS or JS to remove or hide the box with the AI content. Remember WP:Superprotect? That was added back in 2014 when German Wikipedia was doing much the same to hide MediaViewer. I don't think they'd try to bring back superprotect to fight back if we did it, but they might do other things. Anomie⚔ 12:12, 4 June 2025 (UTC)
:::Nope. I was just making the point that interface administrators do not have direct control of MediaWiki extensions. As mentioned by some others, it's possible to break some things using hacks (in this case the hack would probably be an edit to MediaWiki:Mobile.css or MediaWiki:Common.css or similar). This would be similar to what Portuguese Wikipedia did to block IP addresses from editing. We should think very carefully before crossing that bridge though. That would be a major escalation with the WMF. –Novem Linguae (talk) 17:15, 4 June 2025 (UTC)
I will note that I've asked folks at the WMF to reconsider this decision. There probably needs to be a wider discussion (both internally and potentially onwiki) about the rules around what can and cannot be A/B tested (stuff like, "hey should we have a bigger donate button" doesn't require consensus, but this feels closer to a pseudo-deployment). I think it also might make sense to potentially spin this tool in a different direction, say as an LLM that highlights hard technical language text on the lede that the user can then fix. (I think the core problem here still definitely needs addressing) Sohom (talk) 13:10, 4 June 2025 (UTC)
:I don't think we can begin to discuss spinning such a feature in the direction of highlighting "hard" or "technical" language without clearly defining what that threshold should be. What reading level are we aiming for across ENWiki? Grabbing a quote from the [https://www.mediawiki.org/wiki/Reading/Web/Content_Discovery_Experiments/Simple_Article_Summaries/Usability_study mediawiki page on the usability study for Simple Article Summaries]:
::"Most readers in the US can comfortably read at a grade 5 level,[CN] yet most Wikipedia articles are written in language that requires a grade 9 or higher reading level. Simple Summaries are meant to simplify and summarize a section of an article in order to make it more accessible to casual readers."
:A grade 5 level would mean that all lede sections would need to be completely understandable for a 10-11 year old. I fear simplifying text to this degree will end up reducing the nuance present in articles (which, per its nature, is already reduced in the lede). The Morrison Man (talk) 13:23, 4 June 2025 (UTC)
::Grade 9 is a good level for the lead of an article. It's not our fault that most Americans are bad readers. Cremastra (u — c) 13:35, 4 June 2025 (UTC)
:::I think it's fine for editor-facing tooling to be wrong at times, (assume a lower grade/have the grade be configurable) primarily cause editors have the ability to make judgement calls and not show parts of the text, something that readers can't. Sohom (talk) 14:30, 4 June 2025 (UTC)
I personally find it very problematic that we cannot do 2 week experiments. Experimentation is the basis of learning, of evolving, of knowing where to go from where you are. If a two week experiment is this problematic, I think we should question the longevity of the project (on a generational scale). If people want to give input, they should give input, but that shouldn't block a 2 week experiment. —TheDJ (talk • contribs) 13:27, 4 June 2025 (UTC)
:@TheDJ I think the problem here isn't so much experimentation (which imo is fine), but rather the fact that this "feels like a deployment". Peeps who would see such an experiment would assume that Wikipedia is going the AI way (when it is not in fact doing that and is actively discouraging people from using AI in their writing). If the experimentation had community buy-in, I think we would have a completely different story. Sohom (talk) 13:32, 4 June 2025 (UTC)
:Experiments are fine, when they are conducted ethically. That is especially true of experiments involving human subjects. In this case, it was proposed that we present potentially misleading AI content to readers, who would not be aware of, nor had consented to being, test subjects. For things like minor UI changes, such unknowing A/B type testing may indeed be ethical, but not for some massive change like that. Readers to Wikipedia do not expect to receive AI-generated material; indeed, one of the things I love most about Wikipedia is that it's written by people, and does not use any "algorithm" or the like to try to shove something in anyone's face. You just get the article you pull up, and if from there you want a different one, you choose which one you read next. Now, if there were an opt-in process for people to consent to being part of such an experiment and provide feedback, that might be a different story. SeraphimbladeTalk to me 16:48, 4 June 2025 (UTC)
:@TheDJ Let's not pretend that the community reacts like this because it is a 2 week experiment. That is the mother of all strawmen.
:The whole thing is clear proof that the WMF is completely out of touch, does not understand its own role, and has no respect for the volunteers, or the readers. Polygnotus (talk) 18:27, 4 June 2025 (UTC)
At the same time as this underhanded attempt to sneak AI slop into the content, they are also making a request on meta to run test donation banners more often exclusively on enwiki. Starting at the extreme so as to work backwards, I suggest revoking all donation banner permissions until such time as everyone employed by or elected to WMF and affiliate roles with generative AI COI or positive views towards the same are terminated and prohibited from holding elected office. Competence is required. Awareness of community norms is required for anyone holding an elevated role on enwiki. Hold WMF to the same standards as you hold admins and contributors. Recall the WMF. 216.80.78.194 (talk) 20:08, 4 June 2025 (UTC)
=Beyond just this case=
This is a prime reason I tried to formulate my statement on WP:VPWMF#Statement proposed by berchanhimez requesting that we be informed "early and often" of new developments. We shouldn't be finding out about this a week or two before a test, and we should have the opportunity to inform the WMF if we would approve such a test before they put their effort into making one happen. I think this is a clear example of needing to make a statement like that to the WMF that we do not approve of things being developed in virtual secret (having to go to Meta or MediaWikiWiki to find out about them) and we want to be informed sooner rather than later. I invite anyone who shares concerns over the timeline of this to review my (and others') statements there and contribute to them if they feel so inclined. I know the wording of mine is quite long and probably less than ideal - I have no problem if others make edits to the wording or flow of it to improve it.{{pb}}Oh, and to be blunt, I do not support testing this publicly without significantly more editor input from the local wikis involved - whether that's an opt-in logged-in test for people who want it, or what. Regards, -bɜ:ʳkənhɪmez | me | talk to me! 22:55, 3 June 2025 (UTC)
:I mostly agreed with the thrust of your statement formulation before, but unfortunately this case makes it seem too weak. Bluntly, whether we are informed is somewhat of a moot point here. The issues with the example should have been caught internally, far before they made it to the craft-a-custom-youtube-video-for-a-survey phase, and far before they would need to inform communities. In the survey linked above, the tool blatantly and obviously fails on its own merits for its own purpose. To be at the two-week live test phase now, with the tool as it is? Informing us is not the issue. CMD (talk) 02:17, 4 June 2025 (UTC)
:Another approach would be to no longer allow the WMF to monetize the work of Wikipedians, and instead run our own banners to collect money for a new war chest. The WMF will never take the community seriously as long as they are the only possible provider of what we need. If there is a viable alternative that will change. Polygnotus (talk) 02:26, 4 June 2025 (UTC)
::Proposals to create an alternate WMF are not going to be helpful to this discussion. We are an existing community trying to work with the WMF, forking is a distraction. CMD (talk) 02:56, 4 June 2025 (UTC)
:::@Chipmunkdavis Sorry, I was perhaps unclear. What I mean is not forking, but making sure we are capable of doing that. Polygnotus (talk) 02:59, 4 June 2025 (UTC)
::::In the technical sense, we are capable of doing that as is. In practical and logistical senses, it would take moving some mountains which lie far outside the scope of this discussion. CMD (talk) 03:02, 4 June 2025 (UTC)
:::::Good to know, thanks. Polygnotus (talk) 03:20, 4 June 2025 (UTC)
We may need to start another RfC that says something like: "The WMF is not allowed to use secret surveys and has to use the conventional Wikipedia consensus building methods (talkpages, RfCs, et cetera)." Polygnotus (talk) 02:19, 4 June 2025 (UTC)
:Hm. Originally I thought this was some kind of A/B test and we should let the experiment play out without interference...for science! But now that I've seen the questions, this is not an A/B test. This is trying to gauge community support. It is trying to be an RfC. It should not have been hidden and doled out randomly. It should have been a public survey. Consider me suitably outraged. Toadspike[Talk] 03:21, 4 June 2025 (UTC)
::@Toadspike Indeed. And it is pretty worrying that the WMF uses their technical access to the servers to try to evade our consensus building methods. Polygnotus (talk) 03:25, 4 June 2025 (UTC)
:::It might be more worrying that editors don't grasp the point of random sampling. Public surveys, and even quasi-private ones, tend to get a lot more responses from certain types of contributors (e.g., editors with >30K edits) than others. If you want to know what everyone thinks, then posting the link to a page where mostly highly active editors will see it (and only a tiny fraction of them – only 1 in 500 registered editors ever posts to the Village pumps, and even if you look only at WP:EXCON editors, it's just one in six of them) is not a way to go about it. Surveying a biased sample set is exactly the kind of bad behavior by survey operators that we see at Wikipedia:Fringe theories/Noticeboard all the time, so we shouldn't be advocating for using it here. WhatamIdoing (talk) 04:57, 4 June 2025 (UTC)
::::@WhatamIdoing As someone whose second-favourite book is about lying with statistics... any choice you make is wrong, and it is about choosing the lesser of a bunch of evils. This was a terrible choice. Polygnotus (talk) 05:01, 4 June 2025 (UTC)
:::::If your goal is to get an accurate understanding of the sentiment in a given population, and you believe that 100% responses are unlikely, then proper random sampling is not "the lesser of a bunch of evils"; it is the right choice.
:::::If your goal is to show off that you subscribe to some non-scientific human values (e.g., "transparency!" or "following our conventional consensus-building methods"), then of course you wouldn't want to do things in a statistically sound manner. Instead, you'd want to take a leaf from the marketing manuals. I could suggest a model that I believe would work, except that (a) I don't think marketing-led software development is the right approach for Wikipedia, and (b) I don't want to provide a manual on how to do it. WhatamIdoing (talk) 05:12, 4 June 2025 (UTC)
::::::@WhatamIdoing You are invited to come read the book. It has an entire chapter that deals with problems such as this (and ethics more broadly).
::::::The idea that this is, somehow, "science", and that therefore we can do all kinds of bad/unethical stuff has historically been proven to be a bad one. You most likely know a bunch of examples.
::::::Who cares about a statistically sound manner of doing research when someone is proposing to give the best screen real estate we have, the lead sections of our articles, to some multi-billion dollar AI company, and to use the volunteers as free labour?
::::::Sorry, I can't pretend that there is a discussion to be had about survey methodology instead of one about incompetence and disrespect for the volunteers. Polygnotus (talk) 05:21, 4 June 2025 (UTC)
:::::::AI companies are already using Wikipedia volunteers as free labor. See recent discussion (now archived): {{slink|Wikipedia:Village_pump_(miscellaneous)/Archive_82|Have_editors_become_free_labor_for_AI_techbro_oligarchs?}}. Some1 (talk) 12:08, 4 June 2025 (UTC)
:::::::Random sampling is neither "bad" nor "unethical". NB that I'm talking about your suggestion above that {{xt|"The WMF is not allowed to use secret surveys and has to use the conventional Wikipedia consensus building methods (talkpages, RfCs, et cetera)."}} and not about whether AI is desirable in general, or this proposed use is desirable in practice. WhatamIdoing (talk) 20:29, 4 June 2025 (UTC)
::::::::@WhatamIdoing {{tq|Random sampling is neither "bad" nor "unethical".}} no one said it was, as far as I know. Polygnotus (talk) 20:31, 4 June 2025 (UTC)
:::::::::It sure looks like you did: "The idea that this [random sampling and proper statistical standards] is, somehow, "science", and that therefore we can do all kinds of bad/unethical stuff". WhatamIdoing (talk) 20:35, 4 June 2025 (UTC)
::::::::::I wrote: {{Tq|The idea that this is, somehow, "science", and that therefore we can do all kinds of bad/unethical stuff has historically been proven to be a bad one.}}
::::::::::You wrote: {{tq|It sure looks like you did: "The idea that this [random sampling and proper statistical standards] is, somehow, "science", and that therefore we can do all kinds of bad/unethical stuff".}}
::::::::::Spot the differences. Polygnotus (talk) 20:38, 4 June 2025 (UTC)
:::::::::::Square brackets are a convention in the English language to identify words that have been added as a clarification by an editor. You might have run across that in academic sources in the past.
:::::::::::I am using this convention to tell you what I understood the Antecedent (grammar) of the pronoun "this" in your sentence to mean. A typical response to such a statement sounds like one of these two:
:::::::::::# 'I apologize for being unclear. When I wrote "The idea that this is somehow science...", I didn't mean statistics; I meant "The idea that [fill in the blank with, e.g., 'AI' or 'marketing' or whatever] is somehow science..."', or
:::::::::::# 'Yes, you understood me correctly. I think it's wrong to consider random sampling and proper statistical standards to be any type of science. Instead, I think statistics should be considered a [fill in the blank with, e.g., 'non-science like fine artwork' or 'a pseudoscience like Time Cube'].'
:::::::::::::I'm telling you how I actually do (and did) interpret that pronoun. WhatamIdoing (talk) 20:58, 4 June 2025 (UTC)
::::::::::::::@WhatamIdoing You still interpret it like that? command-a-03-2025 and c4ai-aya-expanse-32b didn't get it either. But Claude did. Polygnotus (talk) 23:13, 4 June 2025 (UTC)
{{textdiff|1=The idea that this is, somehow, "science", and that therefore we can do all kinds of bad/unethical stuff has historically been proven to be a bad one.|2=The idea that this [random sampling and proper statistical standards] is, somehow, "science", and that therefore we can do all kinds of bad/unethical stuff}}
:A decade ago, work-me ran one of these surveys. We offered an on-wiki version and an off-wiki (Qualtrics) version. We got about 500 (yes, five hundred) responses in Qualtrics and just two (2) on wiki. People voted with their feet, and I've no reason to believe that it would be any different for any other survey. You might not approve of their choices (it's ever so much easier to argue with people who give the 'wrong' answer if it's on wiki, with their username right there...), but these are the choices people make, and I'd rather get 500 responses in Qualtrics than just two (or even ten) on wiki. WhatamIdoing (talk) 04:49, 4 June 2025 (UTC)
:Speaking of evil, I noticed as I landed on the last page that the order of good and bad responses had been switched at one point during the survey. Can't help but feel like they did this intentionally. LilianaUwU(talk / contributions) 05:19, 4 June 2025 (UTC)
::Let's not assume bad survey design is intentional? CMD (talk) 05:21, 4 June 2025 (UTC)
:::Yeah I give them the benefit of the doubt on that one. They obviously didn't take much care in designing the survey. Polygnotus (talk) 05:23, 4 June 2025 (UTC)
::There are indeed benefits to random sampling. Asking "do you like this new feature or not" is fine. But the survey asks several questions about who should moderate this new content which would certainly be subject to community approval later anyways, which is weird. Toadspike[Talk] 10:19, 4 June 2025 (UTC)
::I also was thrown off by the switch from "agree"/"disagree" to "unhelpful"/"helpful" and it almost caused me to vote in favor of AI usage. Whether from deception or incompetence, it renders the results of the last set of questions completely useless as there's no way to know how many people voted incorrectly. Dan Leonard (talk • contribs) 19:27, 4 June 2025 (UTC)
=Does Phab hide more?=
Who is willing to check Phabricator to see what other LLM stuff they have in store? Polygnotus (talk) 08:26, 4 June 2025 (UTC)
:[https://phabricator.wikimedia.org/search/query/YxhbIuZpTgQC/#R Here's a list of tasks which mention "AI"] Jo-Jo Eumerus (talk) 08:36, 4 June 2025 (UTC)
:To answer the question asked in the section title: no.
:Let's tone down the witch hunt. (Also yes, the number of tasks mentioning AI might be more, but Tone Check and Simple Article Summaries are the only two WMF led ones planned for now). Sohom (talk) 11:48, 4 June 2025 (UTC)
:Phab doesn't hide things unless it is NDA bound. If anything is hidden, it is hidden in plain sight. —TheDJ (talk • contribs) 13:34, 4 June 2025 (UTC)
::+1 to what TheDJ said here, there is very little behind NDA, except security bugs and the like. Sohom (talk) 14:17, 4 June 2025 (UTC)
= Taking a step back =
Hey everyone, this is Olga, the product manager for the summary feature again. Thank you all for engaging so deeply with this discussion and sharing your thoughts so far.
Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March. As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further. With that in mind, we’d like to take a step back so we have more time to talk through things properly. We’re still in the very early stages of thinking about a feature like this, so this is actually a really good time for us to discuss here.
A few important things to start with:
Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such.
We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.
With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.
We’ve also started putting together some context around the main points brought up through the conversation so far, and will follow-up with that in separate messages so we can discuss further.
:{{tqq|With all this in mind, we’ll pause the launch of the experiment so that we can focus on this discussion first and determine next steps together.}} Wonderful. Thank {{u|OVasileva (WMF)|you}} very much. Cremastra (u — c) 13:36, 4 June 2025 (UTC)
::Concurring with the other editors below. Thank you very much for pausing, but I think the next steps should be an agreement to not go forward with this at all. It doesn't take an admin to see that there is overwhelming consensus here against this proposal, and this website operates by consensus. This proposal should be treated as any other, from any editor, but in this case it has been clearly rejected by the community. Cremastra (u — c) 15:00, 4 June 2025 (UTC)
:Thank you for listening to the community on this one - but may I suggest simply scrapping the whole idea? I fail to see any way it will ever be acceptable to the vast majority of editors. CoconutOctopustalk 14:12, 4 June 2025 (UTC)
::@CoconutOctopus I think there are valid ways of implementing this idea, perhaps as a stand-alone browser extension, or maybe even as a tool that highlights technically worded or hard to understand text for editors or for that matter, maybe a tool that pops up relevant related articles to look at for definitions of technical terms. I would not call for scrapping this line of work, but I would definitely call for caution since it can be easy to accidentally erode trust with readers. Sohom (talk) 14:27, 4 June 2025 (UTC)
:Glad to hear this. Please keep in mind that while it's true that editor involvement is essential, volunteer time is our most precious resource, and a lot of us are already spending a lot of that time cleaning up AI-generated messes. -- asilvering (talk) 14:17, 4 June 2025 (UTC)
::Good reminder about the influx of AI garbage at AfC and NPP as a key context here. I think this proposal felt particularly misguided because it was actively counter to editors' most pressing needs re: AI, namely, anything that could help us spend fewer hours of our precious youth carefully reading content that no human took the time to write. ~ L 🌸 (talk) 17:16, 4 June 2025 (UTC)
:::Indeed. AI tools that help editors identify which articles are most likely to be most in need of a more simplified lead? That could be hugely useful. AI tools that give us more shit to shovel, while dealing possibly irreparable harm to our current image as "[https://www.wired.com/story/wikipedia-online-encyclopedia-best-place-internet/ the last best place on the Internet]"... I'll pass. -- asilvering (talk) 17:26, 4 June 2025 (UTC)
:I think I'm with CoconutOctopus on this one. What you're seeing here isn't like the initial opposition to Visual Editor (where it wasn't yet fit for purpose, but one day might be, and indeed after more effort was put into it, it was and it was then pretty readily accepted). This is primarily opposition to the proposal altogether, that AI-generated material would ever be presented as article content. I do not see any way that such a thing could ever be acceptable, regardless of what was done with it. SeraphimbladeTalk to me 14:30, 4 June 2025 (UTC)
:Echoing the other editors. There is absolutely zero way in which I would ever be comfortable with presenting readers with AI generated content. Your step back is a little win, but I definitely don’t like the implication that you will return in the future. Scaledish! Talkish? [https://xtools.wmflabs.org/ec/en.wikipedia.org/ScaledishStatish]. 14:54, 4 June 2025 (UTC)
:Thank you very much for listening to the needs of the community! The idea did get me thinking: while there is strong opposition to AI-generated content, I haven't seen as much discussion about the other part of the idea, namely, bringing summaries to articles. While, in most articles, it would be redundant with the lead, a "simple summary" could be interesting to consider for articles with a long and technical lead. The infrastructure for this project can definitely be used to work on an implementation of volunteer-written summaries on technical articles, if the community and the WMF are both interested! Chaotic Enby (talk · contribs) 15:09, 4 June 2025 (UTC)
::I agree, I think users being able to write simple summaries themselves would also be not a bad idea Sohom (talk) 15:21, 4 June 2025 (UTC)
:::I'm realizing that it could be done with a template (maybe a reskin of a collapsible box) and would not necessarily need WMF involvement, although their help would still be welcome for some technical aspects like Visual Editor integration and for A/B testing variants of the format once the idea has community consensus (if that happens). Also thinking that, since these summaries would be user-editable, it might be neat to have a gadget to edit them directly (like Wikipedia:Shortdesc helper and the lead section edit link). Chaotic Enby (talk · contribs) 15:31, 4 June 2025 (UTC)
:::Indeed, and a tool that would help editors with these might be useful, as opposed to creating new layers of summaries. CMD (talk) 19:01, 4 June 2025 (UTC)
:::Infoboxes are yet another type of summary. And the proposed feature seems rather like Page Previews which are another existing type of article summary. Wikipedia has a problem of proliferating complexity -- see feature creep. Andrew🐉(talk) 22:16, 4 June 2025 (UTC)
:Grateful for a) the editors that spoke up here, and b) WMF for recognizing community concerns and agreeing that this needed to be paused. Just adding my voice to say - with no ill will toward the teams that developed it - this seems like an extremely bad idea on its face. 19h00s (talk) 15:39, 4 June 2025 (UTC)
:+1 to what others have said. AI-generated summaries should not have a place on Wikipedia. 🌸wasianpower🌸 (talk • contribs) 15:46, 4 June 2025 (UTC)
:Thank you Olga for responding and taking the community's concerns on board. --Grnrchst (talk) 16:41, 4 June 2025 (UTC)
:To reiterate what others have said, I do not see any scenario in which I support any readers or editors, ever, viewing AI-generated content on Wikipedia. This project is fundamentally against the Wikipedia ethos and should be done away with entirely. —Ganesha811 (talk) 16:44, 4 June 2025 (UTC)
:@OVasileva (WMF): I hope the WMF will use randomly-selected surveys of editors and readers to gather feedback rather than self-selected surveys, because self-selected surveys (like comments on wiki talk pages) will always result in skewed feedback. Those of us who want the WMF to keep iterating, experimenting, and testing may not be as vocal as others but we may be more numerous, who knows. Levivich (talk) 17:03, 4 June 2025 (UTC)
::I think there are a lot of contexts where I would agree with this sentiment - that is, that the commenters are an elite who are not representative of the bigger group. However, in this case, where there is going to be an explicit need for editor moderation, a discussion with 85 participants probably does have some degree of representativeness of the kinds of people who would then do that moderation. Best, Barkeep49 (talk) 18:52, 4 June 2025 (UTC)
:I think it is already clear that "cancel" would fit better here than "pause". 3df (talk) 17:34, 4 June 2025 (UTC)
:A bit late to this conversation, but I agree with the "Yuck" sentiments. I think that a pause on development of this feature is insufficient; a cancellation is the minimum acceptable response here, and ideally it should include better communication so we don't ever get two weeks away from something like this again. Do we need an RFC now to give our interface admins preclearance to remove these summaries if the WMF ever does roll them out? Tazerdadog (talk) 20:34, 4 June 2025 (UTC)
:I'll have to agree with everyone else: it shouldn't be a pause on development, it should be an outright cancellation. We're the last mainstream website without AI being continually rammed down our throats, and it should remain that way. LilianaUwU(talk / contributions) 22:11, 4 June 2025 (UTC)
::...and if you want to experiment, use test.wikipedia.org not en.wikipedia.org Polygnotus (talk) 22:13, 4 June 2025 (UTC)
::Hi all (ping @Polygnotus and @Geni). I’m Marshall Miller, working with Olga (but in a different timezone!) Thanks for noting this — the survey is still running. It’s too late in the day for the team to turn it off from a technical perspective — tomorrow is the soonest we can do it. And I understand your concern — we don’t want this survey to accidentally convey that we are definitely building/deploying this feature broadly. I’m hopeful that by the time we can turn it off, there will be enough data collected for us to be able to look at informative results together (staff and volunteers). MMiller (WMF) (talk) 02:10, 5 June 2025 (UTC)
:::Great, thanks. Cremastra (u — c) 02:12, 5 June 2025 (UTC)
:::{{tq|I’m hopeful that by the time we can turn it off, there will be enough data collected for us to be able to look at informative results together (staff and volunteers).}} Note that the survey is incredibly flawed in a bunch of ways, so it is impossible to draw conclusions from it. Also note that surveys are not how we make decisions here; the Wikipedia community has discovered that our consensus-based model is superior to simple voting. It would be very good to have a retrospective where we can discuss what went wrong and how we can avoid making similar mistakes in the future. Also, I am pretty sure that the community wants assurances that something like this won't happen again. They are already drafting ways to tell the WMF to stop doing this.
:::As a nerd I like AI stuff and I use AI every day, but as a Wikipedian I know how careful we gotta be if we want to use AI properly on Wikipedia. Polygnotus (talk) 02:25, 5 June 2025 (UTC)
I may be wrong, but it seems to me that the deployment of AI for article descriptions is a bit of a solution in search of a problem. It looks like people want to use AI and then think this is a good way. Can we think about what the problems are on Wikipedia and how to solve them instead? Perhaps the answers involve AI, perhaps they do not. In the case at hand:
Is it true that lead sections are often too technical?
If yes, is there a way to identify which lead sections are too technical?
If yes, how can we improve these lead sections?
AI could possibly help with these things, but not by replacing our human-written leads with AI-generated ones. That is what software companies without an army of highly intelligent and opinionated volunteers working for them for free do. (Google or Facebook might do these things, because they are technology-based, but there is absolutely no reason for a human-based place like Wikipedia to replace human output with machine output; it is antithetical to the way we work.) Any deployment of AI on Wikipedia must be subordinate to the humans, not replace them. So anyway, could we do the process the right way around: first identify a problem that needs solving, then discuss how to approach the problem and what tools are best to use for it. —Kusma (talk) 19:03, 4 June 2025 (UTC)
:Well put! I'm not 100% against any interaction between AI tools and Wikipedia - but they must be deployed judiciously on the back end to solve specific problems, not suddenly rolled out to 10% of all mobile users in a "test" in order to replace the work of human editors. —Ganesha811 (talk) 21:52, 4 June 2025 (UTC)
:Yes, thank you for this. I see the implementation of AI—at least right now—similarly to putting WiFi in toothbrushes. Is it "the latest tech-y thing"? Yes. Does it make our lives easier? No. Relativity ⚡️ 02:48, 5 June 2025 (UTC)
::@Relativity As someone who is strongly against this proposal, I should say that using AI can truly be beneficial. They completely missed the mark on the Five Ws and how to communicate and all that, but the technology in itself isn't uniformly bad. I am using AI when I edit Wikipedia in ways that are beneficial and non-controversial.
::For example, Claude just wrote a script for me that shows the currently active surveys on my userpage. So if the WMF has another bad idea I will know about it.
::And I have also used AI for things like detecting typos, missing words and other small imperfections. Ultimately, I take the decision and the responsibility, and the AI sometimes says a bunch of nonsense, but it can be a useful tool, if you know how to work with it. Polygnotus (talk) 02:55, 5 June 2025 (UTC)
Tech News: 2025-23
Latest tech news from the Wikimedia technical community. Please tell other users about these changes. Not all changes will affect you. Translations are available.
Weekly highlight
The Chart extension is now available on all Wikimedia wikis. Editors can use this new extension to create interactive data visualizations like bar, line, area, and pie charts. Charts are designed to replace many of the uses of the legacy Graph extension.
Updates for editors
It is now easier to configure automatic citations for your wiki within the visual editor's citation generator. Administrators can now set a default template by using the _default key in the local MediaWiki:Citoid-template-type-map.json page (example diff). Setting this default will also help to future-proof your existing configurations when new item types are added in the future. You can still set templates for individual item types as they will be preferred to the default template. [https://phabricator.wikimedia.org/T384709]
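By way of illustration, a local MediaWiki:Citoid-template-type-map.json using the new _default key might look like the sketch below. The item-type keys (journalArticle, bookSection, and so on) are standard Citoid/Zotero types, but the template names shown are examples for an English-Wikipedia-style wiki, not a prescribed configuration:

```json
{
    "_default": "Cite web",
    "journalArticle": "Cite journal",
    "bookSection": "Cite book",
    "newspaperArticle": "Cite news"
}
```

With this, any item type not explicitly listed falls back to the "Cite web" template, while the listed types keep their specific templates.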
Starting the week of June 2, bots logging in using action=login or action=clientlogin will fail more often. This is because of stronger protections against suspicious logins. Bots using bot passwords or using a loginless authentication method such as OAuth are not affected. If your bot is not using one of those, you should update it; using action=login without a bot password was deprecated in 2016. For most bots, this only requires changing what password the bot uses. [https://phabricator.wikimedia.org/T395205]
From this week, Wikimedia wikis will allow ES2017 features in JavaScript code for official code, gadgets, and user scripts. The most visible feature of ES2017 is async/await syntax, allowing for easier-to-read code. Until this week, the platform only allowed up to ES2016, and a few months before that, up to ES2015. [https://phabricator.wikimedia.org/T381537]
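As a sketch of what the new allowance enables (this example is illustrative, not from the announcement; fetchStep is a hypothetical helper standing in for a real asynchronous API call such as new mw.Api().get(...) in a gadget):

```javascript
// ES2017 async/await, now permitted in official code, gadgets, and
// user scripts. fetchStep is a hypothetical stand-in for an async call.
async function fetchStep(value) {
    // An async function always returns a Promise.
    return value * 2;
}

async function pipeline(start) {
    // Each await reads like synchronous code, replacing a .then() chain.
    const a = await fetchStep(start);
    const b = await fetchStep(a);
    return b;
}
```

Before this change, the same logic would have needed `fetchStep(start).then(a => fetchStep(a)).then(b => ...)` promise chaining.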
Scholarship applications to participate in the GLAM Wiki Conference 2025 are now open. The conference will take place from 30 October to 1 November, in Lisbon, Portugal. GLAM contributors who lack the means to support their participation can apply here. Scholarship applications close on June 7th.
I'm trying to figure out how to use the new Chart extension. As far as I can tell, your data has to exist in a page on Commons, in the Data namespace? Is that correct? RoySmith(talk) 00:45, 3 June 2025 (UTC)
:Currently, yes. Izno (talk) 00:46, 3 June 2025 (UTC)
::That's astonishing. Why? RoySmith(talk) 00:49, 3 June 2025 (UTC)
:::The FAQ says {{tq2|Chart definitions will live on their own .chart pages on Commons, under the Data: namespace. We want to treat charts as a standalone content type, rather than just a part of an article. It will be easy to reuse the same chart across wikis, and beyond Wikimedia platforms by making them available as links. Editors who want to embed charts in an article will be able to do so with a short piece of wikitext, similar to including an image from Commons, all without needing to interact with complex templates.
We have heard comments that requiring the data come from Commons tabular data may not address some common data sourcing flows, like from MediaWiki APIs or Wikidata Query Service. While those sources are not the focus for this project, we want to ensure the extension is designed in a way that they can be supported in the future.
}} My memory in addition to that is that it was seen as a minimum viable product. The particular point has been a pain for other editors since the project got to the point of developing this new extension; see :mw:Extension talk:Chart/Project#Data source and I suspect other conversations on that talk page. (And I've seen groaning elsewhere.) Izno (talk) 01:22, 3 June 2025 (UTC)
:::::I want to use this to chart the sizes of the various queues that feed the WP:DYK system: number of nominations pending, number of approved hooks, etc. I'll have a bot that computes these things and updates the data once a day. I guess that falls into the "some common data sourcing flows" bucket. Logically, I would have that data page live somewhere near the rest of the DYK pages, like Wikipedia:Did you know/DYK hook count. Having to put it on Commons won't break anything, but it seems silly, confusing, and arbitrary. I'm all for getting a MVP out the door, but how does hard-wiring Commons:Data into the path for the source make things simpler for the developers? RoySmith(talk) 10:37, 3 June 2025 (UTC)
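For reference, the Commons tabular pages being discussed are JSON documents in the Data: namespace. A minimal DYK-style .tab page might look like the sketch below; the field names and sample values are invented for illustration, while the license/description/schema/data layout follows the Commons tabular-data (JsonConfig) format:

```json
{
    "license": "CC0-1.0",
    "description": { "en": "DYK queue sizes, one sample per day" },
    "schema": {
        "fields": [
            { "name": "date", "type": "string", "title": { "en": "Date" } },
            { "name": "pending", "type": "number", "title": { "en": "Pending nominations" } },
            { "name": "approved", "type": "number", "title": { "en": "Approved hooks" } }
        ]
    },
    "data": [
        [ "2025-06-01", 120, 45 ],
        [ "2025-06-02", 118, 47 ]
    ]
}
```

A bot updating such a page appends one row per day to the "data" array; a .chart page then names this .tab page as its "source".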
::::::And, since this will involve a bot to write the data files, it will require that I go through the commons bot approval process, when I already have an approved bot on enwiki which could do the same thing with a lot less fuss. RoySmith(talk) 11:13, 3 June 2025 (UTC)
:::::::I don't even see how to run tests without polluting the global namespace. Normally I would start to play with something like this in my sandbox, but the current setup seems to make that impossible. RoySmith(talk) 13:15, 3 June 2025 (UTC)
:::::::::RoySmith, it's really no big deal getting bot approval on Commons, and once you have it you can do other things. The advantage of Commons is that the data is accessible to Lua modules from any wiki. Thus your program can be copied to any wiki without having to copy the data. Of course, if the data is enwiki-specific it wouldn't matter so much, but if the data for each wiki was kept under the same tree on Commons, then conceivably someone could write a summary program that looks at all wikis' data, and that program would then be installable on any wiki. It's nice to have a universally accessible place to store data, even though there is some initial setup to get bot approval. — GreenC 17:33, 3 June 2025 (UTC)
:::::::::Hmmm. I asked on commons and was surprised (in a good way) to discover that I didn't actually need permission. And I've since figured out that I can do my testing in Data:Sandbox/RoySmith, which seems a bit weird, but OK, not a blocker, so I've been playing around with commons:Data:Sandbox/RoySmith/DYK-test.chart, which produces:
:::::::::so at least I'm making some progress. I still need to figure out some layout issues. And to really make this useful, I'll need @Snævar's Transforms module but that's not quite ready.
:::::::::The sad part is using [https://wikitech.wikimedia.org/wiki/Prometheus Prometheus] would be so much easier, but apparently I'm not allowed to do that since it's reserved for production. Even easier would have been [https://wikitech.wikimedia.org/wiki/Graphite Graphite] but that's not a thing any more. RoySmith(talk) 17:51, 3 June 2025 (UTC)
:::::::::::RoySmith, Nice. Glad to know about this. I want to graph User:SDZeroBot/Category counter. The issue with time series is that they grow forever, while Commons has a file size limit. One can create new .tab files for each year, but the plumbing gets complicated on both the back and front ends. — GreenC 21:49, 3 June 2025 (UTC)
:::::::::::That was one of the nice things about graphite. It would time-compress older data so it took up less space. You could get (for example) 5 second resolution for the most recent data points, but the really old data might be aggregated to one sample every hour.
:::::::::::I'm thinking I'll want to store maybe 10 parameters, one sample per day. So maybe 200 kB per year which is trivial. If you've got a lot more data, maybe not so trivial for your use case. RoySmith(talk) 21:58, 3 June 2025 (UTC)
::::::::::@RoySmith Have you tested this in dark mode? Polygnotus (talk) 21:58, 3 June 2025 (UTC)
:::::::::::That's way, way, down on my list of things to worry about. RoySmith(talk) 21:59, 3 June 2025 (UTC)
:OMG. I have written a module that takes input from Commons data. The system works very well. However, the data is generated by a bot which can easily write JSON. Editing the data manually would be totally stupid (too difficult, too easy to make mistakes, too hard for others to check). Conceivably there could be a wikipage where some simple formatted data was entered (wikitext) and a bot could periodically copy changes to Commons. But using Commons data would be laughably impractical without a workaround. Johnuniq (talk) 05:59, 3 June 2025 (UTC)
::"But using Commons data would be laughably impractical" I still don't get why that is so impractical. Is this because people don't want to go to Commons? Why not? All other media is primarily there as well. Is it because people don't understand JSON and only understand a simple key:value notation? —TheDJ (talk • contribs) 09:37, 3 June 2025 (UTC)
:::When I looked at it (long ago in May 2020), the only way a human could update a number was to edit the whole JSON file, I think. I didn't worry about it because {{u|GreenC}} provided a bot which did all the hard work of maintaining the numbers and writing them in the correct format. I might be missing something, but I clicked 'edit' at :c:Data:Wikipedia statistics/data.tab and saw a hard-to-follow table. I could handle it because I would suck it into a proper editor, search for what I wanted, check that it was the right thing, and change it. I suspect most Wikipedians would be uncomfortable with something so unfamiliar. I haven't seen an example of data for a graph—perhaps that works out ok? Johnuniq (talk) 10:05, 3 June 2025 (UTC)
::::@Johnuniq You might want to enable "Tabular Import/Export" gadget in your Commons preferences. It adds buttons to .tab pages to import and export from and to csv and excel files. It's 8 years old, but it still seems to work, even though it could really use an update. —TheDJ (talk • contribs) 10:45, 3 June 2025 (UTC)
:::{{Tq|1="Is it because people don't understand JSON and only understand a simple key:value notation"}}—That probably applies to the significant majority of contributors. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 13:22, 3 June 2025 (UTC)
::::I don't know. I'd expect anyone working with datasets to have basic knowledge of JSON these days. It is so ubiquitous. —TheDJ (talk • contribs) 13:59, 3 June 2025 (UTC)
:::::Sure, for some value of "working with datasets". For anyone doing it professionally, it's a basic requirement. But for the average Wikipedia contributor, wanting a chart of their favourite sports team's performance, or their city's population over time..?
:::::Hopefully we will soon have (or link to) tools for making and editing JSON easily. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 11:06, 4 June 2025 (UTC)
::::You don't need to edit the JSON, at least for updating the tabular data. There's an "Edit as Table" button up at the top when editing a .tab file. --Ahecht (TALK PAGE) 14:22, 4 June 2025 (UTC)
::::::@Pigsonthewing The button appears while editing, so e.g. [https://commons.wikimedia.org/w/index.php?title=Data:Wikipedia_statistics/data.tab&action=edit]. Although on that page, when I click it, I get an error saying that "Dataset is too large to edit as a table", which honestly seems like a silly limitation… it's 50 kB of data, my computer has more than 1,000,000 times as much memory, surely it could handle it. It works on smaller pages, e.g. [https://commons.wikimedia.org/w/index.php?title=Data:Artwork_types.tab&action=edit]. Matma Rextalk 16:05, 4 June 2025 (UTC)
:::::::(I filed T396047 about this size limit.) Matma Rextalk 16:09, 4 June 2025 (UTC)
:::::::Probably that button should be more exposed than it is today. Perhaps with a treatment like VE two-button solution (providing both buttons on the top line) or like TemplateData, which has the 'edit with the special tool' link where it is displayed. Izno (talk) 16:14, 4 June 2025 (UTC)
:::Because most charts are single-use, despite what most developers might assume, and every single chart requires two different pages to be created. For example, each demographics Wikipedia page (e.g. Demographics of India) has 2–4 charts on average. Given that there are about 200 of these alone, there will be 400–800 pages on Wikimedia Commons just for this single use case. Furthermore, none of these charts are legitimately used outside of Wikipedia; perhaps a different language edition would find one useful, but does that require two different files? Why not have the option of just one .chart file? It's easy to nest JSON, after all. Additionally, it is rather repetitive to create these files, so much so that I have a bot request on Wikimedia Commons just for this purpose. GalStar (talk) 22:44, 3 June 2025 (UTC)
What I'm not seeing in any of the docs is how to select which columns I want to chart. Let's say I have a data set with 10 columns, but for a particular chart I want to show columns 1, 3, and 4. How do I do that? RoySmith(talk) 22:14, 3 June 2025 (UTC)
:There is no way to do that. If you click on items in the key, they do disappear. The system is pretty broken; the full specification is here: https://www.mediawiki.org/wiki/Extension:Chart/Specification. GalStar (talk) 22:34, 3 June 2025 (UTC)
:I will say that chart transforms will be arriving soon, which might be able to fix that. GalStar (talk) 22:34, 3 June 2025 (UTC)
:Additionally I found this feature request for it on the talk page: https://www.mediawiki.org/wiki/Extension_talk:Chart#Feature_request:_Enable_data_selection_for_y-axis GalStar (talk) 22:40, 3 June 2025 (UTC)
::Sigh. I really wonder what use case this extension is meant to address. It sure seems to have a non-optimal complexity:functionality ratio. RoySmith(talk) 22:43, 3 June 2025 (UTC)
:::I'm rather inclined to agree. I personally am working on writing a bot to automatically port everything over, but I am worried about the effects of vandalism on Wikimedia Commons. GalStar (talk) 22:45, 3 June 2025 (UTC)
::::So how will transforms work in practice? mw:Extension:Chart/Transforms says it will allow you to insert Lua modules. That's fine, but I'm not planning to learn Lua just to be able to produce a chart. Will there be some library of sharable modules which do common operations? RoySmith(talk) 16:26, 4 June 2025 (UTC)
:::::No clue, I also don't know how to use Lua modules. I think there will be shared Lua modules, but from the looks of it, it will be very disorganized. GalStar (talk) 16:32, 4 June 2025 (UTC)
::::::So, let's see. We have data arranged in tabular form. And we're inserting filters which transform that data into a different tabular form. Sounds like a problem that we've already figured out how to solve, so maybe somebody could build a transform which lets you just execute SQL. Which I'm sure is a bit of work, but given that SQLite and even [https://lunarmodules.github.io/luasql/ LuaSQL] already exist, probably not a huge job for somebody who has already figured out how transforms work. RoySmith(talk) 16:57, 4 June 2025 (UTC)
:::::::In my experience (with data visualization in general, not the Chart extension), SQL doesn't work well for these use cases, because you can't do pivot tables in it (not without weird extensions). I expect that someone will write useful Lua modules to do it soon. Matma Rextalk 21:01, 4 June 2025 (UTC)
:@RoySmith Once transforms are enabled, someone could write :commons:Module:Select columns that would allow you to do something like "transform": {"module": "Select columns", "function": "show", "args": {"1": "true", "3": "true", "4": "true"}} or "transform": {"module": "Select columns", "function": "hide", "args": {"2": "true", "5": "true", "6": "true"}}--Ahecht (TALK PAGE) 17:43, 4 June 2025 (UTC)
Do we need "Wikipedia:Village pump (charts)" or "Wikipedia:Village pump (technical)/Charts"; or some other page; at least while this is new? Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 20:50, 4 June 2025 (UTC)
:I feel having a proper Help:Charts would solve a lot of the questions we have right now, and some discussion on that talk page might be good. GalStar (talk) 20:55, 4 June 2025 (UTC)
I've been working on a bot to generate .tab and .chart files from the {{temp|Graph:Chart}} template. It started as a copy-paste thing, but at the moment it requires 2 inputs:
Name of the article
Names of each of the graphs
I have been trying to convert this into a true bot that doesn't require user intervention and was thinking that somehow using templates to mark graphs needing conversion as well as their names might be the best way forward. Is there any prior example of this that I could use as a template/scaffold?
:If you were to follow this approach, perhaps a new parameter for the {{tl|Graph:Chart}} template indicating an identifier to be used would be easiest? isaacl (talk) 21:37, 3 June 2025 (UTC)
::Good point, I'll add a proposal to that talk page. GalStar (talk) 22:05, 3 June 2025 (UTC)
:There should also be a category for charts that could be ported over, i.e. a category of transclusions that only use the arguments x, y, type, xAxisTitle, yAxisTitle, xAxisFormat and yAxisFormat. Anything else is waiting for more development work on the WMF side. Snævar (talk) 09:06, 4 June 2025 (UTC)
::I have an edit request for Graph:Chart, the main issue is adding the name and title parameters. Also it'd need to support y1, y2, ... and y1Title, y2Title, ...
::This seems like a good way to prevent unsupported items from being ported over for now (at least until those parameters on Graph:Chart get added). GalStar (talk) 16:31, 4 June 2025 (UTC)
::I created Template:PortGraph for this. GalStar (talk) 18:12, 4 June 2025 (UTC)
:@GalStar Any plans to support other related templates such as {{tl|Graph:Lines}}, {{tl|Graph:Stacked}}, or {{tl|Graph:PieChart}}? --Ahecht (TALK PAGE) 17:33, 4 June 2025 (UTC)
::I will likely support porting them too. I have to first get my commons bot request approved, and am in the process of filing one for Wikipedia. After I have approval to operate, expanding the scope should be easier. GalStar (talk) 18:12, 4 June 2025 (UTC)
::Note that `Category:Graphs_to_Port` is now being searched for test-edits. GalStar (talk) 21:28, 4 June 2025 (UTC)
= How to manage the width/height of charts =
The old Graph extension used the Graph:Chart template, which had width and height properties that allowed for this. However, the new Chart extension says that "Currently, charts occupy the entire container width where you place them". This is highly undesirable. I tried putting them in a div and then styling that div, but with little success, as it causes weird scrollbars. Any ideas on how to fix this? GalStar (talk) 22:09, 3 June 2025 (UTC)
:I'm working on a template to fix this, ported from the German Wikipedia. GalStar (talk) 02:56, 4 June 2025 (UTC)
::{{tl|ChartDisplay}} now exists. This should probably supersede just including charts. GalStar (talk) 03:01, 4 June 2025 (UTC)
:::However it only can handle width, not height, testing shows that trying to modify chart height doesn't end well. GalStar (talk) 03:20, 4 June 2025 (UTC)
::::{{replyto|GalStar}} You need something like this:
<syntaxhighlight lang="css">
.enwiki-chart-content {
	height: 20em;
}
wiki-chart {
	display: flex;
	justify-content: center;
	max-width: 100%;
	max-height: 100%;
}
wiki-chart > svg {
	height: auto;
	width: auto;
	max-width: 100%;
	max-height: 100%;
}
</syntaxhighlight>
:::::This enforces fluid dimensions on both content elements and sets maximums to keep them from overflowing their parent's size. —TheDJ (talk • contribs) 10:11, 4 June 2025 (UTC)
:::::I was planning on porting that from the German Wikipedia as well; perhaps an admin could include this in Common.css? GalStar (talk) 16:26, 4 June 2025 (UTC)
::::::Common.css will not be modified for this. Since you're making a template for it, you should use WP:TemplateStyles.
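As a sketch of the TemplateStyles approach suggested above (the subpage name here is an assumption), the CSS rules would live on a sanitized-CSS subpage such as Template:ChartDisplay/styles.css, and the template would pull them in with a templatestyles tag, so the styles load only on pages that actually transclude the template:

```wikitext
<!-- In the body of Template:ChartDisplay; src is resolved relative to
     the Template namespace, so this points at Template:ChartDisplay/styles.css -->
<templatestyles src="ChartDisplay/styles.css" />
```

This avoids touching MediaWiki:Common.css, which is loaded on every page for every reader.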
:::::::Thanks for the pointer; however, since the graph itself is not being modified, I think it's fine. GalStar (talk) 16:49, 4 June 2025 (UTC)
:Having the option to specify chart sizes is bug phab:T376845. Snævar (talk) 09:00, 4 June 2025 (UTC)
::Yeah, I asked on MediaWiki and found that out; it's a shame the extension is so underdeveloped. GalStar (talk) 16:25, 4 June 2025 (UTC)
= Transparency =
It seems that some effort has been made to hide how this works. I recently saw {{diff|Channel Tunnel|1293640580|1293602613|these edits}} on my watchlist (I'm about two days behind, due to real-life commitments), and if I hadn't also seen this thread, would have been very puzzled. Certainly, in a construct like {{#chart:Channel Tunnel Traffic.chart}} it's not at all obvious what this refers to; and if you edit the section and preview, a chart does appear but there's nothing to suggest where to go in order to examine the source data. Unlike an image, clicking the graph doesn't help. Contrast transcluded templates, where the preview provides a handy list. Contrast images, where you can click the image in order to get to the file description page, which may itself have a link to the original file page on Commons. So, having read the posts above, I know that it pulls information from c:Data:Channel Tunnel Traffic.chart; but that's only a tiny part of the story, since that page clearly does not have a list of points. Editing the page shows a line "source": "Channel_Tunnel_Traffic.tab" so there is also c:Data:Channel Tunnel Traffic.tab. What else am I missing? --Redrose64 🌹 (talk) 20:36, 4 June 2025 (UTC)
:Oh haha, that was me actually. I am trying to fix this by force adding a "source" field. Hopefully ChartDisplay template fixes some of this confusion. GalStar (talk) 20:41, 4 June 2025 (UTC)
:Oh and I'd like to add that some other edits of mine were accidentally reverted because they looked like vandalism. So it does seem like this confuses a lot of people. GalStar (talk) 20:44, 4 June 2025 (UTC)
:It's interesting that both c:Data:Channel Tunnel Traffic.chart and c:Data:Channel Tunnel Traffic.tab display a note at the bottom about "Usage on en.wikipedia.org: Channel Tunnel"; yet nothing on Channel Tunnel (nor in the page information, nor in the debug data below the wikitext editor) displays a note about those data pages being used. But the metadata is clearly being stored somewhere, so this should be an easy fix. I found T382806 about it on Phabricator, and it seems like it's already a work in progress. Matma Rex talk 21:12, 4 June 2025 (UTC)
:I believe it's the comma after the month. In preview, changing "11:00, 3 June, 2025 (UTC)" to "11:00, 3 June 2025 (UTC)" causes it to hyperlink the timestamp, which is a sign it is then detected as a comment. Skynxnex (talk) 15:05, 3 June 2025 (UTC)
::I think that's the issue as well. {{u|JeanetteMartin}}, in at least a few recent comments, your signature has used a non-standard date/time-stamp. Can you tell us more about what's going on? Many gadgets/tools/bots/scripts rely on uniform timestamps. Firefangledfeathers (talk / contribs) 15:57, 3 June 2025 (UTC)
:::JeanetteMartin made an odd double signature [https://en.wikipedia.org/w/index.php?title=Wikipedia:Teahouse&diff=prev&oldid=1293734364] where the second signature had a valid timestamp but she then deleted the second signature.[https://en.wikipedia.org/w/index.php?title=Wikipedia:Teahouse&diff=prev&oldid=1293734443] I guess the first signature was made manually. {{ping|JeanetteMartin}} If you want a customized signature then use the signature field at Special:Preferences with a checkmark at "Treat the above as wiki markup". When you use the new topic tool or reply tool, your post is automatically signed with your signature in preferences. In other situations, sign with ~~~~. PrimeHunter (talk) 19:51, 3 June 2025 (UTC)
==Time precision==
Hi everyone,
When using the Wd module to grab dates from Wikidata, how does one change the precision of the returned data? More precisely, I am interested in returning just the year from what is usually a down-to-the-day date. Any ideas? Julius Schwarz (talk) 14:35, 3 June 2025 (UTC)
:{{#time:Y|{{#invoke:Wd|...}}}} should work...? Izno (talk) 16:03, 3 June 2025 (UTC)
::Thanks for the quick reply @Izno. Unfortunately, I would be using this within a module and I apparently cannot simply add {{#time:Y| before one of the functions of the module... Julius Schwarz (talk) 07:11, 4 June 2025 (UTC)
:::@Julius Schwarz The Lua equivalent to {{#time}} is mw.getContentLanguage():formatDate(), e.g. mw.getContentLanguage():formatDate("Y", "25 January 2024") returns 2024. --Ahecht (TALK PAGE) 13:35, 4 June 2025 (UTC)
::::Maybe that's a neater solution than my tonumber(string.match( option. Can I just paste :formatDate() after a return X (like return X:formatDate())? Julius Schwarz (talk) 14:28, 4 June 2025 (UTC)
:::::X needs to be a language object. If X is your date, you could do local lang = mw.getContentLanguage() and then return lang:formatDate("Y", X). --Ahecht (TALK PAGE) 15:25, 4 June 2025 (UTC)
::FYI, it turns out that wrapping with string.match([long-form date], '%d%d%d%d') seems to work. Julius Schwarz (talk) 07:20, 4 June 2025 (UTC)
:::That will work only for years with four digits, I think? Izno (talk) 15:57, 4 June 2025 (UTC)
::::Indeed, but not an issue in this case. Julius Schwarz (talk) 20:39, 4 June 2025 (UTC)
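For reference, the two approaches discussed in this thread can be sketched together in Lua. This is only an illustration: the function names yearOf and yearOfByPattern are made up for the example, and the first function assumes a MediaWiki Scribunto environment (where the mw library exists), so it will not run in a standalone Lua interpreter.

```lua
-- Preferred approach inside a Scribunto module: let the content
-- language's date formatter parse the date and emit only the year.
local function yearOf(dateString)
	local lang = mw.getContentLanguage()
	return lang:formatDate("Y", dateString) -- e.g. "2024" as a string
end

-- Fallback pattern-matching approach from the thread: grab the first
-- run of exactly four digits. As noted above, this only works for
-- four-digit years, and it will misfire if another four-digit number
-- appears earlier in the string.
local function yearOfByPattern(dateString)
	return tonumber(string.match(dateString, "%d%d%d%d"))
end
```

The formatDate route is more robust because it actually parses the date rather than scanning for digits, but the pattern match is handy when you only have a plain string and know its shape.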
==XTools edit counts==
Why was this changed? I normally check my daily edit patrol counts by going to User contributions for Maile66, clicking on Edit Statistics at the bottom, and then it takes me where I can find the info. What happened? I can't even find my daily patrol counts. Everything got changed, and I can't find anything. Why? And does anyone know how I can quickly and easily find my Page Patrol counts? Thanks for your help. — Maile (talk) 01:19, 4 June 2025 (UTC)
:I never noticed the "Edit Statistics" link, but I personally use the "User" submenu added by Twinkle (see below) and then select Analysis -> Analysis - XTools. — BE243 (about | talk) 01:57, 4 June 2025 (UTC)
::Actually, the Edit Statistics link does show for me at the bottom of the page. — BE243 (about | talk) 01:58, 4 June 2025 (UTC)
:::Yes, the edit stats link does show for me at the bottom of the page. But clicking on that is supposed to take you to the full XTools page, where you can find the patrol stats. Looks like someone fixed this since I originally posted it. Everything is fine now. Thanks for addressing this. — Maile (talk) 02:03, 4 June 2025 (UTC)
==New [[Special:Tags|Tag]] Request==
Could I request the tag ooze (display as and target to: OOZE) for my new script? Similar tags exist for Twinkle, Huggle, RedWarn and Ultraviolet. Thank you :) Ed (talk) 21:00, 4 June 2025 (UTC)
==Toolforge down?==
I get an error message when I try to run [https://empty-categories.toolforge.org/ this operation]. This just happened about 3 weeks ago. I let Legoktm know but he hasn't been active this week. Any ideas? Is there a Phab ticket open? LizRead!Talk! 23:55, 4 June 2025 (UTC)
:Your title makes it sound like all of Toolforge is down. Is the problem just with this one tool? Which error message do you get? The page works for me. Just now I clicked the blue button, then it spent about 1-2 minutes displaying "Loading...", then it displayed 5 results. –Novem Linguae (talk) 01:11, 5 June 2025 (UTC)
::It looks like it is working now, thanks for the feedback. For what it's worth, this was the error message:
::*Webservice is unreachable
::*The tool responsible for the URL you have requested, https://empty-categories.toolforge.org/go, is not currently responding.
::*If you have reached this page from somewhere else...
::*This URI is managed by the empty-categories tool, maintained by Legoktm.
::*You may wish to notify the tool's maintainers (above) about the error.
::*If you maintain this tool
::*The web service for this tool is running but cannot be reached. Please check the error logs of your web service.
::I didn't know if all of Toolforge was down, it just wasn't working for me. But thanks again for offering a response. LizRead!Talk! 02:39, 5 June 2025 (UTC)