Commons:Village pump/Proposals/Archive/2024/02
This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Contents
- 1 Ban the output of generative AIs
- 2 no include categories for DR
- 3 Promoting steward elections
- 4 New protection group for autopatrollers
- 5 Revert policy change for "Overwriting existing files"
- 6 Require community consensus for new non-copyright restriction templates
- 7 Restrict closing contentious deletion discussions to uninvolved admins
- 8 Chinese and Japanese characters as disambiguation?
- 9 Tracking file usage on wikis run by Wikimedia movement affiliates
Ban the output of generative AIs
- The following discussion is archived. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- There is consensus against adopting these changes, as currently proposed. The Squirrel Conspiracy (talk) 04:20, 6 February 2024 (UTC)
Now we know that Artificial Intelligences are being trained on modern nonfree works. Please read this: Generative AI Has a Visual Plagiarism Problem > Experiments with Midjourney and DALL-E 3 show a copyright minefield, by Gary Marcus and Reid Southen — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 10:11, 9 January 2024 (UTC)
- Support At least if the output is generated by Midjourney, if not also DALL-E. Although the latter seems to be less susceptible to it, at the end of the day both were trained on nonfree works, so there's a risk of creating derivatives with either one. We could still allow images generated by models that were trained on freely licensed images, if or when there are any. But allowing them from a model that clearly disregards copyright, apparently even when someone uses a benign prompt, is just asking for trouble. Not to mention it's also antithetical to the project's goals. I don't think a full ban on anything generated by AI whatsoever, regardless of the model or type of output, would really be workable though. At the end of the day, things like image upscaling and colorization are probably not harmful enough to justify banning them. --Adamant1 (talk) 10:39, 9 January 2024 (UTC)
- Strong Support as proposer, obviously. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 10:53, 9 January 2024 (UTC)
- Strong oppose for a general ban on everything that an AI is involved in, as the title of this section might suggest. I doubt that "AI" denoising or sharpening can cause a copyright problem. AI colorization or AI upscaling yields mostly very poor results, but I cannot see the copyright problems there either. I don't mind excluding images created by generative AI that are just based on a text prompt, possibly with very few exceptions that are needed to illustrate pages about AI. However, has an actual copyright problem been identified with current AI-based uploads to Commons that is so serious or general that it requires a blanket ban on generative AI? I know, much of this might be out of scope anyway. --Robert Flogaus-Faust (talk) 11:32, 9 January 2024 (UTC)
- @Robert Flogaus-Faust: There have been several DRs lately involving clear derivatives, including Commons:Deletion requests/Files found with insource:" happy-looking Gandalf". One of the problems here is that people who are pro AI artwork will turn every DR having to do with it into an argument over whether AI models can generate COPYVIO to begin with, because of how many images they are trained on. It's also sort of impossible to know what is or isn't COPYVIO with AI art generators, because we don't have access to the original training sets. So take something like a seemingly benign painting of a 15th-century knight. We have zero way of knowing if it's an exact copy of prior artwork, a derivative of one made in the 15th century, or based on a modern painting that's still copyrighted, since there's no source or any other way to confirm anything. The fact that there are clear instances of AI art generators creating derivatives even when people don't ask for them puts the whole thing in doubt though. --Adamant1 (talk) 11:50, 9 January 2024 (UTC)
- What you call clear derivatives are images that look not at all like Gandalf; that word was used in the prompt, alongside other changes, to get the AI to create not evil-looking Asian people with samurai-style hats but old men with wizard hats. That word is often used in high-quality fan art centered on the concept of the kind of wizard I wanted, so I used it as a technique to make it produce images that more closely resemble contemporary ideas of what wizards are. And no, that they can't generate COPYVIO to begin with is not what I or anybody else I saw ever argued, which should be even clearer in the explanation below. They can, and such images should be deleted and have been deleted. Prototyperspective (talk) 12:50, 9 January 2024 (UTC)
- Strong oppose That article is about what one could call 'hacking' generative AIs to reproduce parts of works they trained on. Such malicious images are difficult to create, rare, and should simply be deleted.
- Moreover, training on nonfree works is allowed, as much as you are allowed to view copyrighted images on artstation (or e.g. public exhibitions) and "learn" from them, such as getting inspiration and ideas or understanding specific art styles. This is similar to human visual experience, where anything you create is based on your prior experience, which includes lots of copyrighted works. Various authoritative entities have clarified that AI works are not copyrighted. Like Photoshop or Lightroom, it's a new tool people can use in many ways and with very different results. It's a great boon to the public domain and not "antithetical to the project's goals" but matches them, now that it's finally starting to become possible to create good-quality images of nearly everything you can imagine without very high technical artistic skills. Stable Diffusion is open source and has been trained on billions of images to understand concepts in prompts to it. Prototyperspective (talk) 11:41, 9 January 2024 (UTC)
- training on nonfree works is allowed Companies can train models on nonfree works all they want. That doesn't mean we should allow images that are highly likely to be based on copyrighted works though. I'm not going to repeat myself, but see my reply to Robert Flogaus-Faust for why exactly I think it's such an issue. The gist of it though is that AI works are copyrighted when they are based on (or are exact copies of) copyrighted works, and we just have zero way of knowing when that's the case because we don't have access to the images the models were trained on. So it's just as likely that a painting of a historical figure would be based on newer copyrighted works than on older freely licensed ones. If anything, there's more chance, since there are fewer images of historical figures the further back you go. There's just no way of us knowing or checking regardless though. At least with normal artwork we know who created it, what it was inspired by, and where it came from. None of that is true of AI artwork. An image has no business being on Commons if there's no source or at least a description of what it's based on. Period. --Adamant1 (talk) 11:58, 9 January 2024 (UTC)
- They are not based on individual images, with few exceptions that the link in the original post is about and that I addressed in my explanations. You also learn concepts such as 'what is a rhinoceros' from your visual experience. Do you think, if you never saw a real rhinoceros and all you ever saw were copyrighted films of one, that an image you created based on the knowledge gained through those films would be a copyright violation? I don't need to clarify that they aren't, since multiple entities have done so. As said, cases where it maliciously, usually deliberately, replicates some image should be deleted, and they are rare. Prototyperspective (talk) 12:22, 9 January 2024 (UTC)
- No offense, but your comparison of AI generators to humans and how they learn or create things is just a ridiculously bad-faith, dishonest way to frame the technology. It's also not a valid counter to anything I've said. We still require a source when someone uploads artwork created by a human, and neither a prompt nor which AI generator the image was created by qualifies as one. Period. --Adamant1 (talk) 12:40, 9 January 2024 (UTC)
- No, we don't list the visual experiences and inspirations and so on for artworks made entirely manually by humans. You seem to have bad faith against my explanations, where "ridiculously bad faithed" doesn't even make sense. Just calling it "not a valid counter" isn't a good point. Prototyperspective (talk) 12:46, 9 January 2024 (UTC)
- @Prototyperspective: What have Midjourney and Dall-E been trained on, hmmm? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:03, 9 January 2024 (UTC)
- Also billions of images. Since you didn't address what I wrote about it I'll just quote it to avoid walls of text creating circular repetitions: training on nonfree works is allowed as much as you are allowed to view copyrighted images on artstation (or e.g. public exhibitions [public television etc etc]) and "learn" from them, such as getting inspiration and ideas or understanding specific art-styles. This is similar to human visual experience where anything you create is based on your prior experience which includes lots of copyrighted works. Various authoritative entities have clarified that… Prototyperspective (talk) 12:15, 9 January 2024 (UTC)
- The difference is that a normal user will be banned if they repetitiously create and upload derivative works. Yet, apparently, if an AI generator has a history of creating COPYVIO that's perfectly fine "because technology." It's really just glorified meat puppeting though and your only response seems to be acting like it's not an issue when there's a plethora of evidence to the contrary. --Adamant1 (talk) 12:22, 9 January 2024 (UTC)
- These are not derivative works, and text2image generators, which similar to humans learned concepts through visual learning, do not produce copyright violations by default. You want to ban a novel art tool "because technology", and I explained why that's unreasonable and why nothing backs your unfounded conclusions, while subject-level authoritative entities have clarified these are not copyvios. It's glorified avoidance of new technical capacities for no good reason. Prototyperspective (talk) 12:26, 9 January 2024 (UTC)
- Which images aren't derivatives? The ones in the article that Jeff linked to clearly are, and no one even asked for them in that case. So you can stick your fingers in your ears about it, but AI generators clearly produce copyrighted works. And no, I don't want to "ban a novel art tool because technology." I've said multiple times that we should allow AI generators that are trained on freely licensed images. So I'd appreciate it if you didn't misconstrue my position. You're the only one taking an extreme, all-or-nothing position on this. --Adamant1 (talk) 12:30, 9 January 2024 (UTC)
- we should allow for AI generators that are trained on freely licensed images Such generators, in the sense of being useful, are impossible, and it will remain like that for a few decades if not much longer. Which images aren't derivatives? Images made via Stable Diffusion, Midjourney & Co, except for images like those in the links, which I addressed, not ignored, with "such malicious images are difficult to create, rare, and should simply be deleted". Prototyperspective (talk) 12:44, 9 January 2024 (UTC)
- I beg to differ. There's also iStock's AI generator. And you're the one saying I don't understand or have experience with the technology. Regardless, both create perfectly good-quality images that I assume would be safe to upload, and I'm sure there are others. So it would be perfectly reasonable to only allow artwork from models that were trained on freely licensed images, given where the technology is at right now. --Adamant1 (talk) 12:52, 9 January 2024 (UTC)
- Those are not freely licensed.
- Not sure why you advocate for these commercial proprietary AI models. Stock images are usually not accurate and/or creative depictions of things either and details about NVIDIA Picasso remain unknown. Prototyperspective (talk) 13:00, 9 January 2024 (UTC)
- I don't care if the underlying technology is freely licensed. That's not the issue. Whether people can use the images without having to worry about violating someone else's copyright is, and per Getty Images' website, images created with their software are "commercially‑safe—no intellectual property or name and likeness concerns, no training data concerns." Which is what's important here, not whether the underlying software is open source or whatever. --Adamant1 (talk) 13:06, 9 January 2024 (UTC)
- The images trained on are not freely licensed. I do see how you don't care about open source but that isn't what I meant. --Prototyperspective (talk) 13:09, 9 January 2024 (UTC)
- That's not the point. You're just being obtuse. --Adamant1 (talk) 13:13, 9 January 2024 (UTC)
- Oppose largely on the basis of terminology. "AI" is a marketing buzzword and not well-enough defined to make policy around. As Robert Flogaus-Faust mentions, there are plenty of things that are called "AI" that are fine for Commons, at least from a copyright perspective. --bjh21 (talk) 12:01, 9 January 2024 (UTC)
- @Bjh21 I mean generative AIs. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 12:07, 9 January 2024 (UTC)
- @Jeff G.: I think even that is probably too broad. For instance it would cover GPT-4 used for machine translation. --bjh21 (talk) 12:40, 9 January 2024 (UTC)
- @Bjh21: Translation starts with a source work of the same type as the output. By contrast, generative AIs (typically that are today creating medium-resolution images) don't start with a source image; or they start with many source images, some of which are non-free. They also are not notable artists. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 16:42, 10 January 2024 (UTC)
- @Jeff G.: I don't really understand this field, but en:Generative artificial intelligence defines generative AI as "artificial intelligence capable of generating text, images, or other media, using generative models," and mentions GPT-4 as an example (it even has the word in its name). en:Machine translation notes that "one can also directly prompt generative large language models like GPT to translate a text." This leads me to some concern that banning all output of generative AIs might exclude large classes of use that aren't problematic. But maybe machine translation by generative AI is problematic; I don't know. --bjh21 (talk) 17:25, 11 January 2024 (UTC)
- Comment AI-generated files need to be uploaded as PD, as there is no sweat of the brow involved and all such services are trained on materials found on the internet. Either that, or all AI-generated files are not allowed because the underlying source material isn't declared; we can only accept freely sourced materials where those sources are provided. As for some minor editing tools to adjust colours, sharpen, or remove noise, those types of adjustments have always been acceptable. Gnangarra 12:12, 9 January 2024 (UTC)
- The underlying source material is billions of images for txt2img; do you want to have a sorted list of thousand–billions of images listed beneath each file? E.g. Stable Diffusion's initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B's full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512). Prototyperspective (talk) 12:29, 9 January 2024 (UTC)
- They are only numbers; they generate only based on a smaller subset, as a picture of a cow has no influence on a picture of a flower. Clearly our images must be honest products of photographers, otherwise they serve no encyclopaedic/educational purpose about the subject. Diagrams have always covered the gap photographs can't convey. Gnangarra 12:48, 9 January 2024 (UTC)
- Just because you can't think of other potential use-cases doesn't mean there aren't any. For example, illustrating art styles. There are thousands and thousands of photos for whatever photographable thing you can think of, but other subjects of human culture don't seem to be worthy of benefiting from novel technology at all. I put thousand–billions there instead of billions because the images have different degrees of relevance to the generated image. If you generated merely an image of a cow, which wouldn't be useful, then obviously the countless labelled photographs of cows would be most relevant to it. Prototyperspective (talk) 12:54, 9 January 2024 (UTC)
- This isn't about potential uses I can think of; this is about the movement's honesty and reliability. The end user must be able to trust that what is available on every project is from a reliable source. There are many endangered species, past wars, and deceased persons of which we don't have photographs. When there is no photograph, we should not dishonestly present such photographs as existing. Gnangarra 13:03, 9 January 2024 (UTC)
- Agree. That's not a case for banning AI images. Btw here is an AI image depicting the extinct dodo. Prototyperspective (talk) 13:06, 9 January 2024 (UTC)
- That image is false anyway, as it doesn't show the bird's colourings, nor depict it in its natural environment with plant species from its habitat. My point is that when we have reliable illustrations already, including colour details, we don't need these images anyway; if we do use them, they mislead the viewer and make a mockery of everything we strive to do in being a reliable, trustworthy source. Gnangarra 13:15, 9 January 2024 (UTC)
- Inaccuracies should be pointed out and also occur for manually made images. Moreover, the images can be improved via new versions and the AI software can also improve over time. There are many files in Category:Inaccurate paleoart. Lastly, for many cases we don't have such images available, images being on WMC doesn't mean they need to be used, WMC is a free useful media repository while Wikipedia is the encyclopedia, and all of what you said isn't a case for banning but for properly describing and/or deleting various files. Prototyperspective (talk) 13:33, 9 January 2024 (UTC)
- If someone wants AI-generated media then they will go to the AI service of their choosing and create it as and when they need it; logically, that allows them to grab the most up-to-date reckoning. Gnangarra 13:49, 9 January 2024 (UTC)
- Doesn't make sense. I don't think you have much experience with these tools beyond generating very simple images overly broadly. You wouldn't also say "ah people just make a new diagram about xyz when they need it so we don't need to host it and the same goes for artworks of e.g. cubism". There clearly is an anti-AI-tools bias with lots of unfounded dismissals. Prototyperspective (talk) 14:17, 9 January 2024 (UTC)
- I have yet to see or hear a legitimate use case for most, if not all, AI images despite all your advocating about it, other than the Wikibook specifically having to do with AI. That's not to say there isn't one, but arguments like "AI artwork is educational because AI artwork is educational" are just tautological. All you're doing is talking in circles while claiming that other people who disagree with you are biased once in a while. Same goes for the repeated insistence on making this about other mediums of artwork. Apparently you're incapable of talking about AI artwork without deflecting or trying to change the subject for some reason, even though it's supposedly in scope and there's no reason to ban it. People here who think it should be moderated are open to alternatives, but you're clearly not making a case for them. Let alone have you even proposed any. All you've done is get in the way of there being any changes to how we handle AI artwork whatsoever. Otherwise propose something instead of just getting in the way of everyone else who's trying to deal with the issue. --Adamant1 (talk) 14:45, 9 January 2024 (UTC)
- I explained specific use-cases, and the wikibook is about explaining use-cases (see "applications" in the title). Probably my last reply to you here, but I'm not trying to change the subject, as you accuse me of doing. As should be clear to people reading the discussion, I'm always addressing specific points in a prior comment. Interesting that you dismiss all my points in comments like this one, where you allege I'm doing nothing but calling people biased or using circular reasoning. Prototyperspective (talk) 14:53, 9 January 2024 (UTC)
- You really haven't. I'm pretty sure I've said it already, but they all boil down to vague handwaving about use cases that either don't exist to begin with or that no one is or will be using the images for. Like your claim that an image was in scope because you could use it on your personal blog, which you don't even have to begin with and aren't using the image for regardless. Same goes for the Jeff Koons knock-off image. You claimed it could be used in a Wikipedia article, but no one is using it for that, and it would probably be removed if anyone added it to an article anyway. The "uses" have to at least be realistic and ones that people will actually use the images for. You can't just invent a random, unrealistic reason to keep an image and then act like everyone else is just being biased or whatever when they tell you it's not legitimate. --Adamant1 (talk) 15:01, 9 January 2024 (UTC)
- I explained specific use-cases and the wikibook is about explaining use-cases (see "applications" in the title). Probably last reply to you here but I'm not trying to change the subject for some reason like you accuse me to. As should be clear to people reading the discussion I'm always addressing specific points in a prior comment. Interesting that you dismiss all my points in comments like this where you alleging I'm doing nothing but calling people biased or circular reasoning. Prototyperspective (talk) 14:53, 9 January 2024 (UTC)
- I have yet to see or hear a legitimate use case for most, if not all, AI images despite all your capitulating about it other then the Wikibook specifically having to do with AI. That's not to say there isn't one, but arguments like "AI artwork is educational because AI artwork is educational" are just tautological. All your doing is talking in circles while claiming other people who disagree with you are bias once in a while. Same goes repeated instance to make this about other mediums of artwork. Apparently your incapable of talking about AI artwork without deflecting or trying to change the subject for some reason. Even though it's supposedly in-scope and there's no reason to ban it. I don't think people here who think it should be moderated aren't open to alternatives, but your clearly not making a case for them. Let alone have you even proposed any. All you've done is get in the way of there being any changes to how we handle AI artwork what-so-ever. Otherwise propose something instead of just getting in the way of everyone else who's trying to deal with the issue. --Adamant1 (talk) 14:45, 9 January 2024 (UTC)
- Doesn't make sense. I don't think you have much experience with these tools beyond generating very simple images overly broadly. You wouldn't also say "ah people just make a new diagram about xyz when they need it so we don't need to host it and the same goes for artworks of e.g. cubism". There clearly is an anti-AI-tools bias with lots of unfounded dismissals. Prototyperspective (talk) 14:17, 9 January 2024 (UTC)
- If someone wants AI generated media then they will go to the AI service of their choosing and create as and when they need, logically it allows them to grab the most upto date reconning. Gnangarra 13:49, 9 January 2024 (UTC)
- Inaccuracies should be pointed out and also occur for manually made images. Moreover, the images can be improved via new versions and the AI software can also improve over time. There are many files in Category:Inaccurate paleoart. Lastly, for many cases we don't have such images available, images being on WMC doesn't mean they need to be used, WMC is a free useful media repository while Wikipedia is the encyclopedia, and all of what you said isn't a case for banning but for properly describing and/or deleting various files. Prototyperspective (talk) 13:33, 9 January 2024 (UTC)
- that image is false anyway, as it doesn't show the bird's colourings, nor depict it in its natural environment with plant species from its habitat. My point is that when we already have reliable illustrations including colour details, we don't need these images anyway; if we do, then these images mislead the viewer and make a mockery of everything we strive to do in being a reliable, trustworthy source. Gnangarra 13:15, 9 January 2024 (UTC)
- Agree. That's not a case for banning AI images. Btw here is an AI image depicting the extinct dodo. Prototyperspective (talk) 13:06, 9 January 2024 (UTC)
- This isn't about potential uses I can think of; this is about the movement's honesty and reliability. The end user must be able to trust that what is available on every project is from a reliable source. There are many endangered species, past wars, and deceased persons we don't have photographs of. When there is no photograph, we should not dishonestly present such photographs as existing. Gnangarra 13:03, 9 January 2024 (UTC)
- Just because you can't think of other potential use-cases doesn't mean there aren't any. For example, illustrating art styles. There are thousands and thousands of photos for whatever photographable thing you can think of, yet other subjects of human culture don't seem to be considered worthy of benefiting from novel technology at all. I put thousand–billions there instead of billions because the images have different degrees of relevance to the generated image. If you generated merely an image of a cow, which wouldn't be useful, then obviously the countless labelled photographs of cows would be most relevant to it. Prototyperspective (talk) 12:54, 9 January 2024 (UTC)
- they are only numbers; they generate based only on a smaller subset, as a picture of a cow has little influence on a picture of a flower. Clearly our images must be honest products of photographers, otherwise they serve no encyclopaedic/educational purpose about the subject. Diagrams have always covered the gap photographs can't convey. Gnangarra 12:48, 9 January 2024 (UTC)
- @Gnangarra only A.I. art in countries that follow U.S. jurisprudence may be allowed to be hosted here. But not UK A.I. art: see this. JWilz12345 (Talk|Contrib's.) 12:32, 9 January 2024 (UTC)
- We decide Commons policy; the options are none, only if all sources are acknowledged, and only PD licenses. None of these options overrides any US laws. In the same way we apply the precautionary principle, a person who generates and publishes on Commons, which is in the US, is subject solely to US laws. Gnangarra 12:45, 9 January 2024 (UTC)
- @Gnangarra that may be true, until a British A.I. artist files a letter of complaint to Wikimedia. Files should also be free in the source country and not just the U.S.
- English Wikipedia can host unfree British A.I. art though as enwiki only follows U.S. laws. JWilz12345 (Talk|Contrib's.) 14:46, 9 January 2024 (UTC)
- For one, the "source country" (in the sense of the Berne Convention) of any work first published on the internet and accessible from any country in the world may be considered to be any country. Various US courts have found that simultaneous publication occurs when a work is published online, and thus that works first published online are US works for the purpose of copyright law.
- But more generally, any instance in which Commons goes above and beyond US law is up to the community. You could argue that Commons should treat this like PD-Art. D. Benjamin Miller (talk) 06:13, 3 February 2024 (UTC)
- The underlying source material is billions of images for txt2img; do you want to have a sorted list of thousand–billions of images listed beneath each file? e.g. Stable Diffusion’s initial training was on low-resolution 256×256 images from LAION-2B-EN, a set of 2.3 billion English-captioned images from LAION-5B‘s full collection of 5.85 billion image-text pairs, as well as LAION-High-Resolution, another subset of LAION-5B with 170 million images greater than 1024×1024 resolution (downsampled to 512×512). Prototyperspective (talk) 12:29, 9 January 2024 (UTC)
- Oppose per the precedent that we allow a human artist to view, say, 5-10 copyrighted images of a person, and then draw a portrait of that person based on the likeness they have gleaned from those copyrighted images. A generative AI has seen far more images than that, and any copyrightable portion is likely to be heavily diluted, more so than the case of the human artist. Of course, individual generations can be nominated for deletion if a close match to a specific copyrighted image can be identified or if it is clearly a derivative work of a copyrighted subject. As for the objection "what if there's some image it's copying that we don't know about", the same objection applies for human artists: "what if the artist is not honest about their sources?" -- King of ♥ ♦ ♣ ♠ 17:39, 9 January 2024 (UTC)
- the same objection applies for human artists It could just be copium, but I feel like there's a difference of scale there that makes derivatives created by humans easier to suss out than it is for AI-generated images, since at the end of the day people are working with extremely small data sets that usually relate to their specific area of interest. For instance, if we are talking about someone who mainly speaks Mandarin Chinese and has a history of uploading images from China, it's a pretty good bet the image in question won't be a derivative of a 1940s American cartoon character. Or we can at least ask another user who speaks the language and/or is from China if they have seen the character before. We can't do that with AI artwork, though, because the dataset is essentially every single image created in the last 500 years. So sure, the same problem exists regardless, but it's the difference between looking through your junk drawer to find a key versus trying to find a grain of sand in the ocean. --Adamant1 (talk) 18:27, 9 January 2024 (UTC)
- Your argument essentially argues against itself. As you say, AI learning works from pretty much the sum total of human visual arts, and doesn’t even use any particular one of those at a time. It’s highly unlikely you’ll just randomly get a copyrighted character if you don’t ask for one. Dronebogus (talk) 01:49, 12 January 2024 (UTC)
- It’s highly unlikely you’ll just randomly get a copyrighted character if you don’t ask for one. @Dronebogus: I've used Dall-E to create portraits of women, and every so often it will generate one of Scarlett Johansson even though I don't explicitly ask for images of her. So I think it either has an algorithm that favors creating images based on popular characters or people, or it just happens to have been trained on images of female celebrities from the past 20 years more than anything else. So likenesses of Scarlett Johansson just get rendered more often because of how the weighting in the training model works or something. Either way, if I can generate a couple thousand portraits where a non-trivial number of them look like living movie stars, then I don't know why the same wouldn't occur for modern movie or cartoon characters. I think it naturally follows that would be the case anyway, because there are inherently more images of the Simpsons out there that it was trained on than, say, a cartoon like Mutt and Jeff. Same goes for it rendering images of women that look like Scarlett Johansson versus Carole Lombard, or for that matter just a "random" woman. --Adamant1 (talk) 10:24, 12 January 2024 (UTC)
- If it’s super obvious then you filter it out as a copyvio. This isn’t difficult. Dronebogus (talk) 12:25, 12 January 2024 (UTC)
- Oppose much too broad. This would mean we couldn't even have examples of AI-generated artwork. I suggest reading the section beginning "That said, there are good reasons to host certain classes of AI images on Commons" at Commons talk:AI-generated media. - Jmabel ! talk 19:38, 9 January 2024 (UTC)
- Oppose (a) not all current and future models are trained with nonfree works; (b) not all models trained with nonfree works produce work that's legally considered derivative; (c) commons should follow, not lead when it comes to making decisions based on the law. Sometimes we understand the law and enact a policy that's more conservative, but in this case we'd be enacting a policy that's miles beyond any legal lines set thus far AFAIK. — Rhododendrites talk | 22:10, 9 January 2024 (UTC)
- Oppose, the exclusion of AI-generated works should be done on a case-by-case basis, not as a blanket exclusion. Good illustrative educational works that are obviously in the public domain shouldn't be grouped together with AI-generated images of Sailor Moon, Optimus Prime, and Magneto. We should judge AI-generated works on a case-by-case basis. This is still largely unregulated and current United States legislation sees most AI-generated works as public domain, let's not be stricter than the law. Yes, we should be as cautious as possible, but that caution should not be applied this broad. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 22:32, 9 January 2024 (UTC)
- I don't understand the proposition here. Training AIs on the content here is one issue, and I can see an argument based on 'ban non-licence observant AI training from our licensed content', difficult to implement as that might be.
- However the solution here 'ban AI uploads' seems unrelated to that.
- I would not (as yet) ban AI uploads. Maybe I could be convinced otherwise. But I do think that we should immediately (or ASAP) require all AI output to be clearly tagged as such, and maybe its source identified. Whatever we decide in the future is going to be made much easier by doing that early on. Andy Dingley (talk) 17:15, 10 January 2024 (UTC)
- Strong oppose I don’t even know where to begin with this. I think the fact that it’s based on a link to a single random article— not a strong legal basis, not extensive reliable sources, not even an argument from the proposer —is a good starting point. That and the fact that it’s based on an assumption that AI will always recognizably plagiarize a certain copyrighted work or works, rather than just pull from 90% of the Internet and overlap a billion similar works into a nonspecific whole. We’re putting the cart way, way before the horse here. Dronebogus (talk) 01:45, 12 January 2024 (UTC)
- Support yep. Where should future AI get reliable stuff to learn from, if Commons is full of AI work itself ???? Alexpl (talk) 10:47, 12 January 2024 (UTC)
- This is a reason for why I've been making sure that all images made using AI tools are in some subcategory of Category:AI-generated images. You can then easily exclude and maintain them.
- It's not "full of it" if there are a few images among 100 million files of even the most mundane things photographed thousands of times. Outright banning is a knee-jerk, simplistic reaction without much thought given to it, like banning images made or modified using Lightroom or Photoshop in 2003. I didn't know people here are so anti-(novel)technology and pro indiscriminate exclusion of tools/images. Prototyperspective (talk) 11:48, 12 January 2024 (UTC)
- Frame it a few years in the future where AI image generators are commonplace. Realistically, how many AI-generated images being put in normal categories at that point would it take for it to become unmanageable and for the project to lose all credibility as a source of accurate educational material? Because it just doesn't scale past a couple of enthusiasts who are willing to manage the images as a personal pet project. The same can't be said for photographs that people made minor edits to in Photoshop or whatever. The fact is that they just don't pose the same problems, and the project's reputation will never be damaged (or its usefulness destroyed) by people touching up old photographs in Lightroom like it could be (and probably will be) by allowing an infinite number of fake AI-generated images of historical figures or whatever. --Adamant1 (talk) 12:48, 12 January 2024 (UTC)
- They're already commonplace. That's just hypothetical speculation, and it still doesn't mean there aren't other, better ways to deal with that. Wikimedia Commons is a repo for freely usable media files, and there are lots of illustrations and artworks in it.
- For example, simply don't add them to these categories, or only to AI-specific subcategories. I don't see how these images could be considered "accurate educational material", especially in the categories they are showing up under, but those and many other images don't get outright banned or deleted (that they don't may be a good thing, and there is a certain policy that often gets cited which I get the impression people assume only refers to subjects like nudity, where some removals from a site are by far not as detrimental to society and free education as general-purpose tools and more socially-relevant subjects).
- The credibility is damaged by outright banning a useful general-purpose tool, as well as by creating an unwelcoming environment for AI developers and potential media uploaders, along with undermining its reputation for being at the forefront of free speech and the creative commons – not indiscriminately censoring/excluding/however-you-call-it free media – and at the forefront of the public domain, not working against it and marginalizing new forms of art/creative methodologies/technology. There is also the potential for an infinite number of photographs of grass, trees, or tables, but still we don't ban those; in fact I think there are few if any types of legal, potentially useful media WMC outright bans. Prototyperspective (talk) 13:53, 12 January 2024 (UTC)
- That is not the job of Commons. We have nothing to win here, and you'll unfortunately be proven wrong in short time. No need to further elaborate on the "state of the art" etc. here. Alexpl (talk) 16:36, 16 January 2024 (UTC)
- We really need a “geekography test”— if pictures of naked women objectified as computer software is somehow in a million years “educational”, what isn’t? Dronebogus (talk) 14:06, 12 January 2024 (UTC)
- I don't disagree with either one of you about the nude photos, but you're comparing apples and oranges, because I said "accurate educational material", not "educational material." I'm sure you both get the difference. The problem with AI artwork is that it's inherently inaccurate due to the nature of the thing. So while it's "educational" in the sense of educating people about where the technology is at, it's not educational in regards to the subjects that the images purport to be about. That doesn't go for nude women, though, obviously. No one is going to mistake an image of a nude woman with a mushroom from Mario on it for a 15th-century historical figure, let alone put it in a category for one. Although I agree the former should also be dealt with, and it could be at any point. But now it's way less likely the issues presented by AI artwork will be resolved, because you've poisoned the well by going off about nude photos. --Adamant1 (talk) 15:17, 14 January 2024 (UTC)
- Oppose It has been common knowledge that AI generators are trained on copyrighted works for years. Pretending it's some kind of "Gotcha" moment is quite frankly ridiculous--Trade (talk) 15:40, 15 January 2024 (UTC)
Ban images generated with MidJourney
Counter-proposal, since the original doesn't seem to be going anywhere, but at least IMO there are still unique issues with images created by MidJourney that deserve scrutiny outside of the wider question of whether to allow AI artwork in general.
Anyway, per Jeff G, MidJourney has been shown to generate derivatives regardless of the prompt or whether users asked for them. The creators of the software have also gone out of their way to intentionally train the model on copyrighted material, regardless of whether it leads to images that violate copyright. This leads to two issues:
1. There's a less than trivial chance that whatever images are generated by MidJourney will be copyright violations, and there's no easy way to know which are or aren't due to the nature of the thing, let alone is it something that can be easily policed at any kind of scale, especially without any kind of guideline in place so the images can be speedy deleted or otherwise fast-tracked to deletion. This issue will also only get worse and harder to deal with if MidJourney is ever found liable in court for violating copyright. It's much harder to deal with potential copyright violations after the fact.
2. The way MidJourney is maintained, in regards to the utter lack of respect for other people's intellectual property, clearly goes against the goals of the project and the wider movement.
Although admittedly both can be said of other AI generators, they clearly aren't as brazen or problematic in other cases as they are with MidJourney. So I think it warrants a separate solution. Also, in case anyone is going to claim we don't ban software: yes we do, MP3s and MP4s being the ones that come to mind, but I'm sure there are others. And sure, it's for different reasons, but this still wouldn't be unique regardless.
Also, an exception to the proposal will be made in cases where the image or images are being used to illustrate MidJourney itself, although with the caveat that it shouldn't be used in a bad-faith way to game the system.
--Adamant1 (talk) 16:23, 15 January 2024 (UTC)
- Still Oppose, because a) it hasn’t been found guilty of copyright violation, b) we still need to illustrate MidJourney itself, c) you still need to prove the number of potential copyright violations goes beyond “non-trivial” into a plurality or majority. A “non-trivial” number of human uploads turn out to be copyvios, but we don’t ban humans uploading because most of them aren’t. Dronebogus (talk) 18:47, 15 January 2024 (UTC)
- @Dronebogus: I doubt it would make a difference, but I'm more than willing to modify the proposal to have an exception for images that illustrate MidJourney itself if you want me to. Really, I assumed it would be a given. Apparently not, though. --Adamant1 (talk) 19:09, 15 January 2024 (UTC)
- “Ban x” usually doesn’t imply exceptions Dronebogus (talk) 19:10, 15 January 2024 (UTC)
- I would say it does if the ban is for "reason X" and that reason wouldn't apply to the exception. We'll have to agree to disagree though. Regardless, I added it to the proposal so it's explicit. --Adamant1 (talk) 19:16, 15 January 2024 (UTC)
- Oppose For the same reasons as before. Will this ever stop and aren't indiscriminate DRs against useful AI images that are often the only ones available for multiple notable subjects enough?
- It's a bad idea, and a precedent not in line with the prior advocacy for free speech, to ban image-creation tools; this applies to Photoshop as much as to Midjourney. There is a less than trivial chance photographs or paintings are derivative works, movie stills, or similar – do we ban them all now too? It wouldn't be harder to deal with if it wasn't banned, and despite your speculations, Midjourney won't be held liable for generally violating copyright in regards to its images, which would go against all that has been said and decided previously. Machines are allowed to learn from publicly visible media as much as humans are allowed to learn from them; these tools are a great boon to the public domain and are general-purpose tools that are and will be used for pretty much everything, which is what WMC would ban while considering itself some kind of pro-public-domain platform.
- MP3s are not software but media formats. If more is done, it shouldn't be a ban. The problems you think are exclusive to AI tools, and which so far have not really manifested on WMC, are much broader and concern all kinds of images, where things like TinEye bots or reports for categories most likely to receive derivative works would be useful. Prototyperspective (talk) 22:13, 15 January 2024 (UTC)
- despite your speculations, Midjourney won't be liable for generally violating copyright in regards to its images Not that I think you care, since you can't seem to go one discussion related to this without claiming I'm making things up or don't know what I'm talking about, but I didn't just come up with that out of thin air. Legal experts seem to agree that MidJourney will probably be held liable for violating copyright in at least one of the many legal cases they are currently facing and/or will likely face in the future. Of course we will have to see if they are, but we have something called the "precautionary principle" for a reason. All we need is reasonable doubt as to the copyright status of something, and I think that's been more than met when it comes to artwork created with MidJourney. We also defer to what legal experts have to say about a particular topic when deciding guidelines. Whatever helps you cope, though. At least I'm proposing something that isn't just banning AI artwork outright, which was supposedly your whole problem with this to begin with. --Adamant1 (talk) 22:55, 15 January 2024 (UTC)
- One, you need to stop being so condescending towards Prototyperspective. Two, even if MidJourney are found liable for copyright infringement, there’s no need to ban their output right now. Or even at all. They’ll probably work to remedy this rather than throw their hands up and say “guess we’re done here, sorry folks”. Then only images up to that point would need to be deleted. Dronebogus (talk) 17:51, 16 January 2024 (UTC)
- First of all, Prototyperspective has a long, well-established history of misrepresenting my position and treating me like I'm making things up or don't know about the subject. So if anyone is being condescending, they are. Secondly, it doesn't seem like MidJourney wants, or has the ability, to remedy things on their end, since they intentionally trained the model on a large amount of copyrighted artwork and MidJourney creates derivatives regardless of the prompt. There probably isn't really a way to "remedy" that outside of re-training it or completely starting over, neither of which I think they are going to do. They can and have disabled certain keywords that led to it generating copyrighted images, but it's not like we can realistically just delete images on the fly up to that point every time they patch or tweak something. --Adamant1 (talk) 18:18, 16 January 2024 (UTC)
- I wasn’t even suggesting that; if there was a major attempt to remove copyrighted material, then we can delete everything up to that point, not “every time they update we delete everything”. But I understand you will absolutely never budge on this or anything else related to AI; if you’ve no intention of reconsidering, ever, then please stop responding in order to argue for the sake of arguing. Dronebogus (talk) 01:54, 18 January 2024 (UTC)
- @Dronebogus: I think I budged when I proposed this as an alternative to a complete ban. I've also said a couple of times now that I support AI-generated artwork that is used on other projects and/or created with models that were trained on freely licensed images, of which there are currently several. It seems like both you and Prototyperspective have a real problem with listening, though, since both of you seem hell-bent on treating me like I'm some kind of hard-line hater of AI when I'm not. You're the ones who aren't willing to budge. Otherwise you would have supported this, or at least proposed something else instead of just making it about me. I have zero problem with setting some kind of reasonable standard for what type of artwork to include and what not to. You won't do that, though. --Adamant1 (talk) 02:08, 18 January 2024 (UTC)
- Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 16:22, 16 January 2024 (UTC)
- The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made in a new section.
no include categories for DR
- The following discussion is archived. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- krd's bot will be able to add noinclude tags to categories in DRs going forward. The Squirrel Conspiracy (talk) 17:16, 7 February 2024 (UTC)
Is there a way to add categories with the noinclude for DRs with HotCat? I have been helping out in adding categories to DRs with HotCat, but no such category appears there. Maybe there is a hidden category for it? If not, is there another solution? Paradise Chronicle (talk) 22:32, 30 December 2023 (UTC)
- I believe a solution to this issue was requested before, in 2017, but there was no answer. Paradise Chronicle (talk) 13:28, 31 December 2023 (UTC)
- @Paradise Chronicle yes, indeed no responses before archival. JWilz12345 (Talk|Contrib's.) 11:25, 4 January 2024 (UTC)
- Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 11:43, 4 January 2024 (UTC)
- Support Quite sensible. --Yann (talk) 11:51, 4 January 2024 (UTC)
- Support I'm not exactly sure what's being supported, but whatever. It sounds like a good idea regardless. --Adamant1 (talk) 12:01, 4 January 2024 (UTC)
- Strong support, so that I do not need to resort to two tedious things: copy a certain
<noinclude>XXXXX FOP cases/yyyyy</noinclude>
and paste it to DR pages while having the JavaScript of my mobile browser turned off (to avoid any issues in text formatting, as the Wiki text editor seems to treat a few types of copied texts as formatted text and not plaintext). Or, in launching deletion requests, being forced to select "edit source" and type the same category wiki-code. JWilz12345 (Talk|Contrib's.) 12:41, 4 January 2024 (UTC)
- Support That is a technical request and thus should go into Phabricator, the technical requests page and/or Commons:Idea Lab. Prototyperspective (talk) 14:15, 4 January 2024 (UTC)
- HotCat is a Javascript tool created and maintained locally at Commons. It isn't part of MediaWiki, and changes to it don't require intervention by a WMF developer. Omphalographer (talk) 05:27, 6 January 2024 (UTC)
- Phabricator isn't just for WMF developers. I just checked and indeed HotCat issues are not on Phabricator. I think HotCat should be part of the default software and its issues tracked in a proper issue tracker, preferably Wikimedia's main one. So it seems for now it would need to be proposed at Help:Gadget-HotCat if it's to be implemented via HotCat. Prototyperspective (talk) 10:55, 30 January 2024 (UTC)
- Comment Just to make sure I understand: (1) any time a category is added to an individual DR with HotCat, we always want it inside of a
<noinclude>
element and (2) we can identify a page as a DR because its name begins with "Commons:Deletion requests/" and what follows that is not of the form "dddd", "dddd/dd", or "Archive/dddd/dd" (where each 'd' is a digit) or (to cover translations of Commons:Deletion requests) 'aa' or 'aaa' (where each 'a' is one of the 26 lowercase letters in original ASCII). Are there other exceptions that would need to be made besides those five forms? - Jmabel ! talk 20:29, 4 January 2024 (UTC)
- @Jmabel: I don't see a use case for live cats in pages with what follows of the form "dddd", "dddd/dd", or "Archive/dddd/dd" (where each 'd' is a digit). — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 06:50, 5 January 2024 (UTC)
- I was about to answer but was also afraid to show off my ignorance. Now that Jeff G. also doesn't seem to know, I have some courage and admit I'm afraid I can't answer part two of your question. I edit mainly in visual mode, and even after your explanation I have no idea what "dddd/dd" means. But I would be very glad to have categories that already have the .... and are detectable with HotCat, so I do not have to resort to the several editing steps similar to those described by JWilz12345. Paradise Chronicle (talk) 06:59, 5 January 2024 (UTC)
- @Paradise Chronicle: I know what most of them are, I just don't see the use case. For instance, Commons:Deletion_requests/2016 appears to be a badly-named one-off, Commons:Deletion requests/2024/01 contains this month's active DRs, Commons:Deletion requests/2024/01/05 contains today's active DRs, and Commons:Deletion requests/Archive/2024/01/04 contains the DRs started yesterday and already archived because the subject page(s) were speedily kept or speedily deleted. Tracking down why pages like Commons:Deletion requests/2024/01 are categorized is an exercise best left to the reader (historically, this is because people are not as careful with noinclude as JWilz12345 is). — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 07:36, 5 January 2024 (UTC)
- @Jeff G.: do I understand that you are saying that, functionally, these exceptions are unnecessary, because it would be fine if the rule of adding a
<noinclude>
element also applied to these? That's fine with me. Might it even be OK to apply this to the language-specific pages? I think it would be. The original proposal was specific to DRs, and I was concerned with how you could technically identify a DR. But, yes, it's simplest if you can just say that anything that begins with "Commons:Deletion requests/" follows this rule. - Jmabel ! talk 19:59, 5 January 2024 (UTC)
- @Jmabel: Yes, it seems so. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 01:14, 6 January 2024 (UTC)
- Then Support if technically feasible. - Jmabel ! talk 02:54, 2 February 2024 (UTC)
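For what it's worth, the naming rule discussed above can be sketched as a small check. This is an illustrative sketch only: the function name is hypothetical and this is not HotCat's actual code. It excludes the date-index forms ("dddd", "dddd/dd", the daily "dddd/dd/dd" pages Jeff G. mentioned, and the "Archive/…" variants) plus 2-3 letter language codes for translated versions of the main page.

```javascript
// Hypothetical sketch of the DR-detection rule discussed above.
// A page counts as an individual deletion request if it starts with the
// DR prefix and the remainder is neither a date index nor a language code.
function isIndividualDeletionRequest(title) {
  const prefix = 'Commons:Deletion requests/';
  if (!title.startsWith(prefix)) return false;
  const rest = title.slice(prefix.length);
  // dddd, dddd/dd, dddd/dd/dd, and the same forms under Archive/
  const dateIndex = /^(Archive\/)?\d{4}(\/\d{2}(\/\d{2})?)?$/;
  // 'aa' or 'aaa' lowercase ASCII language codes (translated DR main pages)
  const langCode = /^[a-z]{2,3}$/;
  return !dateIndex.test(rest) && !langCode.test(rest);
}
```

A gadget could then decide whether a newly added category needs `<noinclude>` wrapping based on this test.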
- Support, but is it already possible for the software to automatically add the "Noinclude" tags whenever someone adds a category via HotCat? Does this already exist elsewhere? It's extremely annoying to always have to do this manually. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 23:55, 7 January 2024 (UTC)
- @Donald Trung: I don't know. I once asked Rosenzweig and he also didn't know how to do it other than manually. He does it in source mode, where a specific tag exists, at least that is what I understood. Paradise Chronicle (talk) 11:28, 18 January 2024 (UTC)
- Deletion requests posted by the mobile app include those tags automatically btw, they have
<noinclude>[[Category:MobileUpload-related deletion requests]]</noinclude>
as part of the DR. That also automatically changes to <noinclude>[[Category:MobileUpload-related deletion requests/deleted]]</noinclude>
when I close the DR as deleted, so there must be some code somewhere doing this. --Rosenzweig τ 09:08, 22 January 2024 (UTC)
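A gadget-side fix along the lines discussed could wrap the category wikitext at save time whenever the target page is a DR. A minimal sketch, assuming the gadget knows the page title and category name (the function name is hypothetical; this is not HotCat's or the mobile app's actual code):

```javascript
// Hypothetical sketch: wrap the category in <noinclude> when the target page
// is a deletion request, so it is not transcluded onto the daily DR log pages.
function categoryWikitextFor(pageTitle, categoryName) {
  const cat = '[[Category:' + categoryName + ']]';
  return pageTitle.startsWith('Commons:Deletion requests/')
    ? '<noinclude>' + cat + '</noinclude>'
    : cat;
}
```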
- Support. Thanks for bringing this issue here. --Ooligan (talk) 23:05, 1 February 2024 (UTC)
- The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made in a new section.
Promoting steward elections
The Steward elections are on and there have been comments about low voter turnout. I suggest making them a bit more visible on Commons, similar to the sysop elections, which are quite discreet in my opinion. If not like the sysop elections, then a banner made visible 5-10 times per cycle (a cycle is a day, I was told) is also an option. The current announcement of the Steward elections on Commons disappears really fast in my opinion, and I originally became aware of them only because I had the user page of a Steward watchlisted. Krd (a former Steward) suggested that I post this here. Paradise Chronicle (talk) 10:22, 10 February 2024 (UTC)
- I don't really know what happened, but now I can see the announcements, I believe on all pages, or at least most of the times I open a page. So to me the issue is solved. Paradise Chronicle (talk) 09:49, 19 February 2024 (UTC)
New protection group for autopatrollers
- The following discussion is archived. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- Consensus for implementing. Phabricator request filed. The Squirrel Conspiracy (talk) 23:38, 14 February 2024 (UTC)
Commons has long needed a protection group similar to the English Wikipedia's Extended Confirmed Protection. However, the difference between Commons and a regular wiki is that on a regular wiki one can assume a user is competent after 500 edits and 30 days, but on Commons the copyright and licensing system is so complex that a manual review is needed, which is what autopatrolled is. This is why I'm not proposing a simple 30/500 or similar protection.
That being said, this absence of a "middle" protection has led to the increasing use of template protection and full protection as a "solution" for files hit by edit wars and LTA attacks. Just look at the lists at [1] and [2]. For example, this file had to be template protected due to an LTA and the absence of a "middle" protection.
However, template protection is simply too much for most scenarios. Not only is it meant to be used only for templates, but there are only 49 template editors plus the 187 admins, which is simply inadequate. And I doubt I need to mention the issues with fully protecting pages indefinitely. By contrast, there are 7323 autopatrollers, 640 patrollers, and 325 license reviewers as of writing, far more active users.
Hence I propose a protection group for autopatrollers. Thank you, —Matrix(!) {user - talk? - useless contributions} 18:09, 23 January 2024 (UTC)
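For reference, a protection level of this kind is wired up in MediaWiki's site configuration via `$wgRestrictionLevels` and `$wgGroupPermissions`. The fragment below is a hedged sketch only: the level name `editautopatrolprotected` and the exact group grants are assumptions for illustration, not what the Phabricator request will actually deploy.

```php
// Hypothetical LocalSettings.php sketch of an autopatrolled-only protection
// level; the level name is an assumption, not necessarily the deployed one.
$wgRestrictionLevels[] = 'editautopatrolprotected';

// Autopatrolled users (and sysops) receive the matching permission, so they
// may edit pages protected at this level.
$wgGroupPermissions['autopatrolled']['editautopatrolprotected'] = true;
$wgGroupPermissions['sysop']['editautopatrolprotected'] = true;
```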
Votes and discussion
How about 100 (one hundred) uploads and 60 (sixty) days / 2 (two) months? Maybe using uploads isn't the best metric, but we have a bot that lists all uploads of users with fewer than 200 (two hundred) uploads or something like that. Rather than manually reviewing who is worthy of "ExtendedConfirmed", users who repeatedly upload bad files could be added to a special "non-confirmed" user group. I'm just shooting ideas here, maybe someone can come up with something better. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 19:06, 28 January 2024 (UTC)
- @Donald Trung: are you sure you wrote that in the right place? It seems to have nothing to do with Matrix's proposal. - Jmabel ! talk 21:01, 28 January 2024 (UTC)
- I know that the proposal is for the template, I just thought that creating an "Extended confirmed" user group wouldn't be a bad idea. To be fair, most long-term abusers tend to hover around a few files. I do Support the creation of a new template. --Donald Trung 『徵國單』 (No Fake News 💬) (WikiProject Numismatics 💴) (Articles 📚) 21:15, 28 January 2024 (UTC)
- Support Definitely needed especially for file overwriting. GPSLeo (talk) 10:39, 29 January 2024 (UTC)
- Support. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:28, 29 January 2024 (UTC)
- Support - much needed. Thanks --C1K98V (💬 ✒️ 📂) 17:17, 30 January 2024 (UTC)
- Support Sounds like a reasonable thing to implement. Abzeronow (talk) 17:38, 30 January 2024 (UTC)
- Support Reasonable implementation. --Minorax«¦talk¦» 11:40, 31 January 2024 (UTC)
- Support: This is the right level for protecting a file that's being vandalised. If someone's autopatrolled, that means we already trust that their edits aren't vandalism. --bjh21 (talk) 12:50, 31 January 2024 (UTC)
- Neutral I agree there needs to be a middle protection level, but I have reservations about attaching "we trust you not to vandalize" and "we trust you to understand copyright" to the same permission. The latter has a higher bar than the former. The Squirrel Conspiracy (talk) 07:33, 1 February 2024 (UTC)
- Support --Adamant1 (talk) 06:15, 11 February 2024 (UTC)
Creating a new shackle
Well, there seems to be clear consensus for this protection group. I'll link to some possible shackles below to use as an icon, but feel free to add your own below: —Matrix(!) {user - talk? - useless contributions} 15:27, 3 February 2024 (UTC)
-
Option 1
-
Option 2
-
Add your own options here (number it as option x)
Votes and discussion
- As proposer, I personally like Option 1. —Matrix(!) {user - talk? - useless contributions} 15:27, 3 February 2024 (UTC)
- Option 1 is the nicer of the two. The Squirrel Conspiracy (talk) 02:55, 4 February 2024 (UTC)
- Option 1 looks good. Thanks --C1K98V (💬 ✒️ 📂) 11:00, 9 February 2024 (UTC)
- Option 1 though it seems rather trivial for something nobody will ever notice Dronebogus (talk) 11:35, 9 February 2024 (UTC)
- The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made in a new section.
Revert policy change for "Overwriting existing files"
- The following discussion is archived. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- Broad consensus against this proposal. The Squirrel Conspiracy (talk) 06:24, 15 February 2024 (UTC)
Limiting overwrites to autopatrollers would prevent a lot of useful contributions, as most people will give up trying to fix something if they don't have the right to do so. Taking the other route makes it hard for users: they have to upload under a different filename, edit pages on different projects to display the new file, and add a warning on the original to indicate that the file has been replaced. Also, the average person won't try to get 500 edits, and will probably never return to this website.
More reasons as to why this should be reverted (added after this proposal was made):
- It overlaps with a statement on the main page. Commons is supposed to be "a collection of 103,222,501 freely usable media files to which anyone can contribute", but how can anyone contribute if they need to have a user group that only a quarter of the active users have?
- It ruins the purpose of wikis. Wikis are supposed to allow collaborative editing on existing pages. This policy makes it impossible for regular users to collaborate here.
- It only benefits admins. Regular users will have to do more work, as shown above.
And I know this only applies to images not uploaded by me, but still, it's not the way to do it. Talking Tom Fan 81 (talk) 03:59, 9 February 2024 (UTC)
- Editing files is not, at all, the same thing as editing a wiki page. A file is *not* a wiki; it has its own copyright owned by a particular person and is not a collectively edited work. We don't give most editors the right to delete files, yet overwriting is tantamount to that. Uploading under a different filename should be no harder. It always seems easier to simply overwrite a file, yet in far too many cases it's very destructive, and can "edit" pages on other Wikipedias, Wikinews, and other situations where the change is inappropriate and possibly breaking, not to mention depriving other editors of choosing the original if they would prefer it. Choosing a different filename should not be that hard. Yes, there are probably some situations where overwrites are appropriate, but they are the minority. Uploading as a different file ends up with the same result on the wiki article, without breaking all sorts of things (and policy). The overwriting policy itself has been around for over 10 years, but is ignored too often (even by experienced editors). Carl Lindberg (talk) 05:02, 9 February 2024 (UTC)
- Vandalism exists on wiki pages, and can also be destructive. File history exists and revert feature exists. Edit wars exist. Vandalism policy is also ignored too often. Talking Tom Fan 81 (talk) 05:11, 10 February 2024 (UTC)
- You seem to be thinking of files like they are wiki articles, by that response. They are not. Carl Lindberg (talk) 05:46, 10 February 2024 (UTC)
- Responding to the later elements -- 1. you do not need to be part of that user group to upload files, and participate here. Only if you want to overwrite. Most participate by uploading new files, and adding choices to be used. 2. it does not affect the purpose of wikis. Again, a file is not a wiki. They are fundamentally different, and all too many editors do not understand this. Overwriting a file is basically deleting it. We want many images to illustrate a topic for other projects to choose from -- please add another option, don't remove someone else's. Reversion is bad since that then removes the additional work the second uploader did, which may indeed be useful. The only way to have both is to have them under different filenames. Please contribute by *adding* files, not replacing them, unless you really understand the issues across projects and with copyright, and the Commons policy on what type of changes are acceptable to overwrite and which ones are not. Many don't, and don't care to learn, which is understandable but they should not be performing destructive actions. 3. It will benefit the users whose work is getting destroyed or removed. It will benefit other editors who now have a choice of images to illustrate topics. It should be little additional work to upload under a different filename; it's the same file being uploaded. Please show some examples of things you wanted to change, and why, if you think we are mistaken. It's easier to talk about concrete examples. Carl Lindberg (talk) 00:14, 11 February 2024 (UTC)
- Oppose per the above and many discussions on Commons talk:Overwriting existing files. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:17, 9 February 2024 (UTC)
- @Talking Tom Fan 81: Hi, and welcome. I am sorry to inform you that you have triggered Special:AbuseFilter/290. The proposal to "Limit file overwriting to users with autopatrol rights" was accepted with many supports and one weak oppose 15:19, 23 September 2023 (UTC). After an implementation problem in phab:T345896 and testing, Special:AbuseFilter/290 went live with the Disallow action 09:35, 28 October 2023 (UTC). Please read MediaWiki:abusefilter-warning-file-overwriting. You may request COM:AP at COM:RFR when you think you are ready (once you have made more than 500 useful non-botlike edits); having that should allow you to overwrite. You may also request an exception for a particular file at COM:OWR. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 14:17, 9 February 2024 (UTC)
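The restriction this filter enforces can be modelled as a simple check: non-autopatrolled users may only overwrite files they uploaded themselves. This is an illustrative sketch of the rule as described in this thread, not the actual code of Special:AbuseFilter/290 (which is written in AbuseFilter's own rule syntax); the function and parameter names are assumptions, and admins and other exempt groups are omitted.

```javascript
// Illustrative model of the overwrite restriction discussed above.
// Not the real filter: the actual enforcement is an AbuseFilter rule.
function overwriteAllowed(userGroups, userName, originalUploader) {
  return userGroups.indexOf('autopatrolled') !== -1
    || originalUploader === userName;
}
```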
- Oppose obviously. Making a proposal against a Commons rule as your first edit won't lead you very far. You need to understand how Commons works first. Yann (talk) 17:57, 9 February 2024 (UTC)
- Oppose as a rule, the "average person" you are referring to will never have a good reason to overwrite a file. Even a fair number of the less Commons-experienced people who have been very sure they knew what they were doing and asked for specific permission to overwrite a particular file have ended up reverted, because their replacement was better in some ways, worse in others, and should have been uploaded under a different file name as an alternative, rather than overwriting. - Jmabel ! talk 20:47, 9 February 2024 (UTC)
- Oppose: nonsense. -- Tuválkin ✉ ✇ 22:21, 9 February 2024 (UTC)
- Strong oppose, only privileged users (perhaps confirmed, autopatrolled, admins, etc.) should have the right to overwrite existing files. The users in these groups are expected to know what kinds of overwriting of files are constructive. @Talking Tom Fan 81: allowing everyone to overwrite files may lead to inferior replacements (like badly-cropped images or badly-edited or desaturated images), and some may even overwrite existing eligible files with copyvio ones (like internet images). JWilz12345 (Talk|Contrib's.) 00:44, 11 February 2024 (UTC)
- Oppose --Adamant1 (talk) 06:16, 11 February 2024 (UTC)
- Oppose per comments made above. Bidgee (talk) 09:40, 11 February 2024 (UTC)
- Oppose I think the balance is right. Autopatrolled users show at least a minimal understanding of how to use Commons. There is no need for editors so new to the site that they haven't yet been reviewed for autopatrol to be able to overwrite files. - Chris.sherlock2 (talk) 12:43, 12 February 2024 (UTC)
- Oppose per every comment above. --Ooligan (talk) 01:40, 13 February 2024 (UTC)
- The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made in a new section.
Require community consensus for new non-copyright restriction templates
There are many templates for non-copyright restrictions (see Category:Non-copyright restriction templates); many of them, like {{Personality rights}} or {{Trademarked}}, are useful as they are valid in all jurisdictions. But in the last years many templates were created to warn about the usage of a file in some autocratic countries, like {{Chinese sensitive content}}, {{Zionist symbol}} or {{LGBT symbol}}. These templates were created by single users without prior discussion and are added randomly to files.
This should be restricted. If we create a template for every restriction in some or even only one autocratic country, we would end up with a long list of warning templates on every file page. The Commons:General disclaimer linked on every page is totally sufficient.
Therefore I propose that new non-copyright restriction templates need to be approved by the community by proposing them on this board. This does not apply to minor variations of templates like {{Personality rights}}. The decision to keep or delete the templates created before this proposal should be reached in regular deletion requests.
As a rough guideline for the approval of new templates, I would propose that templates for countries with an en:World Press Freedom Index score lower than 70 should generally not be created. Exceptions are possible in both directions: templates could be created for regions with less press freedom, or not created for regions with a good press freedom situation. If created, the templates need a proper definition of when and how to use them. GPSLeo (talk) 09:22, 3 February 2024 (UTC)
- 70 on the World Press Freedom Index may be a bit too high. I see, for example, that Romania is just under that, but I'd think that their restriction on images of embassies is unusual enough that we might want a template for that. - Jmabel ! talk 01:55, 4 February 2024 (UTC)
- 70 is ridiculously too high— that’s like most of the world outside of Western Europe, Oceania and upper North America. Under 40 would be more reasonable Dronebogus (talk) 02:38, 4 February 2024 (UTC)
- We could also remove this rough guideline and just say that the templates have to be approved, without any further guideline on when to create such templates. Also, for countries with a good press freedom situation we should not create a template for every restriction in these countries. GPSLeo (talk) 07:30, 4 February 2024 (UTC)
- Is WMC even available in mainland china? Dronebogus (talk) 02:31, 4 February 2024 (UTC)
- @Dronebogus: From what I have heard, not technically, but it can be accessed by those with local or global ip block exemptions and access to proxies. See also w:Wikipedia:Advice to users using Tor to bypass the Great Firewall. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 02:45, 4 February 2024 (UTC)
- If it’s de jure illegal in the PRC then we shouldn’t consider their laws in regards to anything we do. It’s like a speakeasy warning people about the no smoking ordinance. Dronebogus (talk) 02:47, 4 February 2024 (UTC)
- Comment If templates for these autocratic countries continue to be created, {{South Korean Symbol}} will eventually be created for North Korean users as well. So I agreed with the restriction at first, but I found that not all autocratic countries block access to Wikipedia and Wikimedia Commons. Ox1997cow (talk) 14:57, 8 February 2024 (UTC)
- I know Russia, Myanmar, North Korea, the People’s Republic of China, and possibly Saudi Arabia are all currently censoring Wikimedia to various extents. In Russia it’s not as bad since it’s not a total block of any or all sites but it’s gotten bad enough that Wikimedia Russia had to shut down. I think those countries should no longer be considered in Wikimedia Commons legal policy since they’re actively targeting the Wikimedia movement itself as (de jure or de facto) illegal. Dronebogus (talk) 11:52, 9 February 2024 (UTC)
- My thinking is that Zionist symbol as it exists should be deleted. The Star of David is not a good symbol to use for political sensitivities to Israel's actions. Maybe an outline of Israel should be made instead of the Star of David since it's about Israel, not Judaism. Chinese sensitive content can also be deleted since Wikimedia sites are illegal in the People's Republic of China. @GPSLeo: @Holly Cheng: Some level of community consensus on these would be good to have for these non-copyright restrictions, but we do also want to take steps that protect our users, so some balance in how we approach would be good. Abzeronow (talk) 19:56, 14 February 2024 (UTC)
- I agree with the proposal and would like to suggest a system wherein the community can flag or report templates that are inappropriate or irrelevant. This would help maintain a well-organized and user-friendly template system. 70.68.168.129 05:19, 17 February 2024 (UTC)
Restrict closing contentious deletion discussions to uninvolved admins
- The following discussion is archived. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- There is no consensus to change Commons:Deletion requests#Closing discussions. That said, there doesn't seem to be any contention that admins closing contentious DRs in which they are involved is seen as bad form, and there is already a mechanism for handling that: starting a thread at Commons:Administrators' noticeboard to discuss that behavior to be, and if necessary initiating desysop proceedings. The Squirrel Conspiracy (talk) 22:02, 24 February 2024 (UTC)
RFCs can only be closed by uninvolved editors, but deletion discussions can be closed by any admin, even if they are heavily involved in the discussion. <s>I propose changing "administrator" to "uninvolved administrator" in the first sentence of Commons:Deletion requests#Closing discussions.</s> I propose adding the following sentence to Commons:Deletion requests#Closing discussions: "In cases of contentious requests, discussions should be closed by an uninvolved administrator." Nosferattus (talk) 01:55, 29 December 2023 (UTC)
- Support as proposer. Closures by involved admins feel like an abuse of power, or at the very least, a conflict of interest. There is no reason a deletion discussion can't wait for an uninvolved admin, which will always feel more fair to everyone involved. Nosferattus (talk) 01:58, 29 December 2023 (UTC)
- Comment Can you point to specific incidents that caused you to propose this, or is this a solution in search of a problem? The Squirrel Conspiracy (talk) 02:01, 29 December 2023 (UTC)
- Wasn't there a big fuss with Yann and Eugene about this? Trade (talk) 02:16, 29 December 2023 (UTC)
- @The Squirrel Conspiracy: Here's a recent example. I can supply more if needed. Nosferattus (talk) 02:26, 29 December 2023 (UTC)
- @Nosferattus Maybe it's just me, but your example doesn't make sense to me. The discussion was closed by Jim, and that seems to be their only edit in the discussion. I also do not believe that I have experienced involved admins closing a discussion; maybe I did, but then they hid it really well. Paradise Chronicle (talk) 13:08, 31 December 2023 (UTC)
- @Paradise Chronicle: Please look at the 2nd discussion on that page, not the 1st. Nosferattus (talk) 15:56, 31 December 2023 (UTC)
- Thanks, got it. Didn't know a close of a discussion can be shown at the bottom as well as at the top. Paradise Chronicle (talk) 16:13, 31 December 2023 (UTC)
- Comment My first thought is that this seems a bit overly broad, especially given the significant problem we have with deletion request listing backlogs. I've been an admin on Commons for more than 19 years. If I started a deletion request, or commented on it, I *generally* let some other admin take care of closing it. However there have been occasional exceptions - mostly when trying to clean up months-old backlogs, with no new discussion for months, and no counterarguments offered to what seems a clear case per Commons/copyright guidelines - I might feel it is a "SNOWBALL" and that since I'm there I might as well take care of cleaning it up. I try to avoid conflicts of interest, and even appearances of conflicts. Does having commented on something inherently create a conflict of interest? (Examples: 1) a deletion request is made by an anon with a vague reason - I comment that 'per (specific Commons rule) this should be deleted'. Months later I notice that this listing was never closed, and no one ever objected to deletion. Is going ahead and closing it per the rule I mentioned earlier a conflict of interest? 2) Someone listed an image as out of scope. I commented, whether agreeing or disagreeing. Then someone else points out that the file is a copyright violation, which the nominator and I had not noticed. Should I be prohibited from speedy deleting the copyright violation because I earlier commented on deletion on different grounds?) I'm certainly willing to obey whatever the decision is; I just suggest this could be made a bit narrower, perhaps with specific exceptions? Otherwise I fear this could have an unintended side effect of making our already horribly backed up deletion request situation even worse. -- Infrogmation of New Orleans (talk) 03:09, 29 December 2023 (UTC)
- Or we could just make it so the rule only applies to DR's that have lasted for less than a month Trade (talk) 03:23, 29 December 2023 (UTC)
- @Nosferattus: given your example, I take it that you consider an admin involved if they have in any way participated in the DR? And would you apply this even when the DR has proved uncontroversial?
- Also: I certainly close clearly uncontroversial CfD's even if I started them. Are you saying I shouldn't have closed Commons:Categories for discussion/2023/12/Category:Taken with SMC PENTAX DA 14 mm f/2.8 ED IF? Because, frankly, I had been very close to making the changes in question without even starting a CfD, but I wanted to make sure I wasn't missing something. What about Commons:Categories for discussion/2023/12/Category:Spielplatz Küsnacht See, where the issue was simply to identify the subject of the category so it could be fixed, or Commons:Categories for discussion/2023/12/Category:Photos of Badagry Social Media Awards (BSMA) (open for 20 days, and no comments for most of that time so I left it open, and when someone finally weighed in it was to agree with me)? I could stop doing this if you object, but please say so explicitly. - Jmabel ! talk 05:23, 29 December 2023 (UTC)
- @Infrogmation and Jmabel: I've changed the proposal based on your feedback. Nosferattus (talk) 06:03, 29 December 2023 (UTC)
- Oppose This would be a good rule if we had enough admins, but with the current number of active admins this could increase the backlog dramatically. We could maybe implement the rule that the deleting admin and the admin who declines an undeletion request cannot be the same. The same for a reopened deletion request of a not-deleted file, where a decline of the new request has to be done by another admin. Both cases of course need exceptions for vandalism or the abuse of requests.
- GPSLeo (talk) 12:39, 29 December 2023 (UTC)
- Support with reservations: at the same time, it's a problem when an admin doesn't participate in the discussion and doesn't directly address arguments or give rationales for deletion. This is especially problematic for discussions with only a few votes, for example a nomination and one Keep vote (example example) that directly addresses or refutes the deletion nomination rationale, as well as discussions where there is no clear consensus but a ~stalemate (if not a Keep) when votes are counted by headcount (example). I've seen admins close such discussions (see examples) abruptly without prior engagement and so on. So I think it would be best that, for cases of these two types, closing admins are even encouraged to (have) participate(d) in the discussion, but only shortly before closing it / at a late stage. On Wikipedia there is the policy WP:NODEMOCRACY, which holds that reasons and policies are more important than vote headcounts, especially for cases that are unclear by headcount, but it seems like here both voting by headcount and admin authority are more important. It wouldn't increase the backlog but only distribute the discussion closing differently. Bots, scripts & AI software could reduce the backlog, albeit I don't know of a chart that shows the size of the WMC backlog, and it wouldn't significantly increase due to this policy change.
- Prototyperspective (talk) 13:16, 29 December 2023 (UTC)
- Oppose Proposal is currently overly broad and would be detrimental in shortening our backlog. I don't close DRs that I have a heavy amount of involvement in except for when I withdraw ones that I had started. If I leave an opinion on whether a file should be kept or deleted, I wait for another admin to close. Sometimes though, I like to ask questions or leave comments seeking information that helps me decide on borderline cases. I'd be more supportive if this proposal were more limited. I can also agree with GPSLeo that deleting admin and admin who declines UDRs of the file should not be the same one. Abzeronow (talk) 16:54, 29 December 2023 (UTC)
- @Abzeronow: Do you have any suggestions or guidance for how a more limited proposal could be worded? How would you like it to be limited? Nosferattus (talk) 17:34, 29 December 2023 (UTC)
- Support This should be natural. Since it isn't to too many Admins, it needs a rule. --Mirer (talk) 17:48, 29 December 2023 (UTC)
- Comment There are times when posters to UDR present new arguments or new evidence. If that is enough to convince the Admin who closed the DR and deleted the file, why shouldn't they be allowed to undelete? — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:03, 29 December 2023 (UTC)
- Oppose per Abzeronow. — 🇺🇦Jeff G. ツ please ping or talk to me🇺🇦 18:05, 29 December 2023 (UTC)
- Although I am myself in support of not closing discussions/DRs where I am involved, except as Abzeronow says, ones one withdrew or so, I believe our current ratio of active admins should be considered. We do not have plenty of admins like English Wikipedia has. As such, I tend to Oppose. ─ The Aafī (talk) 19:18, 29 December 2023 (UTC)
- Oppose Discussions are closed according to Commons policies, not according to votes. Yann (talk) 19:39, 29 December 2023 (UTC)
- @Yann: Although I appreciate your work on deletion and your opinion here, this reply comes across as completely dismissive. No one has said anything about votes. Of course discussions are closed according to Commons policies. Do you believe that admins have a monopoly on the understanding of Commons policies? Do you understand why closing a contentious discussion you are involved in could be problematic and discourage other people from participating in the process? Nosferattus (talk) 16:29, 30 December 2023 (UTC)
- Contrary to picture contests, opinions in DRs are not votes. Participants, including non-admins, can explain how a particular case should be resolved according to Commons policies, but it is not uncommon that a DR is closed against the majority of participants. Also, given the small number of really active admins, it is not possible for admins to exclude themselves from closing whenever they give their opinions. Yann (talk) 09:57, 31 December 2023 (UTC)
- Oppose. Involved editors should not close discussions, but I'm leery of making that an absolute rule. There are times when it can be reasonable. I also do not want to encourage complaints about reasonable closures just because the closer had some involvement. Glrx (talk) 01:39, 30 December 2023 (UTC)
- Oppose - This is presented without evidence of a problem (or even articulation of one) and without articulation of thought or analysis related to potential downsides, indeed as referenced above. Additionally, reliance on--here, increasing use of--adjectives in governing documents is terrible practise in real life and on-site. All this would do is shift post-closure disagreement from "should [Admin] have closed this" to the even more complicated "was [Admin] 'involved'" and "is the discussion 'contentious'". Alternatively stated, to the extent this proposal seeks to limit biased closures, all it would do is provide more avenues to argue such closures are within the range of discretion for interpretation of those terms. If an admin is making inappropriate closures, raise the issue at a notice board. If a prospective admin has not demonstrated an ability to use discretion and abstain when too close to an issue, oppose their rfa. Ill-considered policy changes are not the correct approach. Эlcobbola talk 17:03, 30 December 2023 (UTC)
- "Involved" means they participated in the discussion. "Contentious" means different opinions were presented. These criteria are easy to objectively determine. I added "contentious" because other editors wanted the criteria narrowed. Nosferattus (talk) 18:16, 30 December 2023 (UTC)
- They may mean that to you. They do not mean that to me, nor to others. That you so readily, and erroneously, purport to be the arbiter of objective truth ("These criteria are easy to objectively determine") is precisely the issue I explained. Эlcobbola talk 18:22, 30 December 2023 (UTC)
- Oppose I'd be for this if there were more people who could close discussions. There just aren't enough who can at this point to justify limiting the number even more by approving this, though. Although it would be a good idea if or when there are enough users who can close deletion discussions to make up for the deficit. --Adamant1 (talk) 11:31, 31 December 2023 (UTC)
- Support As an admin, I have always followed this as my personal policy. It simply wouldn't feel right to me to close a discussion where I was involved substantially in the discussion, giving my own opinion. When a deletion request didn't have a lot of discussion, but I have a clear opinion on the matter, I often decide to give just my opinion and leave the discussion for the next admin to decide, consequently. I agree with Mirer and think "it should be natural". However, I have encountered admins who do this, even close their own proposals deciding that a discussion went into favor of their opinion when this isn't perfectly clear. So, making this an official policy would be a good idea IMHO. I would still allow closure of discussions where the admin's involvement was only technical. Gestumblindi (talk) 15:06, 31 December 2023 (UTC)
- Support It's a fair proposal and it would avoid discussions in the future. I actually thought this was already normal as I have never experienced an involved admin closing a discussion. Paradise Chronicle (talk) 17:59, 31 December 2023 (UTC)
- How do you define involved? I often had the case that I asked a question to the uploader and as I got no response I deleted the file. GPSLeo (talk) 18:51, 31 December 2023 (UTC)
- Of course I'd also say admins who become involved only in a technical, formal way, such as correcting mistakes in formatting or spelling, or ensuring that the uploader had enough time to defend their file, should be allowed to close a DR. But in my opinion no admin should close a discussion in which they have voted or presented an argument in support or oppose. Paradise Chronicle (talk) 19:30, 31 December 2023 (UTC)
- Support There's zero reason admins should be closing DRs they have either voted in or heavily commented in. No one expects an administrator not to close a DR where they have made a benign, meaningless comment. But there's zero reason they should be able to close one if they have participated beyond that, especially in cases where the participation shows they are invested in a specific outcome. --Adamant1 (talk) 11:36, 9 January 2024 (UTC)
- Oppose as per Yann and Эlcobbola. DRs are not a popularity contest. 1/ DRs should be closed following our policies, not following a majority of votes. 2/ it is sufficiently hard to find administrators to look at some complicated DRs, and if in addition we prevent those "involved" administrators from closing DRs, it would become harder to find "uninvolved" administrators who are able to digest long discussions containing 2, 3 or more points of view. 3/ if some closing may be contentious, there are still various places to raise potential issues (Village Pump, Village Pump/copyright, Admin Noticeboard, Undeletion Requests, etc.). 4/ To restrict the freedom of movement of the (not enough) administrators who are trying to do the job well is not a good thing IMO. Christian Ferrer (talk) 11:05, 10 January 2024 (UTC)
- Support: Sadly needed. -- Tuválkin ✉ ✇ 22:23, 9 February 2024 (UTC)
- Support--A1Cafel (talk) 10:33, 15 February 2024 (UTC)
- Selective case-to-case basis: selective support and oppose. Support only for deletion requests that are not about derivative works, like nominations related to COM:SCOPE, COM:PERSONALITY, COM:PENIS, privacy rights of building owners, and other issues not tied to artistic or object copyright. However, Oppose for deletion requests related to: COM:Freedom of panorama, COM:Currency, COM:TOYS, COM:PACKAGING, and other issues related to the copyright of a depicted object or public landmark. I specifically said this as there is nothing neutral when it comes to a depicted object's copyright as enforced by statutory law or case law. Once the law of a certain country says public landmarks cannot be used commercially (e.g. FoP laws of France, Ukraine, Georgia, Vietnam, or South Korea, or monumental FoP of the U.S., Taiwan, or Japan), it is almost a dead end for uploaders: their nominated images are certainly going to be deleted (weighing in factors like COM:TOO or COM:DM). The laws of 100+ countries are not neutral in the context of FoP, and as an ancient maxim goes, "dura lex, sed lex". The laws of countries with no or insufficient FoP may be harsh, but those are the laws. JWilz12345 (Talk|Contrib's.) 11:50, 15 February 2024 (UTC)
- The above discussion is preserved as an archive. Please do not modify it. Subsequent comments should be made in a new section.
Chinese and Japanese characters as disambiguation?
for category titles, sometimes there are chinese or japanese names that are written in many different ways but have the same pronunciation. some pronunciations are shared by so many people that it's possible to end up with multiple people with the same occupation (such that "cat:john doe (writer)" is not enough to distinguish them).
here's what i'm pondering. can we use these names' forms in hanzi or kanji as the qualifier in parentheses? very often they are different. it also helps users navigating these categories because they can immediately identify the persons with the actual forms of the names in the native languages.
this idea obviously only applies to logograms, among which only chinese and japanese are popularly used.
examples: Category:Lu Xun (Tang dynasty) can become Category:Lu Xun (陸勳), and Category:Lu Xun (Wu) can become Category:Lu Xun (陸遜). RZuo (talk) 14:28, 13 February 2024 (UTC)
- I would guess that a far larger number of our users can understand "Tang dynasty" than "陸勳". Do you have reason to think otherwise? - Jmabel ! talk 20:21, 13 February 2024 (UTC)
- as i said, "some pronunciations are shared by so many people that it's possible to end up with multiple people with the same occupation (such that "cat:john doe (writer)" is not enough to distinguish them)." by using all these indirect prompts, it's hard even for me to immediately connect the category title to the person. "Lu Xun (Wu)" is a pretty well-known figure, but at first glance at this title i cant make out what "wu" means, which can refer to a dozen different states in history or a dozen different places historical or present.
again, these are just examples. there are also people of the same era that have the same occupations.
- most people who have to deal with these categories can read c/j chars.
- allowing the use of c/j chars is not the same as requiring only c/j chars to be used as disambiguation. it's only to give one more obvious and convenient possibility of words to use for disambiguation, when strictly following latin-only titles creates unnecessary confusion.
- RZuo (talk) 20:45, 13 February 2024 (UTC)
- Support as long as there are redirects or disambiguation pages with explanations for those who only use Latin characters. -- Infrogmation of New Orleans (talk) 18:43, 24 February 2024 (UTC)
Tracking file usage on wikis run by Wikimedia movement affiliates
I've posted a proposal on Commons talk:Tracking external file usage that Usage Bot should be allowed to maintain galleries of files used by all kinds of Wikimedia movement affiliate, and not just Wikimedia chapters as at present. Comments over there would be welcome. --bjh21 (talk) 17:37, 28 February 2024 (UTC)