Wikipedia:Bots/Noticeboard/Archive 6


Job for a bot

All the foreign language articles that are interlinked from the RV Belgica (1884) article backlink to the RV Belgica article, which is due to be turned into a shipindex page once the RV Belgica (A962) has been created. Could a bot alter all the en:iwls on the foreign language articles to point back to the 1884 ship please? Mjroots (talk) 14:37, 5 October 2010 (UTC)

I believe this should be taken care of automatically by the interwiki bots, though some of them break bot policy in other ways. Rich Farmbrough, 15:20, 5 October 2010 (UTC).
How long is this likely to take? Mjroots (talk) 16:33, 5 October 2010 (UTC)
Dunno, but I took a look to see if any had moved and fixed them while I was there. I did notice one is also about a later ship, so I left that one (this might cause problems later). I may have missed that aspect in some others, I hope not, but my Walloon is not as good as it should be. Rich Farmbrough, 11:47, 6 October 2010 (UTC).

Issue with a bot

A bot keeps reverting my RfC on this talk page: talk:Special forces. It is seriously annoying, as it keeps deleting it every half hour or so. Can someone please fix this ASAP. Thanks.  CET  ♔  08:37, 29 September 2010 (UTC)

It is removing the tag because you're putting it above a section with a datestamp from July. If you want to initiate an RFC, please initiate a new section with a current datestamp and clear statement that describes the dispute and what needs commenting on. –xenotalk 12:41, 7 October 2010 (UTC)

Skins and bots

(Moved from BAG Rich Farmbrough, 11:34, 7 October 2010 (UTC).)
I make this comment here in the hope of reaching enough bot programmers to get a useful answer. It concerns a proposal (see the RFC at Wikipedia talk:spoiler) to make the link to the disclaimers more prominent in the default Vector skin. I know bot programmers use (or are supposed to use) the API rather than scraping HTML, but there may be some bots and other automated tools that are still reading HTML for one reason or another.

So I want to consult bot designers and ask if this proposed change would break anything important. In the interests of keeping things together please comment on the spoiler guideline talk page whose link I give above. --TS 03:23, 7 October 2010 (UTC)

  • I personally couldn't care less if the interface was changed. Any bot that is screen scraping deserves to be affected by this. (X! · talk)  · @193  ·  03:37, 7 October 2010 (UTC)
  • Screen scraping has been deprecated in favor of the API for a long time now, and even before that there has never been a guarantee that the human interface wouldn't change in ways that would confuse bots. Anomie 04:16, 7 October 2010 (UTC)
  • Very few bots use screen scraping, it is strongly discouraged and has been replaced by the API for quite a while now so I don't think it should be a problem. - EdoDodo talk 05:21, 7 October 2010 (UTC)
  • I agree with Anomie, and I think that the spoiler change (and other HUI factors) is too important to "worry" about "inconveniencing" bots. However, advance notice of this sort of thing is a boon; previous changes have simply killed masses of bots stone dead. There are things it is convenient or needful to scrape for. I'm also interested in the attitude "Any bot that is screen scraping deserves to be affected by this." It rather misses the point: bots don't exist for their own benefit. One can imagine, by analogy, someone dying of VRSA saying "Any antibiotic that can be out-evolved deserves to become obsolete." <shrug> Rich Farmbrough, 11:32, 7 October 2010 (UTC).
  • I can't work out if this is a joke or not. If it isn't then I agree with everyone else. d'oh! talk 12:05, 7 October 2010 (UTC)
  • As a 'bot owner who has used index.php, I note that we who do so are quite used to the rug being pulled out from under us. It happened quite a few times before api.php even existed, and I had to change my 'bot code to cope. Rich Farmbrough is right in that advance notice is good. It certainly makes a refreshing change. ☺ Xe is also right that there are things that api.php still lacks. (One came up at the Wikipedia:Administrators' Noticeboard just recently, as a matter of fact.) But, on the gripping hand, it seems unlikely that any 'bot would need to parse style sheets. Uncle G (talk) 12:49, 7 October 2010 (UTC)
    • Looking at the HTML in different skins I see that there are lots of differences, and I guess I would have to look at the engine code to see where that's all happening. Bot writers who have scraped HTML are presumably aware of this. It's not all in the CSS. But that's not a huge deal here as it's presumably quite possible to hard code the proposed change in the site javascript at MediaWiki:Vector.js. On that, it now occurs to me that the script developers need to know about this proposal much more than the bot designers. --TS 13:36, 7 October 2010 (UTC)
      • It's unlikely that any 'bot needs to parse JavaScript, either. Uncle G (talk) 14:37, 7 October 2010 (UTC)
        • Ah never mind. I wasn't proposing that it should. --TS 18:55, 7 October 2010 (UTC)
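For reference, the distinction the thread above turns on can be made concrete. A minimal sketch (the page title and endpoint choice here are illustrative) of requesting a page's wikitext through api.php, whose JSON responses are a documented contract, instead of scraping skin-dependent HTML from index.php:

```python
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def api_query_url(title):
    """Build an api.php request for a page's current wikitext.

    Unlike scraping index.php, the API's response format is a stable,
    documented contract, so skin changes (Vector, Monobook, ...) cannot
    break the client.
    """
    params = {
        "action": "query",     # standard query module
        "prop": "revisions",   # ask for revision data
        "rvprop": "content",   # specifically, the revision text
        "titles": title,
        "format": "json",
    }
    return API + "?" + urlencode(params)

print(api_query_url("Sandbox"))
```

A screen-scraping bot would instead have to locate the content inside whatever HTML the current default skin emits, which is exactly what the proposed interface change would perturb.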

Filling recent changes with pointless changes

If you have a 'bot that thinks that it knows how to spell better than human beings, please ensure that it is on the same page as the other 'bots and semi-automated tools who also think that they know how to spell better than human beings. Otherwise, as you can see, pointless back and forth results. Uncle G (talk) 10:53, 18 September 2010 (UTC)

FYI, thread archived to Wikipedia:Administrators' noticeboard/IncidentArchive639#Battle of the bots: Reflinks vs. SmackBot. — Capt'n Earwig (arr! • talk) 23:20, 19 September 2010 (UTC)

As soon as the two bots agree to use ONE of the two methods, I am okay with any of them. -- Magioladitis (talk) 23:40, 19 September 2010 (UTC)

Bots should not make gratuitous changes like that. The issue is whether to capitalize the name of a template, since it gets transcluded properly either way. The answer should be: just leave it alone unless there's some other reason to edit the citation. (talk) 21:21, 20 September 2010 (UTC)
I fully agree with the above. (Personally I prefer lowercase for temporary templates and uppercase for permanent templates, but the privilege of changing the case of existing templates should be reserved for human editors anyway.) - Soulkeeper (talk) 07:49, 21 September 2010 (UTC)
  • Like ENGVAR without STRONGNAT, they should be left alone as they were entered by human editors. –xenotalk 18:57, 22 September 2010 (UTC)

Well, IMHO, capitalizing via bot the name of a template is pretty useless. -- Basilicofresco (msg) 21:17, 22 September 2010 (UTC)

  • FYI as noted in the header, this has now moved on to ANI. –xenotalk 13:38, 29 September 2010 (UTC)

SmackBot stopped, Rich Farmbrough blocked

SmackBot (BRFA · contribs · actions log · block log · flag log · user rights)

SmackBot was stopped temporarily some hours ago, but appears to be running once more at the time of writing. X! (talk · contribs · blocks · protections · deletions · page moves · rights · RfA) has blocked Rich Farmbrough for running an unauthorized 'bot on that account, and this edit with AutoWikiBrowser. Uncle G (talk) 13:42, 29 September 2010 (UTC)

The most recent SmackBot edits have been without the problems identified; if they do start to implement the disputed changes, it should be blocked as well. –xenotalk 13:51, 29 September 2010 (UTC)
Struck, as I didn't notice [1], which you pointed out at ANI changed the caps of the stub template. –xenotalk 13:59, 29 September 2010 (UTC)
Yes, you weren't the only one not to notice. For anyone not following, those were three obvious test edits on a minor task, made after an update to the bot's page made it clear that the major task wasn't running. Anyone with bot experience would have known from the timestamps (assuming they looked, which of course they are not bound to) that this wasn't a bot "run", or even a fast manual run, with a four-minute gap between edits. Rich Farmbrough, 15:18, 5 October 2010 (UTC).
Test edits should not be performed on live articles. Use a sandbox. Anomie 20:33, 5 October 2010 (UTC)
Wise words. Rich Farmbrough, 18:35, 7 October 2010 (UTC).

User:Magioladitis blocked

Comments are invited at Wikipedia:Administrators'_noticeboard#Block_review:_User:Magioladitis regarding a similar issue to the above. –xenotalk 18:00, 15 October 2010 (UTC)

Anyone seen a similar approach applied to this kind of challenge before?

Good afternoon (CDT),

The question regards what is to be done with 60K edits to 8K articles.

I have an overall outline of what I would like to do, but I'm a little short on details and expertise. Has anyone had to go through an exercise like this before? If so, are there artifacts from that process? A silver bullet perhaps?

Barring a Major Breakthrough, the first thing I will need is a way to download 60k usercontribs in XML or something useful (without getting another migraine from editing the "next date" every 500 lines, copying, pasting, etc.) so I can load them into MySQL here at home. A web page that dumps all usercontribs, or something similar I could adapt, would be Most Welcome.

There are bigger technical challenges I must overcome.

Per my comments at the other talk page,

  • I took a look at the WP APIs and database schema. Sampled the usercontribs, then started looking around for tools. Signed up for AWB, looks like approval may take a couple of days.
  • ...
  • The bot approach may be impractical for some steps. I would like to get more information on running sql queries straight at the dumps or other offline copies. What I have found so far is enticing, but lacks substance. I will probably have to download some sql result sets and work them over on my local machine as well.

So, I am nibbling around the edges, and any examples, artifacts or advice I can get are Most Helpful.

And, as I said, the first thing I need is a way to download 60k usercontribs so I can load them into MySQL.
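On the mechanics of that download: the MediaWiki API's list=usercontribs module returns results in batches along with a continuation token, so a small loop can drain all 60k rows without any hand-editing of "next date" values. A sketch of the continuation loop, written against a pluggable fetcher so it can be demonstrated offline (the fake pages below are illustrative; in real use the fetcher would be an HTTP GET against api.php with action=query, list=usercontribs and the continuation parameters merged in):

```python
def fetch_all_contribs(fetch_page):
    """Drain a continued usercontribs query.

    `fetch_page` takes a dict of continuation parameters and returns a
    decoded API response. The loop keeps feeding the server's `continue`
    block back into the next request until the server omits it.
    """
    contribs, cont = [], {}
    while True:
        data = fetch_page(cont)
        contribs.extend(data["query"]["usercontribs"])
        if "continue" not in data:
            return contribs
        cont = data["continue"]

# Offline demonstration with two fake "pages" of results:
pages = [
    {"query": {"usercontribs": [{"revid": 1}, {"revid": 2}]},
     "continue": {"uccontinue": "x", "continue": "-||"}},
    {"query": {"usercontribs": [{"revid": 3}]}},
]
it = iter(pages)
result = fetch_all_contribs(lambda cont: next(it))
print(len(result))  # 3
```

Each returned row carries the revision id, timestamp, page title and comment, which maps naturally onto a MySQL table for the offline analysis described above.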

Thank you for taking the time to read this.

Aquib (talk) 22:12, 10 October 2010 (UTC)

It seems to me the output generated for CCI reports would be well-adapted to this task. See here for example. –xenotalk 22:21, 10 October 2010 (UTC)
Yes, I believe it will - thanks! Aquib (talk) 23:20, 10 October 2010 (UTC)
The individual diffs have already been done. See WP:Jagged 85 cleanup#Cleanup lists. You can also see Talk:Science in medieval Islam#Misuse of sources which links to a subpage here with a sort-of analysis. Johnuniq (talk) 05:36, 11 October 2010 (UTC)
Thanks, I'll take a look. Aquib (talk) 09:16, 11 October 2010 (UTC)

Edit restriction proposal for Rich Farmborough

Please see Wikipedia:Administrators'_noticeboard/Incidents#Edit_restriction_proposal_for_Rich_Farmborough for the proposal to restrict Rich from: using AWB or any other mass-editing tool; running bots of any sort; making bot-like edits; making more than four edits per minute. - Kingpin13 (talk) 22:29, 20 October 2010 (UTC)

DASHbot ongoing false positives in Article Incubator

User:DASHBot is generating multiple false positives, often on the same files. For example, DASHbot had a false positive with the file En-derin-g-files.png[2] yesterday. The image is in use on an Incubator page at Wikipedia:Article Incubator/En Derin. I tried to shut down the bot, but the page User:DASHBot/F5 is protected. I posted to the bot owner's talk page, but got no response. Today DASHbot made the same false positive again, and again no response from the bot owner. We request the bot be temporarily turned off, until such time as the 'unused-image' logic is rewritten to avoid false positives for files in the Incubator. Thanks.     Eclipsed   ¤     08:20, 30 October 2010 (UTC)

DASHbot is right: non-free content is allowed only in articles (pages in the article namespace); it is not allowed in the Wikipedia namespace, see WP:NFC. -- d'oh! [talk] 08:40, 30 October 2010 (UTC)
Disagree. Wikipedia:Article Incubator is for creating/fixing articles to be moved into article namespace. If there are extra restrictions on file use within the Article Incubator, then we will be hobbling the work there and making it just that much harder for incubators to create good articles. I suggest we do things to help the incubators. I suggest the Article Incubator be off limits for DASHBot/F5 and other bots looking for 'unused' images.     Eclipsed   ¤     08:45, 30 October 2010 (UTC)
I posted a request on the Incubator talk page for Incubator editors to join in this discussion.     Eclipsed   ¤     08:54, 30 October 2010 (UTC)
You do realise that the non-free use rules are there to fit within the copyright LAWS, and not just to annoy you, don't you? Just delete the file, put a placeholder in the incubator article and if/when you graduate the article to mainspace, upload the file again. In this case, I would guess you can't use that file anyway, as I thought album covers can't be used in articles on the artists. Stop blaming the bot. The-Pope (talk) 09:21, 30 October 2010 (UTC)
Suggest all interested parties go to mediation: Wikipedia:Mediation Cabal/Cases/2010-10-30/Images in Article Incubator Thanks.     Eclipsed   ¤     09:51, 30 October 2010 (UTC)

Allmusic URL update

Hi. A Wikipedian helped update all the links at the German Wikipedia, and is requesting assistance in order to do the same thing here.

Please see Template talk:Allmusic#URL syntax has changed, and provide advice or assistance to User:Cactus26. (Please feel free to move this note, if there is a more appropriate place for it). Much thanks. -- Quiddity (talk) 21:01, 1 November 2010 (UTC)


Could Svick be approved to run the task approved for CleanupListingBot, either with CleanupListingBot or SvickBot? I'm rewriting the code so that the bot will run server-side on the Toolserver, and Svick has offered to run the bot. Smallman12q (talk) 16:30, 13 November 2010 (UTC)

Mobius Bot has gone berserk

Mobius Bot (talk · contribs)

Looks like someone took control of this bot's positronic brain. I've blocked it indef (with the probably not appropriate "vandalism only account" message, but time was of the essence) and notified the owner. Does anyone here have an explanation for this rather erratic behavior? Favonian (talk) 13:11, 13 November 2010 (UTC)

Well, there's no way that could have happened accidentally. Either the operator's gone rogue or someone with some programming ability got hold of the password. - Jarry1250 [Who? Discuss.] 13:25, 13 November 2010 (UTC)
Looks like the operator might have gone rogue... It's been almost 3 days since the last message was posted to their talk page. —Preceding unsigned comment added by Barts1a (talkcontribs) 02:28, 21 November 2010 (UTC)
User:Mobius Clock hasn't edited since May and might not know anything's happened. I've sent him an email. Adrian J. Hunter(talkcontribs) 15:17, 29 November 2010 (UTC)
If his wiki account was compromised, it's possible his email account has been as well, so exercise caution. Kaldari (talk) 22:53, 30 November 2010 (UTC)

Fixing problems caused by a bot

I'm curious what the policy is regarding automatic insertion of interwiki links by bots. What happens when a bot inserts a set of false links? I have an example where a meaningless page in an obscure dialect has been linked to a set of (properly-linked) pages in other languages. The dialect page is not related to the others but one particular bot thought it was, and linked that page to all of the others, and all of the others to that dialect page. When I try to delete the link from the en page, other bots now spot that missing link and "helpfully" reinsert it.

I contacted the owner of the original bot, but the owner was completely unhelpful and said they "didn't have time" to undo their bot's edit by removing all the links. It seems that regular users are then expected to register accounts at all the different wikipedias in order to fix the problems which the bot has caused. Apart from being irritating, isn't automatic behaviour like this ripe for manipulation? Isn't it leaving an inviting target for vandals to insert links, let the bots propagate the mess over a large list of pages and then make it difficult for regular users to undo the damage? Isn't there an easier way to fix this kind of problem (without escalating a bot vs bot war of course!) Thrapper (talk) 19:10, 21 November 2010 (UTC)

The behavior you describe is how the interwiki bots work: they basically try to add any missing links in the "interwiki graph" for a topic, to make it a complete graph. So if they see an interwiki link on an article in one language, they propagate it onto all the other corresponding languages. To remove a mistaken link, someone has to remove it from all languages at once.
The response from that bot operator does not seem appropriate, bot operators here on enwiki are expected to fix problems with their bots. Where did this conversation take place? I don't see it in your contributions. Anomie 19:42, 21 November 2010 (UTC)
You're right, it's not in my contributions because the bot owner doesn't live on enwiki. But I don't want to appear to be "telling tales" on the bot owner, I just wanted to ask these two general questions: is there an easy way to undo such edits (I assume most users do not have accounts on "all the other languages", so it's not trivial without leaving IP addresses everywhere), and secondly isn't this a dangerous opportunity for abuse, tricking robots into making large numbers of edits which are quite difficult to undo? Thrapper (talk) 23:40, 21 November 2010 (UTC)
Actually, most editors have an account on all the other languages, see WP:Unified login (a.k.a. WP:SUL). Anomie 02:22, 22 November 2010 (UTC)
Well, not all have their account active on 770 projects... but I do, and I have resolved a few of these interwiki problems in the past. What's the page? Rich Farmbrough, 22:15, 29 November 2010 (UTC).
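A toy model (the function name and the wiki labels are illustrative) of the behaviour Anomie describes above: the interwiki bots effectively turn each connected cluster of articles into a complete graph, so an edge removed on one wiki is still implied by the rest of the cluster and gets re-added on the next bot pass.

```python
def complete_interwiki(graph):
    """Turn each connected component of an interwiki link graph into a
    complete graph, mimicking what the interwiki bots converge towards.
    graph: dict mapping page -> set of pages it links to."""
    pages = set(graph) | {q for links in graph.values() for q in links}
    out, seen = {}, set()
    for start in pages:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                       # flood-fill one component
            p = stack.pop()
            if p in comp:
                continue
            comp.add(p)
            stack.extend(graph.get(p, ()))
            # links count in both directions for the bots' purposes
            stack.extend(q for q in pages if p in graph.get(q, set()))
        seen |= comp
        for p in comp:
            out[p] = comp - {p}            # link to every other member
    return out

# "wa" was wrongly linked only from "en"; after completion every page
# in the cluster links to it, so removing it from "en" alone won't stick.
links = {"en": {"de", "wa"}, "de": {"en"}, "fr": {"en", "de"}, "wa": set()}
completed = complete_interwiki(links)
print(sorted(completed["de"]))  # ['en', 'fr', 'wa']
```

This is why the fix has to be applied on all languages at once: any surviving copy of the bad edge re-seeds the whole cluster.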

BAG nomination

Hello! I invite you to comment on my BAG (Bot Approvals Group) nomination: Wikipedia:Bot Approvals Group/nominations/H3llkn0wz. Thank you. —  HELLKNOWZ  ▎TALK 11:19, 4 December 2010 (UTC)

Automatic taxobox

Hi, this might not be the most obvious place to ask but we're looking for anyone with a programming bent that might be willing to take a look at Template:Automatic taxobox to see if there are any ways that it could be improved (either in functionality or performance), before it is rolled out more widely. Comments would be very warmly received at Template_talk:Automatic_taxobox#Request_for_comments. Thanks! Martin (Smith609 – Talk) 13:16, 28 December 2010 (UTC)


Peter Karlsen has recently been blocked indef for socking, shouldn't we block his bot too? Usb10 Connected? 01:44, 31 December 2010 (UTC)

It is blocked since Dec 6. -- Magioladitis (talk) 01:52, 31 December 2010 (UTC)

Concerns about ClueBot_NG


I'm not sure if this is the right place. To summarize my thoughts from here: human editors need to spoon-feed User:ClueBot_NG with false positives in order for the bot to work satisfactorily. I will not submit my edit to the provided link, as it is not hosted on Wikimedia but on some private, non-open-source location. How come this bot is allowed on Wikipedia if it is not entirely managed within WM servers, nor is its functioning open to all, a.k.a. open source? (talk) 23:14, 5 January 2011 (UTC)

Just a quick note, since I'm about to head off to bed: occasionally source code will not be available because publishing it would make the system easier to get around (especially true for anti-vandalism bots). However, the Bot Approvals Group can still request it if they feel it's necessary. In this case it seems it can be accessed on request (but presumably only by a trusted member of the community, e.g. an established admin or BAG member). Yes, this bot is "trained" by humans; yes, it's off-wiki. This can be justified in the same way as keeping the source code vaguely secret (in addition, the MediaWiki software is not suited to the task, etc.). I'm sure you can request that someone else submit the diff, but it's difficult (not impossible for ClueBot NG) for the bot to tell the difference in a case like this (partly for this reason, I personally dislike having bots revert section removals etc., but it's something ClueBot and ClueBot NG seem to do). You will want to read Wikipedia:Bots/Requests for approval/ClueBot NG to really understand this bot and how it works. Sorry I don't have time for a more comprehensive comment at the moment. - Kingpin13 (talk) 23:31, 5 January 2011 (UTC)
While I like bots and think they are extremely useful, I extremely dislike the fact that some (or maybe only this one; I don't know much about the others) are working in the 'grey area of openness', which actually isn't open at all. (talk) 23:46, 5 January 2011 (UTC)
The source code for ClueBot NG is available, it's just not published. See User:ClueBot NG#Source Code. Mr.Z-man 00:10, 6 January 2011 (UTC)
This is quickly getting off-topic, but bots don't need to be open in any sense. DYKUpdateBot, for example, has full source code published onwiki. Meanwhile, its cousin DYKHousekeepingBot doesn't have source code published and could be under full copyright. For that matter, most of toolserver (the servers where most Wikimedia bots are based) runs on a Solaris operating system, which is (again) not open. Shubinator (talk) 04:07, 6 January 2011 (UTC)
And some editors use Microsoft OS and browsers which aren't open. Zomg! Rich Farmbrough, 02:15, 12 January 2011 (UTC).

CorenSearchBot editing from

Is this really CorenSearchBot? It seems (and claims) to be, but is the address approved for this bot? Can someone confirm its status and/or identity? Feezo (Talk) 01:44, 13 January 2011 (UTC)

I'm pretty sure it's CorenSearchBot: it's quite easy to make this mistake, and it's very unlikely somebody would start doing the bot's work right after it stopped. And no, bots have to be logged in in order to edit. I have asked the bot's operator to fix it. Svick (talk) 22:41, 14 January 2011 (UTC)


This user has bots running, and issues are being raised on his talk page; however, he does not appear to be active - his last edit was Nov 10. Should bots be running if the user is not around to respond to issues? That would seem to be against the bot policy. Exxolon (talk) 03:42, 15 January 2011 (UTC)

I'm wondering what issues you are talking about? I'm actively following both User talk:Misza13 and the progress of the bots. The bots are running fine 99.9% of the time, but with some minor bugs. Many users are actively following the talk page and help those users who haven't configured the bot correctly. If it is against some bot policy, then what type of action or sanction do you have in mind? --Kslotte (talk) 13:04, 15 January 2011 (UTC)
The policy section in question is Wikipedia:Bot_policy#Good_communication. Bots shouldn't be running if their operators are not around to respond to queries and fix issues. Exxolon (talk) 18:01, 15 January 2011 (UTC)
The spirit of that policy is that if there's a major malfunction in the bot, the bot operator should respond. If there's a minor occasional glitch in the bot, but it's still performing very useful tasks with no objections to its running, then it should be allowed to continue. I believe the appropriate policy essay to correspond to that is WP:Wikilawyering. (X! · talk)  · @801  ·  18:13, 15 January 2011 (UTC)
That's not a policy, that's an essay. Exxolon (talk) 18:26, 15 January 2011 (UTC)
Corrected. Regardless, the point still stands. (X! · talk)  · @847  ·  19:20, 15 January 2011 (UTC)
To quote that very section: "At a minimum, the operator should ensure that other users will be willing and able to address any messages left in this way if they cannot be sure to do so themselves." That seems to be satisfied here. Anomie 19:45, 15 January 2011 (UTC)
  • Misza's got a very popular set of archiving bots; there's really only a few "real" issues that are related to the lack of ongoing maintenance of the bot (most notably, handling of blacklisted urls which results in a less-than-graceful failure to archive). And Kslotte is right in that there's a good number of helpful talk page stalkers that respond on Misza's behalf for the rest of the stuff (which is usually attributable to user error). –xenotalk 23:35, 15 January 2011 (UTC)

Best practice for notifying users?

Last week I made some changes to SuggestBot's handling of its regular users, switching to using a template/userbox similar to what we've been doing on the Norwegian Bokmål Wikipedia since early December. We've already got some new users signed up using this system and things are running smoothly, so I now want our existing regulars to start using it too.

This has left me wondering what the best practice is for notifying our (currently ~350) users of the change. Previously, people signed up by adding their name to User:SuggestBot/Regulars; it's now a redirect. I looked through the noticeboard archives and failed to find anything resembling this problem being mentioned. So far I've come up with a few options:

  • Add a section explaining the change to our usual post suggesting articles to edit, and change the edit comment to reflect this.
  • Have SuggestBot post a message about the change to each user.
  • Post the message using my own account.

Perhaps the main reason for asking is that we'd like to minimize the possibility of a user who wants to stay subscribed not getting the message, which is why having the bot (or me) post this message separately is on the list of ideas. On the other hand, I don't want to step on any toes here. Since I'm a researcher, I'm tempted to try all three approaches and see if any works better, but again I want to make sure I'm not going against WP policy.

Would appreciate suggestions on how to proceed. Cheers, Nettrom (talk) 17:19, 12 January 2011 (UTC)

No comments? Maybe I'm asking these questions in the wrong place, and WP:VPM is more appropriate? Cheers, Nettrom (talk) 15:59, 14 January 2011 (UTC)
Maybe you could try getting MessageDeliveryBot (talk · contribs) to send a message to all the users. I have never used that bot, so I do not know how to do it. EdoDodo (talk · contribs) is the owner. Reaper Eternal (talk) 16:04, 18 January 2011 (UTC)
That sounds like a good possibility, I'll look into that, thanks so much! Cheers, Nettrom (talk) 15:58, 20 January 2011 (UTC)

1000 AWB edits/hour -- bot flag?

Does 1000 edits/hour on AWB require a bot flag? --Kleopatra (talk) 04:00, 26 January 2011 (UTC)

No. See my response here. Simply put, I was editing fast, but all edits were made by hand, and all lists were constructed by me without the use of any external tool other than AWB or any special rights beyond the standard ones that come with AWB permission. —Justin (koavf)TCM☯ 05:37, 26 January 2011 (UTC)
17 epm, or about 4 seconds per page, usually cannot receive enough human attention. However, this would not necessarily imply the edits fall under a bot task, as the edits may actually require human supervision. —  HELLKNOWZ  ▎TALK 10:23, 26 January 2011 (UTC)
I'm not sure I understand your answer. Obviously this is editing too fast to get human attention, and considering the number of posts on the user's page about problem edits, it is clear he/she failed to give it the human attention.
However, the question is, is a person who is using AWB at this rate required to have a bot flag? This appears to be the case under the bot and AWB rules. Do you know the answer to this question? --Kleopatra (talk) 14:33, 26 January 2011 (UTC)
I was saying that someone editing at 1000 eph cannot give due diligence to the edits they make, therefore such editing is most likely semi-automated. Even with a bot flag and the editing designated as a manual task, this would exceed human capability. A bot flag by itself would not solve that issue. In reality, any 1000 eph task would need a bot flag (as pointed out below), because such editing would clearly be a semi-automated/automated task. What I was implying is that merely having high edit rate does not warrant a bot flag. Approved tasks do. High edit rate warrants a question if the edits are instead semi-automated, for which a bot account is required. —  HELLKNOWZ  ▎TALK 16:27, 26 January 2011 (UTC)
I see. You were elaborating on the policy details in regards to various methods of high edit counts. Thanks. --Kleopatra (talk) 16:45, 26 January 2011 (UTC)
Yes; any single task of 1000 edits should really have bot approval and a bot flag. This is true even if the edits will require some manual review. Making 1000 edits at a rate of 4 seconds per page is, fundamentally, automated editing. I am doubtful that anyone has the mental endurance to make a detailed check of the contents of that many pages in a row at that rate; if there were going to be uncommon errors, they would just be missed. So we need to treat the job as something that will not have manually-reviewed edits: a bot job. — Carl (CBM · talk) 12:06, 26 January 2011 (UTC)
This seems to be the case. However, the user above appears to be a member of the bot approval group, while you appear not to be. Your interpretation, however, is mine: this rate is automated editing and automated editing on wikipedia requires a bot flag. Thanks for commenting. --Kleopatra (talk) 14:33, 26 January 2011 (UTC)
Who is a member of BAG and commented above? Koavf is not on the bot approvals group. — Carl (CBM · talk) 15:56, 26 January 2011 (UTC)
Hellknowz. I did not respond to Koavf, as his answer is not to my question. --Kleopatra (talk) 16:13, 26 January 2011 (UTC)
Yes, 1000 edits per hour should run on a bot account with the bot flag to reduce impact to other editors. It should also be approved and have the consensus of the WikiProjects that are being acted upon; I see numerous complaints from various WikiProjects at Koavf's talk page. There are dedicated WikiProject tagging bots available; if a project wanted a bot to tag for their project, they could surely engage one (perhaps one that would also auto-assess and save, rather than make, work). –xenotalk 14:44, 26 January 2011 (UTC)
What is the proper course of action to stop this editor? I just spent an hour correcting around 14 of his edits. I need him to revert all of his algae and protist page edits that have not been edited by someone else and to stop using AWB. How do I go about this? Thanks, everyone, for the help. --Kleopatra (talk) 16:13, 26 January 2011 (UTC)
(1) I think he has already stopped biology articles, and his edit rate now seems more manual. (2) If the Wikiproject involved wants the tags removed, we can easily run a bot to remove them. The project just needs to come to a consensus and make the request. — Carl (CBM · talk) 16:22, 26 January 2011 (UTC)
I want him/her to revert all of his edits, so that I don't have to manually check 1000 articles. It took me over an hour to check 14 of them. I don't have that much time, and he/she made a sufficient number of errors (about 50% of the ones I checked were wrong) that leaving his/her edits until they can all be checked is not good enough. WikiProject Algae already has a bot that tags its articles, although there is no reason that this editor could not also add banners, at the request of the project, or with knowledge of what he/she is doing. That is not the case, though. The user has done this before and been blocked for it; although the prior blocks were 4 years ago, the user already knows that editing at this rate is not allowed. AWB is a tool that requires an agreement for proper usage. From this post/board, I would like the bot approvals group members to warn this user on his/her talk page (for other editors/admins to reference) that this rate of editing requires a bot flag. That way, if he does it again, he can be immediately blocked, and hopefully caught before he/she does as much damage and makes as much work for others as he/she has done this time. --Kleopatra (talk) 16:43, 26 January 2011 (UTC)
Wow, 1000 an hour is generally more than even bots are allowed to do. So a person using AWB most definitely should not be doing this many. -DJSasso (talk) 16:18, 26 January 2011 (UTC)
I just noticed that. Maybe I read his edit rate incorrectly? --Kleopatra (talk) 16:43, 26 January 2011 (UTC)
  • I've disabled Koavf's access to AWB pending a satisfactory explanation of the above and a resolution of the errors that have been pointed out. –xenotalk 14:00, 27 January 2011 (UTC)

Mjbmrbot broken


Mjbmrbot is broken. I notified its owner here, but the bot is still running, apparently. It is -according to the owner- a standard pywikipediabot, but it is making a mess of interwiki links. Can someone take appropriate action? (Not just en.wikipedia is affected!) Thanks Buzz-tardis (talk) 13:29, 26 January 2011 (UTC)

Blocked the bot until issue is fixed. Better contact meta for global block. -- Magioladitis (talk) 13:34, 26 January 2011 (UTC)
The bot is not using standard interwiki settings; it has cosmetic changes enabled. ΔT The only constant 13:40, 26 January 2011 (UTC)
Searching the right place on meta... Buzz-tardis (talk) 13:45, 26 January 2011 (UTC)
Requested a global block here... Probably the wrong place, but could not find a more appropriate one, on such short notice. (bot still editing...) Buzz-tardis (talk) 14:09, 26 January 2011 (UTC)
Request withdrawn. Buzz-tardis (talk) 14:34, 26 January 2011 (UTC)

Bot has approval by Wikipedia:Bots/Requests for approval/Mjbmrbot. I'm not an expert in pywikipediabots. Can someone indicate a bunch of the bot's false edits? -- Magioladitis (talk) 13:46, 26 January 2011 (UTC)

See the link I gave earlier... examples are there. Buzz-tardis (talk) 13:49, 26 January 2011 (UTC)
It's moving interwiki links around the page, placing them below stub templates, which goes against the current standard. Interwiki bots should not use that feature. ΔT The only constant 13:50, 26 January 2011 (UTC)
These "features", why do they exist? -- Magioladitis (talk) 13:52, 26 January 2011 (UTC)
It's there because there was originally a use for it; it just has not really been maintained as different projects changed their formatting. I would like to see the results of their version check. I doubt that they are updating their pywikipedia framework regularly. ΔT The only constant 14:02, 26 January 2011 (UTC)
I update the bot every day, but when Buzz-tardis told me there was a problem with my bot I updated it immediately again. If there was a problem with my bot it should be solved, but if it still has a problem, it is not mine; you should contact the pywikipedia authors Mjbmr Talk 14:03, 26 January 2011 (UTC)
It's not a problem with pywiki, I know it very very well. Can you run the version check in your pywiki folder and post the results? ΔT The only constant 14:05, 26 January 2011 (UTC)

First step is that the bot stops editing in mainspace. Then we'll find a solution. -- Magioladitis (talk) 14:07, 26 January 2011 (UTC)

Results of the version check:
Pywikipedia [http] trunk/pywikipedia (r8816, 2010/12/31, 12:49:51)
Python 2.7.1 (r271:86832, Jan 4 2011, 13:57:14)
[GCC 4.5.2]
use_api = True
use_api_login = True
unicode test: ok Mjbmr Talk 14:11, 26 January 2011 (UTC)
What setting do you have for cosmetic_changes in your user-config? ΔT The only constant 14:14, 26 January 2011 (UTC)
Yes, I am using cosmetic changes, and I always use cosmetic changes as many bots do; there wasn't any problem before. Please check the bot's last edit; there is no problem with that anymore Mjbmr Talk 14:17, 26 January 2011 (UTC)
That looks ok now... But what changed to make it so? (In other words: is it a fluke, or is it really fixed?) Buzz-tardis (talk) 14:24, 26 January 2011 (UTC)
No, it seems the bot is fixed since the update, even on other wikis Mjbmr Talk 14:28, 26 January 2011 (UTC)

mjbmr, can you please turn cosmetic changes off? We have many problems caused by them. -- Magioladitis (talk) 14:32, 26 January 2011 (UTC)

I switched cosmetic changes to off for all wikis Mjbmr Talk 14:35, 26 January 2011 (UTC)
Unblocked. Please run without the cosmetic changes because we have many complaints about them. Moreover, they heavily depend on the project and you are running on many projects. Please also consider fixing the iw order where it was messed up. Thanks and happy editing. -- Magioladitis (talk) 14:40, 26 January 2011 (UTC)
OK, I won't do cosmetic changes since you told me, and I don't know where it messed up; I'll fix it as far as I can, thank you Mjbmr Talk 14:43, 26 January 2011 (UTC)
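For reference, the switch described here is a single option in a pywikipedia user-config.py. The following is a sketch based on the framework's documented option names for that era, not Mjbmrbot's actual configuration file:

```python
# user-config.py -- pywikipedia configuration (illustrative sketch only)
mylang = 'en'
family = 'wikipedia'

# Cosmetic changes reorder interwiki links and stub templates according to
# per-project conventions, and are a frequent source of complaints.
# Keep them off for all projects the bot runs on.
cosmetic_changes = False
```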

Thanks all, and sorry for the hassle. Buzz-tardis (talk) 14:47, 26 January 2011 (UTC)

No problem, it was obligatory Mjbmr Talk 14:51, 26 January 2011 (UTC)

What is this?--Амба (talk) 01:17, 27 January 2011 (UTC)

Please take a look at this Mjbmr Talk 08:37, 27 January 2011 (UTC)


How were the last 15,000 expand templates replaced according to the holding pen instructions (if they weren't just removed) in such a short space of time? Does anyone know? Rich Farmbrough, 02:17, 28th day of January in the year 2011 (UTC).

I would ask User:Magioladitis first. — Carl (CBM · talk) 02:23, 28 January 2011 (UTC)
Already have. Rich Farmbrough, 04:53, 28th day of January in the year 2011 (UTC).


{{Run-page-shutoff}} has been nominated for deletion. You may be interested in this issue. (talk) 05:14, 30 January 2011 (UTC)

Mjbmrbot broken still/again

It's at it again: see? Block, someone? -- Buzz-tardis (talk) 05:11, 31 January 2011 (UTC)

Global lock requested here. -- Buzz-tardis (talk) 05:24, 31 January 2011 (UTC)

Another example. -- Buzz-tardis (talk) 05:27, 31 January 2011 (UTC)

And yet another. -- Buzz-tardis (talk) 05:40, 31 January 2011 (UTC) (Edit: Strike. See below. -- Buzz-tardis (talk) 20:28, 31 January 2011 (UTC))
etc. -- Buzz-tardis (talk) 05:44, 31 January 2011 (UTC)
Note that we here can't do anything about any of those edits, as none are on enwiki. I don't know whether or not any admin will block here without evidence that it is sorting wrong here (it seems all your examples are in the "default" iw sorting category). Anomie 05:54, 31 January 2011 (UTC)
Note that the bot has a history of getting broken; at least, this has occurred on Wikinews before too. Diego Grez (talk) 06:15, 31 January 2011 (UTC)
This one is on en.wikipedia... -- Buzz-tardis (talk) 06:11, 31 January 2011 (UTC) (Edit: Strike. See below -- Buzz-tardis (talk) 20:28, 31 January 2011 (UTC))
I stopped the bot, until the problem gets fixed Mjbmr Talk 07:47, 31 January 2011 (UTC)
Ok. I withdrew the request for global lock. Please make sure the bot is /really/ fixed this time, before you start it up again. Also, could you please try to clean up after it? -- Buzz-tardis (talk) 08:04, 31 January 2011 (UTC)
Not try, but do. Per BOTPOL, it is the bot operator's responsibility to correct any bad edits by the bot. —  HELLKNOWZ  ▎TALK 08:45, 31 January 2011 (UTC)
If the bot is stopped, for more than an hour by now, then why do new edits show up here? Mjbmr? Please explain! -- Buzz-tardis (talk) 09:16, 31 January 2011 (UTC)
That one seems to match m:Interwiki sorting order#By order of alphabet, based on local language, which is the one that is listed as to be used for enwiki. What's wrong with it? Anomie 12:06, 31 January 2011 (UTC)
Ow... I was just looking for ones where the order changed. My bad. I struck the ones which were correct after all, above... Apparently only the ones involving rue:-interwikis remain. -- Buzz-tardis (talk) 20:28, 31 January 2011 (UTC)
I guess the bot is running Pywikipedia and it's running an old version. Please update, this is a known (and fixed) bug. multichill (talk) 09:46, 31 January 2011 (UTC)
Not all of these listed diffs are showing a bug. As Anomie said, en-wiki has opted for an alphabetical sorting order, not the language-code order, and it is a bot's normal behavior to sort interwiki links as requested. Like Multichill, I guess the bot is outdated. The wrong sorting order for rue-wiki indicates it is a pre-pyrev:8893 revision. Xqt (talk) 13:55, 31 January 2011 (UTC)
If the changes are frequent, is there a way to force bots to update prior to running? -- Magioladitis (talk) 14:03, 31 January 2011 (UTC)
Fixed, I am running it again Mjbmr Talk 12:31, 1 February 2011 (UTC)

Automate stock information through RSS feeds

Wikipedia:Village pump (proposals)#Automate stock information through RSS feeds - I have proposed using a bot (or bots) to automate stock information through RSS feeds, specifically in company infoboxes. This is not a bot request, I just thought that bot owners might have something of value to add to the discussion, as it could eventually lead to a bot request. Thanks! ▫ JohnnyMrNinja 07:57, 3 February 2011 (UTC)

AN/I discussion about 27,000 automated edits

This discussion at AN/I may be of interest to bot owners. User:Plastikspork runs a bot and owns an alternate account, User:Plasticspork. When 27,000 edits to taxoboxes were requested here, Plastikspork volunteered to make the edits. However, instead of using his bot account, Plastikspork used AWB and his alternate account to make the edits. After I asked him/her to link to the RfBA for the task, I then noticed his bot was not making the edits, and crossed out my question.[3] Noticing that the category of 27,000 articles had plummeted to some 21,000, I went back to my discussion on this talk page and noticed that he was making the edits semi-automatically. This was when I found the alternate account and the edits, rather than him/her informing me of the existence of this account when I raised the issue.

I post this here because I believe that this number of automated edits might require a bot. Please discuss any non-bot specific aspects of this at the AN/I discussion. --Kleopatra (talk) 16:36, 2 February 2011 (UTC)

I have replied at AN/I (my experience of AN/I being that they like to have relevant points CC'ed to them, as it were). In essence, I do not feel that a RfBA was necessarily required for this task, although obviously that is a minimum and not a maximum bar. I also haven't checked the rate of edits being made, and they may indeed mandate a RfBA, I don't know. Regards, - Jarry1250 [Who? Discuss.] 20:38, 2 February 2011 (UTC)
The stretch that I looked at in his edit history showed approximately 3 edits per minute, which if I'm not mistaken is below the max threshold for bots. Bob the WikipediaN (talkcontribs) 03:16, 3 February 2011 (UTC)

Plastikspork speculates there may be a bug in his browser or a conflict with one of the plug-ins. I have posted at AN/I about this issue. However, if there is some action that bot owners usually take under such circumstances (meaning a potentially corrupted bot account), this is notice to the board that this may be an issue.

"The only reason that I can come up with for why I didn't see the message earlier is some javascript bug in my browser, or a conflict with one of the plugins (e.g., noscript or greasemonkey)."

--Kleopatra (talk) 23:27, 5 February 2011 (UTC)

Frescobot and Commons categories

I just found this bot editing articles on my watchlist to add commons category boxes per Wikipedia:Bots/Requests for approval/FrescoBot 6. Those boxes aren't always placed in articles on purpose based on the editorial decisions of the articles' editors. One good reason: all of the photos in the category are already in the article. Why entice a reader with the promise of more content, and then show them everything that they've seen in the article? Just something to consider since I've had to remove the box from 13 articles. The two articles where it was left in place each had one additional photo in the category. I'm waiting for the bot to hit the end of the alphabet, but I expect more edits to undo yet. Just some food for thought. Imzadi 1979  09:26, 1 February 2011 (UTC)

We don't promise more content, we just say we have content at Commons. Some of this content may already be used in the article. Maybe now the categories don't contain a lot of images, but over time more images will end up in these categories. You shouldn't remove the links. multichill (talk) 10:04, 1 February 2011 (UTC)
Many times they weren't put there on purpose originally, and now a bot is adding them. If the category box adds nothing to the article, it is pointless to leave it in place. Such an action should have been announced at a wider venue for input than a bot request. Besides, in some topic areas, there's no guarantee of additional content. M-554 (Michigan highway) was only 0.7 miles (1.1 km) in length before it was decommissioned in 2005. What's the use in plopping a category for one photo that's in the article now? (The category has been emptied and tagged for deletion, btw.) Imzadi 1979  10:28, 1 February 2011 (UTC)
Perhaps, the bot can only add the commons box if Commons contains images that are not found in the article? That would make perfect sense. Otherwise I have to agree that linking to the same images that the article already contains is misleading, even if the box's wording implies there may not be any. —  HELLKNOWZ  ▎TALK 10:33, 1 February 2011 (UTC)
And if that's the case, at a future date, if there are 10 photos in the category, but only 3 in the article, the bot will add the box. The article's editors will be alerted to the additional photos by the addition of the box and then they can look through the category for better content to swap in and out of the article. Sounds like a win-win to me. Imzadi 1979  10:42, 1 February 2011 (UTC)
The commons box should always be there because it's a form of interwiki link. We don't remove interwiki links if the content on another wiki is the same (albeit in another language). -DJSasso (talk) 12:48, 1 February 2011 (UTC)
And while I agree with normal interwiki links, the commons box is rather obtrusive. When I opt to include a commons link in my articles I usually use the inline variant. --AdmrBoltz 16:25, 1 February 2011 (UTC)

I agree with DJSasso: it's important to extend these interwiki links. The template may be a bit obtrusive, but that's just a cosmetic issue (well... change it). There is nothing wrong with placing the link when there are no additional images on Commons: first of all, the number of (well categorized) images on Commons is going to grow; secondly, the link is useful for checking for new pictures. The category, and likewise its link, does not have to contain new images in order to be useful. Moreover, imho a bot should not remove or add the link to Commons every time someone adds/removes a file on Commons or within the article... -- Basilicofresco (msg) 18:21, 1 February 2011 (UTC) I almost forgot: last but not least, categories on Commons can contain subcats. These subcats will likely contain media related to the article, so even a category without images is a good target for a link. -- Basilicofresco (msg) 18:28, 1 February 2011 (UTC)

Yeah, removing commonscats is not a good idea. It is an interwiki link, and removing those is a bad idea. If you do not like the layout you can fix it. If there is an image on en-wiki and we move the file to Commons, the commonscat helps the bots find the proper category. If you remove the commonscat the file will most likely not be categorized. --MGA73 (talk) 11:05, 6 February 2011 (UTC)
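The condition suggested earlier in this thread (only add the box when Commons offers something the article doesn't) would be a one-line check for a tagging bot. A hypothetical sketch, not any bot's actual code:

```python
def should_link_commons_category(article_files, commons_files):
    """Link the Commons category only when it contains at least one
    file not already used in the article. (Subcategories, raised by
    Basilicofresco above, would need a separate check.)"""
    return bool(set(commons_files) - set(article_files))
```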

Bot gone wild by IP

Please see Seems like a bot gone wild. -DePiep (talk) 01:01, 11 February 2011 (UTC)

e.g. this. -DePiep (talk) 01:04, 11 February 2011 (UTC)
Bot in question is Zorglbot (talk · contribs), op by Schutz (talk · contribs), and was notified by Plastikspork (talk · contribs) at User talk:Schutz#Bot logged out --AdmrBoltz 02:08, 11 February 2011 (UTC)
All OK by me. I was wondering if some Bot had stopped breathing. Then what?     -DePiep (talk) 02:14, 11 February 2011 (UTC)

Luckas-bot is continually adding irrelevant interwiki links to articles

User:Luckas-bot is continually adding an actor's article from the Swedish wikipedia (sv:Leslie H. Martinson) to an article about a book (Lad, A Dog).[4] This is not the first time it (adding irrelevant links) has happened.[5] Can someone please block the bot to get its operator to notice the problem and fix it. Jappalang (talk) 15:29, 11 February 2011 (UTC)

This should be fixed now [6]. The reason was that "Lad: A Dog", when surrounded by square brackets, is read as an interwiki link to the article "A Dog" at the Ladino Wikipedia ("lad:" being its language prefix). As such, when a user created the article at sv. containing such a link [7], the bots did their thing and spread the false-positive interwiki link. –xenotalk 15:40, 11 February 2011 (UTC)

WP 1.0 bot: co-maintainer wanted

I'm looking for a co-maintainer for User:WP 1.0 bot. This is the bot that tracks WikiProject article assessments. It is also closely involved with the Wikipedia:Version 1.0 Editorial Team that produces packaged DVDs of selected Wikipedia articles. The bot itself runs on the toolserver, as does the web interface that allows users to query the assessment data. It's written in Perl at the moment, but the data is stored in a proper database on the toolserver where it can be accessed by tools in any language.

What I'm looking for is someone who is interested in contributing to the WP 1.0 project or being a co-maintainer of the bot. The bot code was rewritten about a year ago, and is stable, but there are many features and improvements that could be made. I'm happy to give commit access to anyone who wants to contribute to it, and in particular I'm looking for a co-maintainer to share admin access to the bot's account. I think it's not ideal to have such a key bot dependent on a single bot operator.

This is a big project - there are over a thousand WikiProjects that rely on the bot, and the bot is one of the few that has made over 1,000,000 edits. I have always found it interesting and satisfying to work on, but it's grown enough that I think additional maintainers would be helpful. I would be happy to mentor and help new maintainers learn how the system is designed. — Carl (CBM · talk) 18:55, 21 February 2011 (UTC)

RIP Mobius Bot?

As discussed in an archived thread, Mobius Bot (talk · contribs) went berserk late last year, and its owner, Mobius Clock (talk · contribs), hasn't been heard from since long before that. I emailed Mobius but got no response. Is there any way the bot can be restored or replaced? Or could some other bot(s) take over Mobius Bot's functions? It seems a dreadful shame to lose a useful bot over something like this. Adrian J. Hunter(talkcontribs) 10:00, 9 February 2011 (UTC)

Should be fairly trivial to code. ΔT The only constant 13:31, 9 February 2011 (UTC)
Here's the source: (X! · talk)  · @609  ·  13:36, 9 February 2011 (UTC)
*bump* Anyone willing to do the deed? Considering time required:utility gained, this would surely be an incredibly time-efficient way to help the 'pedia. Adrian J. Hunter(talkcontribs) 13:53, 24 February 2011 (UTC)
It may be worthwhile to transplant this thread to Wikipedia:Bot requests. –xenotalk 14:06, 24 February 2011 (UTC)
Ah, hadn't thought of that... Will do, thanks. Adrian J. Hunter(talkcontribs) 14:08, 24 February 2011 (UTC)

A surprising find

I recently discovered this page with several users in the top five with shockingly high edit numbers, far higher than a user should have without a bot flag. I also looked at the contributions pages and most have edit counts far exceeding what has been explained to me to be appropriate. I also reviewed the list of bots and do not see these user accounts listed there, although several do maintain bots that have been approved for use. Since server resources, bot edit policy and the filling of watchlists have become such a significant area of concern, I wanted to bring these to someone's attention. --Kumioko (talk) 19:55, 24 February 2011 (UTC)

How to opt in to the edit counters?

Hi. I have an account for my bot. The policy appears to be that I may not log in as the bot and edit manually in its name. However, I'd like to opt in to the edit counters, which is done by creating a subpage in the bot's userspace and can only be done by the account holder. What do I do? - Richard Cavell (talk) 05:34, 5 March 2011 (UTC)

That would be fine to do. The rules you are referring to are about doing content edits and such whilst logged into your bot account. Peachey88 (T · C) 05:41, 5 March 2011 (UTC)
Or you can get an admin to do it; I've opted the bot in for you anyway. If you meant some tool other than X!'s, feel free to do it yourself from the bot, no one is going to kick up a fuss about that. Also, I know that at least one of the old counters checks that the last edit to the opt-in page was by the user themselves, but X!'s doesn't seem to mind that. - Kingpin13 (talk) 06:29, 5 March 2011 (UTC)
Thanks for your replies, and for creating the page. - Richard Cavell (talk) 06:42, 5 March 2011 (UTC)

CfR discussion related to Category:Living people

Any bot operator whose bot deals with any BLP-related issues should be aware of this CfR discussion, as its results may result in renaming categories which the bots handle. עוד מישהו Od Mishehu 13:26, 6 March 2011 (UTC)

User:RebelRobot updating ISBNs

I noticed in this edit that User:RebelRobot was "fixing" ISBN formats, which caused a link to an image to be broken. I left a message about that, which I'm sure can be fixed for the future. However, due to the volume of edits, it's impossible to humanly check them all for broken links. Also, I wondered whether a bot requires community approval to do this type of reformatting, and if such approval has been given. --Rob (talk) 11:01, 8 March 2011 (UTC)

I do not see approval for this task at all. Wikipedia:Bots/Requests_for_approval/Archive_4#User:RebelRobot is the only approval and that's for interwikis. —  HELLKNOWZ  ▎TALK 11:42, 8 March 2011 (UTC)
Looking around for discussion I found Wikipedia:Bots/Requests for approval/RjwilmsiBot 6, which approves a different bot, to do a similar ISBN reformatting. There's also some discussion at the end of Wikipedia talk:ISBN. However, I really don't see many people discussing the issue, particularly using the bot to make the change, and don't see a clear consensus. I think any editing on such a large scale should really have more people involved, and there should be a consensus. Rob (talk) 18:37, 8 March 2011 (UTC)
The bot was already temporarily blocked and operator notified of necessity for consensus and BRFA. —  HELLKNOWZ  ▎TALK 18:41, 8 March 2011 (UTC)

Hi guys. The bot made a blunder in Dianetics: The Modern Science of Mental Health simply because the image title contains ISBN1403105464, and neither the original developers of the script I'm running nor I imagined that an image title would at some point contain an ISBN code. Had you not pointed out this mishap in time, it would have done the same for the other similarly named images on en.wp.

  1. File:Isbn-0670028967.jpg (bot would probably change it to ISBN 0670028967.jpg in articles)
  2. File:Cipolla La Epopeya ISBN-9875560405.jpg (bot would probably change it to ISBN 9875560405.jpg in articles)
  3. File:Cipolla IKA la aventura ISBN-9875560065.jpg (bot would change it to ISBN 9875560065.jpg in articles)
  4. File:Konarski Czerwone Maki-ISBN8388736531.jpg (bot would change it to ISBN 8388736531.jpg in articles)
  5. File:Mindbenders-1971-cover-ISBN0854350616.jpg (bot would change it ISBN 0854350616.jpg in articles)
  6. File:Isbn-0670028967.jpg (bot would change it to ISBN 0670028967.jpg in articles)
  7. File:Silences of Hammerstein pp 82 87 isbn 978-1906497224.pdf (bot would change it to ISBN 978-1906497224.jpg in articles)
  8. File:Isbn 84-297-4074-0 Pep Trujillo.jpg (hosted at commons; bot would change it to ISBN 84-297-4074-0.jpg in articles, although that image is currently used only on es.wp, hy.wp and ko.wp so it's irrelevant to this discussion)

Those 8 files are used in 7 articles, so technically all I need to watch for is those 7 articles. Apart from these cases, there is no possibility that the bot would create broken links in articles, and I wouldn't say watching over 7 articles would be that difficult for me.

The bot performs three types of edits:

  • fixes ISBN: number/n u m b e r/n-u-m-b-e-r to ISBN number/n u m b e r/n-u-m-b-e-r (explanation ISBN: number does not work in wikicode; see example)
  • changes ISBN n u m b e r to ISBN n-u-m-b-e-r (explanation both work, but in the latter its easier to distinguish the digits, see example)
  • fixes cases where the digits are separated by – or — (explanation digits separated by n/mdash do not work in wikicode, see example)

Please note that the bot does not convert ISBN number into ISBN n-u-m-b-e-r (as shown here). It merely performs a part of what SmackBot was doing in 2006. Considering it's not making any substantial changes and that the community already approved of these at some point, I see no reason to open a new discussion about it. So far the bot edited some 6000 articles (! –> Focke-Wulf Fw 187). Given that en.wp has 6,177,202 articles, I wouldn't really label this as editing on such a large scale. --Rebel (talk) 05:58, 9 March 2011 (UTC)
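For illustration, the first and third fixes in the list above can be expressed as regex substitutions over wikitext. This is a hypothetical re-implementation, not the bot's actual code; fix 2 (hyphenating space-separated digits) needs ISBN-aware grouping and is omitted. Titles like File:Isbn-0670028967.jpg or ISBN1403105464 happen to be untouched because the patterns require an uppercase "ISBN" followed by a colon or a space, but a real script should still exclude [[File:...]] links explicitly rather than rely on that:

```python
import re

EN_DASH, EM_DASH = "\u2013", "\u2014"

def fix_isbns(wikitext):
    # Fix 1: "ISBN: 123..." -> "ISBN 123..."; the colon prevents
    # MediaWiki from recognising the ISBN magic link.
    wikitext = re.sub(r"\bISBN:\s*(?=\d)", "ISBN ", wikitext)

    # Fix 3: en/em dashes between the digits -> plain hyphens;
    # dash-separated digits do not form a magic link either.
    def hyphenate(match):
        number = match.group(1).replace(EN_DASH, "-").replace(EM_DASH, "-")
        return "ISBN " + number

    return re.sub(r"\bISBN (\d[\d\u2013\u2014-]*[\dXx])", hyphenate, wikitext)
```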

Another bot being approved for a task at some point does not mean that you can do it with your bot without a BRFA. It may be that consensus has changed since the other bot ran, it may be that there were special concerns or restrictions on the other bot that you are not aware of (and that you would be violating), or it may just be that we don't find it necessary for two bots to be doing that particular task. "Editing on a large scale" is considered by absolute number of edits, not by the fraction of articles edited. Anomie 13:55, 9 March 2011 (UTC)
6k is a lot. As Anomie already says, bot tasks are not measured relative to the total number of articles but in absolute terms. WP:BOTPOL does not mention anything about it being OK for bots to do the same tasks as already approved for some other bot. Which means you need to follow the same WP:BRFA practice as other bots. —  HELLKNOWZ  ▎TALK 16:25, 9 March 2011 (UTC)
I think it is only fair to tell Rebel that an admin might stop his bot running even if he does get approval at Wikipedia:Bots/Requests for approval for the task. Mr Stephen (talk) 17:56, 9 March 2011 (UTC)
An admin may stop any bot from running if it breaks policy, consensus, or goes rogue. I don't find that fact to be any more relevant in this particular case though. —  HELLKNOWZ  ▎TALK 23:07, 9 March 2011 (UTC)

API categorymembers Issue

When using the API's categorymembers to retrieve the contents of a large category, I'm not getting all the members returned. I've double checked it's not a bug in my code by using AWB and the api via the browser. For example, when I query Category:Wikipedia good articles, the page Talk:Stargazy pie is not returned, but the category is listed on the talk page and the talk page is listed in the category. Has anybody else seen this issue? Am I missing something? Also, if it is an API bug, does anybody know the best place to report it? Thanks. -- JLaTondre (talk) 21:07, 12 March 2011 (UTC)
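For anyone reproducing this, the client-side continuation loop for list=categorymembers looks roughly like the sketch below (the parameter names are the real API ones; `fetch` is a stand-in for the HTTP call to /w/api.php). A bug in handling the continue/cmcontinue tokens is a common way members silently go missing on the client side, though as the thread shows, this particular case turned out to be a server-side sorting bug:

```python
def category_members(fetch, category):
    """Yield every member of `category`, following API continuation.

    `fetch(params)` must return the decoded JSON for one API request;
    in real use it would wrap an HTTP GET against /w/api.php.
    """
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",
        "format": "json",
    }
    while True:
        data = fetch(dict(params))
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # carries cmcontinue into the next request
```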

If it is a bug, then it belongs on Bugzilla. File a new bug under MediaWiki -> API. Might it just be a caching issue? Maybe wait and see tomorrow if it's listed (unless today is tomorrow!) :) - Jarry1250 [Who? Discuss.] 21:17, 12 March 2011 (UTC)
Here it is listed [8]. Not a bug with the API as far as I can tell. What query are you using? —  HELLKNOWZ  ▎TALK 21:21, 12 March 2011 (UTC)
Try it without the descending sort: [9] & [10]. It doesn't show up in those. -- JLaTondre (talk) 21:30, 12 March 2011 (UTC)
Oh, OK, weird. I guess it's a bug then. —  HELLKNOWZ  ▎TALK 21:34, 12 March 2011 (UTC)
Bug 28013 submitted. -- JLaTondre (talk) 21:57, 12 March 2011 (UTC)
Yup, that had already been added here, so I've (attempted to) mark yours as a duplicate. - Kingpin13 (talk) 22:02, 12 March 2011 (UTC)
Okay, thanks. -- JLaTondre (talk) 22:09, 12 March 2011 (UTC)
Perhaps you can find your category under the sortkey name. The query is showing many categories which do not exist, and I think sortkeys are returned here, not the category names as expected. Merlissimo 00:38, 13 March 2011 (UTC)

Mark my edits minor by default

The following bots have the preference, which has been hidden on the English Wikipedia, set (locally):

  • SmackBot
  • VolkovBot
  • CmdrObot
  • H3llBot
  • Numbo3-bot
  • Melonbot
  • VoABot
  • Wilbot

For obvious reasons they should not be relying on this functionality, and, in fairness, probably aren't. This is merely a notification that each bot's preference will be automatically switched to false on the English Wikipedia shortly, and you may need to adjust your code appropriately. Regards, - Jarry1250 [Who? Discuss.] 16:53, 13 March 2011 (UTC)
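A bot that wants its edits marked minor should therefore say so per edit rather than lean on the account preference. With the MediaWiki API this is the `minor`/`notminor` pair on `action=edit`; a sketch of building such a request (the token and HTTP layer are omitted):

```python
def edit_params(title, text, summary, minor=True):
    """Build action=edit parameters that state minorness explicitly,
    instead of relying on the account preference discussed above.
    A real request would also carry an edit token."""
    params = {
        "action": "edit",
        "title": title,
        "text": text,
        "summary": summary,
        "format": "json",
    }
    # The API treats the mere presence of a flag as true, so send
    # exactly one of the pair.
    params["minor" if minor else "notminor"] = "1"
    return params
```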

Thanks for heads-up! Although, as you say, personally I did not rely on this. —  HELLKNOWZ  ▎TALK 11:55, 24 March 2011 (UTC)

What should I call my botcode?

Hi everyone. I have written User:RichardcavellBot, a fully functioning bot in C, using libcurl. I open-sourced it on the off chance that someone might find it useful. To my genuine surprise, someone has joined the project and contributed code. Now, the thing has to run in an account, and when I run the bot on my machine, I will call it RichardcavellBot. But I want to rename the code and project to something that doesn't include my name. What should I call it? It's a generic framework written in C, highly portable, using standard libraries and libcurl. - Richard Cavell (talk) 14:17, 23 March 2011 (UTC)

Don't ask me, mine is named "AnomieBOT::API". ;) I suppose you could copy "pywikipedia" and call it "cwikipedia", or copy Peachy and name it after a fruit that begins with C. Anomie 00:41, 24 March 2011 (UTC)
How about User:Sirlancabot. --Kumioko (talk) 01:24, 24 March 2011 (UTC)
I like the pun of Sirlancabot. What I'm thinking is: Autobot (a Transformers reference), Koalabot, Taipanbot, Octopusbot. What do you think? - Richard Cavell (talk) 01:33, 24 March 2011 (UTC)
Lancelot? Then, if it is accepted, I'd go for User:Spamabot. -DePiep (talk) 02:05, 24 March 2011 (UTC)
I like Koalabot too...I always liked the Kwicky Koala cartoons when I was a kid. --Kumioko (talk) 02:23, 24 March 2011 (UTC)
There is a line "The account's name should identify the operator or bot function" in the WP:BOTPOL, but afaik no one bothers to enforce or check this. As long as it has "Bot" in the name. —  HELLKNOWZ  ▎TALK 11:57, 24 March 2011 (UTC)
I'm talking about the framework, not the account. I intend that when I run the bot on my computer, it will always operate as User:RichardcavellBot. But others may choose to operate it under their own bot accounts. Obviously they'll need to make separate BRFA requests. - Richard Cavell (talk) 13:20, 24 March 2011 (UTC)
OK, so no account name asked. Then, suggestions like "pywikipedia" and "cwikipedia" stand. Any hint from its features? A nice experience while programming C? Your dog's name? That's the way it works, I think. (Keeping my jest alive, it could be "cpamalot", see Spamalot, or Spam. But this might be an insiders' joke.) -DePiep (talk) 17:54, 24 March 2011 (UTC)
Yeah, I'm not sure that "spam a lot" is a good name for a bot framework which is designed to gain the trust of the community. - Richard Cavell (talk) 23:31, 24 March 2011 (UTC)

meta:Talk:Interwiki sorting order#Proposal: Storing interwiki sorting at local system message

I have made a proposal for creating a central place for computer readable messages containing the interwiki sort order. At the moment each framework uses it own config file which has to be updated manually after changes to the list of wikis. Please respond to it on metawiki. Merlissimo 17:19, 31 March 2011 (UTC)

DASHBot is broken - partially - maybe

Tim1357 (who's been gone for a few weeks) programmed DASHBot to automatically resize fair use images placed in Category:Non-free Wikipedia file size reduction request. DASHBot has been running its other tasks, but not this one. This means one of two things: a) the bot isn't working properly, or b) that function was turned off and either no one was told or it was announced somewhere I didn't see it. Please keep me in the loop. If he can be contacted to fix it, that would be preferable to me doing it manually. If someone else can fix it, I suppose that would be a good secondary option (but I'd rather not piss Tim off by treading on his toes.) If neither of those pan out, a new bot would be... nice. Sven Manguard Wha? 06:56, 3 April 2011 (UTC)
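The resizing step of such a task is small on its own. A hypothetical sketch with Pillow, not Tim1357's code; the 100,000-pixel target is an assumption based on common non-free reduction practice, not taken from DASHBot:

```python
def target_size(width, height, max_pixels=100_000):
    """Largest size with the same aspect ratio whose area <= max_pixels.
    100,000 px is an assumed target, not DASHBot's actual setting."""
    if width * height <= max_pixels:
        return width, height
    scale = (max_pixels / (width * height)) ** 0.5
    return max(1, int(width * scale)), max(1, int(height * scale))

def reduce_image(src_path, dest_path, max_pixels=100_000):
    from PIL import Image  # Pillow; imported lazily so target_size stays dependency-free
    img = Image.open(src_path)
    new_size = target_size(*img.size, max_pixels)
    if new_size == img.size:
        return False  # already small enough
    img.resize(new_size, Image.LANCZOS).save(dest_path)
    return True
```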

Sandbox bots

It seems that the sandbox bots are no longer operative, and Cobi and X! are not active at the moment. So could other bots take over? They reset the sandboxes, listed at Template:Template sandbox. I'd also request that they reset the Wikipedia:Introduction, so that we can leave it unprotected or with PC, because readers should be able to see the 'edit' button. Cenarium (talk) 16:57, 27 March 2011 (UTC)

Due to the nature of Wikipedia:Introduction, a faster reset time would probably be appropriate (sandbox is 12hrs, if I'm not mistaken?). If the sandbox bots are confirmed to be off, I can create a sandbox bot fairly quickly. Noom talk contribs 15:52, 28 March 2011 (UTC)
See here for Chzz's sandbox bot. Noom talk contribs 20:01, 31 March 2011 (UTC)
Working on it; I'll post more at the BRFA within days.  Chzz  ►  15:00, 5 April 2011 (UTC)
Now operating; keeping the heading in place and sweeping. See BRFA, ChzzBot II (talk · contribs), User:ChzzBot II/doc. Please let me know if any changes are desirable.  Chzz  ►  15:21, 7 April 2011 (UTC)

BAG nomination

I'm required by BAG policy to notify this noticeboard of my nomination for BAG member. Headbomb {talk / contribs / physics / books} 07:44, 20 April 2011 (UTC)


Can somebody take a look at the thread at User talk:Wikitanvir#nobots. It has been an uphill struggle to get even a grudging partial acceptance of responsibility for WikitanvirBot's edits. I am still not convinced the user is on-message about this. I think someone from BAG should have a word. SpinningSpark 19:35, 21 April 2011 (UTC)

I'm a former BAG member and I just left him a nice trouting. ΔT The only constant 21:19, 21 April 2011 (UTC)
Thanks for that. Hopefully he will start to listen now. SpinningSpark 21:48, 21 April 2011 (UTC)

Urgent: Vandalism Bots are Down

So Cluebot NG has been down for over a week, the old Cluebot has been down for months, and there are hours where no one is on Huggle. This is very bad. What can be done in the meantime? Can old Cluebot be brought back online while new Cluebot is getting... (whatever reason it's not online) rectified? Sven Manguard Wha? 04:25, 23 April 2011 (UTC)

This is nothing new. Cluebot-NG has been inconsistently running for several months now. Given that the original ClueBot was operated by the same folks -- I'd say there is little chance of bringing one online without the other. Granted, I appreciate all the ClueBot folks have done and are doing -- but some clear answers about what is going on and their future intentions would be nice. For one, I could begin to operate the metadata-detection algorithm of my WP:STiki tool in anti-vandal bot fashion, do the BAG approval, and begin automatically reverting egregious instances. Cluebot-NG has a much more robust system, though, and I wouldn't want to waste the time just to have them re-appear again. West.andrew.g (talk) 06:39, 23 April 2011 (UTC)
The anti-vandal bot situation is very precarious because it depends on the goodwill of a few people who run/create these bots. Given the importance of having good anti-vandal bots, this is a big problem. Perhaps running anti-vandal bots on the toolserver instead of on people's private servers would give more stability?
I feel like people underestimate the problem of vandalism and see it as something that is 'solved', but it is really still causing a lot of damage. In particular the following pattern causes damage:
  • A vandal replaces a good paragraph with vandalism.
  • A reader notices it, and removes the vandalism by editing it out of the page, but does not perform a revert (because they don't know how to).
  • Net result: a good paragraph is lost. Since the vandalizing edit is no longer the top revision, nobody notices, and it is gone forever.
I have personally found a number of instances of this pattern in article histories, and there must be lots of them that have gone undetected.
A good anti-vandalism bot can help prevent this by reverting the vandalism properly, before a reader stumbles onto the page and performs a flawed revert. What I mean to say is that anti-vandalism bots do more than remove vandalism 'extra fast'; they also help to prevent good content from getting lost.

Arthena(talk) 16:54, 23 April 2011 (UTC)

There are periods when no one is on the automated tools watching. After about midnight Eastern Standard I see a huge drop-off. The East Coast people are going to sleep, the Brits haven't woken up, and I donno about the Aussies, I guess they don't Huggle. That's when ClueBot being down hurts, when no one else is there. Sven Manguard Wha? 22:01, 25 April 2011 (UTC)

"Good articles" on

Hello, in case someone maintains bots related to the template {{Link GA}}, I wanted to inform you that the Italian Wikipedia has now also activated the "Good Article" category. The articles having this status will be put under subcategories of it:Categoria:Voci di qualità per argomento. Bye, Gengis Gat (talk) 17:14, 26 April 2011 (UTC)

Snotbot breaks proper hierarchy of page

I have rarely checked up on bot actions, normally assuming they do the right thing or they wouldn't be allowed to run.

However, Snotbot just found an error in a page that i created, and Snotbot broke the hierarchical structure while attempting a repair.


It is important that section titles 1906 split, 1908 split, and 1924 split be on the same level, since the article treats them as equivalent level events. That was the case in the article before the Snotbot edit, even though there was a structural error. Snotbot effected an improper repair, which put these three section titles on different levels.

I will put a notice on Snotbot's talk page that i'm flagging this situation here. Richard Myers (talk) 00:31, 27 April 2011 (UTC)

I spotchecked about five other pages on my watch list that Snotbot had visited, and did not see any problems with those edits. Richard Myers (talk) 01:28, 27 April 2011 (UTC)
The headings were bad before the bot edited. They were bad (in a different way) after the edit. The bot's edit got attention to the problem, and now it's fixed. I don't see what exactly the problem is. Headbomb {talk / contribs / physics / books} 06:27, 27 April 2011 (UTC)
That is a misunderstanding of the situation. The headings were incorrect, but while this might have been visible to any editor looking at the code, it was invisible to the reader of the article, because the menu hierarchy displayed correctly. After the edit by the bot, the menu hierarchy was broken, even to the casual reader.
Of the two situations, the article was in a better situation before the bot edit, than after.
And the point is, the bot has executed an improper edit on this article, and so may be doing so in an equivalent situation to other articles. Chance discovery of improper bot behavior does not suggest that we should ignore improper bot behavior, especially if the edits make the article worse than it was (for the reader). Richard Myers (talk) 08:18, 27 April 2011 (UTC)
Visually, the article was in a better situation. However, while the markup-to-HTML parser handles heading hierarchy errors gracefully, the problem would have been apparent to anyone with a screen reader, which would have skipped right to "1924 split" as the first <h2> heading. Visual style should not override accessibility. In the end, the bot brought up the issue and it has now been fixed to accommodate both regular and tool-assisted users. —  HELLKNOWZ  ▎TALK 08:29, 27 April 2011 (UTC)
Hi, thanks sincerely, but could you please give me the long version of what you are saying here? It seems that you're explaining this in a technical manner, and what you're saying seems important, but i don't quite understand. Apologies, all the density is on this end. Thanks, Richard Myers (talk) 09:32, 27 April 2011 (UTC)
When a screen-reader reads a web-page, it goes through the headings in their hierarchical order (first the article title <h1>, then ==second level== <h2>, ===third level=== <h3>, etc.). If the headings are out of order on the page, then the screen-reader will also read and process them out of order. There is no way for the person to know what the actual "visual" layout on the page is. Here's a YouTube video linked on the bot's BRFA that explains this quite well. —  HELLKNOWZ  ▎TALK 09:56, 27 April 2011 (UTC)
The headings were only 'right' in the TOC, they were wrong visually. The code before the bot's edits was
==== Eastern Wobblies, western Wobblies ====
===1906 split===
===1908 split===
====Overalls brigade====
===IWW east, IWW west===
====The Detroit IWW====
====The Chicago IWW====
=====Seeds of another split=====
== See also ==


So '1924 split' was at a different level to the other two before the bot's edits, and rendered in a different font. It seems to me to be incorrect to assert that "it was invisible to the reader of the article". Mr Stephen (talk) 11:19, 27 April 2011 (UTC)
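For what it's worth, the kind of skipped level shown in the listing above can be detected mechanically. Here is a minimal Python sketch (an illustration only, not Snotbot's actual code) that flags any wikitext heading sitting more than one level deeper than the heading before it:

```python
import re

# Wikitext headings: the number of '=' signs gives the level (== is h2).
HEADING = re.compile(r"^(={2,6})\s*(.+?)\s*\1\s*$", re.MULTILINE)

def find_level_skips(wikitext):
    """Return (title, level) pairs for headings that skip a level,
    e.g. an h4 appearing directly under the page title (an h1)."""
    problems = []
    prev_level = 1  # the page title is effectively an h1
    for match in HEADING.finditer(wikitext):
        level = len(match.group(1))
        if level > prev_level + 1:
            problems.append((match.group(2), level))
        prev_level = level
    return problems

sample = """==== Eastern Wobblies, western Wobblies ====
===1906 split===
====Overalls brigade====
== See also =="""

# The first heading is an h4 with no h2/h3 above it, so it gets flagged.
print(find_level_skips(sample))
```

This is the same check a screen reader implicitly performs: it follows the declared levels, not the rendered TOC.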

  • Richard Myers, I'm quite confused by your complaint. Firstly, I don't know why you couldn't have discussed this with me on my talk page first rather than bringing it to a noticeboard. Secondly, the bot didn't touch any of the "1906 split", "1908 split", or "1924 split" section headings. Check the diff you posted above and you'll see they weren't changed. The only thing the bot did was change the first heading, "Eastern Wobblies, western Wobblies", from a level 4 to a level 2, because the first heading of every article should always be level 2 (the real first heading of every article is the title, which technically is a level 1 heading, and then subsequent headings should not "skip" levels thereafter, so level 2 is the only applicable first heading). The bot has not made an error here. Am I reading your complaint correctly or am I missing something? —SW— yak 14:03, 27 April 2011 (UTC)
Have a look at the TOC before & after the bot's edit. I think that's his issue. Mr Stephen (talk) 14:16, 27 April 2011 (UTC)
So the first heading was level 4 (inappropriately) and the second heading was level 3, and the mediawiki software resolves the issue by showing the level 4 and level 3 headings at the same level in the TOC (because really, what else could it do given such strange input). And so, what you're saying is that this article was relying on this particular "feature" for proper display of the TOC? Even without bringing up the accessibility issues raised by this practice, I still maintain that this complaint has no grounds. The bot's task is to fix inappropriate section headings, and there is nothing more that it could reasonably have been expected to do with this article, especially considering that the 1906, 1908, and 1924 split headings were not at the same level (except for the way they displayed in the TOC). I won't be making any changes to the code in response to this complaint. —SW— speak 14:41, 27 April 2011 (UTC)
Mr Stephen is correct, i was referring to the TOC as a menu, imprecisely, it seems. He may also be correct about the lower part of the TOC, i saw the errors at the beginning of the TOC, and used that as a basis for reporting. I am not insisting on a change to bot code, only on an evaluation, based upon what i judged to be an inadequate fix. Richard Myers (talk) 19:43, 27 April 2011 (UTC)
Just my two cents: all the bot did was highlight an existing screwup (which I do not understand how it worked in the first place). The page was messed up both before and after the bot edited, due to no fault of the bot, but rather of humans. Instead of complaining about the bot, try to ensure that the pages you work on do not end up that messed up in the first place. ΔT The only constant 19:47, 27 April 2011 (UTC)
I think (based upon Mr Stephen's comments) that the TOC worked, just because the hierarchy was correct (i.e., individual section titles, although incorrect, still had the proper relationship to each other.)
However, in the article itself, the section titles weren't what they should have been. So one part of the article (the TOC) didn't reflect the structural problems until the bot revealed them. (The original structural problems resulted from a lot of section swapping during article creation.)
In any event, i'm satisfied with the explanations/answers here, thanks all for the input. And special thanks to H3llkn0wz for the video link, that provided something important that i'll keep in mind in the future. Richard Myers (talk) 23:25, 27 April 2011 (UTC)

Category:Football biography using deprecated parameters

Hi guys, a long-term aim of WP:FOOTBALL will be to empty Category:Football biography using deprecated parameters by converting a multitude of old infoboxes into the correct code found at {{Infobox football biography}}. However, with nearly 50,000 infoboxes to be converted, this will take a helluva long time - would it be possible for a bot to do this instead? Regards, GiantSnowman 16:53, 17 April 2011 (UTC)

Three responses come to mind, but I'll try to keep them short! :) Firstly, you may wish to cc WP:BOTREQ. Secondly, it may be possible to do this with AWB, which could be set to bot mode, thus allowing an AWB- (but not bot-)savvy user to fix the pages. Thirdly, and perhaps most importantly, you will want to be able to demonstrate that there is a significant, visible benefit to these edits. If not, you may wish to get them rolled into the work of a different bot editing the same group of pages, or vice versa. Regards, - Jarry1250 [Who? Discuss.] 18:06, 17 April 2011 (UTC)
Thanks for your answers. I've posted the same request over at BOTREQ. I'm not bad with AWB, but this is way out of my league, and as for the benefit, infoboxes have to use the correct coding, something to do with accessibility for users who use screen readers I think. Regards, GiantSnowman 18:30, 17 April 2011 (UTC)

I'll do this as soon as I have my bot account unblocked. I had this already in mind and have approval to perform tasks like this. I fixed all infobox names in order to start doing this task. -- Magioladitis (talk) 20:28, 17 April 2011 (UTC)

Fantastic, thanks very much. GiantSnowman 20:36, 17 April 2011 (UTC)
Starting now. -- Magioladitis (talk) 10:55, 22 April 2011 (UTC)

Hi. It turned out I can't do it with my bot. I am OK with Petrb doing it. Thanks! -- Magioladitis (talk) 13:43, 28 April 2011 (UTC)

Concerns/complaints about bot tasks and practices

Please be understanding about my attempt to explain my complaints; although i've been editing on Wikipedia for more than five years, these particular issues are a little outside my comfort zone.

I have created a fair number of in-depth articles, some with one hundred to two hundred footnotes. My experiences with citations in each of these articles have generally been good, right up to the point where someone with an automated process comes along and compresses all of the footnotes into a different format. Much of the subsequent editing on the given article suddenly goes from simple and straightforward, to complex and (sometimes extremely) frustrating.

This has happened to dozens of articles that i've been working on, including in the middle of my edits. Permission has never been asked, and in my experience the changes have never been explained or discussed or even noted on the TALK page by the bot owner, either before or after edits. The standard edit summary rarely contains enough info for the average editor to appreciate what has just happened, or why.

Lack of information isn't the worst of it, however; rather, in my experience the impact of this type of edit can be very harmful. Sometimes such edits have made it nearly impossible for someone only familiar with the normal citation methods to do subsequent editing of articles, without losing and misplacing footnotes. This is because compressing footnotes retains cite data only in the first footnote occurrence. What if one wants to move a lower paragraph above the paragraph with that first occurrence? Suddenly, editing tasks become a puzzle with only the most cryptic of clues.

An example—this footnote has all of the necessary information:

<ref>William E. Bohn, The Survey: social, charitable, civic : a journal of constructive philanthropy, Volume 28, "The Industrial Workers of the World", Charity Organization Society of the City of New York, 1912</ref>

After a bot runs on the page, all but the first appearance of this reference will be converted to this:

<ref name="autogenerated1912"/>

How do i look at that, and comprehend what it refers to? I have to jump through hoops (or, well, extra pages and links) to find the source information that i need while editing.

The "1912" is a grab of the last bit of data, whether that's a page number, a date, or whatever, so not only is the key data missing, there's no data consistency to the new format.

Now suddenly, almost all footnotes on the page appear nearly identical to each other. When there are a hundred plus footnotes, that can be very intimidating for someone in the middle of creating a worthwhile article.

And what happens if the first entry is deleted from the page during normal editing? All of the reference data for that source is lost for the entire page!

Now, i don't have a problem with someone who wishes to write their footnotes in this manner to begin with. But when a bot automatically converts the footnote style from a more basic style, it creates chaos for the editors who had been editing that page.

Not only that, but it is a violation of Wikipedia:Citing sources, which states:

  • How to write citations: Each article should use the same citation method throughout. If an article already has citations, adopt the method in use or seek consensus on the talk page before changing it.

Another aspect of this problem for the editor inexperienced with bots: it is difficult to know whether it is considered proper to revert the edits of a bot. I truly wish that i had reverted every bot that did this to the articles i've edited, but i didn't know if i would be violating some difficult-to-discover Wikipedia policy.

I've complained about this issue numerous times in years past, but apparently not in the right forums to have it seriously considered. I hope now will be different.

I have another complaint that is only indirectly related. One of the bots (or is it a semi-automated editor?) is apparently controlled by Betacommand (talk · contribs), but if i follow an edit to that editor's page, i don't find anything on that user's user page, nor the talk page to discover an actual user ID that i can convey to others. I had to search through ten archives of the talk page to locate "Betacommand". This creative nomenclature is certainly clever, and may be fine for ordinary users, but for someone who traipses across thousands of wikipedia pages in a short period of time, shouldn't there be ID information and bot info that is more readily accessible on one of their pages?

Please note that Betacommand isn't the only citation scrambler i've encountered, and that i don't really have anything against bots or automated tools, so long as they perform helpful functions. However, footnote scrambling is not helpful, it is very harmful to users who frequently feel ambushed by it, like me. Richard Myers (talk) 09:02, 24 April 2011 (UTC)
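The worst case described above, where deleting the defining footnote strands every <ref name=.../> that pointed at it, can at least be detected mechanically. A minimal Python sketch (an illustration only, not the code of any actual bot) that lists named refs which are still used on a page but no longer defined:

```python
import re

# A "defining" ref has a body: <ref name="...">...</ref>
DEF_RE = re.compile(r'<ref\s+name\s*=\s*"([^"]+)"\s*>', re.IGNORECASE)
# A "use" is the self-closing form: <ref name="..."/>
USE_RE = re.compile(r'<ref\s+name\s*=\s*"([^"]+)"\s*/>', re.IGNORECASE)

def orphaned_refs(wikitext):
    """Names cited via <ref name="..."/> whose full definition no
    longer exists anywhere on the page."""
    defined = set(DEF_RE.findall(wikitext))
    used = set(USE_RE.findall(wikitext))
    return sorted(used - defined)

page = '''Claim one.<ref name="Bohn1912"/>
Claim two.<ref name="Jones2004">Jones, A History, 2004.</ref>
Claim three.<ref name="Jones2004"/>'''

# "Bohn1912" is used but never defined, so its citation data is gone.
print(orphaned_refs(page))
```

A page with an orphaned name renders a cite error rather than a footnote, which is why bots exist to resurrect the lost definitions from page history.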

On your first concern: There is a bot to revive orphaned named references. -- Magioladitis (talk) 11:06, 24 April 2011 (UTC)

The relevant policy/guideline pages are WP:REFNAME, WP:CITEFOOT, and WP:NAMEDREFS. Grouping duplicate references is highly preferable to leaving them ungrouped. —SW— comment 16:03, 25 March 2011 (UTC)

Merging duplicate refs is standard practice. Also, my name is not Betacommand; I am not a bot, which is why you had difficulty locating my old nick. Merging dupe refs makes reading the article a lot easier, so that the ref section is not just 100+ items for 12 actual references. (I have seen similar things happen.) ΔT The only constant 11:51, 24 April 2011 (UTC)

There is a discussion at VP: Wikipedia:Village_pump_(proposals)#Bot_to_reduce_duplicate_references. While there is strong support towards merging dupe refs for articles that already use named refs, there are a few opinions the other way for general cases. There is a BRFA (Wikipedia:Bots/Requests_for_approval/Snotbot_4) in progress. Admittedly, this is being done by at least several tools/bots already as "uncontroversial". —  HELLKNOWZ  ▎TALK 12:06, 24 April 2011 (UTC)

(ec) I agree with the sentiment of this complaint, that established referencing systems should not be messed with, although on this particular point I have long since conformed to the style because it's too troublesome to fight. Yes, there is a bot that will fix orphaned references but this still does not detract from the inconvenience the editor is being put to. Unless there is something in the guidelines that prohibits duplicate references then I would support reversion in these cases. It is one thing for an editor working on an article to change the reference style prior to an expansion to something they are more comfortable with (although even that is against guidelines), it is quite another for someone else, bot or human, to turn up at the article and obstruct the editors actually doing the work by forcing them to work with an unfamiliar system. As an aside, an often overlooked advantage of duplicate references is that they keep refs in numerical order making them easier to find manually. Also, personally, I try not to combine references while I am actually constructing an article because it often happens that a particular passage requires additional pages or authors added as the work progresses and uncombining is a real pain to do in a long article. SpinningSpark 13:02, 24 April 2011 (UTC)
Thanks all for the enlightening comments. The situation described by some here is, frankly, somewhat abhorrent to me, because in spite of the fact that merging dup refs is considered "preferable" or "standard" or "non-controversial", the procedure by which this practice is implemented is broken. I have quit working on some articles when the citation scheme has been abruptly changed by someone else. I expect that some other editors have done the same. I have struggled through some other articles which i thought were worthwhile in spite of the challenge. But that situation is not conducive to an improving Wikipedia.
It seems to me, from some of the comments here, that some editors interested in making Wikipedia more efficient may not have significant experience with creating very robust articles on a given topic, else they would not take for granted that merging dup refs is a positive for incomplete articles.
I suggest, this situation calls for a creative solution, one of a sort that hasn't yet been mentioned in this thread. I actually am not against eliminating duplicate references in articles which are stable, and in some state of completion. The current method, however, frequently ambushes some of Wikipedia's most dedicated editors in the middle of article creation, and i ask you to put yourselves in our place. The current method of merging dup refs has a much more negative, than positive, impact on us.
Here's how (and why) it impacts work process. Suppose i am working on a new section of the article. I typically am working from the same source(s) with which i created the previous sections. I can find the source i need very easily, probably one or two paragraphs above what i'm working on. I copy the ref, and change the page. Simple.
With merged dup refs, suddenly it becomes necessary to search the entire article to find the reference i need, and now there is only one such reference in the entire article to be found. That is a considerably more arduous, almost needle-in-haystack sort of task, because i now must leave the section i'm working on, interrupting workflow.
But i also cannot swap paragraphs the way i once could, without studying in depth to determine which of the references will be rendered inoperable. Where once i had complete freedom to take some simple editing step, now it has become an arduous task.
Now, for the creative part. Why not develop a procedure or policy which either makes merging dup refs wait until an article is stable, or allows for a temporary revert until the article is in a state of completion, and promulgates that option to editors on the talk page as a condition of making such drastic changes? Granted, i now know where to look to appreciate the impact of a revert, i know that stated policy supports a revert in spite of what has been discussed here, and i am less afraid to do so on the articles that i edit. Yet for the past three years, the thought of reverting such a dramatic and misunderstood mangling of the article has seemed very intimidating to me, thus i've quit editing some articles, and struggled with other articles. Other editors must be in a similar quandary.
Again, thanks all for the input. I don't think solving this problem to suit the expectations and requirements of all editors needs to be adversarial, i think it simply needs to be considered with thought and creativity. Richard Myers (talk) 17:52, 24 April 2011 (UTC)
Thanks, Spinningspark, for your open-mindedness on this question, and for conveying my concern at Village Pump. By chance, moments after i offered comment at Wikipedia:Village_pump_(proposals)#Bot_to_reduce_duplicate_references attempting to explain my opposition to the existing proposal, i discovered that one of the articles i have been working on (most recent edit was yesterday) was just hit by someone using AWB: Anti-union violence. Please note that the ONLY information provided was the edit summary, and that said only "clean up using AWB".
I reverted this time, and i will continue to revert. The article is, in my view, only about half-way to a state of completion. The compression of notes interrupts my process. There is also a merge proposal on this article which, if adopted, would be much more difficult to accomplish after merging dup refs.
I stand upon Wikipedia:Citing sources, which states:
  • How to write citations: Each article should use the same citation method throughout. If an article already has citations, adopt the method in use or seek consensus on the talk page before changing it.
Until that changes, i will continue to revert. My apologies to all for tone, i'm a little irritated right now. Richard Myers (talk) 19:42, 24 April 2011 (UTC)
A workaround is to make sure that none of the references has a "name=" parameter. The AWB/bot magic that you are objecting to is only enabled in an article that already has at least one named reference. See here. -- John of Reading (talk) 20:18, 24 April 2011 (UTC)
With hindsight, the test coded into AWB could be improved. If an article has one or more named references, but none are currently used more than once, then the article currently has no instances of the "a b" reference backlinks and the general fixer should not change the citation style by merging references. -- John of Reading (talk) 20:26, 24 April 2011 (UTC)
  • I really don't understand what is so confusing and frustrating about grouping references. Does your browser have a "Find" function where you can search through the text on a page? If not, then it's time to get a new browser. If I'm editing an article and I find a reference like <ref name="autogenerated1912"/>, and I want to find the original reference, I hit control-F (i.e. "find") and I type in "autogenerated1912". Then I hit control-G to step through all the different instances of that particular reference, as well as the full ref. It probably takes me a whole 2.5 seconds to find the original ref. And if I can think of something more descriptive than "autogenerated1912", then I'll generously take another 10 seconds out of my life to rename the references, again using the Control-F/Control-G feature to find them all. If the person who originally added the same reference 10 times took the extra 1% effort required to descriptively name the reference instead of just recklessly copying and pasting the ref all over the article, then we wouldn't have the problem of auto-generated names.
See the Manual of Style, which tells us that "Named references are used when there are several cases of repetition of exactly the same reference..." It doesn't say that named references may be used if you want to, it says that they are used. Nowhere is it implied that this is a stylistic choice or that it is optional. I am aware that the MOS is a guideline and that exceptions are possible, but those exceptions should only be for good reasons, and under unique circumstances. Exceptions should not be made for "every article that User:XYZ edits, because he doesn't like grouped refs". That is not a valid reason for an exception to following the MOS. —SW— comment 16:41, 25 April 2011 (UTC)
I don't know what browser you're using, but if I'm editing a section, Ctrl-F on my browser won't find a reference that's defined in another section. I don't see that it's unreasonable to ask that bot-generated reference names don't suck. Bots are supposed to make things easier for editors, editors should not have to go out of their way even "1%" to make things easier for bots. Mr.Z-man 17:42, 25 April 2011 (UTC)
Naturally, if you're only editing a section then the browser will only be able to search text in that section. You'd need to edit the entire article. Also, if you have any suggestions for how an automated process could create references names "that don't suck", I'm sure they could be implemented. Would it be more acceptable if the bot tried to find the last name of the author and the year of publication, and make the name something more like "Jones2004"? This would be fairly straightforward to implement. —SW— confess 17:51, 25 April 2011 (UTC)
I've been editing Wikipedia, with a preference for in depth articles, for more than five years. Any situation or protocol that encourages or compels me to edit entire articles rather than sections of articles creates a workflow that will cause an increase in edit conflicts with other editors. This is not a reasonable expectation. (I acknowledge the comment was in response, and doesn't necessarily state an expectation...)
To the question about more acceptable named references, yes, absolutely that would be preferable. Please be aware, however, that the first thirty or so articles that i edited, i routinely and erroneously put the author name after the book title. Other editors may do the same, so identifying the author's name will be non-trivial for an automatic process, i fear. Richard Myers (talk) 18:37, 25 April 2011 (UTC)
According to this AWB documentation, the new reference name is currently derived "by use of the author name, year & page where available, otherwise title, otherwise publisher, otherwise website of URL, otherwise fields from the Harvard family of templates, otherwise full reference if short, otherwise a generic reference name of 'ReferenceA' etc.". So if you're using the citation templates you will get better quality names. -- John of Reading (talk) 18:41, 25 April 2011 (UTC)
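That fallback chain is easy to picture in code. Here is a rough Python sketch of the quoted derivation order (an illustration only; the field names and output format are assumptions, not AWB's actual implementation):

```python
def derive_ref_name(fields):
    """Pick a reference name by a fallback chain in the spirit of the
    AWB documentation quoted above. 'fields' is a dict of whatever
    citation data could be parsed from the reference."""
    author = fields.get("author")
    year = fields.get("year")
    page = fields.get("page")
    if author and year:
        name = author + year
        return name + "p" + page if page else name
    # No author/year: fall through title, publisher, website in turn.
    for key in ("title", "publisher", "website"):
        if fields.get(key):
            return fields[key]
    return "ReferenceA"  # generic last resort

print(derive_ref_name({"author": "Bohn", "year": "1912", "page": "7"}))
print(derive_ref_name({"publisher": "Charity Organization Society"}))
```

The point of the chain is that references using citation templates expose parsable author/year fields and so get readable names, while bare-text references fall through to the generic "ReferenceA" style.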
I don't have any specific algorithm suggestions, but the way I see it, if it can't do it halfway decently, it shouldn't do it at all. Mr.Z-man 20:02, 25 April 2011 (UTC)
That's not helpful. We have to define what "halfway decent" is before anything can be improved. A name like "autogenerated1912" is halfway decent for me, but apparently not for you or others. —SW— confabulate 20:22, 25 April 2011 (UTC)
In the past another bot added titles obtaining data from html headers and added a comment like "Bot generated title". A 1-2% wrong title is worse than a non-descriptive ref name which isn't visible to editors but still the task was approved. Since the references are grouped it's easy for editors to update the ref names. -- Magioladitis (talk) 20:25, 25 April 2011 (UTC)
How is "autogenerated1912" useful? I suppose if you only have one reference from 1912 and all the rest are from some completely different time period it might be. But if the article has say, 2 references from 1912, the usefulness pretty much drops to zero. I would say that it should be a reasonably unique identifier that can be clearly tied to a specific citation, ideally including at least 2 pieces of the citation. Mr.Z-man 20:55, 25 April 2011 (UTC)
Fairly clearly AWB will not reuse a ref name for a different ref. On another point if someone removes the key ref, AnomieBot will resurrect it (impressive!). Rich Farmbrough, 21:36, 25 April 2011 (UTC).
  • "But i also cannot swap paragraphs the way i once could, without studying in depth to determine which of the references will be rendered inoperable. Where once i had complete freedom to take some simple editing step, now it has become an arduous task" - I don't see that there is anything here that stops you swopping paragraphs.
  • The enhancement that would be really useful is a less hackish method of dealing with different page references to the same work - something that would give, perhaps,
... res ip sur lociter<sup>5a</sup> nortum respot<sup>5b</sup> caveat lector....
5. The Restless Sea, Wiedenfeld and Nicolson, London, 1226, (a^) page 12, (b^) pages 13-22, 96 and 123.

Rich Farmbrough, 21:47, 25 April 2011 (UTC).

I did a simple empirical test, and it appears that you may be correct about moving paragraphs.
At one time, it seemed to me that the long version of the named reference was required to be the first entry. Now i see that it is just preferred that it be the first entry. Perhaps the page i was working with which gave me a cite error some years ago had something else wrong.
Thanks for clarifying, it will be helpful to know this. Richard Myers (talk) 23:57, 25 April 2011 (UTC)

Just a couple of notes. There hasn't been much mention of the reader. Which scheme is better for the reader? In my opinion, each method has drawbacks.

  • If the refs are grouped, it's very difficult to find your place again after checking a ref. Yes, there are little letters for each cite, but which letter is yours?
  • On the other hand, if the refs are grouped I can easily see what refs are used and how often each ref is cited, which in my opinion is useful and important information to know about an article.

As to editing, it's also a no-perfect-solution situation, as there is an advantage for editors when refs are grouped: if there are a lot of refs, it can be kind of difficult to follow the flow of the article text when editing. Grouping makes this easier. Herostratus (talk) 14:07, 28 April 2011 (UTC)

Redlink counter bot

Hi everyone. I've had it suggested to me to create a bot that will count redlinks and identify which terms are most frequently redlinked (ie they're linked to but there is nothing at the destination). How would such a bot work most effectively? Would the bot comb through all the pages in a category, or all the pages in 'what links here'? I don't own a server or domain name so would it be best if the results were posted on the requesting user's talk page? Or within the bot's own userspace? If the bot only posted in its own userspace, would it need BRFA approval at all? - Richard Cavell (talk) 06:23, 7 April 2011 (UTC)

This would probably be best done using a database dump. If a bot edits only its own userspace, and is not otherwise disruptive, it does not normally need a BRFA. Anomie 21:59, 7 April 2011 (UTC)
This can also be done with a Database report. It's running now and the 200 most-linked, non-existent articles (from the article space) will be saved here. The report should be done in a matter of hours. Tim1357 talk 22:10, 17 April 2011 (UTC)
  Done (See a formatted version) Tim1357 talk 01:25, 20 April 2011 (UTC)

Pretty sure this already exists (most wanted pages?) . Rich Farmbrough, 06:55, 26 April 2011 (UTC).

Wikipedia:Most wanted articles does indeed exist, but was last updated in December 2010. User<Svick>.Talk(); 20:12, 29 April 2011 (UTC)
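For reference, the counting could also be done live against the API rather than a dump or the toolserver. A minimal sketch under stated assumptions (the API parameter names are real; the helper names and the per-title query loop are mine, and continuation handling is omitted for brevity):

```python
# Sketch: for each source article, ask the API which of its outgoing
# links point to missing pages (generator=links returns target pages,
# with a "missing" marker on redlinks), then tally the missing targets.
import json
from collections import Counter
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def tally_missing(batches):
    """batches: iterable of API 'pages' dicts, one per source article."""
    counts = Counter()
    for pages in batches:
        for page in pages.values():
            if "missing" in page:  # link target has no article
                counts[page["title"]] += 1
    return counts

def fetch_link_targets(title):
    params = urlencode({
        "action": "query", "format": "json", "titles": title,
        "generator": "links", "gplnamespace": 0, "gpllimit": "max",
    })
    with urlopen(f"{API}?{params}") as resp:
        return json.load(resp)["query"]["pages"]

# tally_missing(fetch_link_targets(t) for t in article_list).most_common(200)
# would then give the most-frequently-redlinked titles.
```

For anything article-space-wide this is far too many requests, which is why the dump or database-report approaches above are the right tools at scale.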


There’s been a recent spate of nominations for certain bot activities of User:Lightbot (operator: User:Lightmouse) documented in a discussion here by User:MBisanz. Cross-posting for reference. :| TelCoNaSpVe :| 06:49, 4 May 2011 (UTC)

accessing Special:NewPages from a bot

I don't want to patrol new pages, but I do want a list of new pages within the last few days. Obviously Special:NewPages isn't implemented in the API. What methods do people use to get this data? I'm guessing the toolserver DB could be used, what else? (I really enjoy the flexibility of non-toolserver bots, I don't want to fuss with a tunnel to the toolserver DB if I don't have to). tedder (talk) 02:13, 2 May 2011 (UTC)

Special:Newpages is implemented in the API. For instance, this query gives you the last 50 articles created in the article namespace, and this one shows the latest 50 which haven't yet been patrolled. —SW— prattle 05:45, 2 May 2011 (UTC)
Thanks Snottywong! That's what I ended up using. tedder (talk) 19:12, 15 May 2011 (UTC)
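For anyone finding this thread later, the query Snottywong describes can be sketched as follows (the API parameter names are real; the helper names are mine, and continuation is omitted):

```python
# Sketch: list=recentchanges with rctype=new returns newly created
# pages; rcnamespace=0 restricts it to articles.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def titles_from(data):
    return [rc["title"] for rc in data["query"]["recentchanges"]]

def recent_new_pages(limit=50):
    params = urlencode({
        "action": "query", "format": "json",
        "list": "recentchanges", "rctype": "new",
        "rcnamespace": 0, "rclimit": limit,
    })
    with urlopen(f"{API}?{params}") as resp:
        return titles_from(json.load(resp))
```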

Unregistered bot

I'm not familiar with the bot policy. User:Cboursnell is mass-creating draft articles inside his/her userspace. See also User:PfamWikiBot (inactive). Marcus Qwertyus 16:26, 31 May 2011 (UTC)

Users may create drafts in userspace, even many, if their purpose is to improve Wikipedia. If these articles need to be manually reviewed before moving to mainspace, then bot-creating them en masse is how it's often done. After all, we are not a paper encyclopedia. Of course if there are very many or their notability is disputed, this should have been discussed first. —  HELLKNOWZ  ▎TALK 16:44, 31 May 2011 (UTC)
He doesn't seem to be doing anything terribly fishy. Just strange. Still, communication would be nice, so I've left him a message on his talk page. Cheers. lifebaka++ 18:51, 31 May 2011 (UTC)


Hi. Is there a script (Python) that copies, for example, the image from the "image" parameter of the {{taxobox}} template in the English Wikipedia article en:Smearwort into the "şəkil" column of the Takson template in the corresponding (interwiki) Azerbaijani Wikipedia article Girdə zəravənd?

  | image = Aristolochia rotunda.jpg --->

  | Şəkil = Aristolochia rotunda.jpg 

Sincerely,Vago (talk) 07:26, 1 June 2011 (UTC)
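A sketch of the core of such a script, under stated assumptions: this regex-based rename handles only the simple one-parameter-per-line layout shown above, and the function name is mine. A production bot (e.g. one built on pywikibot) would parse the template properly rather than use a regex.

```python
# Hypothetical sketch: rename the |image= parameter of a taxobox line
# to the Azerbaijani |şəkil= parameter, preserving the value.
import re

def translate_image_param(wikitext):
    return re.sub(r"^(\s*\|\s*)image(\s*=)", r"\1şəkil\2",
                  wikitext, flags=re.MULTILINE)

print(translate_image_param("| image = Aristolochia rotunda.jpg"))
# | şəkil = Aristolochia rotunda.jpg
```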

regex: finding the lede of an article?

I'm trying to find the approximate lede of an article. It's just for scoring/categorization, so it doesn't have to be perfect. My first thought was to strip the infobox and any other {{templates}} first (not using regexes, since it's hard to "count" or deal with nested templates). Then I realized there could be very useful information in an infobox.

So my second thought is to just grab all text until a "\n\n" or a "==".

Has anyone done this or something similar? Or has anyone thought about this enough to tell me why my second technique is a bad idea? tedder (talk) 19:16, 15 May 2011 (UTC)
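For concreteness, the "second thought" above could be sketched as a one-liner (a rough heuristic, not a full lead-section parser):

```python
# Sketch: take everything up to the first blank line or the first
# heading marker, whichever comes first.
import re

def rough_lede(wikitext):
    return re.split(r"\n\n|\n==", wikitext, maxsplit=1)[0]

text = "First paragraph.\n\n== History ==\nLater text."
print(rough_lede(text))  # First paragraph.
```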

Surely the lead extends until the first h2 heading, not until a double linebreak? Sorry, tangent I know. It really all depends what you want to do with the data you're trying to collect. - Jarry1250 [Weasel? Discuss.] 19:49, 15 May 2011 (UTC)
I would take the lead section as all text until the first heading, which ought to be a level 2 heading, but might not always be. Rjwilmsi 20:17, 15 May 2011 (UTC)
A proper lede is to the first heading. I'm trying to score articles so projects know if they are of interest, and the existing spec says "give double points to the lede" but doesn't explain what is used for the lede. Given these are brand-new articles, that it's for bonus points only, and the lack of a full spec definition, I'm thinking the first paragraph+infobox might be best; most new articles aren't sectioned anyhow. tedder (talk) 20:23, 15 May 2011 (UTC)

Just ask the API for section 0, [12]. — Carl (CBM · talk) 19:23, 8 June 2011 (UTC)
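Carl's suggestion in sketch form — `action=parse` with `section=0` returns the wikitext before the first heading, which matches the "lead extends to the first heading" definition above (the API parameters are real; the helper names are mine):

```python
# Sketch: fetch "section 0" of an article via action=parse.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def extract_section0(data):
    return data["parse"]["wikitext"]["*"]

def lede_wikitext(title):
    params = urlencode({
        "action": "parse", "format": "json",
        "page": title, "prop": "wikitext", "section": 0,
    })
    with urlopen(f"{API}?{params}") as resp:
        return extract_section0(json.load(resp))
```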

Unauthorised bot

Unregistered user apparently undertaking a bot-related research project: [13] and [14]. Mephtalk 11:33, 8 June 2011 (UTC)

Logged out SDPatrolBot. Mephtalk 11:41, 8 June 2011 (UTC)

Dead link reporting -- approval needed?

Hi everyone. As part of an ongoing research project, I am parsing out all external link additions to do some machine learning to detect link spam (see [15]). As part of this processing, I obtain the HTML source of the URLs being added. Occasionally, I get a 404 error (file not found), and it would seem beneficial to report these somewhere so people could manually investigate, if desired.

The question is whether or not this needs BAG approval. My program would only post to a single page, and not in the article-namespace (probably something like WP:STiki/Dead_Links). Thoughts? West.andrew.g (talk) 01:20, 14 June 2011 (UTC)

Unless you are editing only in your own user space you will need approval. Peachey88 (T · C) 01:56, 14 June 2011 (UTC)
To clarify: WP:BOTPOL#Requests for approval states "any bot or automated editing process that affects only the operator's or their own userspace (user page, user talk page, and subpages thereof), and which are not otherwise disruptive, may be run without prior approval." But I do hope you're working from a database dump rather than trying to load the entire list of external links from the API. Anomie 02:07, 14 June 2011 (UTC)
FYI: Are you aware of User:COIBot which monitors all external link additions? Johnuniq (talk) 02:24, 14 June 2011 (UTC)
Hmm, it's not important that it reports to a sub-page of WP:STiki (which I operate) -- having it dump to a sub-page of my own user-page would be the moral equivalent -- so I'll proceed down that route. This does not concern the issue of link rot (in which case a DB dump would be appropriate), as I am already monitoring link additions in real time as part of WP:STiki project, so this was just going to be a trivial modification for the community's benefit. I am aware of User:COIBot, but the issues here seem orthogonal. Thanks, West.andrew.g (talk) 02:39, 14 June 2011 (UTC)
What I sometimes do for this kind of task is to edit a user subpage and then transclude the page to WP:STiki/Dead_Links with a template. Tim1357 talk 04:09, 14 June 2011 (UTC)
Otherwise I'd speedy-approve it for you. Tim1357 talk 04:13, 14 June 2011 (UTC)

Hard to edit Greek letters

User:Yobot and its owner Magioladitis have decided to ignore my complaint that changing Greek letters from html entities such as &nu; to the equivalent Unicode character ν makes articles hard to edit. While an editor skilled in the topic of the article can usually tell the difference when reading the article, it is often difficult to tell in edit mode, and an editor who is, perhaps, cleaning up vandalism but is not too skilled in the article topic doesn't have a prayer. Therefore I am contesting Magioladitis's decision in this forum and request that his bot be forbidden from converting the html entities for Greek letters to Unicode characters. (Just in case you think it's easy to tell "v" from "ν", try telling "A" from "Α".)

I have also added {{nobots}} (with Yobot in parameter list) to Equation of time which has been repeatedly disturbed by this action. Jc3s5h (talk) 13:45, 6 May 2011 (UTC)
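For what it's worth, the ambiguity described above is concrete at the codepoint level, as a quick check with Python's `html` module shows — the characters the entities decode to are distinct from their Latin look-alikes even when a font renders them identically:

```python
import html

nu = html.unescape("&nu;")  # GREEK SMALL LETTER NU, U+03BD
print(nu, hex(ord(nu)))
# Latin "A" and Greek capital Alpha look alike in most fonts but are
# different codepoints:
print(ord("A"), ord("\u0391"))  # 65 913
```

The entity form makes the distinction visible in the edit window; the Unicode form does not.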

This discussion should be in the Manual of Style. Where in the manual is written what you claim? -- Magioladitis (talk) 14:16, 6 May 2011 (UTC)
The only relevant statement in WP:MOS I can find is this:
Keep markup simple
Use the simplest markup to display information in a useful and comprehensible way. Markup may appear differently in different browsers. Use HTML and CSS markup sparingly and only with good reason. Minimizing markup in entries allows easier editing.
I contend that converting Greek letter HTML entities to Unicode makes editing harder and thus runs against the MOS advice. I further contend this is only a problem when bots edit-war with human editors over this, and therefore this is the appropriate discussion forum. Jc3s5h (talk) 15:38, 6 May 2011 (UTC)
OK but I really think you should start a discussion in the manual of style somewhere. This is something AWB does anyway. Not only Yobot. If we have specific instructions of the matter I think it will be helpful. -- Magioladitis (talk) 16:17, 6 May 2011 (UTC)
I have mentioned this issue at Wikipedia talk:Bot policy. I believe that, as a matter of policy, changing Greek HTML entities to Unicode should be considered a context-sensitive task and not be performed by bots. I consider it context-sensitive because it makes it hard to recognize and edit isolated Greek letters, but it is obvious to a human editor that in a passage of Greek text, it is highly probable that all the letters in the passage are Greek rather than Roman. Jc3s5h (talk) 10:32, 8 May 2011 (UTC)
Thanks. Let's see if we get any feedback. -- Magioladitis (talk) 10:35, 8 May 2011 (UTC)
  • I donno about policy, but I think that whenever possible, avoiding the &random-letter-random-number-soup stuff is actually best. I'd put it in Greek, with a <!--This formula uses Greek characters.--> or <!--This formula uses Greek characters, do not insert English characters into the formula.--> next to it. Sven Manguard Wha? 16:53, 27 May 2011 (UTC)
    • I think that the proper conclusion of this discussion would be to discuss the issue of Greek letters vs. Unicode on an appropriate MOS page, and that Yobot not do any more replacements of this type until such discussion is finished. עוד מישהו Od Mishehu 03:55, 1 June 2011 (UTC)
How exactly does changing &nu; to ν make the article "hard to edit"? Headbomb {talk / contribs / physics / books} 08:14, 25 June 2011 (UTC)

Free bot to new home (paper trained & has all shots)

Any botops have room for one more? HBC Archive Indexerbot is broken because its server has died. Krellis is offering it to whoever would like to give it a new home here.
⋙–Berean–Hunter—► ((⊕)) 02:13, 14 June 2011 (UTC)

Krellis has it running again...he's keeping it. :)
⋙–Berean–Hunter—► ((⊕)) 21:14, 25 June 2011 (UTC)

Bot needed

I am wanting a bot to add a link to pages from The American Society of Health-System Pharmacists to all Template:Drugboxs for medications. User:Boghog has offered but is exceedingly busy. The lisinopril article is an example of where this was done. Wish to look at the effect this link has on internet traffic across all Wikipedia medication articles. Have been discussing with a number of organizations the possibility of them becoming involved with donating content or time to improve Wikipedia (i.e. collaborating with us) and need some data to show that Wikipedia "does matter". Doc James (talk · contribs · email) 21:27, 24 June 2011 (UTC)

Is there consensus for what some would call "spamming" these links across Wikipedia? Anomie 07:26, 25 June 2011 (UTC)
If there's no prior discussion about this, I suggest you try WP:MED.Headbomb {talk / contribs / physics / books} 08:16, 25 June 2011 (UTC)
Yes consensus is here [16] [17] and it is WP:PHARM. The drugbox was recently rearranged based on the discussions to specifically include "The American Society of Health-System Pharmacists" and "MedLinePlus" similar to how the infobox disease has "MedLinePlus" and "eMedicine" in it. Emedicine's drug info is very poor which is why we are using the AHFS. Doc James (talk · contribs · email) 16:18, 25 June 2011 (UTC)
  Needs wider discussion. I see no discussion of the issue in the section linked by the first link, and discussion between just you and one other user at the second link. I'd suggest starting a discussion at WP:VPR and advertising it at Wikipedia talk:WikiProject Pharmacology and possibly WP:ELN, specifically framed as "Should we have a bot add links to most/all articles using {{drugbox}}?".
To understand why we're being careful about this, you may want to read through this discussion and the discussions linked from it regarding a previous bot that effectively added links to one site to many articles, although to be fair much of the controversy there stemmed from one now-banned user. Anomie 16:51, 25 June 2011 (UTC)
I think you're a bit overly cautious here Anomie. It's a link to a reliable source on drugs. There's no reason why it shouldn't be present in the infoboxes (especially since the drug infobox already does this with other reliable medical databases). A notice at WP:MED/WP:PHARM saying there's been a bot request for a bot to add them links automatically would suffice, since whoever codes the bot will have questions about this and will benefit from the extra eyes to make sure the bot will work as intended and not make avoidable mistakes. Headbomb {talk / contribs / physics / books} 17:02, 25 June 2011 (UTC)
Well, I wouldn't approve without an attempt at wider community discussion, since bot-adding of links to many articles has historically been controversial even if the individual links themselves seem acceptable. Anomie 17:39, 25 June 2011 (UTC)

Alright, then how would someone know which link to add to the drugbox? Simply take {{Drugbox/Foobar}} and add | to it? Are there cases where this would create mistakes, or dead links? Headbomb {talk / contribs / physics / books} 16:53, 25 June 2011 (UTC)

There has been discussion. The reason why there are not many people involved is that there are not many of us actively involved at WP:PHARM and WP:MED. A group of us recently published this paper but have not seen a stream of people joining yet.
  • Posting at WP:PHARM asking for input here in May [18]
  • Discussion occurred here [19]
  • Have posted at WP:MED here just now [20]
I am not sure on the exact technical details. Thus I have posted here for help. User:Boghog has some suggestions on his talk page.
Doc James (talk · contribs · email) 20:30, 25 June 2011 (UTC)
Here is what Boghog states "Writing a bot script to add fields to a infobox is easy. And while I have not done this before, I think it will be relatively straight forward to script the creation of a special purpose drugbox template for each drug article and transclude this back into the article. The hard part is compiling the data to populate the new fields in the drugbox templates. It appears that the trade names and link can be obtained from Drugbank download data. However I still need a mapping of the "INN drug name" → "Medline Plus accession number" (e.g., Linezolid → a692051). It is a bit messy, but it appears that I can capture this from the html sources of the Medline Plus tables of contents. I am very busy at the moment, but I may have time in a few weeks to awake BogBot from its hibernation to complete this job. Cheers." Doc James (talk · contribs · email) 20:44, 25 June 2011 (UTC)
OK, if you can wait that long, I will have time next weekend to complete the job. As Jmh649 (talk · contribs) stated above, we have already asked input from WP:PHARM to add extra links to the {{drugbox}} template here and there were no objections raised. Furthermore the drug infobox has been updated to include these additional fields. Hence I do not see any controversy populating fields that have been previously discussed and have already been added to the infobox. Concerning transcluding special purpose drug box templates back into drug articles Beetstra (talk · contribs) has expressed some concern about doing this. Hence I suggest that we proceed with populating the new fields but hold off on the transcluded templates. Is this OK with everyone? Boghog (talk) 21:42, 25 June 2011 (UTC)
Thanks Bog that would be perfect. If I get something published and you wish your name on it will make sure that happens. Doc James (talk · contribs · email) 21:53, 25 June 2011 (UTC)

Unregistered bot

  Resolved: Blocked by Mifter. Singularity42 (talk) 23:55, 26 June 2011 (UTC)

Bslwikicorporation (talk · contribs)

Claims to be a bot on the account's userpage; some edit summaries indicate it is a bot. Not registered. Singularity42 (talk) 19:18, 26 June 2011 (UTC)

BRFA needed for new feature?


I recently added a feature to SuggestBot so that it's capable of replacing an existing set of suggestions, rather than simply appending them. Would this kind of feature require a separate BRFA and some trials before I add it to the bot's documentation, or is it not such a critical piece of functionality? Cheers, Nettrom (talk) 19:03, 30 June 2011 (UTC)

Better to beg for forgiveness than ask for permission. A BRFA tends to be a magnet for editors who love to argue for the sake of arguing. The change you made seems pretty minimal, I think you'd be fine to implement it without a BRFA and see if there are any complaints. It's pretty much up to your discretion, per WP:BOTPOL: "Should a bot operator wish to modify or extend the operation of a bot, they should ensure that they do so in compliance with this policy. Small changes, for example to fix problems or improve the operation of a particular task, are unlikely to be an issue, but larger changes should not be implemented without some discussion." —SW— confer 19:12, 30 June 2011 (UTC)
I don't think a new BRFA would be needed, especially as this is an opt-in change for an already opt-in bot. –xenotalk 19:15, 30 June 2011 (UTC)
Thanks for your comments. A good reminder that I should study WP:BOTPOL again, just to refresh my memory. As xeno mentions, it's an opt-in change, the default behaviour of the bot hasn't changed. Once I've read the bot policy again I'll probably go ahead and add the documentation for it. Thanks again for your help, both of you, much appreciated! Cheers, Nettrom (talk) 23:19, 30 June 2011 (UTC)

How to tell if the bot flag is set for an edit

I notice that the "m" flag is shown in a bot's contributions, but the "b" flag is not, although both flags do appear in my watchlist. Is there a way to tell that a particular edit (e.g., this one) was marked as a bot edit? 28bytes (talk) 18:06, 27 June 2011 (UTC)

In the revision table, only the minor flag is stored. The recentchanges table stores both the minor flag and the bot flag. So queries based on the revision table, like the article history, show only the minor flag, while queries using the recentchanges table, like the watchlist or Special:RecentChanges, show both flags. The entry in the recentchanges table is deleted after 30 days. Merlissimo 20:07, 30 June 2011 (UTC)
Ah, interesting. Thanks! 28bytes (talk) 06:42, 2 July 2011 (UTC)
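A sketch matching Merlissimo's explanation: the bot flag is only queryable through the recentchanges list (`rcprop=flags`, a real API parameter), where in JSON output the flags appear as presence-keys on each entry. The helper names here are mine:

```python
# Sketch: list recent changes with their flags; the bot/minor flags
# arrive as empty-string keys on each entry in format=json.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def is_bot_edit(rc):
    return "bot" in rc  # presence of the key means the flag was set

def recent_changes_with_flags(limit=20):
    params = urlencode({
        "action": "query", "format": "json",
        "list": "recentchanges", "rcprop": "title|flags",
        "rclimit": limit,
    })
    with urlopen(f"{API}?{params}") as resp:
        return json.load(resp)["query"]["recentchanges"]
```

For an edit older than 30 days, this information is simply gone, as noted above.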

Using own functions with

Hi folks,

I came from huwiki to gladly tell you that I have written a HOWTO on using functions in your fixes; see hu:user:Bináris/Fixes and functions HOWTO. This is a mostly undocumented feature and will improve your fixes (for those who are a bit familiar with Python programming). Useful for complicated replacement tasks. Enjoy! Bináris (talk) 11:22, 3 July 2011 (UTC)

Python Unicode Bug

Python 2.7.2 was released on Sunday, 12 June 2011. This release no longer triggers Unicode bug 3081100, which occurred for characters with multiple accents (for example on hak-, hi-, cdo- and sa-wiki). I guess it is highly recommended to migrate to this new release if the local version has this bug. Xqt (talk) 13:19, 3 July 2011 (UTC)

Would this be possible?

Could there be a bot that requests pages be protected at wp:rpp if an article is vandalized more than a certain amount of times (like 3) in a certain time (like 24 hours)? The bot could also automatically protect the page without requesting. Heyitsme22 (talk) 12:45, 11 June 2011 (UTC)

Hi, I was thinking about a bot that reports pages for protection, much like one reports pages at WP:AIV and WP:UAA. I would support a bot that adds pages to WP:RFPP, but I would not support a bot that automatically protects the page, pages still need to be checked manually by a human admin, initially at least as "vandalism" in a bot's eyes could be false positives. The Helpful One 15:51, 11 June 2011 (UTC)
The bot wouldn't necessarily need to detect vandalism, only find edit summaries that identify the reverted edits as vandalism several times (huggle messages, twinkle messages, cluebot message, common expressions like "rvv", "revert vand", etc.). In addition checking only non-anon, autoconfirmed editors would give almost no false positives (and a quick BRFA ;)). You could also ping ClueBot operators to see if they would be interested in doing this. —  HELLKNOWZ  ▎TALK 16:11, 11 June 2011 (UTC)
If everyone is OK with it, I can poke Cobi to try and add this in to ClueBot NG - Rich(MTCD)T|C|E-Mail 13:28, 7 July 2011 (UTC)

Please let me know if there is a public code for such a bot, because I would like to implement it in huwiki. Bináris (talk) 16:25, 10 July 2011 (UTC)
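No public implementation is linked in this thread, but Hellknowz's summary-matching idea can be sketched as follows — the patterns and threshold here are illustrative guesses, not a vetted list:

```python
# Hypothetical sketch: flag a page for WP:RFPP when several recent
# reverts carry vandalism-flavoured edit summaries (rvv, rv vand,
# ClueBot/Twinkle-style messages, etc.).
import re

VANDAL_SUMMARY = re.compile(
    r"\b(rvv|rv vand|revert(ed)?\s+vandal|vandalism)", re.IGNORECASE)

def should_report(summaries, threshold=3):
    """summaries: edit summaries for one page within the time window."""
    hits = sum(1 for s in summaries if VANDAL_SUMMARY.search(s))
    return hits >= threshold

print(should_report([
    "rvv", "Reverted vandalism by 1.2.3.4", "copyedit", "rv vandalism",
]))  # True
```

As noted above, restricting the check to reverts made by autoconfirmed, non-anonymous editors would cut false positives considerably.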

Hi! Work request

Hi! I've been editing Latin American people's names to include the proper accent marks. During the last days I've included thousands of accent marks, but the task is showing itself to be too huge to do it manually. Would some of your bots care to help me with this, please? I have a first list here for your orientation on what to do. When disambiguations are not provided for common names, it is safe to add the accent marks to those names in sports articles (I've checked it out). Please let me know if you decide to give me a hand and do this. Thank you! Againme (talk) 17:56, 10 July 2011 (UTC)

  • As the notice at the top of this page says, this is not the place for requesting work to be done by a bot. I'm sure you'll get a better response to your request at Wikipedia:Bot requests. --R'n'B (call me Russ) 13:29, 11 July 2011 (UTC)

User:Mathbot reverting my categorisation edits

User:Oleg Alexandrov operates User:Mathbot. Mathbot has been reverting my re-categorisation of some Wikipedia namespace articles. They are visible in Category:Indexes of articles and Category:Mathematics-related lists but they should only be in Category:WikiProject Mathematics. Oleg does not seem to check the Mathbot talk page regularly and he is yet to reply to my request on his talk page. How can I get Mathbot to stop reverting my edits? -- Alan Liefting (talk) - 08:42, 11 July 2011 (UTC)

I changed it. — Carl (CBM · talk) 11:58, 11 July 2011 (UTC)

DASHBot is not doing image resizing again


Hi there. DASHBot is supposed to clean out Category:Non-free Wikipedia file size reduction request nightly, at least all of the images (95% of what's there). It went for over a week not doing it, then ran on July 6, but failed to run yesterday. (Task here)

I can keep that category trimmed, but it sucks up lots and lots of time. I'd like for either DASHBot to be brought back up, or some other solution to be found. Please? Sven Manguard Wha? 07:50, 8 July 2011 (UTC)

Procedural: Tim1357, the bot's owner, was notified. Sven Manguard Wha? 19:03, 8 July 2011 (UTC)
The stupid crontab is being stupid. I'm trying to figure out why it keeps giving up. Gimme a minute. Tim1357 talk 23:55, 8 July 2011 (UTC)
Could nice be killing it? Tim1357 talk 23:55, 8 July 2011 (UTC)
Anyways, I started it again manually and changed the crontab so that it no longer uses "nice" and runs at a different time so that the server isn't so loaded. Tim1357 talk 00:20, 9 July 2011 (UTC)
Thanks, saw it run. Will it run every day again? Also, if you're looking for a slow time, it's about three hours and 20 minutes from this timestamp ( 03:38, 9 July 2011 (UTC) ). Too early for the Brits to be editing, after midnight for the US West Coast, after 3AM for the US East Coast. IRC channels for Wikipedia, at least, go dead around then. Sven Manguard Wha? 03:38, 9 July 2011 (UTC)
Yep, it looks like it ran on its own today [21] so I think it should work from now on. Just for the record, I was worrying about the toolserver (where DASHBot is hosted) being overloaded, not the wikipedia server. Thanks for bringing this to my attention.   Tim1357 talk 03:02, 10 July 2011 (UTC)
No problem. If the bot ever croaks, I get stuck with doing the resizing manually, so I check up on it from time to time. Sven Manguard Wha? 08:18, 10 July 2011 (UTC)
Two things, firstly, if nice was killing it, doesn't that raise serious issues in itself?
Secondly, when I consulted the log, it said something like "identified 4 files" when there were more than 4 in the category...? - Jarry1250 [Weasel? Discuss.] 10:29, 10 July 2011 (UTC)
Jarry: I haven't gotten any emails saying that it was killed because it was using too many resources so I have no idea what was up. Tim1357 talk 03:40, 13 July 2011 (UTC)
It can't tackle sound or video files. A dozen of those have been in the category for over a month, and the video has been there as long as I've been checking that category, so going on a year. It does, however, get all the images. Sven Manguard Wha? 01:13, 11 July 2011 (UTC)

User:SmackBot tasks => user:Helpful Pixie Bot tasks and bot-flag

  Resolved: Flag granted to bot clone. –xenotalk 13:19, 12 July 2011 (UTC)

Further to User talk:Rich Farmbrough/Archive/2011Jul#Bot's name and an email from Phillipe@WMF, I have created a new account with a more "friendly" name. I plan to transition SmackBot's activities to this account, once a bot flag is set.

Can BAG please make the necessary arrangements with the 'Crats?

Rich Farmbrough, 13:10, 12 July 2011 (UTC).

All set. –xenotalk 13:19, 12 July 2011 (UTC)
Then I would suggest that you remove bot flag from SmackBot. -- Magioladitis (talk) 13:22, 12 July 2011 (UTC)
Sure; as soon as Rich confirms that it is standing down. –xenotalk 13:25, 12 July 2011 (UTC)
I was going to say there needs to be a transition, but actually there probably doesn't. Rich Farmbrough, 14:13, 12 July 2011 (UTC).
OK transitioned. Thanks. Rich Farmbrough, 14:13, 12 July 2011 (UTC).
Removing the bot flag would screw the statistics systems. — Dispenser 14:22, 12 July 2011 (UTC)
Could you explain? –xenotalk 15:06, 12 July 2011 (UTC)
Methods of excluding bots are based on the presence of the bot flag. Having a separately maintained list across all wikis is too much effort. And while unflagging a bot with fewer than 400 edits has little effect, SmackBot has nearly 4,000,000. If we're talking about security, we ought to talk about unflagging retired/inactive/non-e-mailable admins. But we already know the results of that. — Dispenser 18:41, 12 July 2011 (UTC)
Perhaps you have not been keeping up-to-date on recent events?xenotalk 19:12, 12 July 2011 (UTC)
Renaming is impossible due to the number of edits and having an inactive account set as bot is not a good idea. -- Magioladitis (talk) 14:26, 12 July 2011 (UTC)
Not impossible, but would require developer intervention (and they might say no =). –xenotalk 15:06, 12 July 2011 (UTC)
Umm... Category:Inactive Wikipedia bots. — Dispenser 18:41, 12 July 2011 (UTC)
Indeed, I've only just recently deflagged some bots that hadn't edited since 2007. Since there's clearly no rush, we can explore potential problems that would crop up by de-flagging SmackBot before it's done. –xenotalk 19:12, 12 July 2011 (UTC)
Seems there is already a (quasi-)central list for this purpose: Wikipedia:List of Wikipedians by number of edits/Unflagged bots. –xenotalk 17:24, 13 July 2011 (UTC)
Submit a request for the rename to be completed in bugzilla: and someone should do it. Peachey88 (T · C) 04:25, 13 July 2011 (UTC)

Questionable behavior of XLinkBot

I recently ran into a case where XLinkBot reverted the edits of a user because these edits added a link to Facebook (see diff). I find this behavior of the bot questionable. There is a lot of potential for removing a lot of good work because of a single link that violates WP:ELNO#10. In my opinion such a task should NOT be handled by a bot. A link to Facebook is discouraged, but perhaps not nearly as harmful as the eventual loss of the editor. There should be a careful review before such a reversion takes place, and the way in which XLinkBot handles this is more than unfortunate (reverting all subsequent edits of the user). This behavior of XLinkBot needs to be changed in my opinion. Toshio Yamaguchi (talk) 08:59, 14 July 2011 (UTC)
I notified the bot's operator User:Versageek of this discussion on his talk page. Toshio Yamaguchi (talk) 09:10, 14 July 2011 (UTC)

Every action on Wikipedia might cause a problem: reverting dubious links added by new users might be bitey, and not reverting such links might lead to an avalanche of promotional or inappropriate edits. XLinkBot leaves an excellent message (example), and any new editor who takes the trouble to read the message would understand the situation and not feel bitten. The advantage of having XLinkBot doing this work is that new editors get a fast response regarding a common problem: a new editor who starts by adding links to dubious sites needs to learn that the practice is not encouraged, and that such actions are monitored. Johnuniq (talk) 09:57, 14 July 2011 (UTC)
It is acknowledged that XLinkBot makes some mistakes in reverting (as does every anti-vandalism bot). One questionable revert of facebook is not a reason to change the bot, it might be cause for review of the facebook reverts. Maybe we should do that and see (I did that once for myspace links, and found one out of 30 which I personally would not have removed - though I would certainly not have added it either in that case).
That being said, links like Facebook are hardly ever suitable external links; there are not that many subjects whose sole and most important internet presence is a Facebook page. Generally there are others, and generally there is an official page (and if that Facebook page is important, it is very often (like here) linked from the official homepage). We're not a linkfarm, etc. I would call this a good removal (though it also reverted other parts, see below).
Unfortunately it is technically impossible to remove only the external links which are a problem, as that would very often lead to vandalism by the bot. There are so many ways of adding external links that removing only the link would in one case still leave quite some 'spam' (for a true spam link), while in other cases it would leave a broken page. On the other hand, reverting the whole edit (or all edits) generally does not leave a broken page, but may revert good info as well (reverting only one edit is another option, but it has been found that this more often leaves broken pages, as newbies sometimes need 2-3 edits to get the external link correct). All in all, reverting all edits by the editor and notifying the editor of the revert is generally the better option.
The bot has several safeguards in place: it is significantly soft in 'warning', forgets quickly, does not revert established users, does not re-revert, does not edit-war, etc. Moreover, it is not tagged as a bot, which makes its edits show up in recent changes (easier for other editors to check).
All this being said, I did spot a bug here, which I resolved (the user should not have gotten a first-level warning, it should have gotten a good-faith remark). --Dirk Beetstra T C 09:58, 14 July 2011 (UTC)

is this a bot

Hi. Is this a bot or a living person? - User:WPCbot - Off2riorob (talk) 21:44, 16 July 2011 (UTC)

My guess is human. Certainly not an approved bot, at any rate. I see it is already listed at WP:UAA. Anomie 03:28, 17 July 2011 (UTC)
Ah, thank you for looking, it did seem to be "thinkin" excessively to be a bot- regards. Off2riorob (talk) 09:34, 17 July 2011 (UTC)

Image placeholders

Back in 2008 there was a centralized discussion that concluded that image placeholders are not needed, but nobody ever came to remove the existing placeholders. In the following years some editors kept adding placeholders and others kept removing them. As User:WOSlinker wrote on my talk page: "If image placeholders were ever needed, a better place would have been to add them at the template level rather than on articles; then all articles without an image set would get a placeholder image, rather than just a few of the articles. But since they are not, better to be consistent and remove them all. There are not that many really, although it's a few too many to do by hand, but not a big job for a bot."

Infobox standardisation enables us to treat the question uniformly. In the last months we have put a lot of effort into standardising the code and the visual outcome of all infoboxes. My bot, Yobot, started removing placeholder images from articles yesterday as part of this standardisation and made 3-4k edits. There is a tracking category, Category:Infobox person using placeholder image, which is common to all infoboxes about persons. At the moment there are about 12k more pages. I thought I should leave a note here since I got a couple of questions on the matter from editors who do not disagree but would like to know that the community is informed of this change. -- Magioladitis (talk) 10:09, 17 July 2011 (UTC)

Well actually I was one who asked you one of those questions, and I do disagree. I find image placeholders useful and appropriate for living scientists. The result of the centralized discussion, way back in 2008, was 18 in favour out of 32 participants, and that can hardly be called a "consensus". I do not think you can say that you have any form of current consensus to do this. I understand that standardisation has a certain appeal, the appeal of wrapping everything up neatly and doing away with uncertainty. But too much standardisation becomes pernicious. There should always be a bit of creative ferment, disagreement and uncertainty at the perimeter of what is allowable. Without that, the energies that can lead to improvements can be strangled, and Wikipedia will be the loser. --Epipelagic (talk) 10:53, 17 July 2011 (UTC)
As I said, we can form a consensus to add placeholders to all infoboxes for scientists, for instance. But once that consensus is formed we won't need to add image placeholders to each page separately; we can just change the infobox's code. Right now placeholders are just put on random pages. -- Magioladitis (talk) 10:58, 17 July 2011 (UTC)
It was the matter of about a minute to find Wikipedia:Bots/Requests for approval/AmeliorationBot 2 and Wikipedia:Village pump (proposals)/Archive 58#Time to remove placeholders?. There does not seem to be consensus for a widespread removal of these images based on that old discussion. Feel free to start a new discussion somewhere. Anomie 13:18, 17 July 2011 (UTC)
I remember this discussion. The difference is that we now have better infobox code and tracking categories, and we are already down to 12,000 articles. As I said, the consensus was not to add the images, and there was the question of whether or not to remove the existing ones. This is different from what Epipelagic says, who supports the use of this image. -- Magioladitis (talk) 14:10, 17 July 2011 (UTC)
I think it should really be an all or none situation for image placeholders. If it's an all, then they could be set in the template rather than the article. If it's a none, then they shouldn't be on the article. So either way, removal from article text is the sensible option. -- WOSlinker (talk) 14:23, 17 July 2011 (UTC)
All the more reason to start a new consensus discussion. WP:VPR is that way. Anomie 17:58, 17 July 2011 (UTC)

Bot in question

I had made User:Porchcorpterbot as my bot account. It got approved as a bot. See User talk:Porchcorpter#"Bot" account. Frank says that it is not a bot, and it is a violation of the username policy. But I created it as my bot account for AWB purposes. Would anyone be willing to give their opinion on this? -Porch corpter (talk/contribs) 10:13, 21 July 2011 (UTC)

Well on the one hand it is not really a bot account, just an AWB alternate account (I have one, as do many others; and none of them formally approved as "bots"). I'm not sure why you went to BRFA? On the other hand "meh"; the name doesn't seem to be the end of the world. Perhaps note that it is your AWB alternate account (rather than an approved bot) on the user page. --Errant (chat!) 10:38, 21 July 2011 (UTC)
It was created to be my bot account for the AWB guideline "Don't edit too quickly; consider opening a bot account if you are regularly making more than a few edits a minute.". If it was approved to be an alternate account, I'm not sure why. I had created it to be my bot account for that guideline. -Porch corpter (talk/contribs) 10:44, 21 July 2011 (UTC)
You haven't actually used it though? Editing fast enough to require that sort of approval with AWB is actually quite hard :) (if you are manually checking edits) so you probably needn't have worried. That is more talking about if you are using AWB to make a specific string of changes very quickly. --Errant (chat!) 11:07, 21 July 2011 (UTC)
I'll explain my thought process approving that. I considered it a semi-automated alternative account, not technically a bot. I said there that "You will not be marked as a bot on AWB checkpage [...], and you won't be flagged as a bot, since you aren't one." I thought I'd just mark the BRfA as approved though, to make things easier, and as I thought nobody would kick up a fuss about something like this or get confused. I didn't intend my approval to be saying "yes, you're now an approved bot", more "you're not a bot and don't really need a BRfA but marking as approved as you can use this account". But now it seems that Frank is upset about the name, another user was declined AWB access partially because they removed the bot template from the user page, and the bot even got moved into the bot section of the AWB/CP. All of that is down to everybody getting a little bit confused about what has happened, and why I took the action I did back then (the only reason I approved was because I didn't want to cause more trouble for Porchcorpter by hassling him with having to create a different account and having an unapproved BRfA to his name). Again, my approval was not approving as a bot, it was saying that it technically didn't require BAG approval as it was not a bot. To clear things up, here's what I'm going to do: I will revoke the approval. The bot will be marked as an unapproved bot and Porchcorpter should no longer use that account. If you still wish to have a semi-automated alternative account, then create something called PorchcorpterAWB (or similar) and I will be happy to give it AWB access. However, do not bother starting a BRfA, since (as I said) it is not technically required for that. - Kingpin13 (talk) 11:23, 21 July 2011 (UTC)
Two more cents: I'm not upset, per se, but concerned that Porchcorpter is under a topic ban from participating at WP:UAA and months later, still thinks that his alternate account is a bot. This does not seem to be supported by Wikipedia:Bot policy or Wikipedia:Username policy, but I don't intend to start a debate here. I thought it was bright-line obvious that we are NOT discussing a bot account because there's no script or code behind it, no approval process, no stated goal, and indeed, only one edit made with the account. If my interpretation is incorrect, then so be it. If we get past that, and we agree to call it a bot, Porchcorpter has previously seemed so attached to the concept of operating a bot account that he's placed an "emergency shut-off" notice on its page, which clearly isn't required for a 1-edit account. When another editor removed that (before I did), Porchcorpter disagreed. It's worth noting the logic used in that message: if it was not intended to be a bot, it would be blocked for violation of the username policy, in which Porchcorpter seems to be asserting that it's a bot because nobody has blocked it for not being a bot. I have found over the course of months observing and interacting with Porchcorpter (née Porchcrop) that being as explicit as possible is the best course of action for all concerned, and given that his desire is (at least) to work with UAA, it seems a little inappropriate to me.
However, let me be clear: I am not hopping up and down requesting action, I'm not "upset", and I'm not considering this a big deal. It was meant as a small course correction as part of an overall shepherding process for Porchcorpter; if there isn't wide agreement with my point of view, the wiki won't break. Finally, I was unaware of a relatively large number of accounts in Category:Unapproved Wikipedia bots. If Porchcorpter understands this "bot" account should only be editing in its own space, as noted in the current infobox on the top of its page, I think there's nothing more to see here.  Frank  |  talk  12:27, 21 July 2011 (UTC)
  • Since the account wasn't/isn't editing, this is all rather academic. However, my thoughts are that an account with 'bot' in the username should only be doing specifically approved tasks (not "just whatever with AWB"), as people may afford the edits less scrutiny (i.e. assuming they've been approved by BAG). –xenotalk 12:28, 21 July 2011 (UTC)
  • Cheers Kingpin. But note that since this was made to be a bot account and hasn't been editing manually, it definitely does not violate the username policy. Also, some bots are only used for AWB purposes, and these bots use {{bot|operator|awb=yes}} on their userpage. Bots made only to run AWB don't need to have any code or script behind them. And I am familiar with UAA; when the ban expires (which is two months from now), see if there are still problems with me at UAA. The reason for the topic ban is that I was a bit incompetent, and the community could not stand my disruption. -Porch corpter (talk/contribs) 09:25, 22 July 2011 (UTC)

BAG candidacy

Since it looks like we need more active BAG members, I volunteered to help out; feel free to leave comments. On a related note, if any of you are experienced editors with good tech skills when it comes to bots + would like to make it a haunt, please feel free to open one up as well. --slakrtalk / 11:21, 24 July 2011 (UTC)

Another BAG candidacy

I've nominated myself for BAG membership; comments, questions, and !votes are welcome at Wikipedia:Bot Approvals Group/nominations/Hersfold. Thanks. :-) Hersfold (t/a/c) 18:35, 8 August 2011 (UTC)

RfBA contradiction in instructions and BAG knowledge

Discussion can be found here. Please discuss there, not here, should you desire discussion. Thanks. -- (talk) 21:26, 10 August 2011 (UTC)

RFC Bot going on a rampage of sabotage

Not really much to explain; in fact, the history of WP:Requests for comment/History and geography basically explains its problem. LikeLakers2 (talk) 04:15, 11 August 2011 (UTC)

Yes? Sigma is taking/took care of it so why the notice here? Headbomb {talk / contribs / physics / books} 04:21, 11 August 2011 (UTC)
Because I think it's better to full-protect the page Sigma linked to for now and wait for some time, so as to avoid the chance of the RFC Bot's vandalism exploding on the page every half-hour. LikeLakers2 (talk) 04:34, 11 August 2011 (UTC)
If Sigma's on it, then why does the page need full-protection? At worst, {{Bots|deny=RFC Bot}} would be fine as a temporary measure. Headbomb {talk / contribs / physics / books} 04:35, 11 August 2011 (UTC)
I guess we could add {{Bots|deny=RFC bot}}. Could you add it for me? My phone's web browser gets an exception when I try to edit the editbox. LikeLakers2 (talk) 04:59, 11 August 2011 (UTC)
Done. As a general note, if you need a bot to be blocked or whatever, try WP:AN over WP:BON. You'll get a quicker reply from people who can actually block bots. Some admins watch this page, but I wouldn't bet on a quick reply for pressing matters. Headbomb {talk / contribs / physics / books} 05:03, 11 August 2011 (UTC)
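As an aside for bot writers: the {{Bots|deny=...}} opt-out discussed above is machine-readable, so a bot can test a page for it before saving. A minimal Python sketch of such a check — this simplifies the full Template:Bots convention (it ignores optout= and message-type keys), and the bot names in the docstring are only examples:

```python
import re

def bot_may_edit(page_text, bot_name):
    """Rough check of the {{bots}}/{{nobots}} exclusion convention.

    Returns False when the page opts out of edits by `bot_name` (or by
    all bots), True otherwise. Simplified sketch: the real convention
    also supports optout= and message-specific keys, ignored here.
    """
    # {{nobots}} bans every compliant bot outright
    if re.search(r"\{\{nobots\}\}", page_text, re.I):
        return False
    # {{bots|deny=Name1,Name2}} bans the listed bots (or "all")
    m = re.search(r"\{\{bots\s*\|\s*deny\s*=\s*([^}|]*)\}\}", page_text, re.I)
    if m:
        denied = [n.strip().lower() for n in m.group(1).split(",")]
        if "all" in denied or bot_name.lower() in denied:
            return False
    # {{bots|allow=Name1,Name2}} permits only the listed bots
    m = re.search(r"\{\{bots\s*\|\s*allow\s*=\s*([^}|]*)\}\}", page_text, re.I)
    if m:
        allowed = [n.strip().lower() for n in m.group(1).split(",")]
        return "all" in allowed or bot_name.lower() in allowed
    return True
```

A bot that honours this only needs to call `bot_may_edit(wikitext, "MyBot")` before each save and skip the page on False.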

RFC on identifiers

There is an RFC on the addition of identifier links to citations by bots. Please comment. Headbomb {talk / contribs / physics / books} 15:51, 15 August 2011 (UTC)

Login problems

I can no longer seem to log in using my homemade scripts, which always used to work - either under my bot account or my normal user account. When I give the correct username and password, I just get served up with the main page, and I'm not logged on. Has there been a change of requirements as to the user agent string or something? Thanks for any pointers,--Kotniski (talk) 08:24, 16 August 2011 (UTC)

How are you logging in? API? —  HELLKNOWZ  ▎TALK 08:38, 16 August 2011 (UTC)
By brute force - getting the login form and sending a request back with all the required data. It always used to work, with an invented user agent string ("Agent.Kotbot"), but now I discover it's not working - well, it worked once today, for the bot account, but not for my own user account; and then after I made a few attempts with a different user string (a standard Mozilla one), it no longer works even for the bot and even with "Agent.Kotbot". So unless I've made some stupid error, I'm assuming there's been some change as to the requirements for the agent string.--Kotniski (talk) 08:50, 16 August 2011 (UTC)
I've never done it this way; maybe someone else can help you. I also don't follow changes to MediaWiki, so I wouldn't know if they changed anything. Perhaps your POST data lacks some newly added parameter? Perhaps your script isn't accepting cookies properly? Perhaps you are getting a Captcha? I'm puzzled, because you say it worked once, suggesting it may be a Captcha. Maybe you have a bad referrer string value (if that even matters)? Unless your user agent is blacklisted, I don't think it matters as long as it is not empty. —  HELLKNOWZ  ▎TALK 08:58, 16 August 2011 (UTC)
Update: sorry, it seems that the bot account is continuing to log in as normal, so it's only my regular user account that's causing the problem. Are there different agent string (or other) requirements for regular accounts than for bot-flagged accounts?--Kotniski (talk) 10:29, 16 August 2011 (UTC)
I don't think there is any difference—my bot logs into its non-bot-flagged account just fine. However, if you fail the login 3 times with the standard user form, you will be presented with a captcha, and if you fail the API login 5 times, all subsequent requests will be dropped for 15 minutes. (Just use the API; it's a lot easier! :P ) Reaper Eternal (talk) 10:39, 16 August 2011 (UTC)
Thanks for all the suggestions, but exactly the same thing's happening today: my script can log on to the bot account and perform an action without any problem; but when I change the bot name and password to my personal username and password, it's not getting logged on (it's receiving the Main page instead of the you are/are not logged on page). This is despite my having no problems logging off and on again in my browser, without any Captcha. Any more ideas? I'll try asking at the technical VP as well.--Kotniski (talk) 12:42, 17 August 2011 (UTC)
Perhaps it is because you are already logged in via your browser, so when you try to log in again, MediaWiki just ignores that request and redirects you back to the last page you visited (or the Main Page, since your bot script didn't come from any page). —  HELLKNOWZ  ▎TALK 13:59, 17 August 2011 (UTC)
Hmm, yes, that's a possibility (though it must be a recent change) - I'll try it another day from a different IP address without opening the browser first, and see if it makes any difference.--Kotniski (talk) 14:16, 17 August 2011 (UTC)
There's zero chance of us helping debug this without more info than "homemade scripts" :) What language are they written in? What OS are you executing them under? etc. Could be all manner of causes --Errant (chat!) 15:25, 17 August 2011 (UTC)
Python, using the standard Python library functions (urllib and so on) to retrieve and submit forms. Windows 7. Though I doubt it has much to do with that side of things - the only thing I'm changing (to go from working to not working) is the account login credentials.--Kotniski (talk) 15:41, 17 August 2011 (UTC)
Does your bot have ipblockexempt? - Jarry1250 [Weasel? Discuss.] 13:32, 19 August 2011 (UTC)
No, it doesn't seem so. Anyway, all the accounts can log in through the browser, so presumably there's no IP block at work.--Kotniski (talk) 13:45, 19 August 2011 (UTC)
Oh, actually all bots have it these days. And that would depend on how you run your scripts -- straight off your home computer? Or do you upload them to a server? But either way, why can't you just grab the HTML content of whatever response you get to the login attempt? - Jarry1250 [Weasel? Discuss.] 14:06, 19 August 2011 (UTC)

(outdent)I run them from home. The response I was getting to the (unsuccessful) login attempts was the Wikipedia main page.--Kotniski (talk) 14:32, 19 August 2011 (UTC)

Ah, my apologies... I seem to be getting the main page as the response in both cases (successful and not). Unless I decide to send the wrong password - in that case I get the "login unsuccessful" page.--Kotniski (talk) 14:57, 19 August 2011 (UTC)
Hmm, interesting. And you're sure you didn't change anything about your setup immediately before the problem started appearing? How long a gap was there between the last successful login through your user account and the first unsuccessful one? Could you even suggest dates, perhaps? - Jarry1250 [Weasel? Discuss.] 15:24, 19 August 2011 (UTC)
Quite a long time - several months, I think. In fact I remember I did have to change the code at one point to accommodate a change in the login form; maybe that was in the intervening interval (though I can't understand why it would have affected one account and not the other). OK, since no-one seems to recognize this issue, I'll go back and look at the forms, maybe there's something going on there (or I'll reprogram it to use the API...)--Kotniski (talk) 16:04, 19 August 2011 (UTC)
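For what it's worth, the API login of that era was a two-step handshake: a first action=login POST answered with result "NeedToken" plus a token, and a second POST repeated the credentials with lgtoken set, carrying the session cookie across both requests. Here is a Python 3 sketch under those assumptions — current MediaWiki instead expects a login token fetched via meta=tokens, and the user agent string below is made up:

```python
import http.cookiejar
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def login_payload(username, password, token=None):
    """Form data for one step of the old action=login handshake;
    the second step adds lgtoken from the first response."""
    data = {"action": "login", "format": "json",
            "lgname": username, "lgpassword": password}
    if token is not None:
        data["lgtoken"] = token
    return urllib.parse.urlencode(data).encode("utf-8")

def api_login(username, password):
    """Run both steps with a cookie-aware opener, since the session
    cookie set by the first response must accompany the second POST."""
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(http.cookiejar.CookieJar()))
    opener.addheaders = [("User-Agent", "Kotbot-example/0.1")]
    first = json.load(opener.open(API, login_payload(username, password)))
    token = first["login"]["token"]        # first result is "NeedToken"
    second = json.load(
        opener.open(API, login_payload(username, password, token)))
    return second["login"]["result"]       # "Success" on a good login
```

Unlike scraping the HTML login form, the JSON response makes failures explicit (e.g. "WrongPass"), which would have shortcut the guessing in this thread.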

Would this need new approval?

User:CommonsNotificationBot currently monitors speedy tagged files on commons and posts article talk page notices here to keep people informed. I have had a request to do the same, but for images tagged "F6" here on (see here and my talk page). Does this need new approval? It adds probably less than 20-30 edits per day (I think) to the bot, and doesn't do anything "new" except change the category being monitored. --Errant (chat!) 21:45, 21 August 2011 (UTC)

Yes, but I see no reason it couldn't be speedied. But why only F6 and not other CSD types? Anomie 22:57, 21 August 2011 (UTC)
Hmm, no reason - except I was specifically asked about F6. It could cover all easily. Will open a BRFA, thanks :) --Errant (chat!) 00:40, 22 August 2011 (UTC)
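Incidentally, widening the bot from one deletion category to several is mostly a matter of the cmtitle handed to list=categorymembers. A hypothetical Python sketch of the query string such a monitor would poll (the category name is illustrative):

```python
import urllib.parse

def category_query(category, limit=500):
    """Build the MediaWiki API query string for listing the members
    of a tracked deletion category."""
    return urllib.parse.urlencode({
        "action": "query",
        "list": "categorymembers",
        "cmtitle": "Category:" + category,
        "cmlimit": str(limit),
        "format": "json",
    })
```

Running the same loop over a list of categories (one per CSD type) would cover Anomie's suggestion with no other structural change.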

General notice to bot owners about edit summaries

I often come across edit summaries by bots which do the "bare minimum" when it comes to our requirements on informative edit summaries and open communication. I'm not interested in doing a big "crackdown" on edit summaries, but I would like to remind everyone that verbose edit summaries are better than "terse" ones. For example, a bot updating the parameters in a template might have the edit summary

  • "Updating parameters of WikiProject banners."

But it could also have a much better summary, something like

So it would be great if everyone would

  1. Consider ways to make their edit summaries more explicit / verbose
  2. Consider ways to give the rationale for the bot's edit in the edit summary; if the rationale is too long, consider giving links to policy/guideline/whatever pages, or to an explanation of the bot's edits on your bot's user page (or something equivalent like its BRFA)
  3. Include a link for complaints and suggestions in the edit summary
  4. Proper grammar and spelling is sexy, especially sexy to those who do not speak English natively

For a "real life" example of where this makes a difference, compare [22] with [23] and ask yourself what is clearer.

I'll defer to individual bot owners on how best to achieve good communication for whatever bots / bot tasks they are running, but it would be nice if everyone would at least review their bot's edit summaries and ask themselves if it's possible to make them better (suggestions 3 & 4 should be particularly easy to follow). Thanks. Headbomb {talk / contribs / physics / books} 03:46, 16 August 2011 (UTC)

  • Endorse this strongly. Summaries like "Fixing infobox" or "Repairing links" are not descriptive, and a good portion of bot summaries are still truly basic even though BOTPOL prescribes otherwise. I understand that it is cumbersome to write them out when editing manually, but for a bot all that's required is a one-time coding. My preference is that bot summaries say exactly what they did and link to the task's description. —  HELLKNOWZ  ▎TALK 16:20, 16 August 2011 (UTC)

I completely agree that a summary of the form 'fixing infobox' should be improved. I have two comments:

  1. If comments should go to the bot's user talk page, or the page it redirects to, then I don't see a reason to include that link in the summary as well. Any MediaWiki web interface that shows you the edit summary is likely to have a link to the bot's talk page already.
  2. There is at least one circumstance where changing the edit summary is not possible with current MediaWiki. When a bot (or anyone else) creates a new section, it is not possible to also set a custom edit summary. This is because the summary field is re-used by the software as the title of the new section (e.g. as documented at mw:API:Edit, or you can just click 'new section' and see there is no box for an edit summary). Bots that post messages to talk pages should do so by posting new sections, because this is guaranteed to be edit-conflict free and independent of any other edits, unlike any other method of posting to a talk page. An example of a bot edit of this form is [24].

— Carl (CBM · talk) 11:30, 17 August 2011 (UTC)
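To illustrate Carl's second point, here is a hedged sketch of the POST data for such an edit: with section=new the summary field is consumed as the new section's heading, leaving no slot for an independent edit summary. The page title and token below are placeholders, and later MediaWiki versions added a separate sectiontitle parameter:

```python
import urllib.parse

def new_section_payload(page, heading, body, edit_token):
    """POST data for an edit-conflict-free talk page post via the API.

    With section=new, MediaWiki reuses `summary` as the new section's
    heading, so no independent edit summary can be supplied.
    """
    return urllib.parse.urlencode({
        "action": "edit",
        "title": page,
        "section": "new",
        "summary": heading,  # becomes the section heading, not a summary
        "text": body,
        "token": edit_token,
        "format": "json",
    }).encode("utf-8")
```

This is why notification bots of this kind show the section heading where a custom summary would otherwise appear.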

The point is mostly to explicitly encourage people to report errors and make suggestions for improvements. The link doesn't really *have* to be there, but it makes it clear where comments should be left, so people don't have to wonder whether they should leave them on the bot's talk page, the operator's talk page, or some kind of subpage etc... So if space considerations allow for the link to be there, why not? Newbies might very well leave an error report on the article's talk page, and then the operator would probably never hear of it.
As I said, I'll defer to individual bot owners on how best to make the edit summaries clear for their bots. A specialized bot that works on WikiProject subpages (i.e. a bot that updates WP:JCW) has much less need of a clear edit summary than a bot that removes fair use files from articles when they lack a proper license. It's nice to have a good edit summary in both cases, but "Updating report" in the former case is good enough, while "Removing file" in the latter case isn't. Headbomb {talk / contribs / physics / books} 14:19, 17 August 2011 (UTC)
Yes, but there's a limit to summary length, and sometimes less is more. The distinction Headbomb draws is a good one - choosing the edit summary is a matter of programmer discretion, as well as time and tools available. For example with custom code it's possible to build up a blow by blow edit summary (though arguably this duplicates "diff"), whereas with AWB there are specific things (the most useful) you can do, and that's it. Rich Farmbrough, 17:18, 21 August 2011 (UTC).
diff shows what was done, not why it was done (who authorised it, and the rationale behind it). Linking to e.g. WP:MOSNUM is good, but its scope is large, and I think the "blow-by-blow" is useful in narrowing down the specific area/action being undertaken (and only if it was actually undertaken in this specific edit). —Sladen (talk) 20:40, 21 August 2011 (UTC)

I am always open to suggestions on how to improve my bot's edit summaries. You always have to deal with edit summary length and lack of inspiration. Check also most non-bot editors' edit summaries; the situation is no better. I can try to add some stuff dynamically for AWB users, but in the past when we tried it we had complaints about performance. The only thing I find unnecessary in Headbomb's suggestion is the link for suggestions; this link is supposed to be the talk page in most cases. PS: after this discussion I improved my bot's edit summary a little bit. :) -- Magioladitis (talk) 22:48, 21 August 2011 (UTC)

Good to hear about the enhanced edit summaries. On the summary length topic, I think this is more of a worry in theory than in reality; truncation (or ellipsing with '…') is just fine, and it's still infinitely better than the general minimal habit at the moment. The increased detail in the first 100 characters is likely to provide considerable reassurance in the common case. —Sladen (talk) 07:00, 22 August 2011 (UTC)
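Sladen's truncate-with-ellipsis suggestion is easy to get subtly wrong when a summary contains non-ASCII text, because the limit was measured in bytes while Python strings count characters. A sketch under the assumption of a 250-byte limit (the exact figure varied by era; current MediaWiki allows far longer summaries):

```python
def ellipsize_summary(summary, max_bytes=250):
    """Truncate an edit summary to a byte budget, appending an
    ellipsis, without splitting a multi-byte UTF-8 character.

    The 250-byte default is an assumption about the field size of
    the era, not a documented constant.
    """
    raw = summary.encode("utf-8")
    if len(raw) <= max_bytes:
        return summary
    ellipsis = "…"                                  # 3 bytes in UTF-8
    keep = max_bytes - len(ellipsis.encode("utf-8"))
    # errors="ignore" silently drops a partial character at the cut
    return raw[:keep].decode("utf-8", errors="ignore") + ellipsis
```

A bot can then build the most verbose summary it likes and pass it through this once before saving.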

Whilst I won't go in-depth on what I would like to see or not see in the summaries, the small change I would like to see for people running multiple tasks from one account is to include something like "Task #" in the edit summary, and then provide a list of tasks on the bot's userpage so people can easily look up more information. Peachey88 (T · C) 01:07, 22 August 2011 (UTC)
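Peachey88's "Task #" idea costs about one line of code per bot. A hypothetical sketch — the bot name, userpage section layout, and wording are all invented for illustration:

```python
def task_summary(task_no, action, bot_user="ExampleBot"):
    """Compose an edit summary that names the task, links to its
    description on the bot's userpage, and invites error reports."""
    return ("Task {n}: {a} ([[User:{b}#Task {n}|details]]; "
            "report errors at [[User talk:{b}]])").format(
                n=task_no, a=action, b=bot_user)
```

This bakes suggestions 1-3 from the list above into every save with a single one-time coding, as HELLKNOWZ notes.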

A long time ago, I started adding details. The irony was that the demand for detail increased in proportion to the amount of detail provided. I'd mention A, B, C, D, and E, and people would say "You didn't mention F" or "You didn't mention the discussions, the guidelines, the BRFA". With the article title and section heading, a summary can easily be three lines for some readers. Long summaries got complaints that they were incorrect; succinct ones didn't. The problem was worse if I customised summaries and forgot to update them. It's probably time to have another think about it (thanks to comments by Headbomb, Sladen, and others). I've always wanted to be able to provide an automated summary from the diff as Rich suggests, but I've never been able to do it. Does anyone know how to do it for AWB custom modules? Lightmouse (talk) 18:34, 24 August 2011 (UTC)

People should be able to find most of the details by reading the BRFA, so if that is linked in the edit summary, along with a very basic summary, should that not be enough? Sven Manguard Wha? 20:49, 24 August 2011 (UTC)

Read-only admin bot

I would like to edit my bot to allow read deleted versions of a page in a semi-automated manner (only when I ask it to). Would I need approval for that? It would log in as me, read the file, and immediately log out. Magog the Ogre (talk) 10:59, 24 August 2011 (UTC)

No. You should be fine to run it under your main account. As a general rule read-only bots don't need approval, unless they're doing very high rates of scraping etc (and then it's normally better to do bots like that with the toolserver databases) --Chris 11:22, 24 August 2011 (UTC)

Status of MiszaBot

Not sure where to post this, but hopefully someone here can assist. I tried User talk:Misza13 already to no avail. It appears that this bot has ceased running all tasks other than Wikipedia:Mediation Cabal/Cases. In particular, the edits to the sub-pages of User:MiszaBot/Trackers appear to no longer take place. Is anyone still running the bot, and if so can this function be restored? If the answer to either question is "no", please advise and I will boldly get rid of the templates and remove any transclusions of same. --After Midnight 0001 18:47, 26 August 2011 (UTC)

The archiving MiszaBots still run, so I believe this only applies to the tracker pages. –xenotalk 18:57, 26 August 2011 (UTC)
The tracker pages are probably redundant to the progress boxes {{Orphaned articles progress}} for example. Rich Farmbrough, 11:19, 28 August 2011 (UTC).
I don't think so; most of these are more for showing a week's worth of image categories for deletion, for example User:MiszaBot/Trackers/CAT:ORFU. But if you see replacements for these also, please advise. --After Midnight 0001 12:02, 28 August 2011 (UTC)
Orphaned non-free use Wikipedia files
  13 October 2020: 63
  14 October 2020: 63
  15 October 2020: 61
  16 October 2020: 52
  17 October 2020: 70
  18 October 2020: 60
  19 October 2020: 44
  20 October 2020: 16
  Undated articles: 0
  All articles: 429

Rich Farmbrough, 03:08, 1 September 2011 (UTC).

Looks good. Am I correct that I can just clone this for any others, or are there some magic workings of a bot behind the scenes that would need to be modified? --After Midnight 0001 15:38, 1 September 2011 (UTC)
Is there a reason that it is only showing August currently instead of both August and September? --After Midnight 0001 16:44, 4 September 2011 (UTC)

Bot operation

Hi there, I am Kudu, an experienced Wikipedian who's very familiar with *nix systems. I just wanted to leave a notice here to say that I have access to several shell servers, and for bot creators who don't want to request a Toolserver account yet or who want a backup instance, I'd be glad to run their bots. This can be done by the bot owner sending me their code or me making them a shell account. If you're interested, leave a message on my talk page or email me. — Kudu ~I/O~ 23:05, 8 September 2011 (UTC)

Checkwiki errors fix

Yobot won't be running for the next eight to nine months. We need someone to fix Checkwiki errors. Here's the list of errors fixed by Yobot: User:Magioladitis/AWB and CHECKWIKI. -- Magioladitis (talk) 00:48, 24 September 2011 (UTC)

Discontinuing all my bots

I do not have time to maintain my bots anymore, so effective immediately I am discontinuing the following bots:

People have come to depend on these bots, so I would like to have as little service disruption as possible. Anyone who is interested in running these bots should email me so that I can send them the most recent scripts, the crontab, the database dumps, and the bot passwords. You will also be taking care of bot maintenance from here on out. hare j 22:57, 23 September 2011 (UTC)

I'll have AnomieBOT take over "Removes Category:Open Wikipedia bot requests for approval from closed BRFAs" from One bot, it fits well with the other BRFA maintenance it does. I'll also look at the other tasks to see if there is anything I feel up to taking on, if no one else claims them first. Anomie 00:37, 24 September 2011 (UTC)
I'm interested in helping out with RM bot; I'll send you an email. --Hard Boiled Eggs [talk] 04:39, 24 September 2011 (UTC)
Thank you HardBoiledEggs for your interest. RM bot is now in your hands. hare j 06:58, 24 September 2011 (UTC)
I can take over the rest (unless someone else wants to take over another one, I'm happy to share). The code all seems very familiar anyway ;) --Chris 07:06, 24 September 2011 (UTC)
I'm glad you volunteered, actually. I'll e-mail you. hare j 07:12, 24 September 2011 (UTC)
I'm willing to help with code maintenance if necessary; I'm familiar with bots using PHP. Ucucha (talk) 12:53, 1 October 2011 (UTC)

User:Wikinews Importer Bot

Wikinews Importer Bot (talk · contribs) is adding full URLs to wikinews links. The owner Misza13 (talk · contribs) has been notified, but has yet to act and has not edited since May. I propose that the bot be blocked until the issue is settled, and for all of the full URL additions to be reverted.--William S. Saturn (talk) 23:53, 30 September 2011 (UTC)

Done for two weeks (can be modified for longer if need be). Looks like the protocol-relative URLs have broken its regexes or something. Still, it's creating a large mess at the moment, so it has become utility-negative, hence the block. Absolutely happy for another admin to come along and modify the block/unblock as long as they understand the situation :) - Jarry1250 [Weasel? Discuss.] 12:10, 1 October 2011 (UTC)

reorganize categories

Hi, I'm an admin on the Arabic Wikipedia. I'm just wondering whether the English Wikipedia has a bot that orders the categories on an article according to the article's name. I mean that, for example, in the article Canada, the category Canada would automatically appear first in the category section below the article. Do you use a bot for this? Can you help us do the same on the Arabic Wikipedia? --Helmoony (talk) 21:29, 28 September 2011 (UTC)

I don't think we have a bot doing that. Ucucha (talk) 15:01, 3 October 2011 (UTC)
So on the English Wikipedia, all categories in articles are organized manually. That's a lot of work! --Helmoony (talk) 18:31, 3 October 2011 (UTC)
I suppose you could ask the folks over at WP:AWB to consider adding category organization to their general fixes (they currently alphabetize interwiki links, after all), but it isn't really a big deal what order categories appear in, is it? --Philosopher Let us reason together. 19:28, 3 October 2011 (UTC)
AWB won't (can't) reorganize categories from how they were set by humans as they may be ordered alphabetically, or in a conceptual hierarchy. –xenotalk 19:41, 3 October 2011 (UTC)
Huh, hadn't thought about that. Thanks. --Philosopher Let us reason together. 19:43, 3 October 2011 (UTC)
I think this is not talking about ordering of the categories on the page, but of the pages in the category, like changing [[Category:Canada]] to [[Category:Canada| ]] on Canada, so that Canada appears at the top of Category:Canada. I don't see offhand why a bot couldn't do that (in a restricted set of circumstances). Ucucha (talk) 21:05, 3 October 2011 (UTC)
Actually, I'm making that up, re-reading Helmoony's post. Still, it might be a useful bot task. Ucucha (talk) 21:16, 3 October 2011 (UTC)
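A minimal sketch of the sortkey task Ucucha describes, assuming the usual wikitext form: on a page whose title matches one of its categories, give that category a blank sortkey so the page sorts to the top. The function name is illustrative, not any existing bot's code.

```python
import re

def add_eponymous_sortkey(wikitext, title):
    """On the page `title`, turn [[Category:<title>]] into
    [[Category:<title>| ]] so the page sorts first in its own category.
    Other categories are left untouched."""
    pattern = re.compile(r"\[\[Category:\s*" + re.escape(title) + r"\s*\]\]")
    # Use a lambda so any regex metacharacters in the title stay literal.
    return pattern.sub(lambda m: "[[Category:" + title + "| ]]", wikitext)
```

A bot would still need to restrict itself to cases where the blank sortkey is clearly wanted, per the caveats above.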

Bot needed to replace DASHBot for task 9

DASHBot task 9 has been inactive for over a month now. The backlog it keeps clear is at over 1000 items. Not only that, but this is not the first time that the task has gone dark under mysterious circumstances. To be blunt, a replacement is needed. I can neither do resizings nor run bots off my computer right now, so I'm asking for someone to code and run the bot. I'm not sure how toolserver works, but I'd be willing to try and run the program through TS using Svenbot if someone is willing to code the bot but not run it. Sven Manguard Wha? 06:20, 9 October 2011 (UTC)

I don't think the BRFA tells the whole story. For example, rather than rescaling to a width of 325px, its last 100 uploads seem to have resized images to contain approximately 160000 pixels (400x400 for a square image). If you can get the source or current specifications and issues to watch out for, I'll take a look at cloning it. Anomie 16:16, 9 October 2011 (UTC)
I don't know why but cron seems to always reject this job. I just got it running again, but it only lasts for about a month at a time. Tim1357 talk 16:37, 9 October 2011 (UTC)
How strange. What exactly does it do when it stops working, and what do you have to do to fix it? Anomie 17:04, 9 October 2011 (UTC)

bot-related issues local scope...

So as a fairly newbie-bot question - it makes sense to me to test things before releasing them - is it good/common practise for bot developers to grab a recent wiki-dump and use it to set up a local mirror of wikipedia for bots to be tested in? Or are there any subtle disadvantages? Failedwizard (talk) 18:19, 8 October 2011 (UTC)

That is certainly one way to do it, although I don't know that it is common practice. You could also set up a dummy installation of MediaWiki and just import a few articles or create dummy pages for testing. You could also have the bot read the live site and just log to your local filesystem what it would do, and you can make edits in your userspace or the bots userspace without prior approval (as long as the edits are not disruptive). Anomie 00:40, 9 October 2011 (UTC)
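Anomie's third suggestion (read the live site but only log what the bot would do) can be sketched as a dry-run switch; all names here are illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("testbot")

def save_page(title, new_text, summary, dry_run=True):
    """In dry-run mode, log the edit that would be made instead of
    sending it to the live site; returns whether a save happened."""
    if dry_run:
        log.info("DRY RUN: would edit [[%s]] (%s), %d bytes",
                 title, summary, len(new_text))
        return False
    # A real save via the API would go here.
    return True
```

Running the whole task with dry_run=True against the live wiki gives a full trace of the bot's decisions with zero risk to articles.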
Thank you! Much appreciated :) Failedwizard (talk) 12:04, 10 October 2011 (UTC)

BAG Nomination

As per WP:BOTPOL, I am required to notify this board regarding my nomination to the Bot Approval Group which can be found here. I welcome any and all comments regarding this. Thank you. + Crashdoom Talk 06:46, 11 October 2011 (UTC)

Another BAG nomination

Please see Wikipedia talk:Bot Approvals Group#BAG Nomination: Snottywong if you're interested. Thanks. —SW— gossip 17:58, 11 October 2011 (UTC)

Review of Wikipedia:Bots/Requests for approval/Snotbot 8 and my deny

Input here would be nice, thanks. --Chris 02:42, 15 October 2011 (UTC)

SoxBot III

I have visited User talk:X! with a message about what I am about to say. He has not replied so I will say it here: I was looking at a bot (SoxBot III), which reverts pointless wikitext additions like [[File:[[File:Example.jpg]]<ref></ref>]]<nowiki>Insert non-formatted text here</nowiki>, and noticed that it had not edited since 2009, but I think that this is a brilliant idea. I think that the bot should be running again, and if X! doesn't want it himself I would be happy to take over the running myself. There are a lot of new editors, particularly IPs, who just mess around clicking buttons and saving and this needs to be reverted. If this is the case, can somebody please find me the source code and we can take it to BRFA. (Although I would like a renaming to RcsprinterBot.) Thanks, Rcsprinter (talk) 17:01, 4 October 2011 (UTC)

X!, sadly, had drifted off. Unless he came back in the last two weeks, that is. You might be able to reach him over the IRC, #wikipedia-en, where he goes by Yetanotherx. Anything of his that we can absorb, we should. Sven Manguard Wha? 06:15, 9 October 2011 (UTC)
So, can I? I'd like to but I'm not really sure how to transfer operators or rename a bot. And I'll need the source code and login details. I shall also try to get in touch with X! over IRC too. Rcsprinter (talk) 10:59, 9 October 2011 (UTC)
FYI, there's already a bot that does this. 28bytes (talk) 09:49, 28 October 2011 (UTC)


Throttling

Can someone explain throttling? I'm hitting a wall with it, and I'm not sure why. -- SatyrTN (talk / contribs) 20:58, 23 October 2011 (UTC)

What sort of throttling? Do you mean when you try to log in and the API returns a "Throttled" error code? — Carl (CBM · talk) 21:48, 23 October 2011 (UTC)
Correct. -- SatyrTN (talk / contribs) 22:52, 23 October 2011 (UTC)
If an IP address tries to login to an account (successfully or unsuccessfully) too frequently, the system will block logins for a time. The MediaWiki default is 5 logins in 5 minutes triggers throttling; I don't know whether this is changed for enwiki. When you get a "throttled" result from the API, the response should also contain a "wait" element telling you the number of seconds to wait before trying again.
The way to avoid this is to store the login cookies across runs, so you don't have to log in so frequently. Anomie 23:07, 23 October 2011 (UTC)
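A sketch of the cookie-persistence approach Anomie describes, using only the Python standard library; the cookie file name is illustrative.

```python
import os
import urllib.request
from http.cookiejar import LWPCookieJar

COOKIE_FILE = "bot_cookies.txt"  # illustrative path

def make_opener(cookie_file=COOKIE_FILE):
    """Build a urllib opener whose cookies are loaded from disk, so the
    bot reuses its login session across runs instead of logging in each
    time (which is what trips the 5-logins-in-5-minutes throttle)."""
    jar = LWPCookieJar(cookie_file)
    if os.path.exists(cookie_file):
        jar.load(ignore_discard=True)
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))
    return opener, jar

# After a successful login request through `opener`:
#   jar.save(ignore_discard=True)
```

The bot then only needs a fresh login when the saved session has actually expired.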
On the flip side tokens expire (or they did - I'm not seeing the same behaviour on Wikisource that I used to). Any insight into this? Rich Farmbrough, 10:29, 22 November 2011 (UTC).
Edit tokens? It appears Mediawiki stores the token in PHP's $_SESSION, with no additional expiration logic; IIRC Wikimedia wikis use memcache to store $_SESSION, so the expiration actually depends on the memcache settings. Anomie 12:27, 22 November 2011 (UTC)
Ah. I sometimes get a bad token response. That would explain why it can be unpredictable. Rich Farmbrough, 15:33, 22 November 2011 (UTC).

Undocumented task of ClueBot III?

ClueBot III runs a task with the edit summary "Fixing links to archived content.", such as here. Why does its userpage at User:ClueBot III not contain any mention of this task? Should it perhaps be added? I think it's a useful task. Toshio Yamaguchi (talk) 14:39, 23 November 2011 (UTC)

I believe this was due to a bug in the config block that was given as an example on people's talk pages; once resolved, the bot just fixed the previously generated links. Not really a task, just self-correction; see User_talk:ClueBot_Commons/Archives/2011/November#Inconsistent_numbering. - Damian Zaremba (talkcontribs) 15:08, 23 November 2011 (UTC)
It's mentioned on the BRFA: "fix backlinks pointing to archived sections". Anomie 22:59, 23 November 2011 (UTC)
Thanks. I will submit an edit request, since the page is protected and I cannot edit it myself. Toshio Yamaguchi (talk) 23:08, 23 November 2011 (UTC)

Bandwidth constraints

I've re-written my bot (MichaelkourlasBot) such that it is now completely automated, and can be left alone for an indefinite period of time. However, I do not have enough bandwidth to actually keep it operational; my ISP cuts me off at 60GB a month, and the bot looks like it would use about half of that, based on some preliminary calculations. Would anyone be willing to host it, or give me suggestions as to how to cut down the amount of bandwidth used? (By the way, just so you know, the bot monitors the recent changes list to find user-blanked pages, then marks them with a CSD tag (db-blanked). It's written in C# using DotNetWikiBot.) Thanks! --Michael Kourlastalkcontribs 04:13, 24 November 2011 (UTC)

Can it run on toolserver? tedder (talk) 04:32, 24 November 2011 (UTC)
No, I don't think so - it runs on Windows, and it's written using the .NET framework. --Michael Kourlastalkcontribs 05:19, 24 November 2011 (UTC)
I assumed it would run under mono. How many lines of code is it? tedder (talk) 05:26, 24 November 2011 (UTC)
It's quite short - see User:MichaelkourlasBot/Source code. --Michael Kourlastalkcontribs 06:10, 24 November 2011 (UTC)
You (by virtue of the DotNetWikiBot implementation) are being quite inefficient with bandwidth. First, you are pulling the HTML version of Special:RecentChanges rather than getting the same information from the API (e.g. mw:API:Recentchanges). Secondly, your check for "page has been blanked" is to download the current version of the page and test whether it contains nothing but whitespace; ideally you should check the page size first and not download pages larger than 50 bytes (or something like that). The information on current page size is available from the recentchanges API without any additional queries. Lastly, you get a page history list from another HTML query that could also be replaced by an API call.
I don't personally work with the DotNetWikiBot framework, but it seems to me like there are many opportunities to reduce the bandwidth you are generating. I'm not sure how easy the changes would be to implement though. Dragons flight (talk) 06:48, 24 November 2011 (UTC)
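For illustration, the sizes check Dragons flight suggests might look like this against the API (Python, standard library only; the 50-byte threshold follows the figure above and is an assumption, not a tested value).

```python
import urllib.parse

API = "https://en.wikipedia.org/w/api.php"
BLANK_THRESHOLD = 50  # bytes; a user-blanked page should be smaller than this

def recent_changes_url(limit=100):
    """Build an API request for recent edits that includes the old and
    new page sizes, so blanked pages can be spotted without downloading
    any page text at all."""
    params = {
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|ids|sizes",
        "rctype": "edit",
        "rclimit": str(limit),
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def looks_blanked(change, threshold=BLANK_THRESHOLD):
    """An edit probably blanked the page if it shrank from something
    substantial to (nearly) nothing."""
    return (change.get("newlen", threshold + 1) <= threshold
            and change.get("oldlen", 0) > threshold)
```

Only the handful of changes that pass looks_blanked() would then need any further requests.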
These all sound like great ideas, but I don't think there's a way to implement them through DotNetWikiBot - as far as I know, you can't call the MediaWiki API directly; everything goes through HTML. I also don't think there's any way to check page size. What would you suggest I use instead of DotNetWikiBot? Is there anything else that works with .NET?--Michael Kourlastalkcontribs 07:00, 24 November 2011 (UTC)
Back to the source- it shouldn't be too hard to reimplement in groovy or java. I might do that, but I'm headed out on vacation/holiday. tedder (talk) 15:48, 24 November 2011 (UTC)
I rewrote the code using some more bandwidth-friendly techniques, and it now uses 0.1 MB per 30 seconds instead of 0.1 MB per 10 seconds or less... But could it be run on the toolserver through Mono?--Michael Kourlastalkcontribs 02:17, 25 November 2011 (UTC)
Don't load the user talk page until after checking that the page is empty. Also, you might consider adding a timer to ensure that you don't query recentchanges more than once every second or so. I'm not sure how fast your script is looping, but it's possible you are pulling from recentchanges more quickly than people edit. Dragons flight (talk) 03:30, 25 November 2011 (UTC)
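The timer Dragons flight suggests can be sketched as a small rate-limiting wrapper around the polling call; the one-second interval is illustrative.

```python
import time

def throttled(fetch, min_interval=1.0):
    """Wrap a polling function so that successive calls are at least
    min_interval seconds apart, however fast the caller loops."""
    last_call = [0.0]  # mutable cell so the closure can update it

    def wrapper(*args, **kwargs):
        wait = last_call[0] + min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        last_call[0] = time.monotonic()
        return fetch(*args, **kwargs)

    return wrapper
```

The bot's main loop then calls the wrapped function freely, and the wrapper guarantees it never queries recentchanges faster than the chosen interval.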
Also, does exclusionList update correctly when you add to it in checkExclusion? The updated list doesn't seem to be explicitly passed back. (It may be fine, but it is the kind of thing that works in some languages but not in others, and I'm not personally familiar with .NET.) Dragons flight (talk) 03:55, 25 November 2011 (UTC)
Yeah, it's pass by reference for that case.--Michael Kourlastalkcontribs 04:34, 25 November 2011 (UTC)

ClueBot NG

See here. Rcsprinter (whisper) 20:33, 29 November 2011 (UTC)

Bot running without permission (and doing the wrong thing); owner inactive since mid-October

KuduBot (talk · contribs)'s task was to create the Category:Wikipedia files missing permission subcategories, but, due to inactivity on part of the owner, the request was marked as expired. Even so, the bot has continued to make these categories, and it is doing them incorrectly for the single-digit days (see what it created and where the files are actually located), so it should probably be blocked. It would be nice to have a bot that actually does this correctly, though... Logan Talk Contributions 15:31, 2 December 2011 (UTC)

That's a relatively trivial bot to write, but I can't. ΔT The only constant 15:32, 2 December 2011 (UTC)
I agree, that's pretty trivial. I can have 28bot do this... do I need to open a formal BRfA for an additional task? 28bytes (talk) 16:55, 2 December 2011 (UTC)
Bot blocked. Yes, but I'll speedy approve it. --Chris 17:38, 2 December 2011 (UTC)
BRfA added. 28bytes (talk) 19:02, 2 December 2011 (UTC)

Looking for bot operator

I am looking for a bot operator who is willing to take over some tasks from VeblenBot and PeerReviewBot. These run on the toolserver, so someone with a toolserver account would be able to just copy the code and it would work. The code itself is very stable and has been running for years without trouble. I simply want to reduce the number of bot tasks I am responsible for.

The tasks are:

  • Archive old peer reviews, and notify the Peer review project when the page WP:PR is too big
  • Make lists of category contents - these are used by the Peer Review and Good Article projects
  • Post notices on the village pump about pages that are marked as policies or guidelines

If you might be interested in taking over one of these, please let me know. — Carl (CBM · talk) 14:00, 1 December 2011 (UTC)

I could ... I have a couple of non-admin bot accounts that are inactive. Since User:7SeriesBOT is always running, no harm in running another ... (talk→ BWilkins ←track) 14:14, 1 December 2011 (UTC)
If Bwilkins doesn't want it, I'd take the updating of WP:PERTABLE and WP:SPERTABLE. Anomie 18:05, 1 December 2011 (UTC)
Do I need to submit any BRFA notification to change this to one of my bot accounts? (talk→ BWilkins ←track) 20:18, 2 December 2011 (UTC)
Yes, but if it's the same code or just minor changes it would be speedyable. Anomie 15:52, 3 December 2011 (UTC)

Wikispecies needs help!

I already brought this up on the discussion page at Wikispecies, but as someone there said, there aren't many bot operators around to help with the issue. There are plenty of bot operators here at Wikipedia, so I think some of them should go over to Wikispecies to take on the issue. The issue is that many links connecting Wikispecies to Wikipedia are missing. In addition, tons of vernacular names are still missing from the Vernacular names section. The vernacular names can be copied from the interwiki links of the other languages where they exist. I hope someone will consider this and ask bot operators to get to work over there. Thanks! Trongphu (talk) 02:47, 7 December 2011 (UTC)

Skeptical; you can't just mindlessly copy titles and claim they are vernacular names. Choyoołʼįįhí:Seb az86556 > haneʼ 07:06, 7 December 2011 (UTC)
Well, I don't really get what you mean by that. OK, let me put it this way: say there is a link to the Vietnamese Wikipedia for cat, [[vi:mèo]]; then "mèo" is the vernacular name in Vietnamese and could be put in the vernacular names section. That's what I meant. Also, that's not the only thing: the links need to be updated. Some articles are missing quite a few links to other Wikipedia languages that already exist. Trongphu (talk) 23:54, 7 December 2011 (UTC)
Won't be accurate for all languages, and will thus produce many wrong entries. Choyoołʼįįhí:Seb az86556 > haneʼ 10:58, 8 December 2011 (UTC)
What do you mean, it won't be accurate for all languages? Why? We are only looking at names that already exist on other language Wikipedias. We are not making up any names or words. I am starting to doubt that you understood what I said above. Trongphu (talk) 21:43, 8 December 2011 (UTC)
I understand your proposal quite well. My concerns stand. Choyoołʼįįhí:Seb az86556 > haneʼ 22:21, 8 December 2011 (UTC)
I've got some old code lying around for Wikispecies interwiki links. If you could point me to the relevant discussions on Wikispecies, I'll take a stab at it. ΔT The only constant 02:01, 8 December 2011 (UTC)
Here. Trongphu (talk) 21:43, 8 December 2011 (UTC)

DAB (Disambiguation) bot causing problems

There's a bot to assist in disambiguating links; however, it appears to be damaging pages. I am leaving this note here since the author's page is marked inactive.



You may notice strange strings of the form link:1 through link:5 on this page. Every time someone clicks a DAB Solver button to disambiguate a term, and tries to fix it, the term is replaced by link:n. The original text is not merely obscured but destroyed from the article source text. So, someone must identify all of these and if necessary go through the article history to recover the missing terms.

First of all, this isn't a bot; it's a toolserver tool, [25]. The user who uses this tool is responsible for checking their edits before saving them; if the tool malfunctions, they shouldn't save the edit. Second, Josh Parris isn't the author; try User:Dispenser instead. Third, it's really not such a big deal to undo or revert a single edit if need be. --R'n'B (call me Russ) 12:07, 8 December 2011 (UTC)
From the little I've been able to piece together, this user had scripting disabled in Chromium and didn't see the notice at the top asking them to enable it. Luckily the substitution placeholders (<<link:#>>) are easy enough to find with AWB's Dump scanner. — Dispenser 02:18, 9 December 2011 (UTC)
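For anyone scanning a dump by hand instead, a hypothetical placeholder check in Python, assuming the <<link:#>> form described above:

```python
import re

PLACEHOLDER = re.compile(r"<<link:(\d+)>>")

def find_placeholders(wikitext):
    """Return the dab-solver placeholder numbers left behind in an
    article's text, so damaged pages can be located for repair."""
    return [int(n) for n in PLACEHOLDER.findall(wikitext)]
```

Any article where this returns a non-empty list still needs the destroyed terms recovered from its history.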

Seemingly stupid question

I have a fairly stupid question but I confess I can't figure what I'm doing wrong. Someone stopped my bot earlier with a comment and now every time I try and start it the bot tells me I have a message. I have tried several things including clearing the comment and the cache and nothing seems to work. Any ideas? --Kumioko (talk) 04:04, 22 December 2011 (UTC)

Log in as the bot and read its talk page. ΔT The only constant 04:06, 22 December 2011 (UTC)
ROFL, OMG how could I be that stupid. Thanks, its working now. I kept reading it from my main account. It never occurred to me to login as the bot. Won't make that mistake again. Told you it was stupid. --Kumioko (talk) 04:17, 22 December 2011 (UTC)

WikitanvirBot is stupid in Marseille soap

Please shut off the bot. -- (talk) 18:51, 25 December 2011 (UTC)

WikitanvirBot's job is to add inter-language links to articles. Reviewing its contributions, I see nothing wrong on the article mentioned. Could you please elaborate as to why you think the bot should be shut off? ~ Matthewrbowker Talk to me 19:08, 25 December 2011 (UTC)

Modifying programming for minor bot problems

Yobot added {{ibid}} to List of Ohio covered bridges, even though none of the citations were problematic. I don't want to stop the bot for one minor issue, and operator Magioladitis is on wikibreak until next May, so I can't see how to report this properly. Is there someone here that could address the issue? Nyttend (talk) 19:23, 29 December 2011 (UTC)

Is it obvious why Yobot thought it needed tagging? - Jarry1250 [Cogitation needed] 20:45, 29 December 2011 (UTC)
Not at all; as I say, the citations were unproblematic. One is to

<ref name=Delorme>Ohio Atlas & Gazetteer, DeLorme, pg 68-9</ref>

and the rest are <ref name="Delorme"/>. You'll notice that the article is undercited, but these specific citations shouldn't produce this type of bot reaction, and a bot shouldn't tag an article with {{ibid}} simply because it lacks sufficient citations. Nyttend (talk) 20:49, 29 December 2011 (UTC)
Actually taking a look I see at least 7 different references, and even one of them using the term ibid. ΔT The only constant 20:55, 29 December 2011 (UTC)
Yeah, it's non visible output but it's there. - Jarry1250 [Cogitation needed] 21:00, 29 December 2011 (UTC)
If you leave a note on Magioladitis's talk page, he should respond fairly quickly, at least within a day or two. I saw he responded to a question there earlier today. --Kumioko (talk) 21:19, 29 December 2011 (UTC)

Question about bot editing speed

I have a question about the speed barrier for bot edits. I have multiple tasks currently approved for my bot, with two more pending and several more in the future. Is it OK for me to run some of these concurrently without violating the speed limit? For example, the bot may be doing 8-10 edits a minute for one task and 8-10 edits a minute for a completely separate task at the same time, meaning it's running at 16-20 edits a minute, maybe more. I just want to make sure I am not going to be breaking any rules if I start one task while another is still going. --Kumioko (talk) 16:51, 10 January 2012 (UTC)

You should be fine. Edit speed is only really a problem when you're editing under your own account (e.g. AWB, scripts, etc), once you're using a flagged bot account it is not really a big deal. --Chris 16:55, 10 January 2012 (UTC)
Well, there is also the fact that a bot really shouldn't be editing so fast that it resembles a denial-of-service attack, although if you do do that you're not likely to face any on-wiki sanctions (you'd more likely just be summarily IP-banned by the sysadmins at the border routers).
In general, if each task is limited to one active HTTP connection at a time and uses the maxlag parameter to automatically throttle itself when the database servers are lagging, you should be fine. Anomie 17:15, 10 January 2012 (UTC)
Ok, great, thanks. I don't know much about the maxlag parameter so I'll look into that, but I doubt I will be doing more than about 20 a minute anyway. AWB is a great tool, but it's a bit memory intensive, so it doesn't usually go much over 10 a minute in the best of times. --Kumioko (talk) 17:27, 10 January 2012 (UTC)
Maxlag is a way for MediaWiki to be able to quickly drop non-critical requests when the databases are overloaded to give them a better chance to catch up, no matter how quickly those requests are coming in. According to this AWB uses maxlag; I don't know whether it offers an option to turn it off or change the threshold. Anomie 17:50, 10 January 2012 (UTC)
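A sketch of what client-side maxlag handling looks like in Python, standard library only; the 5-second back-off and retry count are illustrative, and this is not how AWB itself implements it.

```python
import json
import time
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def is_maxlag_error(result):
    """True if the API response is a maxlag error (databases lagged)."""
    return result.get("error", {}).get("code") == "maxlag"

def api_request(params, maxlag=5, max_retries=5):
    """POST an API request with the maxlag parameter set; back off and
    retry while the servers report they are lagged."""
    body = urllib.parse.urlencode(
        dict(params, format="json", maxlag=str(maxlag))).encode()
    for attempt in range(max_retries):
        with urllib.request.urlopen(API, body) as resp:
            result = json.load(resp)
        if not is_maxlag_error(result):
            return result
        time.sleep(5)  # let the database servers catch up
    raise RuntimeError("API still lagged after %d attempts" % max_retries)
```

Setting maxlag means the bot's non-urgent edits are the first thing shed when the databases fall behind, which is exactly what a well-behaved bot wants.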
Great thanks. --Kumioko (talk) 17:58, 10 January 2012 (UTC)
The old figures of "several per minute" were to keep safe the stone age technology (compared with what is there now). Rich Farmbrough, 21:11, 11 January 2012 (UTC).
Thats what I was hoping. --Kumioko (talk) 21:44, 11 January 2012 (UTC)

Special request for a bot run

My bot already has the capability and general authority to do this type of change; however, some may view it as purely cosmetic, so I am posting it here prior to running it to give an explanation and an opportunity for comments.

Per a conversation on my talk page and on Wikipedia talk:WikiProject Templates#Update of category, it appears that a WikiProject template redirect is causing articles not to categorize properly. It could be argued that this particular problem might be fixable by simply nudging the template or the redirect, but it emphasizes the point I have been making for some time: template redirects should be treated differently from normal article redirects (preferably avoided when possible), because the coding of the template can, and relatively frequently does, have negative effects on other things when redirected.

In this case it appears (and I could be wrong which is also why I am posting this here first) that the {{WikiProject GeorgiaUS}} needs to be replaced with Template:WikiProject Georgia (U.S. state) because the redirect appears to be causing categories not to be updated correctly.

The exact question is: can I go ahead and use my bot to replace these, under my prior approval of the WikiProject template replacement BRFA? There are about 3,000 articles affected, but many of them have other problems in addition to this one change. --Kumioko (talk) 16:14, 12 January 2012 (UTC)
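For illustration only, the replacement itself is a simple text substitution; a hypothetical Python sketch (not Kumioko's actual AWB setup), matching the first letter case-insensitively the way MediaWiki treats template names:

```python
import re

def replace_template(wikitext, old, new):
    """Replace invocations of a redirected template name ({{old|...}}
    or {{old}}) with the canonical template name."""
    # MediaWiki treats the first letter of a template name
    # case-insensitively, so match both cases.
    first = "[" + old[0].upper() + old[0].lower() + "]"
    pattern = re.compile(r"\{\{\s*" + first + re.escape(old[1:]) + r"\s*([|}])")
    return pattern.sub(lambda m: "{{" + new + m.group(1), wikitext)
```

Parameters are passed through untouched, which is safe here because the redirect and the target template take the same parameters.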

Couldn't a sysop just purge the template/redirect? It's probably just gotten lost from queue. —  HELLKNOWZ  ▎TALK 16:17, 12 January 2012 (UTC)
This is just the job queue, and it occurs for many different things. They should just be left alone, as it will update in time. Null editing (or in this case changing the redirect) to avoid the job queue defeats the purpose of the job queue. That being said, I null edited the template/redirect itself to see if that helps. -DJSasso (talk) 16:19, 12 January 2012 (UTC)
Yep. Category:Georgia (U.S. state) articles needing images is cleaning itself up. —  HELLKNOWZ  ▎TALK 16:24, 12 January 2012 (UTC)
Yes, you are correct that they could, and that might help, and thank you for doing that, but if we weren't using a redirect then this wouldn't happen. You are correct that the queue does it for many things, and a very large percentage of those are template redirect malfunctions, because redirecting code (templates), regardless of what some might want the not-so-tech-savvy masses to believe, has 2nd- and 3rd-level effects that can adversely impact other things. Sometimes they don't update categories, sometimes they don't work quite right, sometimes they don't work at all, and sometimes they work in WP but not in a mirror site or something like Facebook, etc. The bottom line is that we shouldn't be treating a template redirect in the same manner as an article redirect. I also don't like the comment that it will update in time; no offense intended, but Godzilla used to be a little bitty lizard, and look what happened to him/her over time: they became a really big problem that wreaked a lot of havoc and got into a lot of mischief before being stopped. Frankly, I don't know how long this problem has existed, but I know that both that template and its redirect have been in place for a long time. How many more are there out there that aren't updating or functioning? I don't think anyone knows. --Kumioko (talk) 17:41, 12 January 2012 (UTC)
Actually it would have happened redirect or not. It wasn't the redirect that caused it; it was the move itself that caused it and put it in the job queue. Pretty much none of the things you mention actually happen when redirecting a template. The only issue that you can have with template redirects is if the old template had different parameter names than the new template, and usually people fix those when redirecting a template. I think you have a fairly big misunderstanding of how template redirects work. Yes, sometimes the job queue loses some tasks, but that is fairly rare and not really because of the redirect itself. As for how many others out there aren't functioning right now: if people haven't noticed, then there probably isn't a problem, or people would complain or look into it like in this case. -DJSasso (talk) 17:53, 12 January 2012 (UTC)
This "problem" has existed since job queue was made. Indeed, this wasn't the redirect that caused it. My guess is the job queue got borked up and flushed its entries (so this would have affected articles, categories, and user talk pages equally that needed an update at the time). —  HELLKNOWZ  ▎TALK 18:13, 12 January 2012 (UTC)
Exactly, it would have affected anything that was waiting to refresh in the job queue when the job queue hiccuped. The only reason it looks like a template redirect problem is that you can easily tell which pages never got their cache updated, because they are "marked" by the redirect still being there, whereas looking at an article that was waiting to update you wouldn't have an easy way to tell you were still looking at the old cached version. When the queue hiccuped, it would have affected any article waiting in its queue, not just template redirects. -DJSasso (talk) 18:17, 12 January 2012 (UTC)
No offense, but however the redirect was created, it's still a problem with a redirect, and this redirect and template have been in existence for a long, long time, so it must have been like this for a while. Basically, what I am getting is that it's a general by-product of the redirect process itself and not the type of redirect. Is that correct? And I completely disagree that just because no one has caught it, it isn't a problem. Job queue aside, I have seen several occasions where template redirects didn't function, including for the reasons listed above, and sometimes in relation to interacting with sister wiki sites. In some cases no one could figure out why it didn't work, saying "it should", but when the redirect was changed to the actual template the problem was mysteriously fixed. In one case they even said that it was a rare bug that had already been identified and wasn't worth bringing to light because the status quo of template redirects works most of the time. Additionally, it's confusing for inexperienced editors (sometimes even for the experienced ones), and it's hard to program when you have 50 variations of something to compensate for. You are right about the template parameter problem too; that is another issue in itself, and again it would usually be solved by simply not using a redirect. Oh well, no reason to dwell on it; this one's fixed, so the problem is solved and we can all move on. Thanks again for the help. Just wondering, since this seems to be a fairly common problem: is there a way I can check the job queue the next time this comes up, or better yet, check in periodically to make sure things aren't hung up? --Kumioko (talk) 18:21, 12 January 2012 (UTC)
Well, if you're really curious, you can check here and the job queue length on one of the three servers handling the queue will be reported. But you can't see what articles are in the queue, mostly because it'd be a.) hard on the servers, and b.) it wouldn't be of much use. — madman 18:27, 12 January 2012 (UTC)
Thank you thats good to know. --Kumioko (talk) 18:32, 12 January 2012 (UTC)
(ec) Yup, you basically have it :) Whenever something is done on the wiki that changes how multiple pages look or are referenced, it goes into the job queue so that the cache of each affected page can be updated (unless you are editing one single page, which skips the queue and happens immediately). Each affected page then waits in line to have its cache updated one by one, so as not to put a strain on the resources. What probably happened here is that when the template was moved, all of the pages it was used on were added to the job queue, to be updated to reflect that the template was now redirected. At some point while they were sitting in the queue, the queue had an issue and dropped all the pages from its queue, so any that hadn't yet been updated still had their old cache. Between then and now, some have had their redirect "fixed", or some other edit on their page, which would have updated the cache. However, some have never been touched since the move happened, so they sat with their old cache until I basically shoved them back into the job queue by editing the template/redirect. The queue itself usually isn't all that big, but if someone has changed the look of some high-use navboxes and the like, it can get pretty long. -DJSasso (talk) 18:39, 12 January 2012 (UTC)
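Following madman's pointer, the queue length is also exposed through the API's siteinfo statistics; a small Python sketch (the "jobs" field name follows the MediaWiki API):

```python
import urllib.parse

API = "https://en.wikipedia.org/w/api.php"

def job_queue_url():
    """Build the siteinfo request whose statistics block includes the
    current job queue length."""
    params = {
        "action": "query",
        "meta": "siteinfo",
        "siprop": "statistics",
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def job_queue_length(result):
    """Extract the pending-jobs count from a parsed API response."""
    return result["query"]["statistics"]["jobs"]
```

Fetching that URL periodically would let anyone watch for the queue backing up, without any server-side access.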
Thanks. --Kumioko (talk) 19:00, 12 January 2012 (UTC)