User talk:Joeytje50

Please direct all spam/blackmail/threats to this page: User talk:Liquidhelium. Thank you. If you're here for questions about setting up the chat log bot, please see this page.

E=mc²
I hope I'm alive the day I see you creating (9001)*(300,000,000)² joules of energy. 15:13, January 16, 2013 (UTC)

Are you really cake
Or are you my sandwich? Or the pie? 19:39, January 18, 2013 (UTC)


 * Then mr. asdfghjkl was lying to me and you were right 12:57, January 19, 2013 (UTC)

Re:Mewns
YUS! Gud fer remembermerbing! Io luff chu. 01:14, January 21, 2013 (UTC)

Re:
Yes, I'm using the data for a Naive Bayes Classifier. The goal is to provide a machine learning algorithm with more or less the same knowledge a human has. Your suggestion of a bot collecting this for a bot seems reasonable, but it flaws the whole dataset: labelling a feature in the training data is something only a human can do, not a bot (I'd need the same data to categorise it anyway :P). The training data is supposed to be produced by humans so that it's accurate.

The objective I'm trying to accomplish is a machine learning bot, basically. Without a lot of human-entered data, the bot won't understand what is good and what is bad. Once it has been told what is good and what is bad, it can build on that, learning even more and becoming more accurate. My current accuracy rate is .952986. That is fairly good for the amount of data right now, but I'm aiming for .98s and .99s. So yeah. 01:19, January 23, 2013 (UTC)
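For anyone reading along, a minimal sketch of how an accuracy rate like that is measured (illustrative names, not the bot's actual code): the fraction of held-out examples the classifier labels correctly.

```javascript
// Sketch: accuracy = correctly labelled examples / total examples in the test set.
// "classify" is any function mapping features to a label; names are illustrative.
function accuracy(classify, testSet) {
  var correct = 0;
  testSet.forEach(function (example) {
    if (classify(example.features) === example.label) correct++;
  });
  return correct / testSet.length;
}
```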
 * To me it seems easier just to have my text editor open, plopping in the data (it doesn't take that long for me it seems), then after it builds up, save it. Also to have it save after every single entry would turn into "edit spam" and would become unnecessary.


 * Regarding a revert, it would work the same way. The advantage of a revert is the edit summary, which plays a part in the data: the classifier learns that "Undid rev.." is more likely "ham" than "spam". It also lets it learn that the user and active features play an important part in classifying another edit.


 * Regarding the revision ID: that is unnecessary extra data to feed the classifier, as this is nothing more than a training set for tests to be based on. A naive Bayes classifier treats all of these features as contributing independently to the probability that an edit is "spam" or "ham", whether or not they're in fact related to each other or to the presence of other features. The revision ID plays no part in that probability, so that piece of data is therefore moot. Having the classifier go back and verify the given data would be extra time wasted and silly to do (the only two contributors are Haidro and myself, and I trust both of us to provide accurate data to classify edits from).
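The scoring step described above can be sketched as follows (a toy illustration under the independence assumption, not the bot's real implementation): each label's score is its prior times the product of per-feature likelihoods, so a feature that is equally (un)likely under every label, like a revision ID, shifts nothing.

```javascript
// Toy naive Bayes: score(label) = log P(label) + sum of log P(feature | label).
// Working in log space avoids floating-point underflow with many features.
function classify(priors, likelihoods, features) {
  var best = null, bestScore = -Infinity;
  Object.keys(priors).forEach(function (label) {
    var score = Math.log(priors[label]);
    features.forEach(function (f) {
      // Crude floor for unseen features; real code would use proper smoothing.
      var p = (likelihoods[label] && likelihoods[label][f]) || 1e-6;
      score += Math.log(p);
    });
    if (score > bestScore) { bestScore = score; best = label; }
  });
  return best;
}
```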


 * For future reference, I found this lovely picture that gives an excellent example of what a Naive Bayes Classifier can do:


 * Basically, this shows that every word/feature plays a part in classification. Given the revision ID, see how it wouldn't make an impact on the classification of "spam", "ham" and "eggs"? The difference between these two images, though, is that they are nothing more than text and labels, whereas I have features and labels, which is more accurate and gives better classification. That way everything can play a part, rather than just a bag of words. Currently, I can produce a ratio of the most informative features from what the data currently shows:

 user = 'yes'             ham : spam   =   7.9 : 1.0
 old = ''                 ham : eggs   =   3.7 : 1.0
 user = 'no'             eggs : ham    =   3.3 : 1.0
 page_length = 'small'    ham : eggs   =   2.9 : 1.0
 active = 'yes'           ham : eggs   =   2.2 : 1.0
 page_length = 'long'    spam : ham    =   1.8 : 1.0
 active = 'no'           spam : ham    =   1.7 : 1.0
 page_length = 'short'   eggs : spam   =   1.7 : 1.0
 summary = ''             ham : eggs   =   1.6 : 1.0
 page_length = 'medium'  spam : ham    =   1.5 : 1.0
 * This data means that if user = 'yes', the odds are 7.9 to 1 that the edit is ham rather than spam. That's what humans think too, wouldn't you say (you're more likely to expect an IP to vandalise than a registered user)? The same reading applies to all the other data. I also want to point out (because I feel someone will ask this sooner or later) that page_length plays an important part in the data. I didn't want to make unnecessary requests to find page views, as that would be invalid data without knowing other key factors such as when the page was created, whether it required many users' input, etc. Page length felt like a fine alternative, because this scenario almost always plays out: if a page is long, it's popular. There are some instances where that isn't true, but it's usually accurate (Nex is a long page, and that's a popular page). Even now it plays a part in the data, and that is commonly true. Will it be the sole basis for classifying an edit? Of course not; it's just one of the informative features found in the current data set. Now, I do believe those numbers will change as more data comes in (the user = 'no' scenario is what I think will change most), but that will only be seen when the dataset is complete (and that obviously isn't soon).
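For illustration, here is how a line such as the 7.9 : 1.0 one can be derived: it is the ratio of P(feature | ham) to P(feature | spam), estimated from counts. The counts below are made up, not the real training set.

```javascript
// Hypothetical counts (NOT the real training set) producing a 7.9 : 1.0 ratio:
// among 100 ham edits, 79 had user = 'yes'; among 100 spam edits, only 10 did.
function informativeness(featureCountA, totalA, featureCountB, totalB) {
  // P(feature | labelA) / P(feature | labelB)
  return (featureCountA / totalA) / (featureCountB / totalB);
}
var ratio = informativeness(79, 100, 10, 100); // 0.79 / 0.10 = 7.9
```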


 * Everything's working well otherwise, and will work well in the future. Hopefully this will answer any future questions.  19:33, January 23, 2013 (UTC)

Font
It seems that after I edited the sandbox, it worked; however, you said nothing happened, so I refreshed the page, and it went back to the default serif. I'm not sure why, because before I came back and talked to you in the chat, I tested it on my userpage and the preview shows it with that font. I even threw the characters on the screenshot to see if they look alike, which they do. I'm confused as to why it stopped working as soon as I got it to work on the sandbox. It should work since it's in the RS font CSS, shouldn't it? 00:15, January 26, 2013 (UTC)


 * A few hours later, it seems to work again. Not sure if someone was playing with something or what, but I'm writing this while it still works in case the above happens again. Weird.... 05:09, January 26, 2013 (UTC)

I believe you wanted to see this
Example syntax:

Request returns JSON like:
 * Oh, I know! Let's encase the response in an array and not an object! That will be a great idea! Yeah, no.

Example syntax logs out to the console (but you can do whatever you want with the data):

02:28, January 27, 2013 (UTC)
 * Maybe I could steal the HTML structure Vector uses for search suggestions and implement a function that does things with that at some point down the line, but not now. At the moment I just want to get fundamentals like, you know, edit down. 04:37, February 3, 2013 (UTC)

User:The Mol Man/Useless templates
Wanna give me some ideas for others? 02:44, January 27, 2013 (UTC)

Uhm, why?
http://runescape.wikia.com/index.php?title=Template%3AInfobox_Bonuses_Beta&diff=7240712&oldid=7198387 21:17, February 1, 2013 (UTC)

O i c
http://gyazo.com/0abd8ae3fa9020d2067e82f8c3571dd2 08:17, February 2, 2013 (UTC)

Template:Title
We've noticed that you are still using the title hack template and JavaScript to change the title of pages on this wiki. This template was originally created because of a bug in the MediaWiki software that prevented custom titles from going through. This particular bug was fixed in MW 1.18. Now that all wikis have been upgraded to MW 1.19, we would like to urge you to take advantage of  on this wiki.

According to this page, this template is still in use on a number of pages. I would be willing to run a bot to change over all instances of  to   and remove the relevant JavaScript for you. The major advantage is that it will no longer require JavaScript to change the title. This improves page load times slightly and allows search crawlers to index the pages correctly, which, in turn, improves SEO. Please let me know your thoughts on this subject. Rappy 02:50, February 3, 2013 (UTC)


 * Is it just me, or is the job queue not updating here? This seems to suggest you're back up to 307k jobs to be run and I manually ran an update job on this wiki a few days ago. Are other Special Pages showing up as sluggish? Rappy 07:25, February 6, 2013 (UTC)

target="_blank"
Is there any... easy way to do this with wikitext? I got to the point where using verbatim tags (and thus MediaWiki pages) and then swapping spaces for underscores in a template would be the easiest, least scripty way to do it, but I thought I'd check in case you knew of an easier way?
 * If all else fails, we could just do something like


 * 11:49, February 5, 2013 (UTC)
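For reference, the scripty fallback mentioned above could look something like this. It is only a sketch: the a.external selector is an assumption about MediaWiki's markup for external links, and the original snippet isn't shown here.

```javascript
// Sketch: set target="_blank" on every external link under a root element,
// so they open in a new tab. "a.external" is assumed MediaWiki markup.
function openExternalInNewTab(root) {
  var links = root.querySelectorAll("a.external");
  Array.prototype.forEach.call(links, function (link) {
    link.setAttribute("target", "_blank");
  });
}
```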


 * Tried ? Rappy 07:19, February 6, 2013 (UTC)

I gawt bord
User:The_Mol_Man/HTML 02:44, February 6, 2013 (UTC)

Dragon wolf
Please link update images on the relevant update page's talk page. 18:03, February 7, 2013 (UTC)
 * Okay, just making sure you were aware. (: 18:58, February 7, 2013 (UTC)

You've been invited to my Max cape party!
''Change of date for Max cape party... please re-read.''
Bren's Max cape Party
Thursday, 14 February 2013 at 7:00 PM EST
World 91 at Varrock East Bank

21:03, February 14, 2013 (UTC)

2007 scape
Hey Joeyjoeyjoeyjoeyjoey. Do you mind adminning me on 2007 scape, so I can help you guys import the stuff? I know how to import; I've already used the tool. I can use pywikipediabot if you want, if I can get it going. Thanks Joeykinz, 01:26, February 15, 2013 (UTC)

Chat-pings
Is it possible to refine the pings for usernames, so they only trigger if the word is used by itself, such as cam rather than campus, cameraphone, camelot... you get the picture. Maybe something like . I think I asked Monchoman45 about this ages ago, but your stuff is so much easier to personalise :)
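A whole-word match like that could be sketched as follows (illustrative, not the actual ping script): wrap the name in word boundaries after escaping any regex metacharacters in it.

```javascript
// Sketch: ping only on whole-word matches, so "cam" matches "cam" but not
// "camelot" or "cameraphone". Word boundaries (\b) assume the name starts and
// ends with word characters, which is true for most usernames.
function isPing(message, name) {
  // Escape regex metacharacters so names like "a+b" don't break the pattern.
  var escaped = name.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("\\b" + escaped + "\\b", "i").test(message);
}
```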

re:
I was just going to tell you that all I really have to do before releasing that thing in some stable form is document it a little.



2007Caek
Forum:Affiliate with 2007. I've used my awesome admin powahs to make the thread. I honestly doubt we need a thread there, since all the people actually part of the community are here as well. The Wiki never decided on anything officially for 2007, and the interlanguage stuff really does make it weird, but hopefully this thread will tie up all of the loose ends. 23:53, February 22, 2013 (UTC)

mr joey
I was working on documenting Tundra today. :D I rewrote the readme and I documented a lot of the functions. At the time of writing, I haven't done edit or any of the read functions yet, but I'm getting there! :D 10:10, February 23, 2013 (UTC)

Ghost4942
Can you please review this? He's a sockpuppet of User:Ghost4942, whom you blocked indefinitely. Also, I banned his IP because he was creating sockpuppets today. 00:57, February 24, 2013 (UTC)
 * Relevant chatlogs are at RuneScape:Chat/Logs/24 August 2012. I would advise against this, Dr Caek PhD. 01:26, February 24, 2013 (UTC)

rs2007
Less than 150k votes to go before membership becomes 'free' (it means you're automatically a member on 2007 if you're a member on the live game, right?). I really want to hit that 500k mark (490k will suffice though). Do you think we'll make it? (btw, I've got full steel and done Hazeel Cult, Murder Mystery, Sea Slug and Romeo and Juliet, training agility and exploring atm like a boss) 11:26, February 24, 2013 (UTC)
 * I've uploaded stuff for graphical updates, we can always use that. In addition, I've got the cutscenes of Demon Slayer and R&J recorded. Not Sea Slug or Hazeel Cult however, sorry. I'll post the images to the cutscene link in my signature, feel free to upload some of them there if I forget. 06:34, February 25, 2013 (UTC)

Hey! Listen!
I finalised the first stable release of Tundra; good ol' version 0.1. The code you want is here (with an un-minified version here). I added instructions to the readme for setting it up, so you shouldn't have any problems with that. If you want to start using it for things and maybe give me some ongoing feedback that would be appreciated! :D 07:16, February 25, 2013 (UTC)
 * One thing: I do realise that there's still a lot undocumented, so it will be hard for you to use those things. I am getting there, but perhaps you may just want to mess around with the things I do have documented. You could look at the code for anything undocumented for the time being; it's not like it's not commented. Also, that link I gave you was a fixed link to 0.1; code that references it will always load 0.1 and will always work. If you want a version of it that will automatically update, this is the file you want. I don't recommend using it though; I think it would be better to go through and manually upgrade your code each time to make sure it always works. 08:40, February 25, 2013 (UTC)

MediaWiki:Common.js/GECharts.js
Hey! I just noticed that " " doesn't work. It should display 2 decimal places in Index charts like the one in CTI, but it still shows zero decimal places.

Could you debug the script and figure out why? Thanks! 02:00, March 1, 2013 (UTC)
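For what it's worth, a sketch of one common way to force two decimal places on a displayed value (not the wiki's actual GECharts code, since the broken snippet isn't shown above):

```javascript
// Sketch: format a chart index value to exactly two decimal places.
// toFixed rounds and always pads, so 100 becomes "100.00".
function formatIndex(value) {
  return Number(value).toFixed(2);
}
```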

Fix script
MediaWiki:Common.js/pagetitle.js is outputting errors in the JavaScript console... Fix is:

replace:
 $(function() {
 	var newTitle = $("#title-meta").html();
 	if (!newTitle) return;
 	var edits = $("#user_masthead_since").text();
 	$(".firstHeading,#WikiaUserPagesHeader h1,#WikiaPageHeader h1").html(newTitle);
 	$("#user_masthead_head h2").html(newTitle + "" + edits + " ");
 });

with:
 $(function() {
 	var newTitle = $("#title-meta").html();
 	if (!newTitle) return;
 	$(".firstHeading,#WikiaUserPagesHeader h1,#WikiaPageHeader h1").text(function (i, oldText) {
 		return oldText.replace(wgPageName, newTitle);
 	});
 	$("#user_masthead_head h2").text(function (i, oldText) {
 		return oldText.replace(wgPageName, newTitle);
 	});
 });

Regards, Kris.

01:36, March 7, 2013 (UTC)