Wikipedia:Wikipedia Signpost/2016-08-04/Recent research
{{Wikipedia:Signpost/Template:Signpost-article-start|{{{1|Easier navigation via better wikilinks}}}|By Jonathan Morgan and Tilman Bayer| 3 August 2016}}
{{WRN}}
=Briefly=
==New research journal about Wikipedia and higher education==
A new journal called [http://wikistudies.org/ Wiki Studies] is being launched. As [https://lists.wikimedia.org/pipermail/wiki-research-l/2016-June/005253.html explained] by founding editor Bob Cummings (a professor of Writing and Rhetoric at the University of Mississippi and author of a 2009 book titled "Lazy virtues: teaching writing in the age of Wikipedia"):
:Wiki Studies is an interdisciplinary, open-access, peer-reviewed journal focusing on the intersection of Wikipedia and higher education. We are interested in most all of the same topics hosted on the [https://lists.wikimedia.org/mailman/listinfo/wiki-research-l research listserv] and the newsletter, including articles about pedagogical practices, epistemology, bias, mission, and reliability. We will not charge for submission or publication, and will offer open access to readers. We will host on Open Journal Systems.
The submission deadline for the first annual volume, envisaged to appear in March 2017, is 31 December 2016.
==Conferences and events==
See the [[m:Research:Events|research events page]] on Meta-wiki for upcoming conferences and events, including submission deadlines.
=Other recent publications=
A list of other recent publications that could not be covered in time for this issue. Contributions are always welcome for reviewing or summarizing newly published research.
- "Wikipedia's semantics of openness: Ideas and implementations of the Internet's extended participation potentials in the context of collaborative knowledge production" (sociology dissertation in German, original title: "Die Offenheitssemantik der Wikipedia. Ideen und Verwirklichungen der erweiterten Beteiligungspotentiale des Internets im Kontext kollaborativer Wissensproduktion").{{Cite paper| publisher = Universität Bielefeld| last = Groß| first = Linda| title = Die Offenheitssemantik der Wikipedia. Ideen und Verwirklichungen der erweiterten Beteiligungspotentiale des Internets im Kontext kollaborativer Wissensproduktion| date = 2016| url = https://pub.uni-bielefeld.de/publication/2901955}}
- "Towards a (de)centralization-based typology of peer production"{{Cite journal| issn = 1726-670X| volume = 14| issue = 1| pages = 189-207| last1 = Rosnay| first1 = Melanie Dulong de| last2 = Musiani| first2 = Francesca| title = Towards a (De)centralization-Based Typology of Peer Production| journal = tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society| date = 2016-03-26| url = http://www.triple-c.at/index.php/tripleC/article/view/728}} From the paper: "This paper proposes a typology of peer-production platforms, based on the centralisation/decentralisation levels of several of their design features. Wikipedia [presented as an example of a platform with "centralised architecture" but "decentralised governance"] is a case of governance being semi-distributed among several nodes structured in semi-centralised clusters (administrators and editors control what is accepted or rejected in case of conflict) or decentralised local networks (chapters for choosing projects and thematics on which to focus, e.g. the Wikicheese project of Wikimedia France)."
- "Developing an annotator for Latin texts using Wikipedia"
{{Cite book| last = Guarasci| first = Raffaele| title = Developing an annotator for Latin texts using Wikipedia| date = March 2016| url = https://hal.archives-ouvertes.fr/hal-01279853}} HAL Archives Ouvertes, working paper
From the abstract: "Although Wikipedia is an excellent resource from which to extract many kinds of information (morphological, syntactic and semantic) to be used in NLP tasks on modern languages, it was rarely applied to perform NLP tasks for the Latin language. The work presents the first steps of the developement of a POS Tagger based on the Latin version of Wiktionary and a Wikipedia-based semantic annotator."
- "Candidate searching and key coreference resolution for Wikification"{{Cite conference| publisher = ACM| doi = 10.1145/2857546.2857631| isbn = 9781450341424| pages = 83:1-83:5| last1 = Pham| first1 = Minh T. X.| last2 = Cao| first2 = Tru H.| last3 = Huynh| first3 = Huy M.| title = Candidate Searching and Key Coreference Resolution for Wikification|book-title= Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication| location = New York, NY, USA| series = IMCOM '16| date = 2016| url = http://doi.acm.org/10.1145/2857546.2857631}} {{closed access}} From the abstract: "Wikification is the task to link textual mentions in a document to articles in Wikipedia. It comprises three main steps, namely, mention recognition, candidate generation, and entity linking. For candidate generation, existing methods use hyperlinks in Wikipedia or match a mention of discourse to Wikipedia article titles. They may miss the correct target entity and thus fail to link the mention to Wikipedia. In this paper, we propose to use a mention as a query and Wikipedia [sic] own search engine to look for additional candidate articles. [...] our proposed method outperforms or achieves competitive results in comparison to some state-of-the-art systems, but is simpler and uses less features."
- "Generating article placeholders from Wikidata for Wikipedia—increasing access to free and open knowledge"{{Cite paper| publisher = HTW Berlin (University of Applied Sciences)| last = Kaffee| first = Lucie-Aimée| title = Generating Article Placeholders from Wikidata for Wikipedia - Increasing Access to Free and Open Knowledge| date = 2016-04-02}} From the abstract: "The major objective of this thesis is to increase the access to open and free knowledge in Wikipedia by developing a MediaWiki extension called ArticlePlaceholder. ArticlePlaceholders are content pages in Wikipedia auto-generated from information provided by Wikidata. [...] This thesis [...] includes the personas, scenarios, user-stories, non-functional and functional requirements for the requirement analysis. The analysis was done in order to implement the features needed to achieve the goal of providing more information for under-resourced languages. The implementation of these requirements is the main part of the following thesis."
- "Analysing the Usage of Wikipedia on Twitter: Understanding Inter-Language Links"{{Cite conference| publisher = IEEE Computer Society| doi = 10.1109/HICSS.2016.243| isbn = 9780769556703| pages = 1920-1929| last1 = Zangerle| first1 = Eva| last2 = Schmidhammer| first2 = Georg| last3 = Specht| first3 = Gunther| title = Analysing the Usage of Wikipedia on Twitter: Understanding Inter-Language Links|book-title= Proceedings of the 2016 49th Hawaii International Conference on System Sciences (HICSS)| location = Washington, DC, USA| series = HICSS '16| date = 2016| url = http://doi.ieeecomputersociety.org/10.1109/HICSS.2016.243}} From the abstract: "In this paper, we analyse links within tweets referring to a Wikipedia of a language different from the tweet's language. [...] We find that the main cause for inter-language links is the non-existence of the article in the tweet's language. Furthermore, we observe that the quality of the tweeted articles is constantly higher in comparison to their counterparts, suggesting that users choose the article of higher quality even when tweeting in another language. Moreover, we find that English is the most dominant target for inter-language links." (See also [http://www.slideshare.net/evazangerle/analysing-the-usage-of-wikipedia-on-twitter-understanding-interlanguage-links presentation slides] and our coverage of a preceding paper: "Wikipedia and Twitter".)
==[http://snap.stanford.edu/wikiworkshop2016/#papers-www Wiki Workshop 2016] at [[International World Wide Web Conference]] (WWW)==
- "Wikipedia Tools for Google Spreadsheets"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891112| isbn = 9781450341448| pages = 997-1000| last = Steiner| first = Thomas| title = Wikipedia Tools for Google Spreadsheets|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016|url= http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_4.pdf}} From the abstract: "With the Wikipedia Tools for Google Spreadsheets, we have created a toolkit that facilitates working with Wikipedia data from within a spreadsheet context. We make these tools available as open-source on GitHub [https://github.com/tomayac/wikipedia-tools-for-google-spreadsheets], released under the permissive Apache 2.0 license." (See also :meta:Wikipedia Tools for Google Spreadsheets)
- "Assessing the Quality of Wikipedia Editors through Crowdsourcing"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891113| isbn = 9781450341448| pages = 1001-1006| last1 = Suzuki| first1 = Yu| last2 = Nakamura| first2 = Satoshi| title = Assessing the Quality of Wikipedia Editors Through Crowdsourcing|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016|url= http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_5.pdf }} From the abstract: "...we propose a method for assessing the quality of Wikipedia editors. By effectively determining whether the text meaning persists over time, we can determine the actual contribution by editors. This is used in this paper to detect vandal. However, the meaning of text does not always change if a term in the text is added or removed. Therefore, we cannot capture the changes of text meaning automatically, so we cannot detect whether the meaning of text survives or not. To solve this problem, we use crowdsourcing to manually detect changes of text meaning. In our experiment, we confirmed that our proposed method improves the accuracy of detecting vandals by about 5%."
- "Finding Structure in Wikipedia Edit Activity: An Information Cascade Approach"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891110| isbn = 9781450341448| pages = 1007-1012| last1 = Tinati| first1 = Ramine| last2 = Luczak-Roesch| first2 = Markus| last3 = Hall| first3 = Wendy| title = Finding Structure in Wikipedia Edit Activity: An Information Cascade Approach|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016|url= http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_2.pdf}} From the abstract: "This paper documents a study of the real-time Wikipedia edit stream containing over 6 million edits on 1.5 million English Wikipedia articles, during 2015.[...] Our findings show that by constructing information cascades between Wikipedia articles using editing activity, we are able to construct an alternative linking structure in comparison to the embedded links within a Wikipedia page. This alternative article hyperlink structure was found to be relevant in topic, and timely in relation to external global events (e.g., political activity)."
- "With a Little Help from my Neighbors: Person Name Linking Using the Wikipedia Social Network"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891109| isbn = 9781450341448| pages = 985-990| last1 =Geiß | first1 = Johanna| last2 = Gertz| first2 = Michael| title = With a Little Help from My Neighbors: Person Name Linking Using the Wikipedia Social Network|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016| url = http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_1.pdf}} From the abstract: "In this paper, we present a novel approach to person name disambiguation and linking that uses a large-scale social network extracted from the English Wikipedia."
- "Cleansing Wikipedia Categories using Centrality"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891111| isbn = 9781450341448| pages = 969-974| last1 = Boldi| first1 = Paolo| last2 = Monti| first2 = Corrado| title = Cleansing Wikipedia Categories Using Centrality|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016|url=http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_3.pdf }} From the abstract: "We propose a novel general technique aimed at pruning and cleansing the Wikipedia category hierarchy, with a tunable level of aggregation. Our approach is endogenous, since it does not use any information coming from Wikipedia articles, but it is based solely on the user-generated (noisy) Wikipedia category folksonomy itself." See also https://github.com/corradomonti/wikipedia-categories
- "Learning Web Queries For Retrieval of Relevant Information About an Entity in a Wikipedia Category"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891114| isbn = 9781450341448| pages = 1013-1014| last1 = Yadav| first1 = Vikrant| last2 = Kumar| first2 = Sandeep| title = Learning Web Queries for Retrieval of Relevant Information About an Entity in a Wikipedia Category|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016|url=http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_6.pdf }} From the abstract: "... we present a novel method to obtain a set of most appropriate queries for retrieval of relevant information about an entity from the Web. Using the body text of existing articles in a Wikipedia category, we generate a set of queries capable of fetching the most relevant content for any entity belonging to that category. We find the common topics discussed in the articles of a category using Latent Semantic Analysis (LSA) and use them to formulate the queries. Using Long Short-Term Memory (LSTM) neural network, we reduce the number of queries by removing the less sensible ones and then select the best ones out of them."
- "On the Retrieval of Wikipedia Articles Containing Claims on Controversial Topics"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891115| isbn = 9781450341448| pages = 991-996| last1 = Roitman| first1 = Haggai| last2 = Hummel| first2 = Shay| last3 = Rabinovich| first3 = Ella| last4 = Sznajder| first4 = Benjamin| last5 = Slonim| first5 = Noam| last6 = Aharoni| first6 = Ehud| title = On the Retrieval of Wikipedia Articles Containing Claims on Controversial Topics|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016| url = http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_7.pdf}} From the abstract: "This work presents a novel claim-oriented document retrieval task. For a given controversial topic, relevant articles containing claims that support or contest the topic are retrieved from a Wikipedia corpus."
- "Automatic Discovery of Emerging Trends using Cluster Name Synthesis on User Consumption Data"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891116| isbn = 9781450341448| pages = 981-983| last1 = Chattopadhyay| first1 = T.| last2 = Maiti| first2 = Santa| last3 = Pal| first3 = Arindam| last4 = Ghose| first4 = Avik| last5 = Pal| first5 = Arpan| last6 = Viswanathan| first6 = Shanky| last7 = Sivakumar| first7 = Narendran| title = Automatic Discovery of Emerging Trends Using Cluster Name Synthesis on User Consumption Data: Extended Abstract|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016| url = http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_8.pdf }} From the abstract: "Technically it is possible for the telecommunication companies to recommend suitable advertisements if they can classify the web sites browsed by their customers into classes like sports, e-commerce, social networking, streaming media etc. Another problem is to classify a new website when it doesn't belong to any of the existing clusters. In this paper, the authors are going to propose a method to automatically classify the websites and synthesize the cluster names in case it doesn't belong to any of the predefined clusters. [...] This proposed system uses the Wikipedia data [from articles about such websites] to construct the document for the websites browsed by the customers."
- "Applying a Multi-Level Modeling Theory to Assess Taxonomic Hierarchies in Wikidata"{{Cite conference| publisher = International World Wide Web Conferences Steering Committee| doi = 10.1145/2872518.2891117| isbn = 9781450341448| pages = 975-980| last1 = Brasileiro| first1 = Freddy| last2 = Almeida| first2 = João Paulo A.| last3 = Carvalho| first3 = Victorio A.| last4 = Guizzardi| first4 = Giancarlo| title = Applying a Multi-Level Modeling Theory to Assess Taxonomic Hierarchies in Wikidata|book-title= Proceedings of the 25th International Conference Companion on World Wide Web| location = Republic and Canton of Geneva, Switzerland| series = WWW '16 Companion| date = 2016|url=http://snap.stanford.edu/wikiworkshop2016/papers/Wiki_Workshop__WWW_2016_paper_11.pdf}} From the abstract: "In this paper, we address the quality of taxonomic hierarchies in Wikidata. We focus on taxonomic hierarchies with entities at different classification levels (particular individuals, types of individuals, types of types of individuals, etc.). We use an axiomatic theory for multi-level modeling to analyze current Wikidata content, and identify a significant number of problematic classification and taxonomic statements. The problems seem to arise from an inadequate use of instantiation and subclassing in certain Wikidata hierarchies."
=References=
{{reflist|30em}}