A quick search shows that CorpusCrawler does not crawl or use Wikipedia. I don't know Python, but it seems feasible, either from scratch with the Wikipedia API (1) or using existing server-side tools (2).
Assess interest

- Assess the quality of Wikipedia raw text data in minority languages.
- Compare the gain to other available public corpora such as Tatoeba (358 languages).
Crawling via API
Load the available list of articles for each Wikipedia, then scrape the pages. If a Wikipedia is too large, the crawl could be limited to max=n articles.
Given an ISO code such as Ndonga's ng:

- download the "List of page titles in main namespace" archive (see below)
- get the article titles into a Python list variable (python)
- code a crawler in /Lib/corpuscrawler/util.py, following the other crawlers as examples, which queries the Wikipedia API, extracts the valuable text, and saves it (python); a standalone sketch follows this list.
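To make the last two steps concrete, here is a minimal standalone sketch, not wired into util.py: it assumes the requests library, a local ng_titles.txt file with one title per line, and the MediaWiki TextExtracts module for plain text. All file names, function names, and the output layout are illustrative assumptions, not existing CorpusCrawler code.

```python
# Sketch: fetch plain-text extracts for a list of titles via the MediaWiki API.
# File and function names are illustrative only.
import pathlib
import requests

API_URL = "https://ng.wikipedia.org/w/api.php"  # Ndonga Wikipedia


def fetch_plaintext(title):
    """Return the plain-text extract of one article (whole-page extracts
    are limited to one title per request by the TextExtracts module)."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "titles": title,
        "format": "json",
    }
    data = requests.get(API_URL, params=params, timeout=30).json()
    pages = data["query"]["pages"]
    # `pages` is keyed by page id; take the single entry.
    return next(iter(pages.values())).get("extract", "")


def crawl(titles_file="ng_titles.txt", out_dir="corpus/ng", max_articles=None):
    """Read titles (one per line), fetch each article, save it as .txt."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    titles = pathlib.Path(titles_file).read_text(encoding="utf-8").splitlines()
    for i, title in enumerate(titles[:max_articles]):
        text = fetch_plaintext(title)
        if text:
            (out / f"{i:06d}.txt").write_text(text, encoding="utf-8")


if __name__ == "__main__":
    crawl(max_articles=8)  # ng only has 8 articles anyway
```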
Wikipedia API provides text
Various formats available:
format: The format of the output.

- json: Output data in JSON format.
- jsonfm: Output data in JSON format (pretty-print in HTML).
- none: Output nothing.
- php: Output data in serialised PHP format.
- phpfm: Output data in serialised PHP format (pretty-print in HTML).
- rawfm: Output data, including debugging elements, in JSON format (pretty-print in HTML).
- xml: Output data in XML format.
- xmlfm: Output data in XML format (pretty-print in HTML).
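A small illustration of the format parameter (a plain MediaWiki API query, nothing CorpusCrawler-specific; the siteinfo module is just a convenient tiny request):

```python
# The same query returned as JSON and as XML by switching `format`.
import requests

API_URL = "https://ng.wikipedia.org/w/api.php"
base = {"action": "query", "meta": "siteinfo"}

as_json = requests.get(API_URL, params={**base, "format": "json"}, timeout=30).json()
as_xml = requests.get(API_URL, params={**base, "format": "xml"}, timeout=30).text

print(as_json["query"]["general"]["sitename"])  # parsed structure
print(as_xml[:80])                               # raw XML string
```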
List of Wikipedias (~300)

List of articles per Wikipedia
For convenience, I use the tiny Ndonga (ng) Wikipedia (8 articles), which is easier to explore by hand. For a larger demo, you could also inspect similar URLs with the ISO code of :
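If you prefer to pull that article list programmatically instead of by hand, here is a sketch using the API's list=allpages module with continuation; the function name and defaults are mine, not part of the crawler:

```python
# Sketch: list all main-namespace article titles of a given Wikipedia
# via the MediaWiki API (list=allpages), following the continuation protocol.
import requests


def list_titles(lang="ng"):
    """Yield every main-namespace page title of <lang>.wikipedia.org."""
    api_url = f"https://{lang}.wikipedia.org/w/api.php"
    params = {
        "action": "query",
        "list": "allpages",
        "apnamespace": 0,      # main namespace only (see Namespaces below)
        "aplimit": "max",
        "format": "json",
    }
    while True:
        data = requests.get(api_url, params=params, timeout=30).json()
        for page in data["query"]["allpages"]:
            yield page["title"]
        if "continue" not in data:
            break
        params.update(data["continue"])  # carry apcontinue into the next request


if __name__ == "__main__":
    print(list(list_titles("ng")))  # the 8 Ndonga article titles
```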
Namespaces
On all wikis. See also here
- 0: (main)
- 1: Talk:
- 2: User:
- 3: User_talk:

Dumps & paths
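As a sketch of the dump route for the titles archive mentioned in the first step above, assuming the usual dumps.wikimedia.org naming ({lang}wiki-latest-all-titles-in-ns0.gz); check the dump index page for the exact filename:

```python
# Sketch: download and read the "List of page titles in main namespace"
# dump for a given Wikipedia. The filename pattern below is the usual
# dumps.wikimedia.org layout; verify it on the dump index if it changes.
import gzip
import io
import requests


def titles_from_dump(lang="ng"):
    """Return main-namespace titles from the latest titles dump of <lang>wiki."""
    url = (f"https://dumps.wikimedia.org/{lang}wiki/latest/"
           f"{lang}wiki-latest-all-titles-in-ns0.gz")
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()
    with gzip.open(io.BytesIO(resp.content), mode="rt", encoding="utf-8") as fh:
        lines = [line.strip() for line in fh]
    # The first line is typically a "page_title" header; titles use underscores.
    return [t.replace("_", " ") for t in lines[1:] if t]


if __name__ == "__main__":
    print(titles_from_dump("ng"))
```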
Using Wikipedia extractors?
Hybrid approach
In util.py, code a simple crawler which gets just that .zip, converts it back to txt content, and adds it to the corpora.

cc: @brawer