I am trying to process a Wikimedia dump file (e.g. http://dumps.wikimedia.org/enwiki/20150304/enwiki-20150304-pages-meta-history9.xml-p000897146p000925000.bz2) using gwtwiki and Java. I am fairly new to Java (I can understand and write simple Java programs) and I'm using Eclipse. I imported the gwtwiki project, ran DumpExample.java, and got the Usage: Parser <XML-FILE>
error response.
I don't know where to define the path of the .bz2 dump file. I also tried to at least change the Usage: Parser <XML-FILE>
message to something else, but I got the same output even when stepping through the code or adding a few extra lines such as System.out.println("test");
The documentation offers no explanation of how exactly this should be done; I imagine that for someone who knows Java well it is self-explanatory.
Now, I don't need a step-by-step tutorial on how to achieve this, but I would like a starting point or a few clues, and I will do the learning on my own. After searching for days, I realize I don't even know where to start. I also know you could say something like:
Learn more Java!
but I always learn better by actually engaging in a project like this.
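In case it helps show my level: here is a tiny sketch of how I understand command-line arguments to reach main, with a hard-coded fallback path (the class name, helper, and fallback path are placeholders I made up, not part of gwtwiki):

```java
public class ArgsDemo {
    // Use the first command-line argument if present, otherwise a fallback path.
    static String resolve(String[] args, String fallback) {
        return args.length >= 1 ? args[0] : fallback;
    }

    public static void main(String[] args) {
        // In Eclipse, program arguments are set under
        // Run > Run Configurations... > Arguments > Program arguments.
        String path = resolve(args, "C:\\temp\\dump.xml.bz2"); // placeholder path
        System.out.println("Would parse: " + path);
    }
}
```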
The DumpExample.java:
package info.bliki.wiki.dump;

import org.xml.sax.SAXException;

/**
 * Demo application which reads a compressed or uncompressed Wikipedia XML dump
 * file (depending on the given file extension <i>.gz</i>, <i>.bz2</i> or
 * <i>.xml</i>) and prints the title and wiki text.
 */
public class DumpExample {

    /**
     * Print title and content of all the wiki pages in the dump.
     */
    static class DemoArticleFilter implements IArticleFilter {
        public void process(WikiArticle page, Siteinfo siteinfo) throws SAXException {
            System.out.println("----------------------------------------");
            System.out.println(page.getId());
            System.out.println(page.getRevisionId());
            System.out.println(page.getTitle());
            System.out.println("----------------------------------------");
            System.out.println(page.getText());
        }
    }

    /**
     * Print all titles of the wiki pages which have "real" content
     * (i.e. the title has no namespace prefix) (key == 0).
     */
    static class DemoMainArticleFilter implements IArticleFilter {
        public void process(WikiArticle page, Siteinfo siteinfo) throws SAXException {
            if (page.isMain()) {
                System.out.println(page.getTitle());
            }
        }
    }

    /**
     * Print all titles of the wiki pages which are templates (key == 10).
     */
    static class DemoTemplateArticleFilter implements IArticleFilter {
        public void process(WikiArticle page, Siteinfo siteinfo) throws SAXException {
            if (page.isTemplate()) {
                System.out.println(page.getTitle());
            }
        }
    }

    /**
     * Print all titles of the wiki pages which are categories (key == 14).
     */
    static class DemoCategoryArticleFilter implements IArticleFilter {
        public void process(WikiArticle page, Siteinfo siteinfo) throws SAXException {
            if (page.isCategory()) {
                System.out.println(page.getTitle());
            }
        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        // Print usage and exit unless exactly one argument (the dump file) is given.
        if (args.length != 1) {
            System.out.println("test");
            System.out.println("test");
            System.out.println("test");
            System.out.println("test");
            System.err.println("Usagessss: Parser <XML-FILEZZZZZZ>");
            System.out.println("test2");
            System.exit(-1);
        }
        // String bz2Filename =
        //     "c:\\temp\\dewikiversity-20100401-pages-articles.xml.bz2";
        String bz2Filename = args[0];
        try {
            IArticleFilter handler = new DemoArticleFilter();
            WikiXMLParser wxp = new WikiXMLParser(bz2Filename, handler);
            wxp.parse();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
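Since the usage message only appears when the single file argument is missing, a minimal way to check whether the argument actually reaches the program and points at a readable file is a standalone check like the following (the class and method names here are made up for illustration, not part of gwtwiki):

```java
import java.io.File;

public class DumpFileCheck {
    // Returns true when the given path names an existing, readable file.
    static boolean isReadableDump(String path) {
        File f = new File(path);
        return f.isFile() && f.canRead();
    }

    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("Usage: DumpFileCheck <XML-FILE>");
            System.exit(-1);
        }
        System.out.println(args[0] + " readable: " + isReadableDump(args[0]));
    }
}
```

If this prints "readable: false" for the path you pass, the problem is the argument or the working directory, not gwtwiki.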
Late answer; maybe it will help you, or if you've moved on, maybe it will help the next person who stumbles onto this post. I'm using this implementation: