I am trying to parse the XML from a Wikia dump to pull out the <text> child element of each page, and then find the links in that text, which are delimited by [[ and ]]. For example, take the following sample snippet from one wiki:
<mediawiki xmlns="http://www.mediawiki.org/xml/export-0.6/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mediawiki.org/xml/export-0.6/ http://www.mediawiki.org/xml/export-0.6.xsd" version="0.6" xml:lang="en">
  <siteinfo>
    <sitename>Wookieepedia</sitename>
    <base>http://10.8.66.74/wiki/Main_Page</base>
    <generator>MediaWiki 1.19.24</generator>
    <case>first-letter</case>
    <namespaces>
      <namespace key="-2" case="first-letter">Media</namespace>
      ...
      <namespace key="1202" case="first-letter">Message Wall Greeting</namespace>
    </namespaces>
  </siteinfo>
  <page>
    <title>Brianna</title>
    <ns>0</ns>
    <id>5</id>
    ...
    <text xml:space="preserve" bytes="36038">{{Eras|old|featured}}
{{Youmay|the [[Echani]] [[hybrid]]|the [[Brianna (Human)|Human]]}}
{{Character
|type=Jedi
...
From the above, the parser should identify that the Brianna page links to the Echani page, as well as to the "hybrid" and "Brianna (Human)" pages.
Is there a good MediaWiki parsing tool for Python that can spit this out? Performance is not a major concern, since this is done offline and these wikis are not huge.
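For concreteness, here is a minimal sketch of the kind of extraction I mean. It streams the dump with xml.etree.ElementTree and hands the wikitext to the mwparserfromhell library; the file name is a placeholder, and the export-0.6 namespace is taken from the snippet above, so both may differ for other dumps:

import xml.etree.ElementTree as ET
import mwparserfromhell

NS = "{http://www.mediawiki.org/xml/export-0.6/}"

def page_links(dump_path):
    # Stream the dump so the whole file never has to sit in memory.
    for _, elem in ET.iterparse(dump_path, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            # The <text> node may be nested inside <revision>, so search the subtree.
            text = elem.findtext(".//" + NS + "text") or ""
            wikicode = mwparserfromhell.parse(text)
            yield title, [str(link.title) for link in wikicode.filter_wikilinks()]
            elem.clear()  # release the finished <page> subtree

for title, links in page_links("dump.xml"):  # placeholder file name
    print(title, "->", links)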
Your approach isn't sound: use the links API instead. There are multiple Python clients for the MediaWiki API. Never parse wikitext on your own unless you are absolutely forced to!
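For example, here is a minimal sketch of the API route using the requests library, assuming the wiki exposes the standard api.php endpoint (the URL below is a placeholder). Older MediaWiki versions, such as the 1.19 shown in your dump, signal continuation with query-continue rather than continue, so both are handled:

import requests

API = "http://example.wikia.com/api.php"  # placeholder endpoint

def links_for(title):
    params = {
        "action": "query",
        "prop": "links",
        "titles": title,
        "pllimit": "max",
        "format": "json",
    }
    links = []
    while True:
        data = requests.get(API, params=params).json()
        for page in data["query"]["pages"].values():
            links.extend(link["title"] for link in page.get("links", []))
        # Keep following the continuation token until all links are returned.
        if "continue" in data:             # modern MediaWiki
            params.update(data["continue"])
        elif "query-continue" in data:     # older MediaWiki (pre-1.26 default)
            params.update(data["query-continue"]["links"])
        else:
            return links

print(links_for("Brianna"))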
Also note that, for all wikis but the small ones, Wikia's dumps are completely broken (truncated at a random point). See also https://archive.org/details/wikia_dump_20141219 and https://github.com/Wikia/app/pull/6118#issuecomment-183633326