I'm trying to extract the domain using tldextract:
import tldextract

ext = tldextract.extract(editString2)
print(ext.domain)
It prints the expected result, but at the same time I get the error below. Is there any way to keep it from showing this error?
error reading TLD cache file C:\Python33\lib\site-packages\tldextract\.tld_set: 'charmap' codec can't decode byte 0x81 in position 2350: character maps to <undefined>
Exception reading Public Suffix List url https://raw.github.com/mozilla/mozilla-central/master/netwerk/dns/effective_tld_names.dat. Consider using a mirror or constructing your TLDExtract with `fetch=False`.
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\tldextract\tldextract.py", line 247, in _PublicSuffixListSource
page = unicode(urlopen(url).read(), 'utf-8')
File "C:\Python33\lib\urllib\request.py", line 156, in urlopen
return opener.open(url, data, timeout)
File "C:\Python33\lib\urllib\request.py", line 475, in open
response = meth(req, response)
File "C:\Python33\lib\urllib\request.py", line 587, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python33\lib\urllib\request.py", line 513, in error
return self._call_chain(*args)
File "C:\Python33\lib\urllib\request.py", line 447, in _call_chain
result = func(*args)
File "C:\Python33\lib\urllib\request.py", line 595, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
"mozilla/mozilla-central" on GitHub was renamed to "mozilla/gecko-dev", without a redirect, hence the 404. The URL is fixed in the latest version of
tldextract
, 1.3.1If it hadn't been fixed though, you can manually provide a PSL URL to your own
TLDExtract
callable with thesuffix_list_url
kwarg. See the docs.
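For reference, here is a minimal sketch of both workarounds mentioned above. The exact kwarg names vary between tldextract versions, and the mirror URL below is only a placeholder, so check the docs for the version you have:

import tldextract

# Option 1: skip fetching the remote list entirely and rely on the
# snapshot bundled with the package (the fetch=False suggestion from
# the error message).
no_fetch = tldextract.TLDExtract(fetch=False)

# Option 2: point the extractor at a working mirror of the Public
# Suffix List via the suffix_list_url kwarg (placeholder URL).
mirrored = tldextract.TLDExtract(
    suffix_list_url="https://example.com/effective_tld_names.dat")

ext = no_fetch("http://forums.news.cnn.com/")  # stands in for editString2
print(ext.domain)  # -> "cnn"

Either way, the extractor no longer hits the dead mozilla-central URL, so the warning and traceback go away.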