I need to extract a few facts from SEC 10-K filings, e.g. gross revenue, gross profit, gross margin, operating expenses, along with the corresponding context.
For filings like https://www.sec.gov/Archives/edgar/data/1318605/000156459018002956/tsla-20171231.xml, it seems feasible to just use XPath to find the few required elements and their values. But there are filings like https://www.sec.gov/Archives/edgar/data/19617/000001961718000057/jpm-20171231.xml where total expense is broken up into different segments with an extension taxonomy.
My questions are:
- What would be a reliable way to work with files like these? Say, if I just want the total operational expenditure, is there a reliable way to find out which elements I'll need to read and then maybe sum up?
- I've tried using the UBMatrix library for reading XBRL files. It works on some files (non-SEC, it can read node values) but throws an NPE for SEC 10-K filings. Could there be a particular reason why XBRL instance documents from the SEC are failing? (I haven't checked the library code, though.)
In any case, if it's possible to do this simply with XPath, I'd prefer that. Validity of the XBRL document is not important.
The most reliable way to work with XBRL files is to use an XBRL processing library. There are a few in Java, some proprietary (with a fee) and some open source.
There is a maintained list of tools and services on xbrl.org:
https://www.xbrl.org/the-standard/how/tools-and-services/
As far as I know, the SEC documents are reliable, widely consumed, and tested on many processors. If there is a problem with UBMatrix such as a null pointer exception, I recommend reaching out to them and letting them know so they can address it.
It is definitely possible (in theory) to use XPath/XQuery/XSLT as well, since XBRL uses XML syntax, but be aware that by resolving the contexts (which is a join in relational terms), you would in fact be re-implementing an incomplete XBRL processor from scratch, with the risks of bugs and sunk costs that go with it. There are a lot of subtleties, and an ecosystem of specifications on top of the core XBRL one (e.g., Dimensions, ...), to take into account in order not to retrieve the wrong values. By using an existing processor, you build on the effort other people have already invested in getting all the XBRL semantics right: this is a benefit of XBRL being a standard.
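To make the trade-off concrete, here is a minimal sketch of the hand-rolled approach in Java. It assumes a local copy of an instance document (the file name is just a placeholder) and that you already know which us-gaap concept you want; it does the fact-to-context join by hand and deliberately ignores dimensions/segments, which is exactly where this approach starts to fall short on filings like the JPM one:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class NaiveXbrlRead {
    public static void main(String[] args) throws Exception {
        // Hypothetical local copy of an instance document such as tsla-20171231.xml
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse("tsla-20171231.xml");
        XPath xp = XPathFactory.newInstance().newXPath();

        // Find every fact reported against the us-gaap Revenues concept.
        // local-name()/namespace-uri() avoid having to register prefixes.
        String factExpr = "//*[local-name()='Revenues' and "
                + "starts-with(namespace-uri(), 'http://fasb.org/us-gaap/')]";
        NodeList facts = (NodeList) xp.evaluate(factExpr, doc, XPathConstants.NODESET);

        for (int i = 0; i < facts.getLength(); i++) {
            Element fact = (Element) facts.item(i);
            String contextRef = fact.getAttribute("contextRef");

            // The "join": resolve the context the fact points to, to get its period.
            // Dimensions/segments inside the context are ignored here, so two facts
            // with the same period but different segments would look identical.
            String periodExpr = "//*[local-name()='context' and @id='" + contextRef
                    + "']/*[local-name()='period']";
            Element period = (Element) xp.evaluate(periodExpr, doc, XPathConstants.NODE);

            System.out.println(fact.getLocalName() + " = " + fact.getTextContent()
                    + " [context " + contextRef + ": "
                    + (period == null ? "?" : period.getTextContent().trim()) + "]");
        }
    }
}
```

Everything a proper processor handles for you (units, decimals, dimensions, duplicate facts, extension taxonomies) would have to be bolted onto this by hand.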
As a final remark: the exact XBRL tags used for gross revenue, gross profit, etc., may vary from company to company, because some use their own tags (extensions) rather than the US-GAAP tags. Also, some companies omit facts that consumers then need to compute from other facts. This can be addressed with mappings and formulas on top of the XBRL processor. Charles Hoffman has shared reports on the matter with a lot of useful advice, and maintains such mappings online (keywords to search for: fundamental accounting concepts, report frames).
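To illustrate the idea (the concept names and their priority order below are illustrative assumptions, not Hoffman's actual mappings), such a layer typically tries a prioritized list of concepts and falls back to deriving the value from other facts when none is reported:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class FactMapping {
    // Illustrative priority list; real mappings (e.g. the fundamental accounting
    // concepts work) are far more extensive and maintained per report frame.
    static final List<String> REVENUE_CONCEPTS = List.of(
            "us-gaap:RevenueFromContractWithCustomerExcludingAssessedTax",
            "us-gaap:Revenues",
            "us-gaap:SalesRevenueNet");

    static Optional<BigDecimal> firstReported(Map<String, BigDecimal> facts,
                                              List<String> concepts) {
        return concepts.stream().map(facts::get).filter(v -> v != null).findFirst();
    }

    /** Gross profit: use the reported fact if present, otherwise derive it. */
    static Optional<BigDecimal> grossProfit(Map<String, BigDecimal> facts) {
        BigDecimal reported = facts.get("us-gaap:GrossProfit");
        if (reported != null) {
            return Optional.of(reported);
        }
        Optional<BigDecimal> revenue = firstReported(facts, REVENUE_CONCEPTS);
        BigDecimal cost = facts.get("us-gaap:CostOfRevenue");
        return (revenue.isPresent() && cost != null)
                ? Optional.of(revenue.get().subtract(cost))
                : Optional.empty();
    }
}
```

The `facts` map stands in for whatever your XBRL processor extracts for the context you care about; the point of the design is that the company-to-company variation lives in one mapping layer, outside the extraction code.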