I'm trying to use the 'edgar' package to parse 10-Ks for specific keywords (https://cran.r-project.org/web/packages/edgar/edgar.pdf).
My code works fine when testing with just a few companies, but I need to do this for ~8,000 companies over 9 years. The first problem was running out of disk space; splitting the companies into smaller chunks was an easy fix for that. Now that I have the relevant files downloaded, the step that counts the keywords often crashes at around 80% with a fatal error message and no further information.
Is there any way I could perform this in a more reliable way?
Splitting the company list into smaller chunks solved the disk space error, but parsing the downloaded files usually crashes RStudio at around 80% (it fluctuates; sometimes it works, sometimes it doesn't).
Below is the code I'm using:
library(edgar)

testicik120170.5 <- searchFilings(cik.no = cik0.5, form.type = "10-K",
                                  filing.year = 2017, word.list = word.list,
                                  useragent = useragent)
write.csv(testicik120170.5, "cik0.5-2017.csv", row.names = FALSE)
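
One idea I'm considering is to process the CIKs in small batches, wrap each searchFilings call in tryCatch, and write each batch to its own CSV as soon as it completes, so a crash only costs the current batch and a rerun can skip batches that already have an output file. A rough sketch (assuming cik0.5, word.list, and useragent are defined as above; the batch size of 100 and the file naming are arbitrary):

library(edgar)

# Split the CIK vector into batches of 100 (batch size is arbitrary)
batches <- split(cik0.5, ceiling(seq_along(cik0.5) / 100))

for (i in seq_along(batches)) {
  out.file <- sprintf("cik0.5-2017-batch%03d.csv", i)

  # Skip batches that already finished in a previous run
  if (file.exists(out.file)) next

  # Catch R-level errors so one bad filing doesn't kill the whole run
  res <- tryCatch(
    searchFilings(cik.no = batches[[i]], form.type = "10-K",
                  filing.year = 2017, word.list = word.list,
                  useragent = useragent),
    error = function(e) {
      message("Batch ", i, " failed: ", conditionMessage(e))
      NULL
    }
  )

  # Write each batch's result immediately so progress survives a crash
  if (!is.null(res)) write.csv(res, out.file, row.names = FALSE)
}

Note that tryCatch only catches R-level errors; if the session itself dies with a fatal error, the file.exists check at least lets a fresh session resume where the last one stopped. Running the loop via Rscript from the command line rather than inside RStudio might also make the crashes easier to diagnose.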