speed up PyTextRank for summarizing a document
116 views, asked by Ire00

I need to summarize documents with spacy-pytextrank. What is the best approach to make it faster without increasing the resources of the machine?

I was thinking of parallelizing the computation using concurrent.futures and then applying TextRank to each chunk. I know that this way TextRank would evaluate each chunk independently, but I don't see that as a problem if the chunks are sufficiently long.

Does anyone have any better ideas?

1 answer below
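A minimal sketch of the chunk-and-parallelize idea, assuming a fixed character budget per chunk. The chunk_text helper and the summarize_chunk stub are hypothetical names invented for this sketch; the stub stands in for a real spaCy + pytextrank pipeline, which is not shown here:

```python
from concurrent.futures import ProcessPoolExecutor

def chunk_text(text, max_chars=2000):
    """Split text into chunks of at most max_chars, preferring sentence ends."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        if end < len(text):
            # back up to the last sentence boundary inside the window, if any
            cut = text.rfind(". ", start, end)
            if cut > start:
                end = cut + 1
        chunks.append(text[start:end].strip())
        start = end
    return [c for c in chunks if c]

def summarize_chunk(chunk):
    # Placeholder "summary" -- a real worker would run something like:
    #   nlp = spacy.load("en_core_web_sm"); nlp.add_pipe("textrank")
    #   doc = nlp(chunk); return [str(s) for s in doc._.textrank.summary(...)]
    return chunk[:60]

if __name__ == "__main__":
    text = "Sentence one about parallel processing. " * 100
    chunks = chunk_text(text)
    # each worker process scores one chunk independently
    with ProcessPoolExecutor(max_workers=4) as pool:
        summaries = list(pool.map(summarize_chunk, chunks))
    print(len(chunks), "chunks summarized")
```

Since each chunk is scored in isolation, phrase ranks are only comparable within a chunk, which matches the assumption in the question that chunks are long enough to stand on their own.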
Note that pytextrank is a pipeline component in spaCy, so any parallel processing needs to take into account how spaCy runs and its architecture. Notably, there is one doc per large-ish "chunk" of text (i.e., source document), and it probably does not make sense to parallelize by reusing the doc objects; instead, focus on reusing the nlp object and parallelizing by running several doc pipelines concurrently. That's how other projects have handled the kind of situation you're describing.

As one of the committers on pytextrank: yes, in fact we have been looking at ways to leverage concurrent futures in Python to help parallelize internally within the library. We also had a side project for a customer where we used similar Python concurrency through ray, although the built-in asyncio in later versions of the language provides most of what we needed.

To be candid, there are probably better ways to summarize text using language models, though the extractive approach in pytextrank is unsupervised and fast. We had not been prioritizing much development of summarization features; however, there seems to be lots of interest.

What would help would be to know: where do the resources get bottlenecked in your use case? In other words, is utilization of multiple cores low, or is the application I/O-bound? Then we can prioritize how to leverage language features for concurrency.