IBM Watson Natural Language Understanding: uploading multiple documents for analysis

I have roughly 200 documents that need IBM Watson NLU analysis. Currently, processing is performed one document at a time. Can NLU perform a batch analysis? What is the correct Python code or process to batch load the files and then retrieve the results? The end goal is to use the results to analyze which documents are similar in nature. Any direction is greatly appreciated, as the IBM support documentation does not cover batch processing.
Asked by RileyZ71

1 answer below
NLU can be "manually" adapted to do batch analysis, but the Watson service that provides what you are asking for is Watson Discovery. It lets you create Collections (sets of documents) that are enriched through an internal NLU function and can then be queried.
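For the "manual" NLU route, a minimal sketch using the ibm-watson Python SDK might look like the following: it loops over a folder of text files, analyzes each one with a single NLU call, and saves the raw responses so the similarity comparison can be done afterwards. The API key, service URL, the docs folder, and the chosen features (concepts and keywords) are placeholders and assumptions, not anything specific to your setup.

```python
# Sketch of "manual" batch analysis with the ibm-watson Python SDK.
# YOUR_APIKEY, YOUR_SERVICE_URL, and the docs/ folder are placeholders.
import json
import os

from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import ApiException, NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    ConceptsOptions, Features, KeywordsOptions
)

authenticator = IAMAuthenticator("YOUR_APIKEY")
nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")

results = {}
for name in sorted(os.listdir("docs")):        # one NLU request per document
    path = os.path.join("docs", name)
    with open(path, encoding="utf-8") as f:
        text = f.read()
    try:
        response = nlu.analyze(
            text=text,
            features=Features(
                concepts=ConceptsOptions(limit=10),
                keywords=KeywordsOptions(limit=20),
            ),
        ).get_result()
        results[name] = response
    except ApiException as e:
        # Skip documents the service rejects, but record which ones failed.
        print(f"{name}: NLU request failed ({e.code} {e.message})")

# Persist everything so the similarity analysis can be done offline later.
with open("nlu_results.json", "w", encoding="utf-8") as out:
    json.dump(results, out, indent=2)
```

From nlu_results.json you could, for example, build a keyword- or concept-relevance vector per document and compare documents with cosine similarity. Alternatively, Watson Discovery handles ingestion and enrichment of the whole collection for you and lets you query it afterwards, which is closer to what you describe.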