Separation of crawl phase from processing phase in Storm Crawler

I am currently working on a Storm Crawler based project in which we modified some of the bolts and spouts of the original Storm Crawler core artifact (for example, parts of ParserBolt). We also developed some processing steps in the same project, so our bolts are mixed in with the original Storm Crawler code. For example, I have an image classifier that takes images from Storm Crawler and runs some classification on them.

Now I want to separate the crawl phase from the processing phase. For the crawl phase, I want to use the latest version of Storm Crawler and save its results into a Solr collection named Docs. For the second phase (which is independent of the crawl phase), I have another Storm based project that has no relation to Storm Crawler; the input tuples of its topology need to be fed from the Docs collection. I have no idea how to feed documents from the Solr collection into this second topology.

Is this a good design architecture? If so, what is a good way to import data into the second topology? It should also be noted that I want to run these projects without any downtime.
That is an opinion-based question, but to answer it: you can definitely separate your pipeline into multiple topologies. It is good practice when the phases need different types of hardware, e.g. GPUs for image processing vs. cheaper instances for the crawl.
You could index your documents into SOLR, but other solutions would also work, for instance queues. What you will need in the second topology is a bespoke SOLR spout. If you want the second project to be independent of SC, you won't be able to reuse the code from our SOLR module, but you could take it as a source of inspiration.
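As a starting point, here is a minimal sketch of such a spout, assuming Storm 2.x and SolrJ 7+. It pages through the Docs collection with Solr's cursorMark deep paging and emits one tuple per document; the Solr URL and the `id` / `url` / `content` field names are illustrative assumptions, not anything from SC's SOLR module:

```java
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class SolrDocsSpout extends BaseRichSpout {

    private SpoutOutputCollector collector;
    private HttpSolrClient solr;
    private String cursorMark = CursorMarkParams.CURSOR_MARK_START;
    private final Queue<SolrDocument> buffer = new LinkedList<>();

    @Override
    public void open(Map<String, Object> conf, TopologyContext context,
                     SpoutOutputCollector collector) {
        this.collector = collector;
        // URL of the collection written by the crawl topology (assumption)
        this.solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/Docs").build();
    }

    @Override
    public void nextTuple() {
        if (buffer.isEmpty()) {
            fetchBatch();
        }
        SolrDocument doc = buffer.poll();
        if (doc != null) {
            String id = (String) doc.getFieldValue("id");
            // anchor the tuple to the Solr id so failed documents can be replayed
            collector.emit(new Values(id,
                    doc.getFieldValue("url"),
                    doc.getFieldValue("content")), id);
        }
    }

    private void fetchBatch() {
        try {
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(100);
            // cursorMark requires a sort on the uniqueKey field
            q.setSort(SolrQuery.SortClause.asc("id"));
            q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
            QueryResponse rsp = solr.query(q);
            buffer.addAll(rsp.getResults());
            cursorMark = rsp.getNextCursorMark();
        } catch (Exception e) {
            // back off silently; Storm will call nextTuple() again
        }
    }

    @Override
    public void close() {
        try {
            solr.close();
        } catch (Exception e) {
            // ignore on shutdown
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("id", "url", "content"));
    }
}
```

Note that cursorMark only pages through documents matching the original query, so for continuous, no-downtime ingestion you would also want a way to mark documents as processed (e.g. a status field or a timestamp filter) and to re-query periodically; anchoring each tuple to the Solr id at least lets Storm replay documents whose processing failed.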
There might be better approaches depending on your architecture in general and whether the 2nd topology needs to ingest the content of the images. That's beyond the scope of technical questions on StackOverflow though.