Spark/Spark Streaming in production without HDFS

694 Views Asked by smishra

I have been developing applications using Spark/Spark Streaming, but so far I have always used HDFS for file storage. However, I have reached a stage where I am exploring whether it can be done (in production, running 24/7) without HDFS. I tried sifting through the Spark user group but have not found a concrete answer so far. Note that I do use checkpoints and stateful stream processing via updateStateByKey.

There is 1 answer below.
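For context, this is roughly the shape of the stateful part of the job (a simplified sketch, not my actual code; the app name, checkpoint directory, and input source are placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StatefulExample {
      def main(args: Array[String]): Unit = {
        // setMaster is only for local testing; drop it when submitting to a cluster
        val conf = new SparkConf().setAppName("StatefulExample").setMaster("local[2]")
        val ssc = new StreamingContext(conf, Seconds(10))

        // updateStateByKey requires a checkpoint directory; today this points at HDFS
        ssc.checkpoint("hdfs:///checkpoints/stateful-example")

        // Placeholder input; the real job reads from a streaming source
        val lines = ssc.socketTextStream("localhost", 9999)

        // Keep a running count per key across micro-batches
        val updateCount = (newValues: Seq[Int], state: Option[Int]) =>
          Some(newValues.sum + state.getOrElse(0))

        val counts = lines
          .flatMap(_.split(" "))
          .map(word => (word, 1))
          .updateStateByKey[Int](updateCount)
        counts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }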
Depending on the streaming source (I've been using Kafka), you may not need checkpoints at all.

Since Spark 1.3 there is a direct (receiver-less) approach with several benefits.

If you are using Kafka, you can find out more here: https://spark.apache.org/docs/1.3.0/streaming-kafka-integration.html

See Approach 2 (Direct Approach, no receivers).
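As an illustration, a minimal sketch of that direct approach (Spark 1.3 API; the broker address and topic name are placeholders, and the spark-streaming-kafka artifact must be on the classpath):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object DirectKafkaExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("DirectKafkaExample")
        val ssc = new StreamingContext(conf, Seconds(10))

        // No receivers and no write-ahead log: the direct stream tracks Kafka offsets itself
        val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")  // placeholder broker
        val topics = Set("my-topic")                                     // placeholder topic

        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, topics)

        // Each record is a (key, value) pair; process the values per micro-batch
        stream.map(_._2).foreachRDD { rdd =>
          println(s"Batch size: ${rdd.count()} records")
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }

Note that checkpointing is still required if you use stateful operations such as updateStateByKey or want driver failure recovery; the direct approach only removes the receiver-side write-ahead log.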