When running a PySpark job there is significant launch overhead. Is it possible to run 'lightweight' jobs that don't require an external daemon (mainly for testing with small data sets)?
Is it possible to run Spark (specifically PySpark) in process?
1.2k views · Asked by Ophir Yoktan
1 answer below
Update
My answer below is no longer accurate.
There is now the pysparkling project, which provides a pure-Python implementation of the Spark RDD API.
It is still an early version, but it lets you run your PySpark application in pure Python, without a JVM. YMMV, though: the Spark API evolves fast, and pysparkling may not implement all of the latest API.
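As a rough illustration (not from the original answer), a pysparkling-based test might look like the sketch below; it assumes `pip install pysparkling` and sticks to basic RDD operations, since not everything in the Spark API is covered:

```python
# Minimal sketch: pysparkling's Context mirrors a subset of the
# SparkContext API, but runs as plain Python - no JVM is started.
from pysparkling import Context

sc = Context()

rdd = sc.parallelize([1, 2, 3, 4])
squares = rdd.map(lambda x: x * x).collect()
print(squares)  # [1, 4, 9, 16]
```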
I would still use full-fledged PySpark for my tests, to make sure the application works as it should on the target platform, which is Apache Spark.
Previous answer
No, there is no way to run Spark as a single Python process. PySpark is only a thin API layer on top of Scala code, and that code has to run inside a JVM.
My company is a heavy user of PySpark, and we run unit tests for Spark jobs continuously. There is not that much overhead when running Spark jobs in local mode. It does start a JVM, but it is an order of magnitude faster than our old tests for Pig code.
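For reference, a minimal local-mode job only needs a master URL like `local[2]`; this is a sketch, and the app name is just a placeholder:

```python
# Local mode: a JVM is still launched, but no external cluster
# or standing daemon is required.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[2]").setAppName("small-test")
sc = SparkContext(conf=conf)

evens = sc.parallelize(range(10)).filter(lambda x: x % 2 == 0).collect()
print(evens)  # [0, 2, 4, 6, 8]

sc.stop()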
If you have a lot of tasks to run (i.e. many unit tests), you can try to reuse the SparkContext; this will reduce the time spent starting up the JVM for every test case. Keep in mind that in this case you need to clean up after every test case (i.e. unpersist any RDDs your program cached). A sketch of that pattern follows.
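A hedged sketch of context reuse with pytest (the fixture and test names are illustrative, not from the original answer): one session-scoped SparkContext is shared across tests, and each test unpersists whatever it cached:

```python
import pytest
from pyspark import SparkConf, SparkContext


@pytest.fixture(scope="session")
def sc():
    # One SparkContext for the whole test session.
    conf = SparkConf().setMaster("local[2]").setAppName("unit-tests")
    context = SparkContext(conf=conf)
    yield context
    context.stop()


def test_cached_join(sc):
    left = sc.parallelize([(1, "a"), (2, "b")]).cache()
    try:
        right = sc.parallelize([(1, "x")])
        assert left.join(right).count() == 1
    finally:
        # Clean up cached RDDs so the shared context stays clean
        # for the next test case.
        left.unpersist()
```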
In our company we decided to start a new SparkContext for every test case instead, to keep them clean. Running Spark in local mode is fast enough for us, at least for now.