How to terminate workers started by dask multiprocessing scheduler?

After using the dask multiprocessing scheduler for a long period of time, I noticed that the Python processes started by the multiprocessing scheduler take up a lot of memory. How can I restart the worker pool?
Asked by Arco Bast
Update: You can do this to kill the workers started by the multiprocessing scheduler:

```python
from dask.context import _globals

pool = _globals.pop('pool')  # remove the pool from globals to make dask create a new one
pool.close()
pool.terminate()
pool.join()
```

First answer:
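If you need to do this repeatedly, wrapping the pattern in a small helper makes it less error-prone. This is a minimal sketch of that idea; it assumes the same private `_globals` dict shown above (which is internal to dask and may change between versions), and the `reset_multiprocessing_pool` name is my own:

```python
from dask.context import _globals

def reset_multiprocessing_pool():
    """Hypothetical helper: terminate dask's cached multiprocessing pool.

    Popping the pool out of dask's private ``_globals`` dict forces the
    multiprocessing scheduler to build a fresh pool on its next call,
    releasing the memory held by the old worker processes.
    """
    pool = _globals.pop('pool', None)  # None if no pool has been created yet
    if pool is not None:
        pool.close()
        pool.terminate()
        pool.join()
```

You could call this between large computations to release memory that the worker processes have accumulated.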
For tasks that consume a lot of memory, I prefer to use the distributed scheduler, even on localhost. It's very straightforward: start a scheduler and some workers, then use the distributed.Client class to submit your jobs:

```python
In [1]: from distributed import Client

In [2]: client = Client('1.2.3.4:8786')

In [3]: client
<Client: scheduler="127.0.0.1:61829" processes=8 cores=8>

In [4]: from distributed.diagnostics import progress

In [5]: import dask.bag

In [6]: data = dask.bag.range(10000, 8)

In [7]: data
dask.bag

In [8]: future = client.compute(data.sum())

In [9]: progress(future)
[########################################] | 100% Completed | 0.0s

In [10]: future.result()
49995000
```

I found this way more reliable than the default scheduler. I prefer to explicitly submit the task and handle the future so I can use the progress widget, which is really nice in a notebook. You can also still do other things while waiting for the results.
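If you don't want to start a scheduler and workers by hand, calling `Client()` with no address spins up a local cluster for you. A minimal sketch, assuming a reasonably recent version of distributed:

```python
from distributed import Client
import dask.bag

if __name__ == '__main__':  # guard needed because workers run in separate processes
    # With no address, Client() starts a LocalCluster: a scheduler plus
    # worker processes on this machine, so no separate shell commands are needed.
    client = Client()

    data = dask.bag.range(10000, 8)
    future = client.compute(data.sum())
    print(future.result())  # 49995000

    client.close()
```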
If you get errors due to memory issues, you can restart the workers or the scheduler (i.e. start all over again), use smaller chunks of data, and try again.
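With the distributed scheduler, restarting the workers doesn't require killing anything by hand; the client can do it for you. A short sketch, assuming a `client` connected as above:

```python
# Restart all workers: the worker processes are killed and fresh ones are
# brought back up with empty memory. Note that in-flight computations are lost.
client.restart()
```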