If I want to remap processes to cores for an MPI program, can I migrate processes after they have been spawned? For example: node 1 has P0, P3, P6 and node 2 has P1, P4, P7. Can I migrate P1 to node 1? Research papers on topology-aware MPI suggest remapping, which hints at picking a process and placing it on whichever node gives the best result. Is this possible to do?
Is it possible to migrate one process from one core of a node to another core of another node in MPI?
Asked by optimus_Prime
There are 2 best solutions below
Answer by Ben Michalowicz:
To go off of what Victor said:
MPI libraries (MPICH, Open MPI, MVAPICH2, etc.) do allow you to place processes manually via a hostfile and/or mpirun flags. Before choosing the "best" process mapping for your application, profile it with a tool such as TAU and inspect the communication matrix (see tau.uoregon.edu for documentation).
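As a concrete illustration of hostfile-and-flags placement, here is a sketch using Open MPI's rankfile mechanism. The hostnames (`node1`, `node2`) and the application name (`./my_app`) are placeholders, and the exact flag names vary across MPI implementations and versions (e.g. Open MPI 5.x uses `--map-by rankfile:file=...` instead of `--rankfile`), so treat this as an example of the idea rather than a universal recipe:

```shell
# Hostfile: which nodes are available and how many slots each provides
cat > hostfile <<'EOF'
node1 slots=4
node2 slots=4
EOF

# Rankfile: pin each rank to a specific node and core.
# Here rank 1 is placed on node1 alongside rank 0, instead of
# letting the default round-robin mapping send it to node2.
cat > rankfile <<'EOF'
rank 0=node1 slot=0
rank 1=node1 slot=1
rank 2=node2 slot=0
EOF

mpirun -np 3 --hostfile hostfile --rankfile rankfile ./my_app
```

Note that this decides placement once, at launch time; it does not move an already-running process.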
No. MPI does not have any migration functionality. Topology-aware MPI (which, as you remark, is pretty much research-level, not production) uses knowledge of how the application communicates to map ranks to nodes. Normally ranks are placed on successive nodes; if you know which ranks communicate frequently, they can be mapped closer together.
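The closest thing the MPI standard itself offers is creation-time reordering through its topology interface: `MPI_Cart_create` (or `MPI_Dist_graph_create`) called with `reorder = 1` permits the implementation to assign ranks new numbers that better match the hardware. This happens once, when the communicator is created, and is not migration of a running process. A minimal sketch (the ring topology here is just an example of a communication pattern):

```c
/* Build a 1-D periodic ring topology and allow the MPI implementation
 * to reorder ranks to fit the hardware. Compile with: mpicc -o ring ring.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int world_rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* 1-D ring: each rank mostly talks to its two neighbors */
    int dims[1]    = { size };
    int periods[1] = { 1 };       /* periodic: wrap around */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods,
                    /* reorder = */ 1, &cart);

    /* With reorder=1 this rank MAY differ from world_rank */
    int cart_rank;
    MPI_Comm_rank(cart, &cart_rank);
    printf("world rank %d -> cart rank %d\n", world_rank, cart_rank);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```

In practice many implementations ignore the reorder hint, so the placement controls in the hostfile/rankfile answer above are usually the more reliable lever.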