Define specific spark executors per worker on a Spark cluster of 3 worker nodes


I have a Spark cluster of 3 servers (1 worker per server = 3 workers). The resources are essentially identical across servers (70 cores and 386GB of RAM each).

I also have an application that I spark-submit with 120 cores and 200GB of RAM (24 executors).
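For context, the submit command is roughly of the following shape (a sketch, not the exact command; the 5 cores and 8g per executor, the class name, and the master URL are placeholders derived from the totals above):

    # Roughly what the submit looks like: 24 executors x 5 cores = 120 cores,
    # 24 executors x ~8g ≈ 200GB of executor memory (approximated from the totals above)
    spark-submit \
      --master spark://my-master-host:7077 \
      --total-executor-cores 120 \
      --executor-cores 5 \
      --executor-memory 8g \
      --class com.example.MyApp \
      my-app.jar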

When I submit this app, the cluster manager (standalone) assigns all executors to the first two workers and leaves the third worker idle, with no executors placed on it.

I want to assign a specific number of executors to each worker rather than let the cluster manager (YARN, Mesos, or standalone) decide, because with the current placement the load on those two servers is extremely high, leading to 100% disk utilization, disk I/O issues, etc.
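The only workaround I can think of in standalone mode is to cap the cores each worker advertises via SPARK_WORKER_CORES in spark-env.sh, so that a 120-core request cannot fit on only two workers. A minimal sketch, assuming a 40-core cap per worker (the exact numbers are mine, not a tested config):

    # conf/spark-env.sh on each of the 3 servers (assumed values, not my current config)
    # With 40 cores advertised per worker, the 120-core request has to span all 3 workers:
    # 40 cores / 5 cores per executor = 8 executors on each worker.
    export SPARK_WORKER_CORES=40
    export SPARK_WORKER_MEMORY=100g   # likewise cap memory well below the 386GB physical RAM

The obvious downside is that this cap applies to the worker, not to a single application, so it limits every job on the cluster. That is why I would prefer a proper scheduler-level setting.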

  • Spark version: 2.4.4
  • Cluster manager: Standalone (would YARN solve my issue?)
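
If it matters, the one related knob I'm aware of on the standalone master is spark.deploy.spreadOut (default true, which is supposed to spread an application across nodes rather than consolidate it). A sketch of forcing it explicitly, assuming it is set through spark-env.sh on the master host:

    # conf/spark-env.sh on the master host (this property applies only to the standalone master)
    # spark.deploy.spreadOut=true asks the master to spread an application's cores
    # across workers instead of packing them onto as few workers as possible.
    export SPARK_MASTER_OPTS="-Dspark.deploy.spreadOut=true"

Since true is already the default, I don't expect this alone to fix the placement, but I mention it for completeness.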

I searched everywhere without any luck.
