I have a cluster of machines with 8 cores each.
In the application configuration, I set each executor to use 3 cores.
But when I submit the job, I observe that only one executor is spun up on each physical machine. Ideally there would be 2 executors per machine (leaving 2 cores for the node manager).
I don't see that Spark offers a configuration that controls the number of executors on a single physical node.
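For reference, this is roughly how I'm requesting the cores at submit time (the app name, jar path, and memory value here are just placeholders, not my exact settings):

```shell
# Ask YARN for executors with 3 cores each; memory shown is illustrative.
spark-submit \
  --master yarn \
  --executor-cores 3 \
  --executor-memory 4g \
  my-app.jar
```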
By the way, this is a Dataproc cluster on GCP.
Could you please help me stop this from happening, so I can increase machine utilisation?
Thanks in advance!