I am deploying a Spark application on a standalone cluster with one master and two slaves. To test the cluster, I copied the application .jar to the same location on every node. I have observed the following issues:
On the master:
bin/spark-submit --class ***** --master spark://master:6066 --conf spark.driver.userClassPathFirst=true --deploy-mode cluster --executor-memory 1g --executor-cores 1 ******.jar
Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'Driver' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'Driver' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
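This BindException usually means the driver cannot bind to the address the machine's hostname resolves to, which is common on EC2 when the hostname maps to the public IP. A possible fix (a sketch, assuming that is the cause here) is to point SPARK_LOCAL_IP at the instance's private IP in conf/spark-env.sh on the master:

# conf/spark-env.sh -- 172.31.0.10 is a placeholder; use your instance's private IP
export SPARK_LOCAL_IP=172.31.0.10

After restarting Spark, the 'Driver' service should be able to bind.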
On slave1:
bin/spark-submit --class ***** --master spark://master:6066 --conf spark.driver.userClassPathFirst=true --deploy-mode cluster --executor-memory 1g --executor-cores 1 ******.jar
The job executes.
On slave2:
bin/spark-submit --class ***** --master spark://master:6066 --conf spark.driver.userClassPathFirst=true --deploy-mode cluster --executor-memory 1g --executor-cores 1 ******.jar
The job executes.
But if I submit more than one job from a slave, only the first job executes.
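In standalone mode an application grabs all available cores unless spark.cores.max is set (spark.deploy.defaultCores is unlimited by default), so the first job starves every later submission. A minimal sketch of capping each job so two can run side by side, assuming each worker has at least two cores (the numbers are illustrative):

bin/spark-submit --class ***** --master spark://master:6066 --deploy-mode cluster --conf spark.cores.max=1 --executor-memory 512m --executor-cores 1 ******.jar

With each application limited to one core, a second submission no longer has to wait for the first to finish. Note that in cluster deploy mode the driver itself also occupies a core and some memory on a worker.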
On the master:
bin/spark-submit ******.jar --class ******
The job executes, occupies all resources on both slaves, and ignores the remaining parameters.
However, if I put the jar at the end of the command, the first three scenarios described above occur.
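The ordering matters because spark-submit treats everything after the application jar as arguments to the main class, so flags placed after the jar are passed to the application instead of being applied. The options must precede the jar:

bin/spark-submit --class ****** --master spark://master:6066 --deploy-mode cluster --executor-memory 1g --executor-cores 1 ******.jar arg1 arg2

Here arg1 and arg2 (placeholders) are delivered to the main method of the class named by --class.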
I configured the cluster on AWS EC2 instances following http://spark.praveendeshmane.co.in/spark/spark-1-6-1-cluster-mode-installation-on-ubuntu-14-04.jsp.
I want to execute multiple jobs simultaneously.
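If capping every submission by hand is error-prone, one option (a sketch for Spark 1.6 standalone, as in the linked guide) is to set a cluster-wide default on the master via spark.deploy.defaultCores, which limits any application that does not set spark.cores.max itself:

# conf/spark-env.sh on the master -- 2 cores per app is an assumed value; tune to your instances
export SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=2"

Restart the master for the setting to take effect; jobs submitted without spark.cores.max will then share the cluster instead of the first one taking everything.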