Make Akka spawn a process for the remote node


I'm kind of confused about what responsibility Akka takes on when creating an actor system. I want a simple application with a parent and two child actors, where each child resides in a different process (and therefore on a different node). Now I know I can use a router with remote config, or just start a remote actor, but (correct me if I'm wrong) when creating this remote actor, Akka expects that the process already exists and that a node is already running in it; Akka then only deploys the child actor to that node. Isn't there any way to make Akka do the spawning for us?

This is the code; it doesn't work because I haven't created the child process myself:

application.conf:

akka {
  remote.netty.tcp.port = 2552
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
}

child {
  akka {
    remote.netty.tcp.port = 2550
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
  }
}

Parent.scala:

import akka.actor.{Actor, ActorLogging, ActorSystem, Address, Deploy, Props}
import akka.remote.RemoteScope

object Parent extends App {
  val system = ActorSystem("mySys")
  system.actorOf(Props[Parent], "parent")
}

class Parent extends Actor with ActorLogging {

  override def preStart(): Unit = {
    super.preStart()
    val address = Address("akka.tcp", "mySys", "127.0.0.1", 2550)
    context.actorOf(Props[Child].withDeploy(Deploy(scope = RemoteScope(address))), "child")
  }

  override def receive: Receive = {
    case x => log.info(s"Got msg $x")
  }
}

and Child.scala:

class Child extends Actor with ActorLogging {
  override def receive: Receive = {
    case x => // Ignore
  }
}

But if I run this main in Child.scala right after running the main in Parent.scala:

import akka.actor.{Actor, ActorLogging, ActorSystem}
import com.typesafe.config.ConfigFactory

object Child extends App {
  ActorSystem("mySys", ConfigFactory.load().getConfig("child"))
}

class Child extends Actor with ActorLogging {
  override def receive: Receive = {
    case x => // Ignore
  }
}

Then the node will connect.

If there isn't any way of doing that, then how can Akka restart that process/node when the process crashes?


2 Answers


You are responsible for creating, monitoring and restarting actor systems. Akka is only responsible for actors within those actor systems.
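For instance, a minimal external supervisor can be a plain JVM process that respawns the child node whenever it exits abnormally. This is just a sketch; the command line you pass in is a placeholder for however you actually launch your child ActorSystem (e.g. `java -cp app.jar Child`):

```scala
import scala.sys.process._

object ChildSupervisor {
  // Runs the given command and restarts it on non-zero exit,
  // up to maxRestarts times. Returns the final exit code.
  def superviseChild(cmd: Seq[String], maxRestarts: Int): Int = {
    var restarts = 0
    var exit = Process(cmd).run().exitValue() // blocks until the process dies
    while (exit != 0 && restarts < maxRestarts) {
      restarts += 1
      exit = Process(cmd).run().exitValue()   // respawn the crashed node
    }
    exit
  }
}
```

A real supervisor would typically add a backoff delay between restarts so a fast-crashing node doesn't spin in a tight loop.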


Not only is this not possible with Akka; in general, no process can simply spawn a new process on a different machine. Think about the security implications if that were possible! You always need some existing process on the target machine that spawns the new process for you, such as sshd or some resource/cluster manager.

So passwordless SSH plus a shell script is the typical way to start worker processes, e.g. in Hadoop, Spark, and Flink (the latter two using Akka under the hood, by the way).
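As a rough sketch of that approach, the SSH launch command can be built and run from plain Scala. The host name, jar path, and main class below are hypothetical placeholders for your own environment, and running it requires key-based (passwordless) SSH to the target host:

```scala
import scala.sys.process._

object RemoteLauncher {
  // Builds the ssh command that starts the child JVM on a remote host.
  // All arguments are placeholders -- adapt them to your deployment.
  def remoteStartCommand(host: String, jar: String, mainClass: String): Seq[String] =
    Seq("ssh", host, s"java -cp $jar $mainClass")
}

// To actually launch (not run here):
// Process(RemoteLauncher.remoteStartCommand("worker1", "/opt/app/app.jar", "Child")).run()
```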