I'm kind of confused about what responsibility Akka takes on when creating an actor system. I want a simple application with a parent and two child actors, where each child resides in a different process (and therefore on a different node). Now I know I can use a router with remote config or just start a remote actor, but (and correct me if I'm wrong) when creating this remote actor, Akka expects that the process already exists and that a node is already running in that process; it only deploys the child actor to that node. Isn't there any way to make Akka do the spawning for us?
This is the code that isn't working, because I haven't created the child's process myself:
application.conf:
akka {
  remote.netty.tcp.port = 2552
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
}

child {
  akka {
    remote.netty.tcp.port = 2550
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
  }
}
Parent.scala:
import akka.actor.{Actor, ActorLogging, ActorSystem, Address, Deploy, Props}
import akka.remote.RemoteScope

object Parent extends App {
  val system = ActorSystem("mySys")
  system.actorOf(Props[Parent], "parent")
}

class Parent extends Actor with ActorLogging {
  override def preStart(): Unit = {
    super.preStart()
    val address = Address("akka.tcp", "mySys", "127.0.0.1", 2550)
    context.actorOf(Props[Child].withDeploy(Deploy(scope = RemoteScope(address))), "child")
  }

  override def receive: Receive = {
    case x => log.info(s"Got msg $x")
  }
}
and Child.scala:
import akka.actor.{Actor, ActorLogging}

class Child extends Actor with ActorLogging {
  override def receive: Receive = {
    case x => // ignore
  }
}
But if I run this main inside Child.scala right after running the main in Parent.scala:
import akka.actor.{Actor, ActorLogging, ActorSystem}
import com.typesafe.config.ConfigFactory

object Child extends App {
  ActorSystem("mySys", ConfigFactory.load().getConfig("child"))
}

class Child extends Actor with ActorLogging {
  override def receive: Receive = {
    case x => // ignore
  }
}
Then the node will connect.
If there isn't any way of doing that, then how can Akka restart that process/node when the process crashes?
You are responsible for creating, monitoring, and restarting actor systems; Akka is only responsible for the actors within those actor systems. Starting and supervising the JVM processes that host your nodes is a job for you or an external process supervisor (e.g. systemd, a container orchestrator, or a script), not for Akka remoting itself.
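What Akka *can* do is tell the parent when the remote node goes away, via remote deathwatch: if the parent watches its remotely deployed child, it receives a `Terminated` message when the child's host process dies, and can then trigger whatever external restart mechanism you use. Below is a minimal sketch of that idea, reusing the `Parent`/`Child` names and the address from the question; the reaction inside the `Terminated` case is an assumption about your setup, not something Akka provides.

```scala
import akka.actor.{Actor, ActorLogging, Address, Deploy, Props, Terminated}
import akka.remote.RemoteScope

// Sketch: the parent deploys the child remotely and watches it. When the
// remote node (the child's host process) dies, the parent gets Terminated.
// Restarting that process is up to you / an external supervisor; once it is
// back up, the parent can redeploy the child the same way as in preStart.
class Parent extends Actor with ActorLogging {
  override def preStart(): Unit = {
    super.preStart()
    val address = Address("akka.tcp", "mySys", "127.0.0.1", 2550)
    val child = context.actorOf(
      Props[Child].withDeploy(Deploy(scope = RemoteScope(address))), "child")
    context.watch(child) // remote deathwatch
  }

  override def receive: Receive = {
    case Terminated(ref) =>
      // Hypothetical reaction: log and hand off to whatever restarts the node.
      log.warning(s"Remote child $ref terminated; its host process must be restarted externally")
    case x => log.info(s"Got msg $x")
  }
}
```

Note that `Terminated` only signals the failure; if you want the parent's JVM to also *launch* the child's JVM, you'd have to do that yourself (for instance with `scala.sys.process` or a process-management tool) before redeploying.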