Elasticsearch Unicast Weird Behavior in Clustering


I have two nodes, each of which forms its own single-node cluster:

0.0.0.0:9200 (elasticsearch)
0.0.0.0:9201 (test-1)

The node at 9200 is in cluster elasticsearch (presumably the default cluster.name); the node at 9201 is in cluster test-1. (Also, in case it matters, I bind network.host of both nodes to 0.0.0.0.)

I want to join a new node to test-1. When I leave the discovery.zen.ping.unicast.hosts setting commented out, the new node joins test-1 successfully. However, when I set it to something else, e.g. ["0.0.0.0"] or ["127.0.1"], it fails to join...
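For context, the new node's elasticsearch.yml looks roughly like the sketch below. All names, IPs, and ports here are placeholders, and one detail worth noting: unicast discovery connects to the transport port (the 9300 range by default), not the HTTP port (9200/9201), and with two nodes on one machine the test-1 node's transport port is likely 9301.

```yaml
# elasticsearch.yml of the new node (sketch; values are assumptions)
cluster.name: test-1                 # must match the target cluster
node.name: new-node                  # hypothetical name
network.host: 0.0.0.0
# Unicast discovery targets the TRANSPORT address of an existing node,
# so a host:port entry would point at the 9300 range, not 9201:
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9301"]
```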

Joining a new node to elasticsearch works fine: ["0.0.0.0"], ["127.0.1"], and ["IP"] all worked. (But ["0.0.0.0", "ANOTHER-IP"] failed... please address this as well if possible.)

What causes this joining issue? Has anybody experienced problems like this?


1 Answer


discovery.zen.ping.unicast.hosts should list the addresses of all the nodes in the cluster. Do this on every node, and use real IPs, not 0.0.0.0 or 127.0.0.1: 0.0.0.0 is a bind-all address, not an address other nodes can connect to.

Since your new node is trying to join the test-1 cluster, you can also try changing the new node's port to 9201 and see if it joins.

The minimal settings required to form a cluster:

  1. The same cluster.name on every node
  2. A different node.name for each node
  3. discovery.zen.ping.unicast.hosts - the IPs of all the nodes in the cluster
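The checklist above can be sketched in elasticsearch.yml as follows, assuming two machines at hypothetical addresses 192.168.1.10 and 192.168.1.11:

```yaml
# Node 1 (192.168.1.10) -- elasticsearch.yml (sketch; IPs are assumptions)
cluster.name: test-1
node.name: node-1
network.host: 192.168.1.10
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11"]
---
# Node 2 (192.168.1.11) -- identical except node.name and network.host
cluster.name: test-1
node.name: node-2
network.host: 192.168.1.11
discovery.zen.ping.unicast.hosts: ["192.168.1.10", "192.168.1.11"]
```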

gateway.recover_after_nodes and discovery.zen.minimum_master_nodes: comment these lines out on all nodes of the cluster if they are not already.

Lastly, check your firewall settings and temporarily disable the firewall if necessary, then check whether the nodes can actually reach each other.
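A quick way to check that one node can reach another on the transport port (9300 by default) is a small TCP probe; this is only a sketch, and the IPs in the commented example are placeholders:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage: probe the transport port of each node listed in
# discovery.zen.ping.unicast.hosts from the machine that fails to join.
# for ip in ["192.168.1.10", "192.168.1.11"]:   # placeholder node IPs
#     print(ip, can_connect(ip, 9300))
```

If the probe fails while Elasticsearch is running on the target, the firewall (or a wrong bind address) is blocking the transport port.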