By mistake, two instances of ES were launched (on the same port, 9200)
on at least 4 servers of an 8-node cluster. That happened while
multiple processes were indexing docs.
After shutting down these extra instances and restarting the cluster,
the state was RED, as 2 shards were unassigned. Today I restarted
all nodes one by one and the cluster health went green.
From my tests on localhost, new ES instances running on the same
machine form a cluster like any other node and reallocate shards among
them. Is this correct?
If so, I understand that shutting down ES instances would act as if a
node was stopped, regardless of whether some instances are running on
the same server. Correct?
If not... is there any chance that information stored prior to the
problem could be lost/corrupted because of shard reallocation between
these multiple instances of ES?
Thanks for your help, and patience too (my English is far from good).
> After shutting down these extra instances and restarting the cluster,
> the state was RED, as 2 shards were unassigned.
How did you shut them down? One by one, until the cluster reached green?
E.g. if you shut down two instances which hold the two replicas of a
shard (and you only have two), that shard cannot be reassigned ...
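You can see this in the cluster health output; a minimal check, assuming a node answers on localhost:9200:

    # overall status (red/yellow/green) and the unassigned_shards count
    curl -s 'http://localhost:9200/_cluster/health?pretty'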
> From my tests on localhost, new ES instances running on the same
> machine form a cluster like any other node and reallocate shards among
> them. Is this correct?
Yes, but not only on the same machine ... you can avoid that via the
cluster name.
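For example, giving each cluster a distinct name in elasticsearch.yml keeps stray instances from joining it; a minimal sketch (the name itself is just a placeholder):

    # elasticsearch.yml: nodes only join a cluster whose name matches
    cluster.name: my-production-cluster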
> If so, I understand that shutting down ES instances would act as if a
> node was stopped, regardless of whether some instances are running on
> the same server. Correct?
What do you mean here?
> If not... is there any chance that information stored prior to the
> problem could be lost/corrupted because of shard reallocation between
> these multiple instances of ES?
> How did you shut them down? One by one, until the cluster reached green?
> E.g. if you shut down two instances which hold the two replicas of a
> shard (and you only have two), that shard cannot be reassigned ...
Actually I shut them all down, one by one (i.e. kill $PID), and
restarted them again one by one until the status reached green.
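For reference, the wait-for-green step between restarts can be scripted against the cluster health API; a minimal sketch, assuming a node answers on localhost:9200:

    # block until the cluster reports green, or give up after 120s
    curl -s 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=120s'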
> Yes, but not only on the same machine ... you can avoid that via the
> cluster name.
Yep, that makes sense.
>> If so, I understand that shutting down ES instances would act as if a
>> node was stopped, regardless of whether some instances are running on
>> the same server. Correct?
>
> What do you mean here?
Sorry if it was not very clear. I just meant that if I have two ES
instances running on the same machine, both part of the same cluster,
and I shut down one of them, that would be the same as shutting down
an ES instance running on a different node. From your answers I think
the answer is yes.
>> If not... is there any chance that information stored prior to the
>> problem could be lost/corrupted because of shard reallocation between
>> these multiple instances of ES?
> From my tests on localhost, new ES instances running on the same
> machine form a cluster like any other node and reallocate shards among
> them. Is this correct?
Yes. Though you can prevent more than one node from running on a machine
by setting node.max_local_storage_nodes to 1.
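A minimal elasticsearch.yml sketch of that setting:

    # allow at most one node per data directory on this machine
    node.max_local_storage_nodes: 1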
> If so, I understand that shutting down ES instances would act as if a
> node was stopped, regardless of whether some instances are running on
> the same server. Correct?
Yes. So, if you brought down the nodes one by one and waited for a green
state between each shutdown, then all shards would end up migrated to the
remaining nodes.
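If you want to verify where the shards ended up, the _cat/shards API (on recent ES versions) lists each shard and the node holding it; a minimal sketch, assuming a node answers on localhost:9200:

    # one line per shard: index, shard number, primary/replica, state, node
    curl -s 'http://localhost:9200/_cat/shards?v'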
> If not... is there any chance that information stored prior to the
> problem could be lost/corrupted because of shard reallocation between
> these multiple instances of ES?