We observed incorrect routing from TransportClient when we scale up a
cluster. For example, suppose we have two ES clusters, es0 and es1, where
es_sink_0 is the TransportClient talking to es0 and es_sink_1 is the one
talking to es1. If we scale up es1, it can happen that es_sink_0 starts
sending data to es1. We are using client.transport.sniff=true by default.
In theory this should not happen, because TransportClient refreshes its
server list by communicating with the cluster, and nodes from the wrong
cluster should not be added to that list.
Has anybody seen this problem before? Any comments would be greatly
appreciated. We haven't found the root cause yet, but this is a really
serious problem. So, as a temporary measure, I want to turn off sniffing
and add a feature that manually updates the server list through an
external discovery module.
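A minimal sketch of that workaround against the 1.x-era TransportClient API (the host name and cluster name here are placeholders; in practice the external discovery module would supply the real addresses):

```java
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class PinnedSink {
    public static TransportClient buildClient() {
        // Pin the client to one cluster and disable sniffing, so the
        // server list contains only the addresses we add explicitly.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "es0")            // reject nodes from other clusters
                .put("client.transport.sniff", false)  // no automatic node discovery
                .build();

        TransportClient client = new TransportClient(settings);
        // Seed address; subsequent updates would come from the external
        // discovery module via addTransportAddress/removeTransportAddress.
        client.addTransportAddress(
                new InetSocketTransportAddress("es0-node1.example.com", 9300));
        return client;
    }
}
```

With sniffing off, the client never mutates its own node list, so a scale-up of another cluster cannot leak into it; the trade-off is that the discovery module must now call addTransportAddress/removeTransportAddress itself whenever membership changes.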
I already did, since that's the default, right? Also, I am seeing the
following, which shows the cluster name is not being ignored:
2014-12-19 06:58:08,187 WARN elasticsearch[Sun Girl][generic][T#49358] transport - [Sun Girl] node null not part of the cluster Cluster [es_logsummary], ignoring...