I have found that in a tribe node (client) setup, if one of the clusters accessed via the tribe node has more than a certain number of indices, the tribe node loses the ability to search that cluster.
My Setup:
2-node ES cluster indexing Twitter data
1-node Logstash cluster indexing logs from applications
1-node tribe setup
Version of ES: 1.5.2
Version of Java: 1.7.0_76
Running on Linux: 2.6.32-431.5.1.el6.x86_64
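For context, the tribe node's elasticsearch.yml is along these lines; the tribe-entry names, cluster names, and hosts below are placeholders for illustration, not my exact config:

```yaml
# elasticsearch.yml on the tribe node. Each entry under "tribe" spins up
# an internal client node that joins the named cluster.
tribe:
  twitter:
    cluster.name: twitter-cluster
    discovery.zen.ping.unicast.hosts: ["twitter-node-1", "twitter-node-2"]
  logs:
    cluster.name: logstash-cluster
    discovery.zen.ping.unicast.hosts: ["logstash-node-1"]
```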
The Logstash cluster had 67 indices covering the past 67 days (a rolling index, one per day).
At some point my tribe node lost its "connection" to the Logstash ES cluster. The Logstash cluster can still see the tribe node as a client node, but the tribe node is not able to get any information back. Is this because of corrupt data, or is there some built-in limit to using tribe nodes?
If it is data corruption, how can I go about finding out what might be wrong?
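For reference, this is roughly how I am comparing the two views; a minimal sketch, and the hostnames are placeholders for my nodes:

```python
# Compare what the tribe node reports with what the Logstash cluster
# reports directly. Hostnames below are placeholders.
import urllib.request

def cat_indices(host, port=9200):
    """Return the _cat/indices listing from the given node."""
    url = "http://%s:%d/_cat/indices?v" % (host, port)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

print(cat_indices("tribe-node"))      # what the tribe node can see
print(cat_indices("logstash-node"))   # what the cluster itself reports
```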
Nothing. That's the other thing: the tribe node's logs never show anything, not even on startup (all log settings are the defaults).
I did one follow-up experiment to see whether it's the number of indices or something else. I created 60 or so indices with one dummy record in each. The tribe node was able to see all of the indices, and I was able to query the tribe node and get back the dummy record from each index (see the sketch below).
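The experiment was along these lines; hosts, index names, and the dummy document are made up for illustration:

```python
# Create ~60 indices with one dummy document each, then verify that each
# one is visible and searchable through the tribe node. Hostnames and the
# document body are illustrative only. On ES 1.x, hits.total is an integer.
import json
import urllib.request

def put_doc(host, index, doc_id, body):
    """Index a document; refresh=true makes it searchable immediately."""
    req = urllib.request.Request(
        "http://%s:9200/%s/doc/%s?refresh=true" % (host, index, doc_id),
        data=json.dumps(body).encode("utf-8"),
        method="PUT",
    )
    return urllib.request.urlopen(req).read()

# One dummy record into each of 60 daily-style indices.
for i in range(60):
    put_doc("test-cluster-node", "test-%02d" % i, "1", {"marker": i})

# Query each index back through the tribe node.
for i in range(60):
    url = "http://tribe-node:9200/test-%02d/_search?q=marker:%d" % (i, i)
    hits = json.loads(urllib.request.urlopen(url).read())["hits"]["total"]
    print("test-%02d: %s hit(s)" % (i, hits))
```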
On the other, "corrupt" cluster, I deleted a whole lot of indices (going from about 60 down to about 10), and the tribe node still could not see the cluster (though the cluster could still see the tribe node).
Sorry for the late response on this thread. Here is an update:
Some basic ES index state got corrupted, and the tribe node would not respond properly. However, once I wiped out the data directory and restarted, the tribe node responded beautifully. The corruption was probably the result of my experimenting with various recovery settings and not getting them right all the way through; instead of starting fresh with every new experiment, I kept going with whatever state was already there.
In any case, I seem to have resolved the problem now and have not had that issue for a while.
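For anyone hitting something similar, the reset itself was nothing fancier than stopping the node, clearing out its data directory, and starting it again. A minimal sketch, assuming a service-managed install; the data path and service name are placeholders, so check path.data in your elasticsearch.yml first:

```python
# Stop the node, wipe its local state, and restart it. The data path and
# service name are placeholders for my setup.
import os
import shutil
import subprocess

DATA_DIR = "/var/lib/elasticsearch"  # placeholder: use your path.data

subprocess.check_call(["service", "elasticsearch", "stop"])
# Remove everything under the data directory but keep the directory itself,
# so ownership and permissions stay intact for the restarted node.
for entry in os.listdir(DATA_DIR):
    path = os.path.join(DATA_DIR, entry)
    if os.path.isdir(path):
        shutil.rmtree(path)
    else:
        os.remove(path)
subprocess.check_call(["service", "elasticsearch", "start"])
```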