Hi,
I have two clusters running ES 1.7.5; one has 15 nodes and the other has 12. Each cluster has one index with 20 primary shards, with only small differences in data size. Both clusters run the same queries and hold the same data (structure), yet on one cluster filter cache evictions are around 20-30, while on the other they are 600,000-900,000.
Is there any explanation for such extremely high numbers on the second cluster?
Very hard to tell with this amount of information. Have you checked whether those filter cache evictions are happening on all nodes or only on specific ones? Check the node stats for that.
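For example, something along these lines should print per-node filter cache eviction counts from the node stats API (a minimal sketch in Python, assuming the cluster is reachable on localhost:9200; adjust the host as needed):

```
# Minimal sketch: print per-node filter cache stats from the node stats API.
# Assumes the cluster is reachable on localhost:9200.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/indices")
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    fc = node["indices"]["filter_cache"]
    print(node.get("name", node_id),
          "evictions:", fc["evictions"],
          "memory:", fc["memory_size_in_bytes"])
```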
What do you mean by the same data? Same mapping?
Hi,
I checked; this happens on all nodes, and all those high numbers are really strange.
By same data I mean exactly the same mapping and structure; one cluster covers one geographical region and the other cluster covers a second region.
What could be the problem?
Same hardware? Same filter cache size configuration on each cluster/node? Same configuration settings in general? Same queries as well? Same number of open connections? Anything suspicious in the logs?
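To rule out the cache size question, you could dump the setting from both clusters and compare (a rough sketch; in 1.x the relevant setting is indices.cache.filter.size, which defaults to 10% of the heap when not set explicitly):

```
# Minimal sketch: print each node's configured filter cache size.
# indices.cache.filter.size defaults to 10% of the heap if not set in elasticsearch.yml.
import requests

resp = requests.get("http://localhost:9200/_nodes/settings?flat_settings=true")
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    size = node.get("settings", {}).get("indices.cache.filter.size",
                                        "(default, 10% of heap)")
    print(node.get("name", node_id), "indices.cache.filter.size:", size)
```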
Same AWS instances, same hardware, same Elasticsearch configuration. Everything is the same! Same queries, and nothing special in the logs, except that due to this issue scan operations with scroll fail with a timeout.
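For reference, a scan with scroll in 1.x looks roughly like this (a simplified sketch; the host, index name, query, and 5m keep-alive are placeholders, not our exact values):

```
# Simplified scan/scroll sketch against ES 1.x; host, index, query and the
# 5m keep-alive are placeholders.
import requests

BASE = "http://localhost:9200"

# Open a scan-type scroll (search_type=scan is specific to 1.x).
resp = requests.post(BASE + "/my_index/_search",
                     params={"search_type": "scan", "scroll": "5m", "size": 500},
                     json={"query": {"match_all": {}}})
resp.raise_for_status()
scroll_id = resp.json()["_scroll_id"]

while True:
    page = requests.post(BASE + "/_search/scroll",
                         params={"scroll": "5m", "scroll_id": scroll_id})
    page.raise_for_status()
    body = page.json()
    hits = body["hits"]["hits"]
    if not hits:
        break
    scroll_id = body["_scroll_id"]
    # process hits here
```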