Can you please let us know what parameters I should improve, and how?
I frequently get a "too many open files" error, and the cluster shuts down after that.
Right now open file descriptors are at 86 percent. I fear this figure will keep increasing and the cluster will go down.
Heap allocated is 16 GB. Why does it show ram.percent as 61? Shouldn't it be 50%?
What is segment count? Does it matter for performance? How can I improve this factor?
You seem to have a fair share of open file descriptors in use, which usually means a lot of open files, which in turn might mean you have too many shards. That said, the best thing would be to configure the file descriptor limit to be unbounded, but that might just be hiding another problem.
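If you want to see exactly where each node stands, here is a minimal sketch, assuming Python 3 with the `requests` package and a node reachable at http://localhost:9200 without authentication; it reads the open and maximum file descriptor counts from the nodes stats API (the same figures behind the percentage you are watching):

```python
# Sketch only: assumes Python 3 with the 'requests' package and an
# Elasticsearch node reachable at http://localhost:9200 without auth.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/process")
resp.raise_for_status()

for node in resp.json()["nodes"].values():
    proc = node["process"]
    open_fds = proc["open_file_descriptors"]
    max_fds = proc["max_file_descriptors"]
    print(f"{node['name']}: {open_fds}/{max_fds} open file descriptors "
          f"({100.0 * open_fds / max_fds:.0f}%)")
```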
RAM here includes memory used by other processes; it is just reporting operating system stats.
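To see the two different figures side by side, here is a small sketch under the same assumptions (`requests`, http://localhost:9200) that compares the operating-system memory percentage with the JVM heap percentage per node:

```python
# Sketch only: same assumptions as above (requests, http://localhost:9200).
# Field names may differ slightly between Elasticsearch versions.
import requests

resp = requests.get("http://localhost:9200/_nodes/stats/os,jvm")
resp.raise_for_status()

for node in resp.json()["nodes"].values():
    os_pct = node["os"]["mem"]["used_percent"]          # whole machine, page cache included
    heap_pct = node["jvm"]["mem"]["heap_used_percent"]  # Elasticsearch heap only
    print(f"{node['name']}: OS memory {os_pct}% used, JVM heap {heap_pct}% used")
```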
Fewer shards might be an idea - but this is just an assumption without seeing the whole picture.
As you can see, there is a huge amount of memory in the cached column, which means it is very likely in use by the Linux page cache, a.k.a. the file system cache. This is the reason why you should assign at most half of your main memory to the heap.
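For reference, that cached figure comes straight from the kernel; here is a quick Linux-only sketch that reads /proc/meminfo (the same source `free` uses) to show how much of the box is page cache rather than application memory:

```python
# Sketch only: Linux-specific; reads /proc/meminfo, the same source `free` uses.
def meminfo_kb(key):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])  # values are reported in kB
    raise KeyError(key)

total_kb = meminfo_kb("MemTotal")
cached_kb = meminfo_kb("Cached")
print(f"Page cache: {cached_kb // 1024} MB of {total_kb // 1024} MB total "
      f"({100.0 * cached_kb / total_kb:.0f}%)")
```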
Is it an indication that it is going to increase day by day and that I have to reduce it? I fear that I will get the "too many open files" error very soon if it increases further, and the cluster will go down.
I have a 3-node cluster.
Each node has 32 GB RAM, with 16 GB allocated to ES_HEAP.
Total shards: 17141
Indices: 2091
Index setting:
3 shards
1 replica
That means 6 shards per index in total. This may not match the total of 17141 shards, because the previous setting was different (3 shards, 3 replicas, i.e. 12 shards per index).
17141 shards across 3 nodes is a lot of shards for each node to handle, and it explains the large number of file handles in use. What is the average shard size?
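In case it is useful, here is a minimal sketch for working that out, again assuming Python 3 with `requests` and a node at http://localhost:9200 without authentication; it pulls the per-shard store size from the `_cat/shards` API and averages it:

```python
# Sketch only: assumes requests and http://localhost:9200; 'bytes=b' returns
# the store column as a plain byte count, 'format=json' avoids text parsing.
import requests

resp = requests.get(
    "http://localhost:9200/_cat/shards",
    params={"format": "json", "bytes": "b", "h": "index,shard,prirep,store"},
)
resp.raise_for_status()

# Unassigned shards report no store size, so skip them.
sizes = [int(row["store"]) for row in resp.json() if row["store"]]
print(f"{len(sizes)} assigned shards, "
      f"average size {sum(sizes) / len(sizes) / 1024 / 1024:.1f} MB")
```

If the average comes out very small, that supports the point above about having too many shards for the amount of data.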