I am using Elasticsearch 2.4 and pushing logs from Logstash 2.4 with 20 workers (on a 24-core machine). In top I can see the ES process consuming nearly all of the server's memory (34 GB):

7911 elastics 20 0 37.4g 9.2g 390m S 190.2 29.4 1380:58 java

After digging a lot, I found that my segment count is too high.
My index details from _cat/indices (5 primary shards, 0 replicas, ~108.9 million docs, 23.3 GB):

green open smsc_logs-2016.11.14 5 0 108885173 0 23.3gb 23.3gb
From here I read that segments will tell you the number of Lucene segments this node currently serves. This can be an important number. Most indices should have around 50–150 segments, even if they are terabytes in size with billions of documents.
How can I reduce this segment count? I can't reduce the number of workers because I don't want to slow down indexing. Are there any effective ways to reduce the segment count?
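For context, a sketch of the two knobs that usually help here, assuming a stock ES 2.x cluster on localhost:9200 and daily indices named smsc_logs-YYYY.MM.DD as above (the smsc_logs-2016.11.13 index name is just an illustrative "yesterday" index):

```shell
# Sketch, not a definitive fix -- assumes ES 2.x on localhost:9200.

# 1) Raise the refresh interval on the index currently being written to,
#    so Lucene flushes fewer (and larger) segments during heavy indexing.
curl -XPUT 'localhost:9200/smsc_logs-2016.11.14/_settings' -d '{
  "index": { "refresh_interval": "30s" }
}'

# 2) Once a daily index is no longer written to, force-merge it down to a
#    handful of segments (_forcemerge replaced _optimize in the 2.x line).
#    Only run this on read-only indices; the merge is I/O heavy.
curl -XPOST 'localhost:9200/smsc_logs-2016.11.13/_forcemerge?max_num_segments=1'
```

Neither call changes the number of Logstash workers, so indexing throughput should be unaffected; the trade-off is slightly staler search results (refresh) and a one-off I/O cost (force merge).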
In my index I have only 30 GB of data (16 crore, i.e. ~160 million, docs) but I have 336 segments while indexing, and os.memory.used_percent is 99. I want to bring down both of these stats (segments and memory).
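For anyone comparing numbers, these are the stock ES 2.x endpoints that report the two figures above; a sketch assuming the cluster is on localhost:9200:

```shell
# One row per Lucene segment per shard; counting rows gives the
# per-index segment total (the 336 figure above).
curl 'localhost:9200/_cat/segments/smsc_logs-2016.11.14?v'

# Aggregate segment count plus the on-heap memory those segments hold.
curl 'localhost:9200/smsc_logs-2016.11.14/_stats/segments?pretty'

# OS-level memory stats per node (where the used-percent figure comes from).
curl 'localhost:9200/_nodes/stats/os?pretty'
```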