Sorry if this question has already been answered. I have a serious problem: each log document in Elasticsearch is about 2 MB (my indices are now around 350 GB in total). I am confused: is that expected behavior from Elasticsearch, or is something weird going on in my application?
It looks like each record takes up around 170 bytes per shard. If you are looking to reduce this, I would recommend optimising your mappings as described in the documentation and this blog post.
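As a rough illustration, here is a minimal sketch of an optimised mapping (the index name `my-logs-index` and the `message` field are placeholders for your own names). By default, dynamic mapping indexes every string field as both `text` and `keyword`; mapping strings as `keyword` only, and disabling norms on the free-text field, can noticeably cut per-document overhead:

```
PUT my-logs-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ],
    "properties": {
      "message": {
        "type": "text",
        "norms": false
      }
    }
  }
}
```

Whether these trade-offs are acceptable depends on your queries: `keyword`-only fields lose full-text search, and disabling norms removes length-based relevance scoring on that field.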