Logs sent into Elasticsearch take much more disk space than in txt format

Dear all,

Logs sent into Elasticsearch take much more disk space than they do in txt format.

I have 20 application servers with ~5 apps on each host, so the daily volume of logs is ~10GB. But in ES it takes ~90-100GB.

How can I optimize this and reduce the disk space used?

A quick response would be greatly appreciated.

You can start by looking at the mapping. By default it is generated automatically.
Maybe you don't need to index some fields, or don't need to compute aggregations on them?
Maybe you want to disable the _all field, which is enabled by default...
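As a rough sketch of what that could look like in ES 5.x (the index name `app-logs` and the field names here are made up for illustration), you could create the index with an explicit mapping that disables `_all` and turns off indexing on fields you only need for display:

```
PUT app-logs
{
  "mappings": {
    "log": {
      "_all": { "enabled": false },
      "properties": {
        "level":      { "type": "keyword" },
        "message":    { "type": "text" },
        "stacktrace": { "type": "text", "index": false }
      }
    }
  }
}
```

With `"index": false` the field is still stored in `_source` and returned in results, but no inverted index is built for it, which saves disk at the cost of not being able to search or aggregate on that field.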

Could you please give more detailed info about this? Or provide some links where I can read about it?

Thanks in advance, David

https://www.elastic.co/guide/en/elasticsearch/reference/5.4/mapping-all-field.html
https://www.elastic.co/guide/en/elasticsearch/reference/5.4/mapping.html

Might help

You can look at this blog post. It is for ES 2.x, but a lot of it is still valid. Another thing that can affect storage is how large your shards are. Compression tends to be more efficient for larger shards. What is your average shard size?

I don't know.

How can I find out how large my shards are? Are there any APIs for that?

Use the _cat/indices or _cat/shards APIs.
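For example (Console syntax; the `v` flag adds a header row and `h=` selects the columns to show):

```
GET _cat/indices?v&h=index,pri,docs.count,store.size,pri.store.size

GET _cat/shards?v&h=index,shard,prirep,store
```

Divide the primary store size of an index by its number of primary shards to get the average shard size.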

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.