How to reduce the log size?

Hi Team,

Sorry if this question has already been answered. I have a serious problem: each log document in Elasticsearch takes up about 2 MB (my indices are now around 350 GB in total). Is that expected behavior from Elasticsearch, or is something weird going on in my application?

Thanks in advance.

What is the total raw data volume you have ingested?

I am sorry, how can I check the data volume?

What does the cat indices API say about this index/indices? How large are your raw documents?
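If it helps, here is a minimal sketch of calling it from Python (assuming Elasticsearch is reachable on localhost:9200 with no authentication; adjust the host to your setup):

```python
import requests

# Ask the cat indices API for a human-readable listing of every index,
# sorted largest-first by store size.
resp = requests.get(
    "http://localhost:9200/_cat/indices",
    params={"v": "true", "s": "store.size:desc"},
)
resp.raise_for_status()
print(resp.text)
```

The `store.size` column is the total across primaries and replicas; `pri.store.size` counts primaries only.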

/var/lib/elasticsearch/nodes/0/indices ----> 312G

I went into a particular index directory, and there I have --> 0 1 2 3 4 _state (the five shard folders plus _state)

/var/lib/elasticsearch/nodes/0/indices/03ZVSj4iQvW8sQziqfKfsg/0/index ---> i have docs here

where all the docs together come to around --> 260K

Just to be clear, I have 40 indices now, which together take ~312G.

What is the output of the API I asked about?
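You can also read the totals from the stats API instead of measuring /var/lib/elasticsearch by hand; a quick sketch, under the same localhost:9200 assumption:

```python
import requests

# The index stats API reports on-disk store size; "_all" aggregates
# every index on the cluster, so this is the API-side equivalent of
# running du on the data directory (primaries + replicas).
resp = requests.get("http://localhost:9200/_stats/store")
resp.raise_for_status()
total_bytes = resp.json()["_all"]["total"]["store"]["size_in_bytes"]
print("total store size: %.1f GiB" % (total_bytes / 1024**3))
```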

Below are some samples:

```
green open logstash-2017.11.02 -thc3P_8TjmMUkkDntpKGQ 5 1 61648586 0 19.7gb 9.8gb
green open logstash-2017.11.01 tJJ3LosfTASQ3PBJ4x8_gw 5 1 65746826 0 21.9gb 10.9gb
```

It looks like each record takes up around 170 bytes of primary storage on disk (e.g. 9.8gb ≈ 10.5 billion bytes / 61,648,586 docs ≈ 170 bytes per document). If you are looking to reduce this, I would recommend optimising your mappings as described in the documentation and this blog post.
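For illustration only (the field names below are made up, and I am assuming a 5.x cluster; on 6.x `template` becomes `index_patterns`), an index template along these lines avoids the default `text` + `keyword` dual mapping for fields you only filter or aggregate on:

```python
import requests

# Hypothetical template for logstash-* indices. String fields that are
# only filtered/aggregated on become plain keyword instead of the default
# text + keyword multi-field, and a field that is never searched is kept
# in _source but left out of the index entirely ("index": False).
template = {
    "template": "logstash-*",   # "index_patterns": ["logstash-*"] on 6.x
    "mappings": {
        "doc": {                # mapping type name is an assumption
            "properties": {
                "host":    {"type": "keyword"},
                "level":   {"type": "keyword"},
                "message": {"type": "text"},   # keep full-text search here
                "payload": {"type": "keyword", "index": False},
            }
        }
    },
}

resp = requests.put(
    "http://localhost:9200/_template/logstash-optimised", json=template
)
resp.raise_for_status()
print(resp.json())
```

New indices created after the template is in place pick up the slimmer mappings; existing indices would need a reindex to benefit.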

Thanks for the info, will check the documentation.
