How to get the daily amount of logs in GB

Hi there,

I'm trying to find out how many GB of logs we ingest each day across each of our ELK datacenters.
On the Kibana dashboard I can only see the list of indices, with sizes shown in MB and KB.
Could you please help me find this information?
Thanks in advance.

Kibana dashboards don't have an easy way to do meta-analysis of the data; they typically search based on the contents of the data. We could graph the number of documents and maybe do some extrapolation.

At a low level, something like https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html or the Monitoring application may be options.
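For example, something along these lines should list every index with its size in GB, sorted largest first (the h, bytes, and s parameters control the columns, the units, and the sort order):

GET /_cat/indices?v&h=index,pri.store.size,store.size&bytes=gb&s=store.size:desc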

Would any of these work for you?

Hi Jon,

Thanks for the reply.
Actually, I tried that one, but my requirement is to get the total size of the logs generated on a daily basis (particularly in GB).
I also saw a thread on this forum where the user mentioned the line:

( the size of the daily log was many times higher than the normal case on 6 server (30-40GB on each server).

I read that as the size of the local log files on 6 specific servers, not as the indexed size of the logs in Elasticsearch...

Anyway, if you have daily indices in Elasticsearch, Kibana will show the size of each index in Monitoring. How do you shard or split up your Elasticsearch indices?

Thanks for the reply.
We have weekly indices.
For example:
GET /_cat/indices
green open logstash-i-node-2019.04 ovWNhA 1 1 7710 0 2.3mb 1.1mb
green open logstash-lp-web-2019.02 cgcyP6A 1 1 2311271 0 1.4gb 754.8mb
green open logstash-ir-spark-2019.11 sUrgLxw 1 1 7848132 0 3.9gb 2gb

Based on this, could you tell me how I can get the total amount of logs that flow into Kibana daily (in GB)?
Also, when you say "I read that as the size of the local log files on 6 specific servers", do you mean the size of the logs directory on each server? If so, how would you find that size on a daily basis?

I guess I just misunderstood your question...

You can get the column headings from the indices API by appending ?v for verbose output:

GET /_cat/indices?v
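With ?v, the output gains a header row that should look something like this (matching the columns in your sample above):

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size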

The second-to-last column is store.size and the last column is pri.store.size. I'm guessing a bit, but I think pri.store.size is the total size of the primary shards of the index and store.size is the total size of all shards (primaries plus replicas). So to get an estimated log volume per day, take the sum of store.size across all indices and divide by 7, since your indices are weekly.
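If you want to automate that, here's a rough sketch in Python. It assumes the cluster is reachable at http://localhost:9200 with no authentication, so adjust the URL and credentials for your setup; the format=json and bytes=b parameters make the cat output easy to parse:

# Rough sketch: estimate daily log volume from weekly indices.
# Assumes Elasticsearch at http://localhost:9200 with no auth.
import requests

resp = requests.get(
    "http://localhost:9200/_cat/indices",
    params={"format": "json", "bytes": "b", "h": "index,store.size"},
)
resp.raise_for_status()

# store.size can be null for closed indices, so default to 0.
total_bytes = sum(int(row.get("store.size") or 0) for row in resp.json())
total_gb = total_bytes / 1024 ** 3

print(f"Total store size: {total_gb:.2f} GB")
# The indices are weekly, so divide by 7 for a per-day estimate.
print(f"Estimated daily volume: {total_gb / 7:.2f} GB/day")

Note that store.size counts replica copies too, so if you only want the raw primary data volume, sum pri.store.size instead.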

I'm sure there's a better way as well, but this is not something I have really looked into...
