Calculate disk usage

I've deployed a test cluster that ingests logs from about 5 servers.

I was asked to spec out the disk requirements for when 50 servers will feed it logs, with ILM set up (ILM should delete logs older than 3 months or 1 year - I haven't decided yet).

The problem is, I'm not sure how much space it currently takes, or how to calculate how much space it would take once ILM is set up.

Is it possible to calculate how much space Elasticsearch would take for 50 servers with ILM set to 3 months or a year? Is it possible to have logs archived once they're older than X?

Huge thanks ahead.

GET /_cat/indices?v

Will give you, among other details, store.size and pri.store.size.
Calculate the average size of your indices over a day/week/month, then extrapolate to the time period you want.

store.size is the size of the index plus its replicas on disk.
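
For example, a rough back-of-the-envelope sketch (the filebeat-* pattern, the one-index-per-day layout, and the 5-to-50 server scaling are assumptions - adjust them to your setup):

# Average store.size (primaries + replicas) per daily index, in bytes
curl -s 'localhost:9200/_cat/indices/filebeat-*?h=store.size&bytes=b' \
  | awk '{sum += $1; n++} END {printf "avg bytes/day: %.0f\n", sum/n}'

# Then scale: avg bytes/day x (50 servers / 5 servers) x retention days
# e.g. 1 GB/day x 10 x 90 days = ~900 GB for 3 months of retention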

Thanks for the response.

This is the response I get:

curl -X GET "localhost:9200/_cat/indices/?v"
curl: (52) Empty reply from server

Can you get any data from the server, like just:

curl localhost:9200/

Or use the Dev Tools icon in Kibana?

You may have security enabled, in which case there's no reply until you add auth credentials. Or the server may be using SSL/TLS, in which case try:

curl https://localhost:9200/_cat/indices?v
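
If security is enabled you'll also need credentials - something along these lines usually works (the elastic user, password, and CA path here are placeholders):

# Basic auth plus the cluster's CA certificate:
curl -u elastic:<password> --cacert /path/to/http_ca.crt 'https://localhost:9200/_cat/indices?v'
# Or, for a quick test against a self-signed cert, skip verification:
curl -k -u elastic:<password> 'https://localhost:9200/_cat/indices?v'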


Thanks for the response!

GET /_cat/indices?v

Worked in Dev Tools. I get lots of results. Is it possible to see a sum of all the data?

GET /_cluster/stats summarizes the whole cluster, including the total store size in bytes.

Thanks for the response. This seems to return even longer output. I'm not sure which field holds the total size.

See total cluster size:

"store" : {
"size_in_bytes" : 415514514555
},
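
If you only want that one field, filter_path can trim the response down to it (a sketch; the path matches the stats output above):

curl -s 'localhost:9200/_cluster/stats?filter_path=indices.store.size_in_bytes&pretty'
# {
#   "indices" : {
#     "store" : {
#       "size_in_bytes" : 415514514555
#     }
#   }
# }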


Thanks for the response.

Mine says 14773708614. If it's bytes, it's 2 GB. If it's KB then it's 14773 GB, which also doesn't make sense.

Everything is bytes, so about 14.7 GB:

14,773,708,614 bytes / 1,000,000,000 ≈ 14.77 GB (or about 13.76 GiB if you divide by 1024³).


Awesome! So I should just watch how much data it takes over, say, a week and make the estimates from that?

Also, do you know if it's possible to have Elasticsearch compress the data it receives (before I start compressing things myself)?

Yes, for total size; it'll go up and down as segments merge, but watch it over time and you'll see the trend. Also turn on self-monitoring in Kibana - you might be able to see this at the cluster level there too (I forget).
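
If you go the self-monitoring route, legacy collection can be switched on with a dynamic cluster setting (newer versions prefer Metricbeat, so treat this as one option):

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}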

Elasticsearch compresses by default. You can increase the compression level, but I'm not sure it makes much of a difference.
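
For reference, a sketch of both knobs in Dev Tools syntax (the policy and template names are made up, and 90d stands in for whichever retention you pick). index.codec is a static setting, so it's easiest to set in a template that new indices pick up, and the ILM delete phase is what covers the original "delete logs older than X" question:

PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}

PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.codec": "best_compression",
      "index.lifecycle.name": "logs-retention"
    }
  }
}

With daily indices, min_age counts from index creation; if you use rollover you'd add a hot phase with a rollover action instead.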

