Need immediate help with disk utilization

Hi All,

I'm normally a systems guy (*NIX/Windows), but our ES person is on vacation for the next few days & suddenly our disks are filling up (96% at last check; our "alert threshold" is 90%). I've been reviewing command histories, config files, web pages, etc. like crazy, but I haven't been able to figure out how to trim/truncate data fast enough to keep up. I was told that we have two "buckets" or containers, one that holds long-term data & another that holds shorter-term data; I'm hoping to reduce the contents of the second one.

Since I only started this position a few days ago, I have effectively zero Elasticsearch knowledge. :frowning: I'd be really grateful for some help before our systems crash. I believe I have found the directories where the system holds the data, as well as the config file locations.

Thanks in advance for any help you might be able to provide.

Do the indexes have any replicas that you can drop temporarily until things clear up? If you use a cluster dashboard plugin like kopf, ElasticHQ, head, or Marvel, you should be able to get a decent overview of the size of the indexes and the replica situation. At least kopf allows you to easily adjust the replica count for each index (which you can obviously do via the REST API too).
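
In case it helps while your ES person is out, here's a minimal sketch of the REST calls involved, assuming the cluster answers on localhost:9200 and using "my-index" as a placeholder for a real index name:

```
# List every index with shard counts, doc counts, and on-disk sizes
curl -s 'localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size,pri.store.size'

# Temporarily drop the replicas for one index to free disk space
# (on Elasticsearch 6+ you would also need -H 'Content-Type: application/json')
curl -s -XPUT 'localhost:9200/my-index/_settings' \
  -d '{"index": {"number_of_replicas": 0}}'
```

Keep in mind that dropping replicas removes redundancy, so set number_of_replicas back to its original value once the disk pressure is resolved.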

Thank you for the quick reply! We do utilize head (if that's the one in the '../_plugin/head/' file structure), and I am digging through that UI now.

Within the 'Indices' tab in head, I see seven entries, with three columns. The 1st column lists the index name, the 2nd column is titled 'Size', & the 3rd is called 'Docs'.

The interesting thing to me is that the 'Size' column appears to list two values (i.e., "x.xTi/y.yTi"), where the first value (x.x) is 50% of the 2nd value (y.y).

So far though, I'm not seeing anything that references replicas. Thanks again for the reply.
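
That two-value pattern would be consistent with one replica per index: the first number is most likely the primary-shard size and the second the total including replica copies. A quick way to confirm from the command line, again assuming the cluster answers on localhost:9200:

```
# Show the replica count next to primary and total on-disk size per index;
# rep=1 with store.size roughly 2x pri.store.size means one full extra copy
curl -s 'localhost:9200/_cat/indices?v&h=index,rep,pri.store.size,store.size'
```

If that shows rep=1 across the board, dropping replicas as described above should roughly halve the on-disk footprint until things can be cleaned up properly.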

Issue resolved, topic can now be closed.