This returned: {"_shards":{"total":833,"successful":417,"failed":0}} - but the free disk space reported by the OS remained unchanged.
Is the difference of 416 between "total" and "successful" the 2016 indices deleted above?
Finally, I installed Python etc., ran Curator, and removed another 14 indices. Success! Some disk space was freed up - but only that of the 14 indices. Going forward I now have a solution - so now I have to recover the space from the indices "deleted" previously.
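For reference, the Curator run was along these lines - a sketch only; the index pattern, timestring, and retention period below are assumptions, not the exact action file I used:

```yaml
# Hypothetical Curator action file (run with: curator --config config.yml actions.yml)
actions:
  1:
    action: delete_indices
    description: Delete old time-based indices (retention period assumed).
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-netflow9-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 90
```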
_cat/indices shows a total of 98 indices made up of:
14 - .monitoring-*
1 - .security (we had X-pack trial, now uninstalled)
1 - .kibana
6 - winlogbeat-2017* (from some testing we were doing)
76 - logstash-netflow9-2017*
None of the earlier 2016 indices that I deleted (see first message) are listed.
The total size (adding up the last column) is about 200 GB, and Windows now reports 21 GB free on a 220 GB disk - which looks consistent.
Repeating the above forcemerge I also now see {"_shards":{"total":843,"successful":422,"failed":0}} - but of course new indices are created daily.
The winlogbeat-2017* and logstash-netflow9-2017* indices each show 5 shards per index, and the others just 1, which makes a total of 426 - not quite the 422 reported. I'm not sure whether those ought to match?
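The shard arithmetic from the index counts above can be written out explicitly (this just restates the counts already listed; it doesn't explain the 426 vs. 422 gap):

```python
# Primary-shard count implied by the _cat/indices listing above:
# (number of indices, shards per index) for each group.
index_groups = {
    ".monitoring-*":           (14, 1),
    ".security":               (1, 1),
    ".kibana":                 (1, 1),
    "winlogbeat-2017*":        (6, 5),
    "logstash-netflow9-2017*": (76, 5),
}

total_indices = sum(count for count, _ in index_groups.values())
total_shards = sum(count * shards for count, shards in index_groups.values())
print(total_indices, total_shards)  # 98 426
```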
I am very confused - before I ran the first delete, which removed approximately half the indices, we were using 199 GB (as reported by the OS). Now, with half as many indices, I am still using the same disk space!