The /dev/mapper/centos-var partition got full. Can anyone suggest a solution for extending it, or any other solution?
This question is out of the scope of this forum; it is not related to Elasticsearch. You need to check with your system admins to solve this issue.
Unless it's all Elasticsearch logs? But I'd highly doubt that.
We cannot start our Elasticsearch service because of this message. Are any ELK logs stored here, so that if I cleared them we would lose historical logs?
I could narrow it down as follows: it seems the partition filled up with Elasticsearch indices (almost 1.6 TB). We have 3 Elasticsearch nodes in our cluster. Any steps to resolve this issue?
From the pics, your disk partition is 1.8 TB, and ES has used 1.6 TB for indices; logs are only 2.4 GB. Even if you clean the logs, it will not help. You have to extend your partition; as ECEngineers said, it's not in their domain. This is the best option, but make a plan for the future: for instance, extending by 1 TB is only a temporary solution. Be careful with extending.
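Since the question is specifically about growing /dev/mapper/centos-var, a rough sketch of what that usually looks like on CentOS with LVM. This assumes the volume group has free extents and the filesystem is XFS (the CentOS 7 default); verify both before running anything, and take a backup first.

```shell
# Check free space in the volume group and the filesystem type first:
vgs
lsblk -f /dev/mapper/centos-var

# Grow the logical volume (the +1T here is a hypothetical amount),
# then grow the filesystem. xfs_growfs is for XFS; for ext4 you
# would use resize2fs on the device instead.
lvextend -L +1T /dev/mapper/centos-var
xfs_growfs /var
```

If `vgs` shows no free extents, you would need to add a physical volume (disk) to the volume group first with `pvcreate` and `vgextend`.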
I would not recommend manually removing any of the indices.
To run ES safely, you must have enough free disk space. Elasticsearch's disk-based shard allocation uses these watermarks:
cluster.routing.allocation.disk.watermark.low: (Default 85%)
cluster.routing.allocation.disk.watermark.high: (Default 90%)
cluster.routing.allocation.disk.watermark.flood_stage: (Default 95%)
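To see where a node stands relative to those defaults, you can compute the used percentage yourself. The numbers below are the ones from this thread (1.8 TB partition, 1.6 TB used) and are only illustrative:

```shell
# Illustrative values from the thread; on a real node take them from
# `df /var/lib/elasticsearch` instead.
total_gb=1800
used_gb=1600

pct=$((used_gb * 100 / total_gb))
echo "disk usage: ${pct}%"

if   [ "$pct" -ge 95 ]; then echo "above flood_stage (95%): indices get marked read-only"
elif [ "$pct" -ge 90 ]; then echo "above high (90%): shards are relocated off this node"
elif [ "$pct" -ge 85 ]; then echo "above low (85%): no new shards allocated to this node"
else                          echo "below all watermarks"
fi
```

With these numbers the node is already past the low watermark, so new shards would no longer be allocated to it.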
This command might be useful to check which index is the biggest:
du -h /var/lib/elasticsearch/indices | sort -rh | head -20
From Dev Tools or curl:

GET /_cat/indices?v&s=store.size:desc

curl -u user:pass "localhost:9200/_cat/indices?v&s=store.size:desc"
The uuid column is the directory name in: /var/lib/elasticsearch/indices/
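A small sketch of that lookup, using made-up `_cat/indices` output (the index names and uuids below are hypothetical):

```shell
# Sample _cat/indices lines (columns: health status index uuid pri rep ...).
# Real output would come from:
#   curl -u user:pass "localhost:9200/_cat/indices"
cat <<'EOF' > /tmp/indices.txt
green open logs-2023.01.01 AbCdEfGhIjKlMnOpQrStUv 1 1
green open logs-2023.01.02 ZyXwVuTsRqPoNmLkJiHgFe 1 1
EOF

# Column 3 is the index name, column 4 is the uuid, i.e. the
# directory name under the data path:
awk '{ printf "%s -> /var/lib/elasticsearch/indices/%s\n", $3, $4 }' /tmp/indices.txt
```

That lets you match a huge directory found by `du` back to the index it belongs to before deciding anything.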
Though I'm a bit surprised how you ended up there. Did you change the flood_stage watermark, or did another process fill the rest of the disk? The default settings should prevent that situation.
The safest option would be to enlarge the disk. If you are sure you have a replica of all the data (or don't really need what's on that node anyway), you could delete the (Elasticsearch) node. Deleting individual files from the data directory is, IMO, only a last-resort hack that I'd avoid unless you have no other options left.