Hello.
I ran into a "no space left on device" problem while indexing roughly 1.8 TB of log data.
Our system environment is:
8 nodes (2 nodes store the raw data and generate the bulk insert requests / 6 nodes hold the index data)
Each node has an 8-core CPU and 64 GB of memory, with ES_HEAP_SIZE = 30GB and an 80% index buffer size (20 GB).
We are using Elasticsearch 0.90 BETA.
Each of the 6 data nodes has 900 GB of disk space (3 x 300 GB HDDs), and these disks are mounted at
/root/elasticsearch/data/data1, /root/elasticsearch/data/data2, /root/elasticsearch/data/data3
So we set path.data in elasticsearch.yml accordingly, and we confirmed that the path.data settings were actually applied by checking the node logs at DEBUG level.
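For reference, here is a minimal sketch of the relevant settings on each data node, assuming the index buffer is configured via indices.memory.index_buffer_size (our actual file may differ slightly):

# elasticsearch.yml (sketch)
path.data: /root/elasticsearch/data/data1,/root/elasticsearch/data/data2,/root/elasticsearch/data/data3
indices.memory.index_buffer_size: 80%

# environment variable exported before starting the node
ES_HEAP_SIZE=30g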
The problem is that when the index size approaches 1.8 TB (the point at which one of the HDDs is almost full), all nodes start logging exceptions (bulk failures, merge failures) saying there is no space left on the device, even though the other 2 disks are still almost empty.
Below is the disk status on one of our data nodes:
/dev/xvda1 485M 52M 408M 12% /boot
/dev/xvdb1 79G 184M 75G 1% /root/elasticsearch/data/data1
/dev/xvdc1 296G 191M 281G 1% /root/elasticsearch/data/data2
/dev/xvdf1 296G 296G 0 100% /root/elasticsearch/data/data3 <- data3 is full, but data1 and data2 are still empty
I really don't understand what is happening in ES.
Please help!