I have a two-node cluster. One node ran out of disk space, so I added another disk to path.data:
Disk space is now available, but the second disk is not being used yet, and two shards are stuck INITIALIZING.
From the Elasticsearch log file:
[internal:index/shard/recovery/file_chunk]]; nested: NotSerializableExceptionWrapper[i_o_exception: No space left on device];
at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:258)
at org.elasticsearch.indices.recovery.RecoveryTarget.access$1100(RecoveryTarget.java:69)
With only 2 nodes in the cluster, any index configured with 1 replica will have a copy of each shard on each node. Since Elasticsearch never allocates multiple copies of the same shard to a single node, there is little room for data to move around. You could try setting replicas to 0 for a few indices that have replica shards on the node that filled up, then reset it back to 1 to see whether that causes the new disk to receive data.
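A rough sketch of that replica toggle with curl, assuming the cluster is reachable on localhost:9200 and using a hypothetical index name `my-index` (substitute one of your affected indices):

```shell
# Check which shards are stuck and where they live:
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,node'

# Drop replicas for the index so the stuck replica copies are removed:
curl -s -XPUT 'localhost:9200/my-index/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'

# Once the cluster is green again, restore the replica so it is
# rebuilt from the primary, hopefully onto the new data path:
curl -s -XPUT 'localhost:9200/my-index/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 1}}'
```

If the replica still refuses to allocate, `GET _cluster/allocation/explain` should tell you why.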