I am in a bit of a bind. One of the nodes on my cluster ran out of space. I can't start elasticsearch because there is no space left. I'm unable to move or delete shards / indices from that node because I can't start Elasticsearch.
What are my options?
Can I symlink some folders under the data directory to a larger drive? I'm not sure how to proceed.
Can you not free up any space at all, e.g. by deleting logs or unused yum caches, or is the entire filesystem hosting Elasticsearch full? One option is to get a bigger drive and copy all the data over (we did that), but it can take a long time. I'd think you could symlink a few big files, but you'd have to test it; others probably have more insight. Regardless, take a snapshot first if you can (to cloud storage, etc.).
Also check whether the filesystem is reserving space for root; the details vary by your Linux distribution, filesystem type, etc. Reclaiming that reservation can sometimes buy enough room to get the node started.
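To see which mount is actually full, something like this quick check helps (a sketch; the path is an example, point it at your Elasticsearch data path):

```shell
#!/bin/sh
# Report how full the filesystem holding a given path is.
disk_usage_report() {
  df -P "$1" | awk 'NR==2 {printf "%s: %s used (%s)\n", $6, $3, $5}'
}
disk_usage_report "${1:-/var/lib/elasticsearch}"
# On ext2/3/4, ~5% of blocks are reserved for root by default; shrinking
# that reservation can free usable space in a pinch (run against the
# actual device backing your data path, shown here as a placeholder):
#   sudo tune2fs -m 1 /dev/sdXN
```

Note `tune2fs` only applies to ext-family filesystems; XFS and others handle reservations differently.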
Elasticsearch has a number of protections in place to prevent a node from consuming every last bit of available disk space, so my guess is that something else filled the disk up. If so, as Steve says, clear that other stuff out.
I don't think symlinks will work, no; the security manager shouldn't allow that. There are no user-serviceable parts inside the data directory, so we very much recommend against moving anything around as you suggest.
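For context, those protections are the disk-based shard allocation watermarks. A sketch of the relevant `elasticsearch.yml` settings (the values shown are the usual defaults, but check the docs for your version):

```yaml
# Disk-based shard allocation watermarks (typical defaults).
cluster.routing.allocation.disk.watermark.low: "85%"          # stop allocating new shards to the node
cluster.routing.allocation.disk.watermark.high: "90%"         # start relocating shards off the node
cluster.routing.allocation.disk.watermark.flood_stage: "95%"  # affected indices become read-only
```

These stop Elasticsearch itself from filling the disk, which is why something outside Elasticsearch is the likely culprit here.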
Is your cluster health yellow? If so, perhaps the simplest path forward is to wipe the node and start again.
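If you can reach any surviving node, checking health boils down to reading the `status` field from `_cluster/health`. A minimal sketch; the `extract_status` helper and sample JSON are illustrative stand-ins, since the affected node is down (against a live node you'd run `curl -s localhost:9200/_cluster/health`):

```shell
#!/bin/sh
# Pull the "status" field out of a _cluster/health JSON response.
extract_status() {
  printf '%s' "$1" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}
# Sample response standing in for a live cluster:
SAMPLE='{"cluster_name":"my-cluster","status":"yellow","number_of_nodes":2}'
echo "cluster status: $(extract_status "$SAMPLE")"
```

Yellow means every primary shard is assigned but some replicas are not, which is why wiping the node and letting replicas rebuild is a viable path in that state.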