I recently ran into an out-of-disk-space issue (I know, bad planning) and would like to recover as much data from the shard as possible. I can see the data files under the node directory for that particular index, and I was wondering whether there are any tools available to attempt to recover data from a shard (I am currently on Elasticsearch 7.4).
There is something that I do not understand. If you are running out of disk space, Elasticsearch will stop writing to the shards that this instance holds. Everything should still be readable: the Elasticsearch instance remains available and you can query it. As soon as you delete something from the hard disk or expand it, and Elasticsearch confirms that disk usage has dropped back below its watermark, it will start writing to the shards again.
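To add to that: when the flood-stage disk watermark is exceeded, Elasticsearch places an `index.blocks.read_only_allow_delete` block on the affected indices, which is what stops writes while reads keep working. After freeing up disk space you can clear the block yourself via the settings API. A sketch (assuming the cluster is reachable on `localhost:9200`; adjust host and index pattern for your setup):

```shell
# Check whether any indices currently carry the read-only block
# set when the flood-stage watermark was exceeded.
curl -s "localhost:9200/_all/_settings?filter_path=*.settings.index.blocks"

# After disk space has been freed, clear the block on all indices
# so writes can resume. (Recent 7.x releases can also auto-release
# the block once usage drops below the watermark.)
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{ "index.blocks.read_only_allow_delete": null }'
```

Once the block is cleared and disk usage stays below the watermarks, the data in the shard should be fully writable again without any file-level recovery.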
What version of Elasticsearch are you running? Is it a cluster? How many nodes?