I have an Elasticsearch server with a two-node cluster.
You can find the Elasticsearch cluster configuration file here.
It handles 20 GB of data per day. Due to insufficient memory I needed to
delete some of the old index folders from both nodes. Unfortunately, after
killing Elasticsearch (using "kill -9"), I removed the folders on the
server (Linux) with "rm -rf" instead of "curl -XDELETE", on both nodes.
When I tried to start Elasticsearch after the deletion, I found that more
than one Elasticsearch instance was running on the server (I don't know
how) and that some folders contained more than one shard (there should be
only one), so I deleted the extra shards with "rm -rf" as well. Now, when
I try to index, I get the following error and the documents are not
indexed:
UnavailableShardsException[[2012-03-07][0] [2] shardIt, [0] active :
Timeout waiting for [1m], request:
org.elasticsearch.action.bulk.BulkShardRequest@7403e02a]
Is there any way to recover the cluster without losing the data that is
still available (I currently have 900 GB of indexed data)? If so, could
you please suggest a solution to resolve this issue?
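For reference, here is a sketch of the API-based cleanup I understand I should have used instead of "rm -rf" and "kill -9" (assuming the default HTTP port 9200 on localhost; the index name "2012-03-07" is taken from the error message above and would need adjusting):

```shell
# Delete an old index through the REST API instead of rm -rf;
# this updates the cluster state so both nodes stay consistent.
curl -XDELETE 'http://localhost:9200/2012-03-07'

# Check cluster health and look for unassigned shards afterwards.
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
```

The key difference is that the API route removes the index from the cluster metadata on every node, whereas deleting the folder on disk leaves the cluster state pointing at shards that no longer exist.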