Hi all,
We have a cluster of 4 nodes, and as a daily operation we constantly add new indexes and remove old ones. We had a node down for 3 days, and when we started it again it simply re-added the indexes it still had on disk. The thing is that the names of the new indexes we create every day depend on the date (e.g. my_index_name_20141203), so when this node started again it didn't find any inconsistency with its local indexes (which had already been deleted from the cluster) and it imported them again.
So the question is: is there any way to remove a node's data when it fails, so that it starts up as if a new, empty node had been added to the cluster?
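For context, the kind of manual cleanup we are hoping to avoid looks roughly like the sketch below: with the node stopped, wipe its local data directory so it rejoins the cluster empty. The data path used here is an assumption (a default-style install); it would need to match whatever path.data points to on the node.

    # Rough sketch of the manual cleanup we want to avoid: delete everything
    # under the stopped node's data directory so it rejoins as an empty node.
    # /var/lib/elasticsearch/data is an assumed path.data location.
    import shutil
    from pathlib import Path

    DATA_PATH = Path("/var/lib/elasticsearch/data")  # assumed; adjust to path.data

    def wipe_node_data(data_path: Path) -> None:
        """Remove all contents of the node's data directory (node must be stopped)."""
        if not data_path.is_dir():
            raise FileNotFoundError(f"data path not found: {data_path}")
        for entry in data_path.iterdir():
            if entry.is_dir():
                shutil.rmtree(entry)
            else:
                entry.unlink()

    if __name__ == "__main__":
        wipe_node_data(DATA_PATH)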