It seems the setting "gateway.local.auto_import_dangled: no" is no longer supported in Elasticsearch 6.2. I get the exception below when I use it.
java.lang.IllegalArgumentException: unknown setting [gateway.local.auto_import_dangled] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
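For context, this is the line in my elasticsearch.yml that triggers the exception:

gateway.local.auto_import_dangled: no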
What is the new way to disable auto import of dangled indices for ES 6.2?
Can someone please help me with this? My Elasticsearch cluster goes into an unusable state because of the auto import.
After an auto import, my alias starts pointing to two indices and Logstash stops logging to the cluster.
Let me explain my scenario. I'm using the rollover API with an alias, and the alias always points to the latest index created by the rollover.
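For reference, the rollover is triggered with a call of roughly this shape against the write alias (the alias name matches the one in the error below; the conditions here are just illustrative, not my exact values):

POST /filebeat_logs/_rollover
{
  "conditions": {
    "max_age": "1d",
    "max_size": "50gb"
  }
}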
When a dangling index gets auto imported, the alias ends up pointing to both the latest index and the newly imported index. Logstash then throws the error below and stops writing to Elasticsearch.
[ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"http://10.133.132.122:9200/_bulk", :body=>"{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Alias [filebeat_logs] has more than one indices associated with it [[filebeat-2018.03.16-000007, filebeat-2018.03.16-000006]], can't execute a single index op"}],"type":"illegal_argument_exception","reason":"Alias [filebeat_logs] has more than one indices associated with it [[filebeat-2018.03.16-000007, filebeat-2018.03.16-000006]], can't execute a single index op"},"status":400}"}
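As a temporary workaround, I assume I can check which indices the alias points to and manually remove it from the imported one, roughly like this (index names taken from the error above):

GET /_alias/filebeat_logs

POST /_aliases
{
  "actions": [
    { "remove": { "index": "filebeat-2018.03.16-000006", "alias": "filebeat_logs" } }
  ]
}

But I'd like to prevent the auto import in the first place.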
When a node joins the cluster, any shards stored in its local data directory which do not already exist in the cluster will be imported into the cluster. This functionality is intended as a best effort to help users who lose all master nodes. If a new master node is started which is unaware of the other indices in the cluster, adding the old nodes will cause the old indices to be imported, instead of being deleted.
I saw the cluster get into this state on a production environment when there were some connectivity issues between the nodes.
I reproduced this in my lab with two nodes in the cluster. I stopped the network service on one of the nodes, breaking communication between them, and the nodes went out of sync. When network connectivity was restored, the indices on the nodes were still out of sync.
Restarting Elasticsearch on one of the nodes brought the indices back in sync, but it left some dangling indices on that node with the same names as indices that already existed in the cluster. After a while one of those indices got deleted from the cluster as part of the rollover, and the dangling index with the same name got auto imported. This made the alias point to both the latest index and the imported index.
When you did this with two nodes, did you have both nodes master eligible? If so, did you have minimum_master_nodes correctly set to 2 in order to avoid split brain scenarios?
Both nodes were master eligible, but I did not have minimum_master_nodes configured. The setting was commented out, so I believe it took the default value of 1.
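For reference, I understand it should be set to (number of master-eligible nodes / 2) + 1, so with two master-eligible nodes that would mean the following in elasticsearch.yml on both nodes:

discovery.zen.minimum_master_nodes: 2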