Disable auto_import_dangled for ES 6.2

It seems like the setting "gateway.local.auto_import_dangled: no" is not supported in Elasticsearch 6.2. I get the below exception if I use this setting.

java.lang.IllegalArgumentException: unknown setting [gateway.local.auto_import_dangled] please check that any required plugins are installed, or check the breaking changes documentation for removed settings

What is the new way to disable auto import of dangled indices for ES 6.2?

Hi,

Can someone please help me with this? My Elasticsearch cluster goes into an unusable state because of auto import.
My alias starts pointing to two indices after the auto import, and then Logstash stops logging to the cluster.

Remove the alias from the index that shouldn't have it anymore?
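
If it helps, you can first check which indices each alias currently points to. A couple of Console-style requests, just as an illustration (<alias_name> is a placeholder):

GET _cat/aliases?v
GET _alias/<alias_name>

The first lists every alias together with the indices behind it; the second shows the same detail for a single alias.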

I think I saw recent changes that might fix the issue with old settings.
Maybe upgrade to 6.2.3?

Hi,

I'm using the alias as I need to roll over the index once it reaches a certain size.
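
Roughly what the rollover call looks like in my setup (filebeat_logs is the alias; the size threshold here is just an example value, not my real one):

POST filebeat_logs/_rollover
{
  "conditions": {
    "max_size": "5gb"
  }
}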

Can the rollover be done without the alias on ES 6.2?

Sorry, I'm not sure I understood this. What issue is fixed in 6.2.3?

The error you mentioned before about the unknown setting. It might have been fixed. Didn't check, as I'm typing on a mobile now.

I'm also confused by the relationship between this error and aliases.
They don't seem to be related.

Anyway, maybe set the alias in a template instead of letting the rollover API do it.
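
A rough sketch of that idea, reusing the names from this thread (the template name is made up, and I haven't checked this against your rollover setup):

PUT _template/filebeat_template
{
  "index_patterns": ["filebeat-*"],
  "aliases": {
    "filebeat_logs": {}
  }
}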

I've recently seen a similar case on this forum.

Let me explain my scenario. I'm using the rollover API with an alias, and the alias always points to the latest index created during the rollover.

When a dangling index gets auto imported, the alias ends up pointing to both the latest index and the newly imported index. Logstash then throws the below error and stops writing to Elasticsearch.

[ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"http://10.133.132.122:9200/_bulk", :body=>"{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Alias [filebeat_logs] has more than one indices associated with it [[filebeat-2018.03.16-000007, filebeat-2018.03.16-000006]], can't execute a single index op"}],"type":"illegal_argument_exception","reason":"Alias [filebeat_logs] has more than one indices associated with it [[filebeat-2018.03.16-000007, filebeat-2018.03.16-000006]], can't execute a single index op"},"status":400}"}

If your index filebeat-2018.03.16-000006 has been imported, then remove the alias filebeat_logs from it.
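
Something along these lines, using the index and alias names from your error:

POST _aliases
{
  "actions": [
    { "remove": { "index": "filebeat-2018.03.16-000006", "alias": "filebeat_logs" } }
  ]
}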

I need to prevent the cluster from getting into this state in a production environment.

I see now what you meant.

The question is why you ended up in such a state.
I mean, you might have lost all the master-eligible nodes, right?

From https://www.elastic.co/guide/en/elasticsearch/reference/master/_dangling_indices.html

When a node joins the cluster, any shards stored in its local data directory which do not already exist in the cluster will be imported into the cluster. This functionality is intended as a best effort to help users who lose all master nodes. If a new master node is started which is unaware of the other indices in the cluster, adding the old nodes will cause the old indices to be imported, instead of being deleted.

I saw the cluster getting into this state on a production environment when there were some connectivity issues between the nodes.

I reproduced this in my lab with a two-node cluster. I stopped the network service on one of the nodes, breaking the communication between them, and the nodes went out of sync. When the network connectivity was restored, the indices on the nodes were still out of sync.
Restarting Elasticsearch on one of the nodes brought the indices back in sync, but there were some dangling indices on that node with the same names as indices which already existed in the cluster. After a while, one of those indices in the cluster got deleted as part of a rollover, and the dangling index with the same name got auto imported. This made the alias point to both the latest index and the imported index.

When you did this with two nodes, did you have both nodes master eligible? If so, did you have minimum_master_nodes correctly set to 2 in order to avoid split brain scenarios?
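
For reference, with two master-eligible nodes that would be the following line in elasticsearch.yml on both nodes (2 being (number of master-eligible nodes / 2) + 1):

discovery.zen.minimum_master_nodes: 2

It can also be applied to a running cluster through the settings API:

PUT _cluster/settings
{
  "persistent": {
    "discovery.zen.minimum_master_nodes": 2
  }
}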

More importantly: is your production system set up according to these guidelines?

Both nodes were master eligible, but I did not have minimum_master_nodes configured. The setting was commented out, so I believe it took the default value of 1.
