Dangling indices on ingest nodes after running Curator

Hi,

After running Curator to delete indices older than two weeks, I see the following warnings in the Elasticsearch logs on the ingest nodes:

tailf /var/log/elasticsearch/dbus.log 

[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-container-2018.06/LQF1f6_WQ3CMDO2qYXIa6Q]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones.  This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-application-2018.06/iuSMlXguQZODV3jN5I2tfw]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones.  This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-script-2018.06/uB5SavnPSbe_JIBkUoUrjQ]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones.  This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-cron-2018.06/Om6NSFQATseZvk28EhHrfw]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones.  This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-haproxy-2018.06/CrksY5NCTIGpbBFzFKksPw]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones.  This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-access-2018.06/ZKN-rpgKQu6spuppLDeufg]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones.  This situation is likely caused by copying over the data directory for an index that was previously deleted.
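
To cross-check, these index names can be looked up through the cat API to see whether any of them are still part of the cluster metadata (assuming the HTTP API is reachable on localhost:9200 from one of the nodes):

curl -s 'localhost:9200/_cat/indices/logstash-demo-all-*?v'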

My Elasticsearch cluster consists of eight nodes, as follows:

  • Three Master nodes (10.50.45.124, 10.50.40.180, 10.50.30.106)
  • Two Ingest nodes (Running Kibana and Curator)
  • Three Data nodes

Curator is installed on both ingest nodes, but it is pointed at the master nodes when running the action file (deleting old indices); as you can see below, I have put the master node IPs in the Curator config file.

cat /opt/elasticsearch-curator/curator.yml 
---
client:
  hosts:
    - 10.50.45.124
    - 10.50.40.180
    - 10.50.30.106
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False
logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

The Curator action file is as follows:

cat /opt/elasticsearch-curator/delete_indices.yml 
---
actions:
  1:
    action: delete_indices
    description: Delete indices older than one week for "logstash-playground-all-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-playground-all-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%W'
      unit: weeks
      unit_count: 1
      exclude:

  2:
    action: delete_indices
    description: Delete indices older than two weeks for "logstash-demo-all-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-demo-all-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%W'
      unit: weeks
      unit_count: 2
      exclude:

  3:
    action: delete_indices
    description: Delete indices older than one month for "logstash-production-all-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-production-all-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: months
      unit_count: 1
      exclude:

  4:
    action: delete_indices
    description: Delete indices older than two months for "logstash-production-kafka-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-production-kafka-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: months
      unit_count: 2
      exclude:
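
For completeness, the action file is run against the client config above with the standard curator invocation (a --dry-run pass can be added first to preview which indices would be selected for deletion):

curator --config /opt/elasticsearch-curator/curator.yml /opt/elasticsearch-curator/delete_indices.yml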

I am aware that I can remove these dangling indices from /var/lib/elasticsearch/nodes/0/ with a bash script, but since that is risky I would rather find a way to solve this issue with Curator.
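
The sort of script I mean would only list candidate directories rather than delete anything. A minimal sketch, assuming the HTTP API is reachable on localhost:9200 and the default nodes/0 data path:

#!/usr/bin/env bash
# Sketch: list index data directories on this node whose UUID the cluster no longer knows about.
# Assumptions: HTTP API on localhost:9200, default data path with a single node directory (nodes/0).
DATA_DIR=/var/lib/elasticsearch/nodes/0/indices

# UUIDs of every index currently in the cluster metadata
live_uuids=$(curl -s 'localhost:9200/_cat/indices?h=uuid')

for dir in "$DATA_DIR"/*/; do
  uuid=$(basename "$dir")
  if ! grep -qx "$uuid" <<<"$live_uuids"; then
    echo "orphaned index directory: $dir"   # inspect before removing anything
  fi
done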

I am looking forward to your suggestions and help.
Thank you very much.

If they're dangling, Curator probably can't see them because they don't appear in the cluster state.
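
One way to verify that is to ask the cluster state for those index names directly (assuming the HTTP API is reachable on localhost:9200):

curl -s 'localhost:9200/_cluster/state/metadata/logstash-demo-all-*?pretty'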

How can I fix it?

If you use curator_cli with the show_indices command, do you see the "dangling" indices? If not, then Curator cannot help you. You will likely have to remove them from the filesystem path manually.
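
For example, pointing curator_cli at the same client config used above (path taken from the earlier snippet):

curator_cli --config /opt/elasticsearch-curator/curator.yml show_indices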

Thank you @theuntergeek. I removed them manually.

Do you see any misconfiguration in my curator.yml or in the action file? I am wondering what caused this.

Curator only makes API calls. It's basically an index-selecting wrapper around Elasticsearch API calls. The only way something like that happens is if there are issues in your cluster where nodes were interrupted while performing the actions associated with those API calls. I can't begin to guess what conditions led to that end result, but it's not possible for it to be something Curator did or did not do. You could have manually selected the same indices and run

DELETE index1,index2,index3...

and it could have done exactly the same thing, because that is all Curator calls once it has isolated the exact indices you told it to find.
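
In curl terms (host assumed to be localhost:9200; the placeholder names stand for whatever indices were selected), that is simply:

curl -XDELETE 'localhost:9200/index1,index2,index3'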

Thank you for the detailed explanation.
