Hi,
After running Curator to delete indices older than two weeks, I see the following warnings in the Elasticsearch logs on the ingest nodes:
tailf /var/log/elasticsearch/dbus.log
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-container-2018.06/LQF1f6_WQ3CMDO2qYXIa6Q]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones. This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-application-2018.06/iuSMlXguQZODV3jN5I2tfw]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones. This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-script-2018.06/uB5SavnPSbe_JIBkUoUrjQ]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones. This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-cron-2018.06/Om6NSFQATseZvk28EhHrfw]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones. This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-haproxy-2018.06/CrksY5NCTIGpbBFzFKksPw]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones. This situation is likely caused by copying over the data directory for an index that was previously deleted.
[2018-02-19T10:26:09,264][WARN ][o.e.g.DanglingIndicesState] [ip-10-50-30-150] [[logstash-demo-all-access-2018.06/ZKN-rpgKQu6spuppLDeufg]] can not be imported as a dangling index, as an index with the same name and UUID exist in the index tombstones. This situation is likely caused by copying over the data directory for an index that was previously deleted.
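If I understand the API correctly, the tombstones these warnings refer to should be visible in the cluster state metadata (index-graveyard); a quick check like this, against one of the master nodes listed below, shows them:

# I believe the index tombstones live in the cluster state metadata under index-graveyard
curl -s 'http://10.50.45.124:9200/_cluster/state/metadata?filter_path=metadata.index-graveyard'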
My Elasticsearch cluster consists of eight nodes:
- Three master nodes (10.50.45.124, 10.50.40.180, 10.50.30.106)
- Two ingest nodes (running Kibana and Curator)
- Three data nodes
Curator is installed on both ingest nodes, but it is pointed at the master nodes when running the action file (deleting old indices); as you can see below, I put the master node IPs in the Curator config file.
cat /opt/elasticsearch-curator/curator.yml
---
client:
  hosts:
    - 10.50.45.124
    - 10.50.40.180
    - 10.50.30.106
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
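If it matters, the hosts in this file can be checked with _cat/nodes, which marks the elected master with a "*" in the master column (the command below is just an example against the first IP):

# Show node roles and the elected master for the hosts in curator.yml
curl -s 'http://10.50.45.124:9200/_cat/nodes?v&h=ip,node.role,master,name'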
The Curator action file is as follows:
cat /opt/elasticsearch-curator/delete_indices.yml
---
actions:
  1:
    action: delete_indices
    description: Delete indices older than one week for "logstash-playground-all-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-playground-all-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%W'
      unit: weeks
      unit_count: 1
      exclude:
  2:
    action: delete_indices
    description: Delete indices older than two weeks for "logstash-demo-all-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-demo-all-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%W'
      unit: weeks
      unit_count: 2
      exclude:
  3:
    action: delete_indices
    description: Delete indices older than one month for "logstash-production-all-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-production-all-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: months
      unit_count: 1
      exclude:
  4:
    action: delete_indices
    description: Delete indices older than two months for "logstash-production-kafka-" prefixed indices.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-production-kafka-*-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: months
      unit_count: 2
      exclude:
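For completeness, Curator is invoked with the config and action file above, roughly like this (the exact cron schedule on the ingest nodes is not important here):

# Run the delete_indices actions using the client config shown above
curator --config /opt/elasticsearch-curator/curator.yml /opt/elasticsearch-curator/delete_indices.yml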
I could remove these dangling index directories from /var/lib/elasticsearch/nodes/0/ with a bash script, but since that is risky I would rather find a way to solve this issue with Curator.
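To be concrete, this is roughly the kind of script I mean, with the UUIDs taken from the warnings above and the default data path assumed (I have not run this; the rm is deliberately commented out):

#!/usr/bin/env bash
# Rough sketch only: remove the on-disk directories of the dangling indices
# reported by DanglingIndicesState, identified by their UUIDs.
# Assumes the default data path and node ordinal 0 on the affected node.
DATA_DIR=/var/lib/elasticsearch/nodes/0/indices

# UUIDs copied from the warnings in the log above
UUIDS=(
  LQF1f6_WQ3CMDO2qYXIa6Q
  iuSMlXguQZODV3jN5I2tfw
  uB5SavnPSbe_JIBkUoUrjQ
  Om6NSFQATseZvk28EhHrfw
  CrksY5NCTIGpbBFzFKksPw
  ZKN-rpgKQu6spuppLDeufg
)

for uuid in "${UUIDS[@]}"; do
  if [ -d "${DATA_DIR}/${uuid}" ]; then
    echo "Would remove ${DATA_DIR}/${uuid}"
    # rm -rf "${DATA_DIR}/${uuid}"   # deliberately left commented out
  fi
done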
I am looking forward to your suggestions and help.
Thank you very much