There are no more pending tasks, but I still have that lifecycle error on 540 indices.
When viewing the filebeat-7.4.2-2019.11.27 index in Kibana, the ILM section looks like this:
Index lifecycle management
  Index lifecycle error: illegal_argument_exception: index.lifecycle.rollover_alias [filebeat] does not point to index [filebeat-7.4.2-2019.11.27]
  Lifecycle policy: filebeat-lane-custom-existing
  Current action: rollover
  Failed step: check-rollover-ready
  Current phase: hot
  Current action time: 2019-11-27 14:44:15
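(For completeness, I believe the same details can be pulled in JSON via the ILM explain API, which might be easier to read than the Kibana panel:

GET filebeat-7.4.2-2019.11.27/_ilm/explain
)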
That specific index has this in its settings:
"index.lifecycle.name": "filebeat-lane-custom-existing",
It doesn't have any rollover_alias or other alias.
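In case I'm checking the wrong thing, this is the sort of request I mean, run against that one index in Dev Tools (the first shows its settings, the second any aliases attached to it):

GET filebeat-7.4.2-2019.11.27/_settings
GET filebeat-7.4.2-2019.11.27/_alias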
The only aliases that exist:
.kibana               .kibana_3               -  -  -  -
.kibana_task_manager  .kibana_task_manager_2  -  -  -  -
metricbeat            metricbeat-000001       -  -  -  true
heartbeat             heartbeat-000001        -  -  -  true
filebeat              filebeat-000001         -  -  -  true
journalbeat           journalbeat-000001      -  -  -  true
logstash              logstash-000001         -  -  -  true
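(That list is in the format of the cat aliases API; running it with ?v would add the column headers, alias, index, filter, routing.index, routing.search, is_write_index, so as I understand it the trailing true values mean the -000001 indices are the write indices for their aliases:

GET _cat/aliases?v
)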
This is the filebeat-lane-custom-existing policy:
PUT _ilm/policy/filebeat-lane-custom-existing
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "set_priority": {
            "priority": 100
          }
        }
      },
      "warm": {
        "actions": {
          "allocate": {
            "number_of_replicas": 1,
            "include": {},
            "exclude": {}
          },
          "forcemerge": {
            "max_num_segments": 1
          },
          "set_priority": {
            "priority": 50
          },
          "shrink": {
            "number_of_shards": 1
          }
        }
      },
      "cold": {
        "min_age": "90d",
        "actions": {
          "allocate": {
            "number_of_replicas": 0,
            "include": {},
            "exclude": {}
          },
          "freeze": {},
          "set_priority": {
            "priority": 0
          }
        }
      }
    }
  }
}
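One thing I notice while pasting this: the hot phase only has set_priority, there's no rollover action, even though the failed step is check-rollover-ready. For comparison, my understanding is that a hot phase that actually rolls over would look roughly like this (the 50gb / 30d thresholds are just placeholder values for illustration, not anything from my setup):

      "hot": {
        "min_age": "0ms",
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          },
          "set_priority": {
            "priority": 100
          }
        }
      }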
On the specific index I listed above, I see this under the error message: Current action time: 2019-11-27 14:44:15.
Is that saying that the index hasn't tried to apply an ILM policy since 11/27?
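(Related: if I'm reading the docs right, an index whose step has errored won't progress again until the step is retried, which I think would be something like this once the underlying problem is fixed:

POST filebeat-7.4.2-2019.11.27/_ilm/retry
)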
There is one other lingering issue that I don't think is actually related to the lifecycle error message, but I figured I'd better mention it, just in case.
When viewing my policy in Kibana, I see this message:
No node attributes configured in elasticsearch.yml
You can't control shard allocation without node attributes.
I'm not sure what to do about that message since I'm running a test cluster on a single desktop via Docker Compose. My nodes look something like the following in my docker-compose.yml file.
  esnode3:
    container_name: esnode3
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    environment:
      - cluster.name=reagan3-cluster
      - node.name=esnode3
      - node.master=true
      - discovery.seed_hosts=esnode1
      - cluster.initial_master_nodes=esnode1,esnode2,esnode3
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms6000m -Xmx6000m"
      - http.cors.enabled=true
      - http.cors.allow-origin="*"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - r3_cluster_esdata3:/usr/share/elasticsearch/data
      - r3_cluster_snapshots:/opt/elasticsearch/snapshots
    healthcheck:
      test: ["CMD", "curl", "-s", "-f", "http://localhost:9200/_cat/health"]
    ports:
      - 127.0.0.1:9203:9200
    networks:
      - elknet
    restart: always
As you can see, I'm only using environment variables to configure Elasticsearch. If I recall correctly, when I first saw that message, I thought it had something to do with not configuring node.master, but adding node.master to the environment section didn't help...
Anyway, I think I'm stumped on this for the day. So I'll end here.