[ELASTIC 7] Issue with Index Lifecycle Management

Hello everybody,

I followed this Elasticsearch blog post to set up a hot-warm-cold cluster.

Here is my policy, created using Kibana:

{
    "policy": {
        "phases": {
            "hot": {
                "min_age": "0ms",
                "actions": {
                    "rollover": {
                        "max_age": "30d",
                        "max_size": "1gb"
                    },
                    "set_priority": {
                        "priority": 50
                    }
                }
            },
            "warm": {
                "min_age": "7d",
                "actions": {
                    "allocate": {
                        "include": {},
                        "exclude": {},
                        "require": {
                            "box_type": "warm"
                        }
                    },
                    "forcemerge": {
                        "max_num_segments": 1
                    },
                    "set_priority": {
                        "priority": 25
                    }
                }
            },
            "cold": {
                "min_age": "30d",
                "actions": {
                    "allocate": {
                        "include": {},
                        "exclude": {},
                        "require": {
                            "box_type": "cold"
                        }
                    },
                    "freeze": {},
                    "set_priority": {
                        "priority": 10
                    }
                }
            },
            "delete": {
                "min_age": "60d",
                "actions": {
                    "delete": {}
                }
            }
        }
    }
}
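
For reference, the same policy can also be created or checked from the Kibana Dev Tools console instead of the UI; a minimal sketch, assuming the policy name hot-warm-cold-delete-60days referenced by the template below:

PUT _ilm/policy/hot-warm-cold-delete-60days
{
    "policy": {
        "phases": { ... }   # same "phases" block as above
    }
}

# Check what Elasticsearch actually stored
GET _ilm/policy/hot-warm-cold-delete-60days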

Here is the template I created:

{
    "order": 10,
    "index_patterns": ["dev-*"],
    "settings": {
        "index.routing.allocation.require.data": "hot",
        "index.lifecycle.name": "hot-warm-cold-delete-60days"
    }
}
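
One thing I notice while writing this up: the hot phase of the policy uses rollover, and per the ILM docs a rollover-managed index also needs index.lifecycle.rollover_alias in its template; besides, the template requires the node attribute data while the policy's warm/cold phases require box_type, so the two attribute names don't match. A sketch of what I understand the template should look like, assuming the nodes are tagged with node.attr.box_type and a write alias named dev (both assumptions on my part):

PUT _template/dev-template
{
    "order": 10,
    "index_patterns": ["dev-*"],
    "settings": {
        "index.routing.allocation.require.box_type": "hot",
        "index.lifecycle.name": "hot-warm-cold-delete-60days",
        "index.lifecycle.rollover_alias": "dev"
    }
}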

Here is my Logstash output:

output {
    stdout { codec => rubydebug }
    elasticsearch {
        ilm_enabled => true
        hosts       => ["192.168.250.120:9200"]
        index       => "dev-%{+YYYY.MM.dd}"
    }
}
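
For what it's worth, from the logstash-output-elasticsearch docs, when ilm_enabled is set the plugin writes to a rollover alias that it manages itself (via ilm_rollover_alias, ilm_pattern, and ilm_policy) rather than to a date-based index name; a minimal sketch, where the alias name dev is my assumption:

output {
    elasticsearch {
        hosts              => ["192.168.250.120:9200"]
        ilm_enabled        => true
        ilm_rollover_alias => "dev"                        # must match the template's rollover_alias
        ilm_pattern        => "{now/d}-000001"             # first backing index, e.g. dev-2019.05.07-000001
        ilm_policy         => "hot-warm-cold-delete-60days"
        # no `index` option here: the plugin sends documents to the rollover alias
    }
}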

After restarting Logstash, and after the first log arrives from a syslog device, Logstash stops with this error:

[WARN ] 2019-05-07 17:37:35.740 [[main]>worker0] elasticsearch - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://192.168.250.120:9200/][Manticore::SocketTimeout] Read timed out {:url=>http://192.168.250.120:9200/, :error_message=>"Elasticsearch Unreachable: [http://192.168.250.120:9200/][Manticore::SocketTimeout] Read timed out", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[ERROR] 2019-05-07 17:37:35.769 [[main]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://192.168.250.120:9200/][Manticore::SocketTimeout] Read timed out", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[ERROR] 2019-05-07 17:37:37.827 [[main]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[WARN ] 2019-05-07 17:37:38.445 [Ruby-0-Thread-4: :1] elasticsearch - Restored connection to ES instance {:url=>"http://192.168.250.120:9200/"}
[INFO ] 2019-05-07 17:38:41.917 [[main]>worker0] elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[dev-2019.05.07][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[dev-2019.05.07][0]] containing [index {[dev-2019.05.07][_doc][gC3xkmoBt6xed1DUgGHU], source[{\"priority\":85,\"timestamp\":\"May  7 17:36:33\",\"@version\":\"1\",\"logsource\":\"elastic-kibana\",\"@timestamp\":\"2019-05-07T15:36:33.000Z\",\"tags\":[\"_grokparsefailure\"],\"facility_label\":\"security/authorization\",\"syslog_severity_code\":5,\"pid\":\"2048\",\"message\":\"pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.250.254  user=support\",\"severity_label\":\"Notice\",\"syslog_facility_code\":1,\"program\":\"sshd\",\"host\":\"192.168.250.102\",\"facility\":10,\"syslog_facility\":\"user-level\",\"severity\":5,\"type\":\"syslog\",\"syslog_severity\":\"notice\"}]}]]"})

It looks like Elasticsearch is unreachable, but that is not the case: if I change the Logstash configuration to create an index that is not in the scope of the template, everything works fine.
So the issue seems related to the template or the policy, but despite reading a lot of documentation, I can't figure it out.

Looking at Index Management in Kibana, the "dev" index is RED.
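
From what I understand, a RED index means the primary shard could not be allocated anywhere, which matches the "primary shard is not active" error above. A way to ask the cluster why, using the index name from the log (sketch):

GET _cluster/allocation/explain
{
    "index": "dev-2019.05.07",
    "shard": 0,
    "primary": true
}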

Do you have any idea what is wrong with my configuration?

Thanks for your help!

OK. By following this tutorial => https://elasticsearch.cn/article/6358 (you'll have to translate it), it now works fine.
