I was doing some testing with my ELK setup before it goes into production.
I have two ES clusters, and Logstash routes each event to one of them based on its Kafka topic.
I intentionally took one of the ES clusters down, and Logstash then stopped pushing events for the other, working Kafka topic to the healthy ES cluster as well.
What do I need to do here to get full uptime and better degraded-mode behavior?
# Ansible managed
output {
  if [type_name] == "topic1" {
    elasticsearch {
      hosts => ["es1"] ### <<<< THIS IS THE ES CLUSTER THAT I MADE DOWN
      index => "%{[type_name]}-%{+YYYY.MM.dd}"
      document_type => "%{[type_name]}"
    }
  }
  if [type_name] == "topic2" {
    elasticsearch {
      hosts => ["es2"]
      index => "%{[type_name]}-%{+YYYY.MM.dd}"
      document_type => "%{[type_name]}"
    }
  }
  if [type_name] == "topic3" {
    elasticsearch {
      hosts => ["es3"]
      index => "%{[type_name]}-%{+YYYY.MM.dd}"
      document_type => "%{[type_name]}"
    }
  }
  if [type_name] == "topic4" {
    elasticsearch {
      hosts => ["es4"]
      index => "%{[type_name]}-%{+YYYY.MM.dd}"
      document_type => "%{[type_name]}"
    }
  }
  if [type_name] == "nginx" {
    elasticsearch {
      hosts => ["search-nginx-msvbrtx3cnxthe2ttezqmrrm3m.ap-southeast-1.es.amazonaws.com:80"]
      index => "%{[type_name]}-%{+YYYY.MM.dd}"
      document_type => "%{[type_name]}"
    }
  }
}
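
For what it's worth, the behavior above is expected with a single pipeline: when one elasticsearch output blocks, back-pressure stalls the whole pipeline, including events destined for the healthy clusters. One common way to isolate the failure domains is Logstash's multiple-pipelines feature (pipelines.yml, available since Logstash 6.0): run one pipeline per topic/cluster pair, each with its own Kafka consumer, so a down ES cluster only stalls its own pipeline. A minimal sketch, assuming the pipeline ids and config file paths below (they are illustrative, not from the original setup):

```yaml
# pipelines.yml - one pipeline per Kafka topic / ES cluster pair.
# Each *.conf holds a kafka input for that topic and its elasticsearch
# output, so back-pressure from a down cluster stays in that pipeline.
- pipeline.id: topic1
  path.config: "/etc/logstash/conf.d/topic1.conf"   # topic1 -> es1
- pipeline.id: topic2
  path.config: "/etc/logstash/conf.d/topic2.conf"   # topic2 -> es2
- pipeline.id: nginx
  path.config: "/etc/logstash/conf.d/nginx.conf"    # nginx -> AWS ES
```

With this split, Kafka itself acts as the buffer for the failed cluster's topic: its pipeline's consumer simply stops advancing its offsets until the cluster comes back, while the other pipelines keep indexing.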