Hi @Rios, I followed your suggestion and found that Logstash takes a long time to publish events to Elasticsearch. I split the output config by index period, so each event is routed to a monthly, weekly, or daily index.
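I pulled the stats below from the Logstash node stats API (this assumes the default API port of 9600):

curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'

Here is the relevant part of the outputs section for that pipeline: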
"outputs" : [ {
"id" : "ocp-daily",
"documents" : {
"dlq_routed" : 1216,
"successes" : 674948
},
"bulk_requests" : {
"with_errors" : 1163,
"successes" : 75063,
"responses" : {
"200" : 76226
}
},
"events" : {
"duration_in_millis" : 3444186,
"in" : 676164,
"out" : 676164
},
"name" : "elasticsearch",
"flow" : {
"worker_millis_per_event" : {
"current" : 6.683,
"last_1_minute" : 5.304,
"last_5_minutes" : 5.224,
"last_15_minutes" : 5.587,
"last_1_hour" : 5.334,
"lifetime" : 5.094
},
"worker_utilization" : {
"current" : 7.488,
"last_1_minute" : 7.563,
"last_5_minutes" : 6.668,
"last_15_minutes" : 6.895,
"last_1_hour" : 7.046,
"lifetime" : 6.96
}
}
}, {
"id" : "ocp-monthly",
"documents" : {
"dlq_routed" : 357,
"successes" : 223974
},
"bulk_requests" : {
"with_errors" : 268,
"successes" : 62649,
"responses" : {
"200" : 62917
}
},
"events" : {
"duration_in_millis" : 2218478,
"in" : 224331,
"out" : 224331
},
"name" : "elasticsearch",
"flow" : {
"worker_millis_per_event" : {
"current" : 10.32,
"last_1_minute" : 10.4,
"last_5_minutes" : 9.649,
"last_15_minutes" : 9.987,
"last_1_hour" : 10.21,
"lifetime" : 9.889
},
"worker_utilization" : {
"current" : 4.293,
"last_1_minute" : 4.5,
"last_5_minutes" : 4.232,
"last_15_minutes" : 4.446,
"last_1_hour" : 4.509,
"lifetime" : 4.483
}
}
} ]
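Working out the lifetime numbers above: the monthly output spent 2218478 ms on 224331 events, which is about 9.9 ms per event, while the daily output spent 3444186 ms on 676164 events, about 5.1 ms per event.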
I don't understand why Logstash takes so long to publish events; as the math shows, it needs roughly 10 ms per event on the monthly output. My configuration is simple. What else can I tune here? This is the output section of my pipeline:
if [merge] == "month" {
    elasticsearch {
        id => "ocp-monthly"
        hosts => ["https://host1:9215", "https://host2:9215", and so on]
        ssl_certificate_authorities => '/logstash/logstash-8.18.2/config/certs/ca.crt'
        ssl_verification_mode => 'none'
        index => "ocp-sby-%{[kubernetes][namespace]}-%{+YYYY.MM}"
        document_id => "%{[custom_id]}"
        user => '${ES_USER}'
        password => '${ES_PWD}'
    }
}
else if [merge] == "weekly" {
    elasticsearch {
        hosts => ["https://host1:9215", "https://host2:9215", and so on]
        ssl_certificate_authorities => '/logstash/logstash-8.18.2/config/certs/ca.crt'
        ssl_verification_mode => 'none'
        index => "ocp-sby-%{[kubernetes][namespace]}-%{+xxxx.ww}"
        document_id => "%{[custom_id]}"
        user => '${ES_USER}'
        password => '${ES_PWD}'
    }
}
else {
    elasticsearch {
        id => "ocp-daily"
        hosts => ["https://host1:9215", "https://host2:9215", and so on]
        ssl_certificate_authorities => '/logstash/logstash-8.18.2/config/certs/ca.crt'
        ssl_verification_mode => 'none'
        index => "ocp-sby-%{[kubernetes][namespace]}-%{+YYYY.MM.dd}"
        document_id => "%{[custom_id]}"
        user => '${ES_USER}'
        password => '${ES_PWD}'
    }
}
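For reference, these are the pipeline-level settings that, as I understand it, control how events are batched into bulk requests. I have not changed them from the defaults; the values below are just the documented defaults, and the pipeline id is made up for illustration:

# pipelines.yml -- illustrative sketch, not my actual file
- pipeline.id: ocp            # hypothetical pipeline id
  pipeline.workers: 8         # defaults to the number of CPU cores
  pipeline.batch.size: 125    # max events a worker collects before flushing
  pipeline.batch.delay: 50    # ms to wait for a full batch before flushing

If larger batches mean fewer, bigger bulk requests, then pipeline.batch.size is probably the first knob I should experiment with.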
I took a quick look at the monitoring cluster, and it turns out that almost all pipelines are spending a long time publishing events to Elasticsearch.
My Elasticsearch cluster consists of 7 data nodes with 16 GB of heap each.
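In case the bottleneck is on the Elasticsearch side, I also plan to check whether the write thread pools on the data nodes are queueing or rejecting, along these lines (with the appropriate credentials; the host and port are taken from my output config):

curl -sk -u "user:pass" 'https://host1:9215/_nodes/stats/thread_pool?filter_path=nodes.*.thread_pool.write'

but I have not done that yet.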
Thanks