Huge delay in logs to ES from Kafka/Logstash

New to ELK.
I have 2 Kafka brokers, 2 Logstash instances, 3 ES master nodes and 2 data nodes.

My Logstash config is:
input {
  kafka {
    bootstrap_servers => "kafka01:9092, kafka02:9092"
    topics => ["filebeat"]
    codec => json
    heartbeat_interval_ms => "1000"
    poll_timeout_ms => "10000"
    session_timeout_ms => "120000"
    request_timeout_ms => "130000"
    consumer_threads => 40
  }
}

filter {
  mutate {
    add_field => { "[@metadata][index]" => "%{[kafka][topic]}" }
  }
}

output {
  elasticsearch {
    hosts => ["esmas01:9200", "esmas02:9200", "esmas03:9200"]
    index => "logstash-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}

I can see that logs are reaching Kafka, so it looks like the delay is either in Logstash consumption or in the output to ES.
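
In case it is useful, the consumer group lag on the brokers can be checked with the standard Kafka tooling; a steadily growing LAG column would mean the pipeline downstream of Kafka is not keeping up. A minimal check, assuming the kafka input is still using its default group id of logstash:

kafka-consumer-groups.sh --bootstrap-server kafka01:9092 --describe --group logstash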

Can someone help?

TIA

The first thing to do is remove the stdout output.
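
With the debug output gone, the output section would just contain the elasticsearch block (same hosts and index pattern as in your config above):

output {
  elasticsearch {
    hosts => ["esmas01:9200", "esmas02:9200", "esmas03:9200"]
    index => "logstash-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}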

Elasticsearch can often be the bottleneck, so it makes sense to check it first. It would help if you could tell us the specification of the Elasticsearch nodes (number of CPU cores, RAM, and size and type of storage).

How much data do you have in the cluster? How much are you writing per day? How many indices and shards are you indexing into?
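
The cat APIs will show most of this; output along the lines of the following example calls (run against any node in the cluster) would be useful:

curl "localhost:9200/_cat/nodes?v"
curl "localhost:9200/_cat/allocation?v"
curl "localhost:9200/_cat/indices?v&s=index"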

Thanks Christian for helping.
Removed stdout and found some improvement.
The ES master nodes are 3 cores / 16 GB and the ES data nodes are 3 cores / 12 GB.
The storage attached to the data nodes is SAN-based.

Data is around 1.1 TB per data node.

curl -X GET "localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason&pretty"

.monitoring-kibana-7-2020.07.06 0 p STARTED
.monitoring-kibana-7-2020.07.06 0 r STARTED
.kibana_1 0 p STARTED
.kibana_1 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.26 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.26 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.28 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.28 0 r STARTED
.apm-agent-configuration 0 p STARTED
.apm-agent-configuration 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.15 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.15 0 r STARTED
.monitoring-es-7-2020.07.05 0 p STARTED
.monitoring-es-7-2020.07.05 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.24 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.24 0 r STARTED
logstash-%{[kafka][topic]}-2020.07.01 0 p STARTED
logstash-%{[kafka][topic]}-2020.07.01 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.19 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.19 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.29 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.29 0 r STARTED
.async-search 0 p STARTED
.async-search 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.21 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.21 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.18 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.18 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.30 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.30 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.25 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.25 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.22 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.22 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.23 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.23 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.17 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.17 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.20 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.20 0 r STARTED
logstash-%{[kafka][topic]}-2020.07.04 0 p STARTED
logstash-%{[kafka][topic]}-2020.07.04 0 r STARTED
.kibana_task_manager_1 0 p STARTED
.kibana_task_manager_1 0 r STARTED
logstash-%{[kafka][topic]}-2020.07.03 0 p STARTED
logstash-%{[kafka][topic]}-2020.07.03 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.14 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.14 0 r STARTED
logstash-%{[kafka][topic]}-2020.07.02 0 p STARTED
logstash-%{[kafka][topic]}-2020.07.02 0 r STARTED
.apm-custom-link 0 p STARTED
.apm-custom-link 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.16 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.16 0 r STARTED
kibana_sample_data_ecommerce 0 p STARTED
kibana_sample_data_ecommerce 0 r STARTED
.monitoring-es-7-2020.07.06 0 p STARTED
.monitoring-es-7-2020.07.06 0 r STARTED
logstash-%{[kafka][topic]}-2020.06.27 0 p STARTED
logstash-%{[kafka][topic]}-2020.06.27 0 r STARTED
.monitoring-kibana-7-2020.07.05 0 p STARTED
.monitoring-kibana-7-2020.07.05 0 r STARTED

I hope this information is sufficient to help with this issue.

TIA

I have some additional questions:

  • What is the output of the cluster stats API? (Example calls for this and for iostat are shown after this list.)

  • What does CPU usage on the data nodes look like while you are indexing?

  • What does iostat -x give on the data nodes?

  • How did you arrive at the settings you are using for the Logstash kafka plugin?

  • Is there anything in the logs indicating slow or frequent GC? Are there any other warning or error messages, especially on the data nodes?
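
For reference, the cluster stats and disk I/O numbers can be gathered with something like the following (the iostat arguments are just one option: extended statistics every 5 seconds, 3 reports, ideally captured while indexing is running):

curl "localhost:9200/_cluster/stats?human&pretty"
iostat -x 5 3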

Hi @Christian_Dahlqvist

The issue seems to have been resolved by removing the stdout output from the Logstash config.

Thanks for the support.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.