ES losing applog data

My applog pipeline is applog -> Kafka -> Logstash -> ES,
but every day it loses around 10 seconds of data, between 23:59:50 and 00:00:10.
Who can help me?
ES 7.4
Logstash 7.4

Does this period coincide with new daily indices being created? If that is the case, the fact that no data is indexed during this period does not mean data is lost, as Kafka would buffer it. If creating new daily indices takes that long, it could however indicate that you have a performance problem in the cluster, or that you simply have too many indices and shards, making cluster state updates slow. How many indices are you creating per day? How many shards do you have in the cluster? How large is the cluster?
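
If you are not sure, the _cat APIs will answer the index and shard questions; a minimal sketch, assuming Elasticsearch answers on localhost:9200 (adjust the host, and add credentials if security is enabled):

# Cluster overview: number of nodes and total shard count
curl -s 'http://localhost:9200/_cat/health?v'

# One line per index, with primary/replica shard counts, doc count and size
curl -s 'http://localhost:9200/_cat/indices?v'

# One line per shard, to see how many shards each node carries
curl -s 'http://localhost:9200/_cat/shards?v'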

Thanks.
This is a new ELK system; there are fewer than 30 indices,
and the daily indices are created during that time.
My pipeline is like this:

input {
  kafka {
    bootstrap_servers => "......"
    consumer_threads => "......"
    session_timeout_ms => "......"
  }
}
filter {
  # Parse the JSON payload carried in the Kafka message
  json {
    source => "message"
  }
  # Build a local-time date string (e.g. 2019.10.01) to use in the daily index name
  ruby {
    code => "event.set('my_index_day', event.timestamp.time.localtime.strftime('%Y.%m.%d'))"
  }
}
output {
  # Write to a daily index named from the local date computed above
  elasticsearch {
    action => "index"
    hosts => ["......"]
    index => "applog-%{my_index_day}"
  }
}

What type of storage are you using for Elasticsearch? What is the specification of the Elasticsearch nodes in terms of CPU, RAM, heap and storage?
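
Most of those numbers can be read straight from the cluster; a minimal sketch, again assuming the cluster answers on localhost:9200:

# Heap size and usage, RAM, CPU and load per node
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent,ram.max,cpu,load_1m'

# Disk totals and shard count per node
curl -s 'http://localhost:9200/_cat/allocation?v'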

Just storing Apache logs.
My ES nodes:
CPU: 2 sockets, 16 cores
RAM: 128 GB
Heap size: 31 GB
Storage: 40 TB

What type of storage are you using? Are they locally attached disks? Is there anything in the Elasticsearch logs around that time? How many nodes do you have?
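
For the log check, look at the Elasticsearch server log on each node around midnight; a minimal sketch, assuming a default package install where the log path and cluster name are placeholders to adjust:

# Look for index creation, slow cluster state updates or GC pauses around midnight
grep -E '23:59:5[0-9]|00:00:0[0-9]|00:00:1[0-9]' /var/log/elasticsearch/elasticsearch.log

# Count the nodes in the cluster
curl -s 'http://localhost:9200/_cat/nodes?h=name' | wc -l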
