Data delay writing to ES

We are currently using Logstash to collect NetFlow from our routers. We expected near real-time data, but we can only see yesterday's data, i.e. the data is delayed by about a day. We checked our persistent queue and it only holds about 4 GB. Is there any way to solve this delay?

If the PQ is growing, it would seem like either Logstash or your downstream systems are not able to keep up with the flow. What does your configuration look like? Where are you sending data? What throughput are you seeing?
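
For reference, the persistent queue is controlled from logstash.yml. The snippet below is only an illustrative sketch with default-like values, not your actual configuration, so it is worth checking what queue.max_bytes is set to on your side:

# logstash.yml -- illustrative persistent queue settings, not taken from your setup
queue.type: persisted                  # default is "memory"; "persisted" enables the PQ
queue.max_bytes: 4gb                   # disk ceiling for the queue; the default is 1024mb
path.queue: /var/lib/logstash/queue    # hypothetical path; by default the queue lives under path.data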

My Logstash configuration looks like this:

input {
  udp {
    type => "netflow"
    port => <%= setting("var.input.udp.port", 2055) %>
    codec => netflow {
      versions => [5, 9]
    }
    add_field => { "datacenter" => "Eastern" }
  }
}

output {
  <%= elasticsearch_output_config() %>
}
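
(As far as I understand it, the elasticsearch_output_config() helper just expands to a plain elasticsearch output, roughly like the sketch below; the hosts and index here are placeholders, not my real values.)

output {
  elasticsearch {
    # placeholder values -- the module fills these in from its var.elasticsearch.* settings
    hosts => ["localhost:9200"]
    index => "netflow-%{+YYYY.MM.dd}"
  }
}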

I am using the Netflow module in Logstash directly, and I send the NetFlow from our network devices to it.
The devices are sending traffic in real time, but right now (16:17) I can only see data from before 15:00.

What indexing throughput are you seeing in Elasticsearch? What is the specification of your Elasticsearch cluster? Do you have monitoring installed?

The indices in my Elasticsearch are named like "netflow-idc-2018.02.03". The cluster has 9 nodes (including 4 data nodes, 2 master nodes and 2 coordinating nodes), and we use Kibana monitoring to watch the health of the cluster, but everything looks normal in the monitoring.

What is the indexing throughput? What is the specification of the hosts the data nodes are running on?

Also, having 2 master-eligible nodes is bad - you should always look to have 3.
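
With only two master-eligible nodes there is no safe quorum: a minimum_master_nodes of 2 stops master election as soon as either node is lost, and 1 leaves you open to split-brain. A minimal sketch of the usual layout, assuming three dedicated master-eligible nodes on Elasticsearch 6.x:

# elasticsearch.yml on each of the three dedicated master-eligible nodes (illustrative)
node.master: true
node.data: false
node.ingest: false
# quorum of master-eligible nodes: floor(3 / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2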
