Metricbeat to logstash duplicate events

Hi,

I hope someone can help me understand the following behavior I'm seeing: Logstash is outputting duplicate Metricbeat events to a local file.

Looking at the output today, I can see the same event from the same host, with the same timestamp and the exact same data, appearing twice. I believe I came across multiple instances, but here is one example:

{"type":"metricsets","tags":["beats_input_raw_event"],"@timestamp":"2017-06-15T13:25:28.570Z","system":{"core":{"system":{"pct":0.0197},"softirq":{"pct":3.0E-4},"steal":{"pct":0.0012},"idle":{"pct":0.821},"irq":{"pct":0.0},"iowait":{"pct":2.0E-4},"id":0,"user":{"pct":0.1577},"nice":{"pct":0.0}}},"beat":{"hostname":"my_host","name":"my_host","version":"5.4.1"},"@version":"1","host":"my_host","metricset":{"rtt":1168,"module":"system","name":"core"},"source-ip":"my_ip"}

VS.

{"type":"metricsets","tags":["beats_input_raw_event"],"@timestamp":"2017-06-15T13:25:28.570Z","system":{"core":{"system":{"pct":0.0197},"softirq":{"pct":3.0E-4},"steal":{"pct":0.0012},"idle":{"pct":0.821},"irq":{"pct":0.0},"iowait":{"pct":2.0E-4},"id":0,"user":{"pct":0.1577},"nice":{"pct":0.0}}},"beat":{"hostname":"my_host","name":"my_host","version":"5.4.1"},"@version":"1","host":"my_host","metricset":{"rtt":1168,"module":"system","name":"core"},"source-ip":"my_ip"}

What I am doing is outputting to a local file every minute on Logstash. I have a cleanup cron job and plenty of disk space, so no worries on that part. We then collect 15 minutes' worth of metric data coming from many VMs in our network.
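For context, here is a minimal sketch of the kind of pipeline described above. The ports match the ones mentioned later in the post; the output path and filename pattern are placeholders, not my actual config:

```
input {
  beats {
    port => 5044
  }
  beats {
    port => 5043
  }
}

output {
  file {
    # One file per minute, matching the "output to local file every min" setup.
    # The path is a placeholder; %{+...} is Logstash's date-format sprintf syntax.
    path => "/var/log/metricbeat/metrics-%{+yyyy-MM-dd-HH-mm}.log"
    codec => "json_lines"
  }
}
```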

The Metricbeat execution period is at the default 10s, and there is no filtering or anything else happening on Logstash, for now.
Data is coming from all Beats to the same Logstash instance, and in the Beat output I am load balancing between two Logstash ports, 5044 and 5043, on the same Logstash VM. So I am not sure if any of the above settings are causing this error, or if it is Beat-related?
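The Beat output side of that setup would look roughly like this in metricbeat.yml (a sketch; the hostname is a placeholder):

```yaml
output.logstash:
  # Two endpoints on the same Logstash VM, as described above.
  hosts: ["my_ls_vm:5044", "my_ls_vm:5043"]
  loadbalance: true
```

Worth noting: Beats delivery is at-least-once, so with load balancing plus retries, a batch that times out on one endpoint can be re-sent (possibly to the other endpoint) even if Logstash actually processed it, which is one known way duplicates can appear.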

All the nodes in my cluster and all Beats are on version 5.4.

Thanks!

I believe this may have been due to some incorrect Logstash output configs.
It looks like we haven't encountered this anymore; it may also have been due to the number of retries we had configured.
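For anyone else hitting this: since retried batches can legitimately produce duplicates, one common mitigation is to tag each event with a content hash using the Logstash fingerprint filter, so duplicates can be identified or collapsed downstream. A sketch (the choice of source fields here is an example, not prescriptive):

```
filter {
  fingerprint {
    # Hash the fields that identify a unique metric sample.
    source => ["@timestamp", "beat.hostname", "metricset.module", "metricset.name"]
    concatenate_sources => true
    method => "SHA1"
    target => "[@metadata][fingerprint]"
  }
}
```

When the output is Elasticsearch rather than a file, the fingerprint can be used as the `document_id` so a retried event overwrites the original instead of creating a duplicate.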
