Hello
Is it possible?
Scenario:
We have a file that is being picked up by Filebeat. The prospector splits the content of the file into multiple events. Now, how do we capture the total number of events before they are passed to the output?
May I ask why you need the number of messages before sending them to the outputs?
Filebeat logs metrics by default every 30s. They contain information on how many events were sent, how many were filtered, and so on.
{"pipeline": {
"clients": 0,
"events": {
"active": 0,
"filtered": 1,
"published": 810,
"retry": 50,
"total": 811
},
"queue": {
"acked": 810
}
}
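If you want those counters programmatically rather than by reading the log by eye, a small script can pull them out of the Filebeat log file. This is only a minimal sketch: it assumes the metrics JSON sits at the end of each metrics log line and has the layout shown in the sample above; the path filebeat.log and the helper name extract_pipeline_events are illustrative, not part of Filebeat itself.

import json

def extract_pipeline_events(log_path):
    """Yield the pipeline.events counters from Filebeat metrics log lines.

    Assumes each metrics line ends with a JSON object containing a
    "pipeline" key, as in the sample above. Lines that do not parse
    as JSON are skipped.
    """
    with open(log_path) as fh:
        for line in fh:
            start = line.find("{")  # metrics JSON starts at the first brace
            if start == -1:
                continue
            try:
                metrics = json.loads(line[start:])
            except json.JSONDecodeError:
                continue
            pipeline = metrics.get("pipeline")
            if pipeline and "events" in pipeline:
                yield pipeline["events"]

# Example: print how many events were read and published in each metrics window.
for events in extract_pipeline_events("filebeat.log"):  # hypothetical path
    print(f"total={events['total']} published={events['published']} "
          f"filtered={events['filtered']}")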
Hi,
We are trying to figure out the number of logs picked up by the prospectors and then compare it to the data in Elasticsearch.
Does that mean that, in the sample above, the total number of events picked up by Filebeat is 811 before they are indexed into Elasticsearch?
Yes, Filebeat read 811 events, as pipeline.events.total suggests. But one message was filtered out (pipeline.events.filtered) and not published. So in the end 810 events were forwarded to Elasticsearch, as seen in pipeline.events.published.
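To make the arithmetic explicit, here is how the counters from the sample fit together. This is a sketch under the assumption (consistent with the sample above) that every event read is either still active, filtered out, or published, and that retries re-send already counted events rather than adding to the total.

# Counters taken from the metrics sample above.
events = {"active": 0, "filtered": 1, "published": 810, "retry": 50, "total": 811}

# total = active + filtered + published (retries do not add new events).
assert events["total"] == events["active"] + events["filtered"] + events["published"]

print(f"{events['published']} of {events['total']} events reached the output, "
      f"{events['filtered']} filtered out")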
Thanks for your reply @kvch! Much appreciated!