Hi Community,
I have observed that some data is getting lost while ingesting CloudWatch logs into Logstash via Filebeat. Any idea how to debug/fix this issue?
Can you provide more context on how you are getting those logs and how you identified that you are losing data?
Are you using the Filebeat aws-cloudwatch input to get the logs and send them to Logstash?
What do you have in the logs for both Filebeat and Logstash?
How many log groups do you have in CloudWatch, and what is your log volume? The Filebeat aws-cloudwatch input does not perform well on large CloudWatch log groups.
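If you are using the aws-cloudwatch input, sharing your configuration would also help. For reference, a minimal setup usually looks something like the sketch below; the log group name, region, and Logstash host are placeholders, not your actual values:

```yaml
filebeat.inputs:
  - type: aws-cloudwatch
    # Placeholder log group name; replace with your own
    log_group_name: /my-app/navigation
    region_name: us-east-1
    # How often Filebeat polls CloudWatch for new events (default is 1m)
    scan_frequency: 2m
    # Where to start reading when the input runs for the first time
    start_position: beginning

output.logstash:
  # Placeholder Logstash host
  hosts: ["logstash.example.com:5044"]
```

Options like scan_frequency and the number of log groups being polled are usually the first things to look at when events seem to be missing.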
Hi @leandrojmp
Sorry for the delayed reply, and thanks for your response. At that time I was already convinced that some data was being lost in Filebeat when reading from CloudWatch.
To answer your question, I have defined a CloudWatch log group and log stream, polled with a frequency of 2 minutes; these are navigation logs from an application.
I compared the CloudWatch logs with the logs stored in Elasticsearch using Kibana. Moreover, I have set my Filebeat logging level to debug, and I can see that some of the CloudWatch logs do not appear in the Filebeat log at all.
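(For reference, debug logging in filebeat.yml can be enabled with something like the following; the selectors line is optional, and the selector names here are only illustrative and may differ by Filebeat version.)

```yaml
# filebeat.yml logging section used while debugging
logging.level: debug
# Optional: restrict debug output to the relevant components
logging.selectors: ["aws-cloudwatch", "publisher"]
```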