Garbage turns out to be coming from our developers...
Coincidentally, this happened while I was trying to roll out the new filebeat.
Looking good so far. I will know better tomorrow.
Keep me posted.
@viveklak @Tim_Burt A big thank you to you two for continuing to investigate this issue and pushing forward to get it fixed. I really appreciate all the work you put into this one.
Just got another one... but perhaps with more useful info this time.
[root@ip-172-16-59-236 rails]# filebeat -version
filebeat version 6.0.0-alpha1 (amd64), libbeat 6.0.0-alpha1
Looks like 6 million identical records in 2 ms.
Here is the caveat: I have a developer who is dumping a large JSON object into the Rails log files. A single such line is approximately 118 K in length, and these lines are prevalent in the Rails logs.
The line that is duplicated in the Kibana screenshot is surrounded by these long lines.
Could this possibly be a buffer overrun?
I can share with you the raw logs but not in this forum. You can contact me at: removed
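If the very long lines turn out to be a factor, one workaround worth trying is to cap the bytes Filebeat will take from a single line. This is only a sketch, assuming a standard log prospector config for the 6.0-alpha series; the path is illustrative, not from the actual setup:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/rails/*.log   # illustrative path, adjust to your Rails logs
    # Cap each harvested line at 128 KiB so the ~118 K JSON dumps still
    # fit, but anything larger is truncated instead of buffered whole.
    # (Filebeat's default max_bytes is 10 MiB.)
    max_bytes: 131072
```

Truncation changes the event content, of course, so this is more of a diagnostic: if the duplication stops with a lower cap, that points at the long lines.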
@Tim_Burt Just sent you an email, interested to see the log file. Can we open a new thread for this topic? I get the impression it is also an issue, but not directly related to the previous one that we got fixed.
Thanks for the log file. I tried to reproduce the issue locally but couldn't so far. I tested on my side with File output instead of Kafka to take Kafka out of the equation.
Do you see this happen every time such a JSON line is in the log, or only from time to time? Can you reproduce it locally on your side with Kafka?
One idea: could it be that Kafka only accepts messages up to a certain size?
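The message-size idea can be checked against the producer settings. A sketch, assuming Filebeat's Kafka output (host and topic names are illustrative): if an event exceeds the producer limit, the output drops it, and the broker's own limit must be at least as large, or the broker rejects the batch.

```yaml
# filebeat.yml – Kafka output section (illustrative values)
output.kafka:
  hosts: ["kafka1:9092"]
  topic: "rails-logs"
  # Events larger than this are dropped by the Kafka output.
  # The default is 1000000 bytes, well above a ~118 K line, but the
  # broker-side message.max.bytes must allow at least the same size.
  max_message_bytes: 1000000
```

Comparing `max_message_bytes` here against the broker's `message.max.bytes` would rule the size limit in or out.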
Thanks for the effort... Yes, I am aware that my issues may be with Kafka; it is difficult to discern which element is misbehaving. I do have a test environment for file input, but I have not had time to use it yet. We just did a migration and we are all still scrambling with the mop-up.
Your fix seems to have done the trick. I am no longer getting the millions of duplicates. I have seen 2 or 4 duplicates of a line, but that may have a different origin entirely. I will need to do much more research.
Thank you for all of your help and hard work. We are looking forward to 5.0.0.
I think we can close this topic for now.
Please ping us if you hit the issue again (also with Kafka).
Side note: Beta1 was just released: https://www.elastic.co/blog/elastic-stack-release-5-0-0-beta1
Ping.... I sent you an email with logs and configs....
Nothing I cannot handle, but it may help you solve a corner case.
Thanks again for your help!
This topic was automatically closed after 21 days. New replies are no longer allowed.
I will look into it as soon as I get the time and will get back to you.
For everyone going through this thread: we found the potential problem here: Filebeat 5.0beta1 with kafka output: multiplication of log lines