This is my complete configmap manifest. I have added multiline settings for catching exceptions. Do these need to be in a different prospector type? If so, how? The log file path will remain the same, so won't that be a problem?
And nothing is getting parsed as of now. The whole JSON is coming through as-is into Elasticsearch inside a field called "log".
Ok. Do you have a matching config in Logstash (or Elasticsearch) to catch those documents, interpret the contents of the log field as JSON, and break out the fields?
Why Logstash? From my understanding of the docs, I just need to deploy Filebeat to my Kubernetes cluster as a DaemonSet, and if the logs are JSON on separate lines, Filebeat will automatically parse them and send them to Elasticsearch with the respective fields.
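The built-in parsing being referred to here is presumably the `json.*` options on a `log` prospector. A minimal sketch of what that looks like in Filebeat 6.x follows; the path and the `log` message key are assumptions (Docker's json-file driver wraps each line in a `log` field), not taken from the actual manifest:

```yaml
filebeat.prospectors:
- type: log
  paths:
    - /var/lib/docker/containers/*/*.log   # assumed path; adjust to your cluster
  # Filebeat's built-in JSON decoding: each line is parsed as a JSON object
  json.keys_under_root: true   # lift the parsed keys to the top level of the event
  json.add_error_key: true     # add an error field when a line fails to decode
  json.message_key: log        # the field holding the actual log line
```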
Oh, I see. You're trying to forward the container logs. Docker can be somewhat painful there, especially regarding multiline handling and when to do the JSON parsing. For this use case Filebeat introduced the docker prospector type.
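In Filebeat 6.x that looks roughly like the sketch below: the docker type unwraps Docker's json-file format, the multiline settings stitch exception stack traces back together, and a decode_json_fields processor parses the application's own JSON payload. The container pattern and multiline regex here are assumptions, not confirmed from your manifest:

```yaml
filebeat.prospectors:
- type: docker
  containers.ids:
    - '*'                      # all containers; narrow this down if needed
  # Join stack traces: lines starting with whitespace are appended to the
  # previous event (the pattern is an assumption, adjust to your log format)
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after

processors:
  # Parse the application's JSON out of the unwrapped message field
  - decode_json_fields:
      fields: ["message"]
      target: ""               # merge the decoded keys into the event root
      overwrite_keys: true
```

Note that the multiline settings and the JSON decoding hang off the same prospector, so one prospector can cover the same log path for both; you shouldn't need a second prospector type per path.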
Yes, Beats 6.x has some built-in parsing. If it works for you, great, but unless you only have one data source (for you, Docker instances), it won't integrate with anything else.
I'm all for processing the data as close to the source as possible, but I haven't seen a way to configure that processing. Also, in my case, I don't control the endpoints and have no way of updating that code, so I do everything in Logstash.
Also, the module filters have really bad field names.
So, my original issue was solved, but I can see that CPU consumption for Filebeat spikes extremely high. Is this a known issue, or is it a problem in version 6.2.4?