I am trying to ship logs from a Kibana instance running in a Docker container. I noticed that there is a module for Kibana in development (GitHub link posted below), so I pulled the pipeline code from GitHub and attempted to send the logs to this Kibana pipeline in Elasticsearch. It doesn't seem to parse the JSON fields at all; in Kibana the message field just contains the unparsed JSON log (seen below).
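For anyone unfamiliar with the format, a Kibana JSON log line looks roughly like this (an illustrative example with made-up values, not my actual log):

```json
{"type":"log","@timestamp":"2018-06-05T14:15:16Z","tags":["status","plugin:kibana@6.3.0","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
```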
It sounds like Filebeat was not configured to parse the JSON logs generated by Kibana (see the config from the module). The ingest pipeline from the Kibana Filebeat module does not do any JSON parsing itself and expects the JSON to already be decoded. Because of this, there should be an error.message field added to each event by the ingest pipeline. Do you see that error? Are you sure the data is being passed to the pipeline?
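Something along these lines in the input config tells Filebeat to decode the JSON before the event reaches the pipeline (a sketch; the module config may set these for you):

```yaml
json.keys_under_root: false  # decoded fields land under a top-level "json" object
json.add_error_key: true     # adds error.message to the event when decoding fails
```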
You can see stats from the ingest pipelines by querying `GET _nodes/stats/ingest` and looking for `kibana-module-copy`. It will tell you the number of events processed and the number of failures.
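For example (response trimmed to the relevant part; the numbers are placeholders):

```
GET _nodes/stats/ingest

{
  "nodes": {
    "<node_id>": {
      "ingest": {
        "pipelines": {
          "kibana-module-copy": {
            "count": 42,
            "time_in_millis": 3,
            "current": 0,
            "failed": 0
          }
        }
      }
    }
  }
}
```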
OK, so I found out that my old filebeat.yml config wasn't correct: to specify a pipeline while using autodiscover, it needs to be set directly under the config: field. My new filebeat.yml config is below.
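The key part looks roughly like this (a sketch, not my full config; the container-image condition and pipeline name are placeholders):

```yaml
filebeat.autodiscover:
  providers:
    - type: docker
      templates:
        - condition:
            contains:
              docker.container.image: kibana   # placeholder match
          config:
            - type: docker
              containers.ids:
                - "${data.docker.container.id}"
              pipeline: kibana-module-copy     # pipeline set directly under config
              json.keys_under_root: false
              json.add_error_key: true
```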
After these changes, querying `GET _nodes/stats/ingest` shows that the pipeline is receiving the logs from Kibana; however, they are still not being parsed. I am now receiving this error message: `field [json] not present as part of path [json]`.
Not really sure how to fix that, since the test logs for the Kibana module in development are in the same format as my current logs (and the test logs seem to be parsed correctly from what I can see on GitHub). Any ideas?
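From what I can tell, the pipeline's processors reference fields under a top-level json object, along these lines (a paraphrased sketch of the module source; the exact field names may differ):

```json
{
  "rename": {
    "field": "json.state",
    "target_field": "kibana.log.state"
  }
}
```

So my guess is that the events are arriving without a decoded json object, and every such processor fails with that error.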
The type: docker setting indicates it is already an input, so remove the input namespace.
The input config namespace is specific to modules (your Nginx autodiscover config is indeed using Filebeat modules). In Filebeat, modules provide additional abstractions and configuration on top of inputs in Filebeat and other components in the Elastic Stack.
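In other words, roughly this change (a sketch with other settings omitted):

```yaml
# Before: a raw input wrapped in the module-style input namespace
config:
  - input:
      type: docker
      containers.ids:
        - "${data.docker.container.id}"

# After: the raw input sits directly under config
config:
  - type: docker
    containers.ids:
      - "${data.docker.container.id}"
```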
I think I have tracked down the issue: I needed to update Kibana to version 6.3.0. I am receiving logs now and the pipeline seems to be partially parsing them; however, I am now running into an error message saying `Unexpected character ('r' (code 114)): was expecting double-quote to start field name\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@37c5d5c3; line: 1, column: 3]` and similar errors referencing different characters.
After some more analysis, it seems that all of the fields are being parsed, but none of them are renamed as defined in the pipeline (I see the native JSON fields in Kibana). Not exactly sure why there are still errors on every entry of the Kibana log.
Edit:
Fixed the above issue. My pipeline was incorrectly configured. Full pipeline below.
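For reference, it follows roughly this shape (a trimmed sketch, not the full pipeline; the renames and field names are assumed from the module source):

```json
PUT _ingest/pipeline/kibana-module-copy
{
  "description": "Parse Kibana JSON logs (trimmed sketch)",
  "processors": [
    {
      "rename": {
        "field": "json.state",
        "target_field": "kibana.log.state",
        "ignore_missing": true
      }
    },
    {
      "date": {
        "field": "json.@timestamp",
        "target_field": "@timestamp",
        "formats": ["ISO8601"]
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error.message",
        "value": "{{ _ingest.on_failure_message }}"
      }
    }
  ]
}
```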