Filebeat fails to process kibana json logs ("failed to format message from *json-.log") in a Docker environment with logstash

Problem description - since I installed logstash I am seeing the following in the kibana logs:
failed to format message from /var/lib/docker/.containers/xxx-json.log
If I remove logstash and send directly to elasticsearch I do not see these errors in the kibana logs. It is also important to note that the data is getting to kibana; for some reason I am seeing the error on some of the output. I will provide a sample at the end of this problem description.

filebeat config: (filebeat 6.8.6)
- type: log
  paths:
    - /var/log/docker/containers/*/*.log
  fields:
    environment: dev
    system: test
    level: docker-service
  json.keys_under_root: false
  tags: ["docker", "json", "dev"]
  multiline:
    negate: true
    pattern: '^\[|^[0-9]{4}-[0-9]{2}-[0-9]{2}'
    match: after

path: ${path.config}/modules.d/*.yml
reload.enabled: false

hosts: ["dev.logstash1.test:5044", "dev.logstash2.test:5044"]
loadbalance: true

logstash.config (7.7.0)
input {
  beats {
    port => 5044
    host => "x.x.x.x"
  }
}
output {
  elasticsearch {
    hosts => ["https://dev.logs1.test:9200", "https://dev.logs2.test:9200"]
    ssl_certificate_verification => false
    index => "logstash-app-%{+YYYY.MM.dd}"
  }
}

elasticsearch config
name: node1
master: true
data: true
ingest: true

logs: /usr/local/share/applications/elasticsearch/logs

bootstrap.memory_lock: true

host: localhost
tcp_keep_alive: true

port: 9201
publish_port: 9200
host: localhost
tcp.port: 9301
publish_host: localhost
publish_port: 19301

minimum_master_nodes: 2
- localhost:19301
- localhost:19302
- localhost:19303

Kibana - (6.8.2)
kibana config
server.port: 5601
server.name: "kibana"
elasticsearch.url: "https://dev.logs1.test:9200"
kibana.index: ".kibana"
kibana.defaultAppId: "discover"

server.ssl.enabled: true
server.ssl.certificate: "path_to_cert/cert.pem"
server.ssl.key: "path_to_cert/key.pem"
server.ssl.supportedProtocols: ["TLSv1.2"]

elasticsearch.ssl.certificate: "path_to_cert/cert.pem"
elasticsearch.ssl.key: "path_to_cert/key.pem"
elasticsearch.ssl.verificationMode: none

pid.file: "path_to_pid/"
logging.dest: "/path_to_log/kibana.log"

I have tried the following, with and without, with no success:
xpack.infra.sources.fields.message: ['message', '@message', 'json.message']
xpack.monitoring.ui.container.elasticsearch.enabled: true
logging.silent: true

Sample data failing -

I am guessing here, since I do not run filebeat, elasticsearch, or kibana, but ... my understanding of filebeat modules is that they provide a way to parse a bunch of standard log file formats. filebeat does not do the parsing, instead it uses an ingest pipeline in elasticsearch. If you are sending data to logstash, then to use an ingest pipeline you would have to set the pipeline option on the elasticsearch output.
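As a sketch of what that pipeline option looks like on logstash's elasticsearch output (the hosts and index here reuse values from this thread; the exact pipeline name depends on which module pipelines are loaded):

```
output {
  elasticsearch {
    hosts    => ["https://dev.logs1.test:9200"]
    index    => "logstash-app-%{+YYYY.MM.dd}"
    # route each event through the ingest pipeline named in its metadata
    pipeline => "%{[@metadata][pipeline]}"
  }
}
```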

Thank-you for responding.

I think I am understanding your input here
In logstash.conf, in the output, I am now pointing to elasticsearch, the next point in the path, as shown below.

data flow currently: filebeat => logstash => elasticsearch => kibana

data flow when the problem does not occur: filebeat => elasticsearch => kibana

It is also important to note that the data is getting to kibana, as I can see output in discovery and the dashboards. I just get these errors on some of the messages from the json docker input (the "failed to format message" error referenced before). It is also important to note that the reason I am using logstash is that I can route to an s3 bucket as well as to kibana for online viewing. It is just since I inserted logstash into the mix that I am getting these additional error messages.
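A minimal sketch of that dual routing in a logstash output block, assuming the logstash-output-s3 plugin is installed; the bucket name and region are placeholders:

```
output {
  elasticsearch {
    hosts => ["https://dev.logs1.test:9200", "https://dev.logs2.test:9200"]
    index => "logstash-app-%{+YYYY.MM.dd}"
  }
  s3 {
    bucket => "my-log-archive"   # placeholder bucket name
    region => "us-east-1"        # placeholder region
    codec  => "json_lines"       # one JSON document per line in the archived objects
  }
}
```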

Here is the output pipeline defined in logstash to elasticsearch
output {
  elasticsearch {
    hosts => ["https://dev.logs1.test:9200", "https://dev.logs2.test:9200"]
    ssl_certificate_verification => false
    index => "logstash-app-%{+YYYY.MM.dd}"
  }
}

When filebeat sends data directly to elasticsearch, in addition to the log file entries it sends metadata saying what format the log files are in. So if filebeat says "this is an IIS access log" then elasticsearch will process it using the corresponding ingest pipeline. If that processing does not happen then kibana will display the "failed to format message" error.

In logstash, that processing will not happen unless you set the pipeline option when sending data to elasticsearch. If you have multiple log formats you may need to configure filebeat to add a field that indicates which pipeline the event should be sent through.
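For example, a field added on the filebeat side could select the pipeline on the logstash side. The field name and pipeline name here are hypothetical, purely to illustrate the wiring:

```
# filebeat input — tag each event with the pipeline it should go through
- type: log
  paths:
    - /var/log/nginx/access.log
  fields:
    pipeline: my-nginx-pipeline   # hypothetical ingest pipeline name
```

```
# logstash output — use that field to pick the ingest pipeline
output {
  elasticsearch {
    hosts    => ["https://dev.logs1.test:9200"]
    pipeline => "%{[fields][pipeline]}"
  }
}
```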

ETA: It looks like filebeat adds the metadata for you. See the documentation.

Thanks again for responding.....

here is what I tried....almost verbatim from the documentation....

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "original hosts"
      index => "original index"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => "https://original hosts"
      index => "original index"
    }
  }
}
If I just use the first part (the if branch), nothing comes out.

If I add in the second part (i.e. the else branch), the error comes back. Is there a step I missed?

Do I need to do this step from the doc?
filebeat setup --pipelines --modules nginx,system (perhaps for logstash?)
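If the module pipelines were never loaded into elasticsearch, that setup step is needed. When filebeat's configured output is logstash, the documented approach is to point the setup command at elasticsearch temporarily via -E overrides, roughly (the host is taken from this thread; the module list is whatever modules you actually use):

```
filebeat setup --pipelines --modules nginx,system \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["https://dev.logs1.test:9200"]'
```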


Try adding

output { stdout { codec => rubydebug { metadata => true } } }

and see if you can spot the name of the pipeline in an event.

ok..figured out how to list the elasticsearch pipelines. I ran curl -k -v -X GET "https://dev.logs1.test:9200/_ingest/pipeline/*" -H 'Content-Type: application/json'

It returned only xpack_monitoring_2. I put that in the pipeline option with no joy.
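That result suggests the filebeat module pipelines were never loaded: when they are, listing the ingest pipelines should also show entries whose names start with filebeat- followed by the version and module (the exact naming scheme here is an assumption). A narrower query to check just those:

```
curl -k -X GET "https://dev.logs1.test:9200/_ingest/pipeline/filebeat-*?pretty"
```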

ok..finally fixed the issue. The problem was that elasticsearch puts everything into a message field, including json.log data, while logstash puts json.log data into a json.log field. I updated the kibana.yml parameter xpack.infra.default.fields.message: ['@message', 'json.message', 'json.log'], with json.log being the key value. After that, the kibana log no longer showed the failed to format message error and actually showed the output of the log.
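For reference, a sketch of the working kibana.yml line as described in this post (the key name is taken verbatim from the post above):

```yaml
# kibana.yml — tell the Logs UI which fields may hold the log message;
# json.log is the field logstash produced for the docker json input
xpack.infra.default.fields.message: ['@message', 'json.message', 'json.log']
```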

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.