We want to improve our logging pipeline (filebeat -> logstash -> elasticsearch -> kibana). So far I have filebeat running in its own container on our docker host; I map all log files manually, since we do not manage the docker host and cannot know for sure which containers will be running.
The logstash -> elasticsearch -> kibana part of the pipeline is a black box to us, so all I can inspect is filebeat and kibana.
After a lot of trying I now get entries in kibana, except the actual message is missing.
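For context, this is roughly how the filebeat container is wired up; the host paths and service names below are anonymised placeholders, not our real setup:

```yaml
# docker-compose fragment (illustrative paths/names)
filebeat:
  image: docker.elastic.co/beats/filebeat:6.5.3
  volumes:
    # our config and client certs, mounted read-only
    - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - ./cert:/usr/share/filebeat/cert:ro
    # host directory containing the per-container JSON log files,
    # mapped manually since we cannot enumerate containers up front
    - /srv/app-logs:/usr/share/filebeat/dockerlogs:ro
```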
This is the filebeat.yml:

    # filebeat.yml
    filebeat.inputs:
    - type: log
      paths:
        - '/usr/share/filebeat/dockerlogs/*/*.log.json'
      json:
        keys_under_root: true
        add_error_key: true
        message_key: message
        ignore_decoding_error: true
    output:
      logstash:
        hosts: ["logstash:5044"]
        ssl.certificate_authorities: ["/usr/share/filebeat/cert/logstash_ca.crt"]
        ssl.certificate: "/usr/share/filebeat/cert/logstash_beat_client.crt"
        ssl.key: "/usr/share/filebeat/cert/logstash_beat_client.key"
    logging.level: debug
    logging.json: false
    logging.metrics.enabled: false
    ssl.verification_mode: none
And I can see that the message field is present in both filebeat's own logs and the temporary file output.
Yet it never shows up in kibana, even though the people who manage logstash have assured me they do not filter on the beats input. This is an excerpt of the file output:
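For reference, the excerpt was captured by temporarily swapping the logstash output for a file output, roughly like this (the path is illustrative):

```yaml
# temporary debug output, swapped in for output.logstash
output.file:
  path: "/tmp/filebeat-debug"
  filename: filebeat-out.json
```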
    {
      "@timestamp": "2019-06-25T09:04:49.719Z",
      "@metadata": {
        "beat": "filebeat",
        "type": "doc",
        "version": "6.5.3"
      },
      "host": {
        "name": "10a58a8474b2"
      },
      "level": "INFO",
      "@version": "1",
      "message": "Starting AuthorizationServiceApplication on svanschooten-VirtualBox with PID 5111 (/home/svanschooten/Workspace/mn-aut/target/classes started by svanschooten in /home/svanschooten/Workspace/mn-aut)",
      "logger_name": "com.mn.authorisation.AuthorizationServiceApplication",
      "input": {
        "type": "log"
      },
      "beat": {
        "name": "10a58a8474b2",
        "hostname": "10a58a8474b2",
        "version": "6.5.3"
      },
      "level_value": 20000,
      "source": "/usr/share/filebeat/dockerlogs/authorisation/mn-aut.log.json",
      "offset": 620569,
      "thread_name": "main",
      "prospector": {
        "type": "log"
      }
    }
But this is all I see in kibana:
    {
      "_index": "beats-2019.06.25",
      "_type": "doc",
      "_id": "QYHhjWsBPMQwkJWzluTq",
      "_score": 1,
      "_source": {
        "log_origin": "beats_5044",
        "level": "DEBUG",
        "beat": {
          "name": "10a58a8474b2",
          "version": "6.5.3",
          "hostname": "10a58a8474b2"
        },
        "origin": "filebeat",
        "tags": [
          "beats5044",
          "beats_input_codec_plain_applied"
        ],
        "offset": 620569,
        "thread_name": "main",
        "source": "/usr/share/filebeat/dockerlogs/authorisation/mn-aut.log.json",
        "@version": "1",
        "level_value": 10000,
        "host": "10a58a8474b2",
        "logger_name": "com.mn.authorisation.AuthorizationServiceApplication",
        "input": {
          "type": "log"
        },
        "prospector": {
          "type": "log"
        },
        "@timestamp": "2019-06-25T09:04:49.719Z"
      },
      "fields": {
        "@timestamp": [
          "2019-06-25T09:04:49.719Z"
        ]
      }
    }
When I use the 'docker' input type there are issues with parsing the lines, but then I do get the log lines (though I have no control over which container logs are shipped). Since we deploy multiple times a week, updating the container IDs every time is infeasible, so this approach is not an option.
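For completeness, the docker input attempt looked roughly like this; the wildcard is what makes it ship everything, which is the part we cannot live with:

```yaml
# docker input variant: delivers log lines, but ships all containers
# and we saw parsing issues with the JSON fields
filebeat.inputs:
- type: docker
  containers.ids:
    - '*'
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: message
```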