Setting up logs functionality with fluentd

I'm using Elasticsearch and Kibana 6.5.1 and trying to get the Logs functionality working with logs we are inserting via fluentd. Unfortunately, all I'm getting is a message that no logs can be found and that I should adjust my filter. I'm seeing some activity on the timeline at the right-hand side of the screen, but nothing ever shows when I set logs to start streaming or search for messages I know are there. I can see new messages through the Discover module of Kibana, but nothing ever shows up in the Logs module.

Any ideas on what could be wrong with the configuration?

Relevant entries from kibana.yml:

xpack.infra.sources.default.logAlias: 'fluentd-*'
xpack.infra.sources.default.fields.message: ['msg', 'MESSAGE']
xpack.infra.sources.default.fields.host: '_HOSTNAME'
xpack.infra.sources.default.fields.container: 'container_name'
xpack.infra.sources.default.fields.pod: 'pod_name'
xpack.infra.sources.default.fields.tiebreaker: '_SOURCE_REALTIME_TIMESTAMP'

An example of the source document (JSON view in Kibana) from one of our records, with a few fields redacted:

{
  "_index": "fluentd-2019.02.25",
  "_type": "fluentd",
  "_id": "7lARJ2kBLWoRN3y-8Wpv",
  "_version": 1,
  "_score": null,
  "_source": {
    "PRIORITY": "6",
    "_PID": "62984",
    "_UID": "0",
    "_GID": "0",
    "_COMM": "dockerd-current",
    "_EXE": "/usr/bin/dockerd-current",
    "_CMDLINE": "blah blah cmdline",
    "_CAP_EFFECTIVE": "1fffffffff",
    "_SYSTEMD_CGROUP": "/system.slice/docker.service",
    "_SYSTEMD_UNIT": "docker.service",
    "_SYSTEMD_SLICE": "system.slice",
    "_BOOT_ID": "a4a22bc8790b4ad1a666e4446f60cc06",
    "_MACHINE_ID": "955d8fb4b61d4db3ad878e45e9008bcc",
    "_HOSTNAME": "blahblah.com",
    "CONTAINER_ID": "fd1f537b429f",
    "CONTAINER_ID_FULL": "fd1f537b429fb51eca428b60334c93689029b7ed24bbf471f2999b62d013353f",
    "CONTAINER_NAME": "blah-blah-container-name",
    "CONTAINER_TAG": "",
    "_TRANSPORT": "journal",
    "_SELINUX_CONTEXT": "system_u:system_r:container_runtime_t:s0",
    "MESSAGE": "{\"@timestamp\":\"2019-02-25T15:50:43.046-08:00\",\"msg\":\"Clearing decision cache\",\"logger_name\":\"com.blahblah.fraud_analysis.cache.DecisionCacheHolder\",\"thread_name\":\"scheduling-1\",\"level\":\"INFO\",\"level_value\":20000,\"traceId\":\"2181d65dcf43ebbe\",\"spanId\":\"2181d65dcf43ebbe\",\"spanExportable\":\"true\",\"app\":\"siftscience\"}",
    "_SOURCE_REALTIME_TIMESTAMP": "1551138643047036",
    "name_prefix": "k8s",
    "container_name": "sift-science-30162-ce2b1acd",
    "pod_name": "sift-science-30162-ce2b1acd-1-z4g36",
    "namespace": "sift-science",
    "@timestamp": "2019-02-25T15:50:43.046-08:00",
    "msg": "Clearing decision cache",
    "logger_name": "com.blahblah.fraud_analysis.cache.DecisionCacheHolder",
    "thread_name": "scheduling-1",
    "level": "INFO",
    "level_value": 20000,
    "traceId": "2181d65dcf43ebbe",
    "spanId": "2181d65dcf43ebbe",
    "spanExportable": "true",
    "app": "siftscience",
    "syzygy": "syzygy44"
  }
}

Attaching what I see in the Logs module: [screenshot of the empty Logs UI]

Hi @Erik_Tribou,

The xpack.infra.sources.default.fields.message configuration option is broken at present.

Currently, the UI uses a heuristic to attempt to generate the log message for a number of known Filebeat modules. After that, we fall back to the message and @message fields of the document.

If you ingest your logs with a message or @message field, you should see your logs in the Logs UI.
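
For example, on the fluentd side you could copy the existing field into message using the record_transformer filter. This is just a sketch based on the msg field shown in your document above, and the ** match pattern would need narrowing to your actual tags:

<filter **>
  @type record_transformer
  <record>
    # copy the existing msg field into a top-level message field
    # so the Logs UI fallback picks it up
    message ${record["msg"]}
  </record>
</filter>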

You might be able to set up a field alias for this if that's an easier route. In 6.5 this is only possible if the index uses the setting index.mapping.single_type: true.
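
For reference, a field alias mapping would look something like this (a sketch using the index, type, and field names from your document; each new daily fluentd-* index would also need the alias, e.g. via an index template):

PUT fluentd-2019.02.25/_mapping/fluentd
{
  "properties": {
    "message": {
      "type": "alias",
      "path": "msg"
    }
  }
}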

Apologies for this; we are looking to extend the configuration of the Logs UI in the future.

@Kerry,

Thank you for letting me know that the configuration option does not work. I tried removing the xpack.infra.sources.default.fields.message parameter from my kibana.yml and creating aliases named message and @message (pointing to our real message fields), but had the same result: I could search on the aliases in the Discover module, but nothing ever showed in Logs (except the graph on the side showing the number of messages). The Logs module isn't critical for us, so I think we'll just try again after a couple more releases, when hopefully the configuration option will work as documented.

Thank you so much for this post.

There seems to be a fix in #32502 :slight_smile: Thanks, Elastic!
