I'm using Elasticsearch and Kibana 6.5.1 and trying to get the Logs functionality working with logs we are inserting via fluentd. Unfortunately, all I get is a message that no logs can be found and that I should adjust my filter. I do see some activity on the timeline at the right-hand side of the screen, but nothing ever shows up when I start streaming logs or when I search for messages I know are there. The new messages are visible through Kibana's Discover module, but never in the Logs module.
Any ideas on what could be wrong with the configuration?
Relevant entries from kibana.yml:
xpack.infra.sources.default.logAlias: 'fluentd-*'
xpack.infra.sources.default.fields.message: ['msg', 'MESSAGE']
xpack.infra.sources.default.fields.host: '_HOSTNAME'
xpack.infra.sources.default.fields.container: 'container_name'
xpack.infra.sources.default.fields.pod: 'pod_name'
xpack.infra.sources.default.fields.tiebreaker: '_SOURCE_REALTIME_TIMESTAMP'
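For reference, my understanding of the fields.message setting (an assumption based on the docs, not the Kibana source) is that the Logs UI falls back through the listed fields in order and renders the first one present in a document. A minimal Python sketch of the fallback I'm expecting, using my configured field names:

```python
# Sketch of the fallback behavior I assume fields.message implies
# (an assumption, not Kibana's actual implementation): try each
# configured field in order, return the first non-empty value.
def pick_message(doc, message_fields=('msg', 'MESSAGE')):
    for field in message_fields:
        if field in doc and doc[field]:
            return doc[field]
    return None

doc = {
    'MESSAGE': '{"msg":"Clearing decision cache"}',  # raw journald field
    'msg': 'Clearing decision cache',                # fluentd-flattened copy
}
print(pick_message(doc))  # → Clearing decision cache
```

If that assumption is right, our documents should satisfy the message config twice over, which makes the empty Logs view even more puzzling.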
An example of the source field (JSON view in Kibana) from one of our records (a few fields redacted):
{
  "_index": "fluentd-2019.02.25",
  "_type": "fluentd",
  "_id": "7lARJ2kBLWoRN3y-8Wpv",
  "_version": 1,
  "_score": null,
  "_source": {
    "PRIORITY": "6",
    "_PID": "62984",
    "_UID": "0",
    "_GID": "0",
    "_COMM": "dockerd-current",
    "_EXE": "/usr/bin/dockerd-current",
    "_CMDLINE": "blah blah cmdline",
    "_CAP_EFFECTIVE": "1fffffffff",
    "_SYSTEMD_CGROUP": "/system.slice/docker.service",
    "_SYSTEMD_UNIT": "docker.service",
    "_SYSTEMD_SLICE": "system.slice",
    "_BOOT_ID": "a4a22bc8790b4ad1a666e4446f60cc06",
    "_MACHINE_ID": "955d8fb4b61d4db3ad878e45e9008bcc",
    "_HOSTNAME": "blahblah.com",
    "CONTAINER_ID": "fd1f537b429f",
    "CONTAINER_ID_FULL": "fd1f537b429fb51eca428b60334c93689029b7ed24bbf471f2999b62d013353f",
    "CONTAINER_NAME": "blah-blah-container-name",
    "CONTAINER_TAG": "",
    "_TRANSPORT": "journal",
    "_SELINUX_CONTEXT": "system_u:system_r:container_runtime_t:s0",
    "MESSAGE": "{\"@timestamp\":\"2019-02-25T15:50:43.046-08:00\",\"msg\":\"Clearing decision cache\",\"logger_name\":\"com.blahblah.fraud_analysis.cache.DecisionCacheHolder\",\"thread_name\":\"scheduling-1\",\"level\":\"INFO\",\"level_value\":20000,\"traceId\":\"2181d65dcf43ebbe\",\"spanId\":\"2181d65dcf43ebbe\",\"spanExportable\":\"true\",\"app\":\"siftscience\"}",
    "_SOURCE_REALTIME_TIMESTAMP": "1551138643047036",
    "name_prefix": "k8s",
    "container_name": "sift-science-30162-ce2b1acd",
    "pod_name": "sift-science-30162-ce2b1acd-1-z4g36",
    "namespace": "sift-science",
    "@timestamp": "2019-02-25T15:50:43.046-08:00",
    "msg": "Clearing decision cache",
    "logger_name": "com.blahblah.fraud_analysis.cache.DecisionCacheHolder",
    "thread_name": "scheduling-1",
    "level": "INFO",
    "level_value": 20000,
    "traceId": "2181d65dcf43ebbe",
    "spanId": "2181d65dcf43ebbe",
    "spanExportable": "true",
    "app": "siftscience",
    "syzygy": "syzygy44"
  }
}
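To illustrate the shape of the record, here is a minimal standalone Python sketch (the MESSAGE value is abridged from the document above): the journald MESSAGE field holds the application's JSON log line as an escaped string, and fluentd has already parsed that string and flattened its keys (msg, level, etc.) to the top level of _source, so the same content exists in both places:

```python
import json

# The journald MESSAGE field as stored in the document: the app's
# JSON log line serialized as a string (abridged from the record above).
message = ('{"@timestamp":"2019-02-25T15:50:43.046-08:00",'
           '"msg":"Clearing decision cache","level":"INFO"}')

# Parsing it recovers exactly the keys that fluentd has also
# copied to the top level of _source ("msg", "level", ...).
parsed = json.loads(message)

assert parsed["msg"] == "Clearing decision cache"
assert parsed["level"] == "INFO"
print(parsed["msg"])  # → Clearing decision cache
```

So whether the Logs UI reads the flattened `msg` or the raw `MESSAGE`, the text it should display is present in the document.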
Attaching a screenshot of what I see in the Logs module: