Hey all,
We have a Kafka input that sends a single JSON line that looks like this at regular intervals:
{"load": {"min15": 0.05, "min5": 0.05, "cpucore": 2, "load_warning": 1.0, "min1": 0.01, "load_critical": 5.0, "load_careful": 0.7}, "docker": {}, "uptime": {"seconds": 1289416}, "system": {"os_name": "Linux", "platform": "64bit", "linux_distro": "Red Hat Enterprise Linux Server 7.5", "hostname": "server1.example.com", "hr_name": "Red Hat Enterprise Linux Server 7.5 64bit"}}
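To be clear about the expected shape: that line is one valid JSON object with four top-level sections (`load`, `docker`, `uptime`, `system`), so I'd expect one Elasticsearch document per Kafka message. A quick check in Python (just to illustrate the structure, not part of the pipeline):

```python
import json

# The exact line produced by our Kafka input: one message, one JSON object
line = ('{"load": {"min15": 0.05, "min5": 0.05, "cpucore": 2, '
        '"load_warning": 1.0, "min1": 0.01, "load_critical": 5.0, '
        '"load_careful": 0.7}, "docker": {}, "uptime": {"seconds": 1289416}, '
        '"system": {"os_name": "Linux", "platform": "64bit", '
        '"linux_distro": "Red Hat Enterprise Linux Server 7.5", '
        '"hostname": "server1.example.com", '
        '"hr_name": "Red Hat Enterprise Linux Server 7.5 64bit"}}')

doc = json.loads(line)
print(list(doc.keys()))  # ['load', 'docker', 'uptime', 'system']
```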
Using the Kafka input plugin and the Elasticsearch output, the data shows up in Elasticsearch, but the single JSON line is being split into multiple documents, like these two:
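For reference, the pipeline config is roughly the following (the broker, topic, and index names here are placeholders, not my exact values):

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["glances"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "glances-%{+xxxx.ww}"
  }
}
```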
{
  "_index": "glances-2018.33",
  "_type": "doc",
  "_id": "MFJkOmUBPBw285gEwadI",
  "_score": 1,
  "_source": {
    "cpucore": 2,
    "load_log": "False",
    "@version": "1",
    "load_careful": 0.7,
    "load_critical": 5,
    "min15": 0.05,
    "history_size": 28800,
    "min5": 0.01,
    "@timestamp": "2018-08-14T21:43:26.171Z",
    "min1": 0,
    "load_warning": 1
  },
  "fields": {
    "@timestamp": [
      "2018-08-14T21:43:26.171Z"
    ]
  }
}
{
  "_index": "glances-2018.33",
  "_type": "doc",
  "_id": "6txkOmUBox3BsJZkwRFJ",
  "_version": 1,
  "_score": null,
  "_source": {
    "hr_name": "Red Hat Enterprise Linux Server 7.5 64bit",
    "@version": "1",
    "history_size": 28800,
    "platform": "64bit",
    "os_name": "Linux",
    "linux_distro": "Red Hat Enterprise Linux Server 7.5",
    "hostname": "server1.example.com",
    "@timestamp": "2018-08-14T21:43:26.173Z",
    "os_version": "3.10.0-862.6.3.el7.x86_64"
  },
  "fields": {
    "@timestamp": [
      "2018-08-14T21:43:26.173Z"
    ]
  },
  "sort": [
    1534283006173
  ]
}
What I'd like is for each JSON line to be indexed as a single document, but I can't figure out how to do that. Any ideas?
Thanks,
Ryan