I have a Logstash kafka input plugin that is reading messages from a Kafka 1.0.0 broker fine.
But the "decorate_events" property does not seem to take effect, i.e., I receive none of the Kafka topic/partition information. Here's my input code.
I am using Logstash/ES 6.2.4
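The actual input config isn't shown in the post; for reference, a minimal kafka input with event decoration enabled typically looks like the sketch below. The broker address and topic name are placeholders, not values from the original post.

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker address
    topics            => ["my-topic"]       # placeholder topic name
    decorate_events   => true               # adds topic/partition/offset under [@metadata][kafka]
  }
}
```

With `decorate_events => true`, the plugin attaches the Kafka metadata to each event, but only under `[@metadata]`, which is why it does not show up in the indexed documents by default.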
Thanks for the quick response. I didn't quite understand: do I have to explicitly assign [@metadata][kafka] to a variable in Logstash? If so, can you please show how?
The kafka input puts the topic, partition, etc. into fields under [@metadata][kafka]. The [@metadata] field exists on the event in Logstash but is not written to Elasticsearch, so if you want that data in Elasticsearch you need to use mutate+copy to copy [@metadata][kafka] to another field.
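A minimal sketch of that mutate+copy step, assuming the target field is simply called `kafka` (the target name is a choice, not something mandated by the plugin):

```
filter {
  mutate {
    # copy the whole [@metadata][kafka] object into a regular field
    # so it survives into the Elasticsearch document
    copy => { "[@metadata][kafka]" => "kafka" }
  }
}
```

Note that the `copy` option requires a reasonably recent version of the mutate filter; on Logstash 6.2.4 it is available out of the box.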
Is there a way to index the kafka detail as well?
I think because it's a nested JSON object it doesn't index in Elasticsearch.
It would be nice to filter out messages of a particular partition in Kibana, for example.
[2018-06-20T15:05:20,979][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch.
{:status=>400, :action=>["index", {:_id=>nil, :_index=>"ms-logs-2018.06.20", :_type=>"doc", :_routing=>nil}, #&lt;LogStash::Event:0x354fa377&gt;],
:response=>{"index"=>{"_index"=>"ms-logs-2018.06.20", "_type"=>"doc", "_id"=>"ua66HWQBIApBjZNHh1si", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception",
"reason"=>"failed to parse [kafka]", "caused_by"=>{"type"=>"illegal_state_exception",
"reason"=>"Can't get text on a START_OBJECT at 1:248"}}}}}
I suspect the problem is that you previously indexed documents where "kafka" was a string, and now it is an object. Are you in a position to "DELETE ms-logs-2018.06.20"? Or can you wait until tomorrow and see if it starts working at midnight UTC when it rolls to a new index?
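If deleting the index is acceptable, that would look like the following (host and port are placeholders for your Elasticsearch endpoint; this permanently removes the day's data):

```
curl -X DELETE "localhost:9200/ms-logs-2018.06.20"
```

Once the conflicting mapping is gone (or the daily index rolls over), documents with `kafka` as an object should index cleanly.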