I am learning ELK and have implemented it in a test environment. Logstash is reading data from a Kafka topic that is not in JSON format, but when I specified codec => json in the Logstash configuration, I see that all the events come out as JSON on the Logstash stdout. Can I please know how exactly the codec works? If we specify a codec in Logstash, will the filters still work?
I am learning ELK and have implemented it in a test environment. Logstash is reading data from a Kafka topic that is not in JSON format, but when I specified codec => json in the Logstash configuration, I see that all the events come out as JSON on the Logstash stdout.
I don't understand what you mean. What does your configuration look like? What events does Logstash produce?
Can I please know how exactly the codec works? If we specify a codec in Logstash, will the filters still work?
Thank you for the reply. When I sent the data directly from Filebeat to Logstash, I could see field extractions in the Logstash output, but when the data is sent from Kafka to Logstash without specifying a codec in the Kafka input, I see the entire event as a single unparsed message. When codec => json {} is specified in the Kafka input, I can see field extractions again in the Logstash output. Can I please know why everything is fine without specifying a codec for Filebeat, but for Kafka we need to specify one? How exactly does it work?
It's very simple: if Filebeat is doing the JSON processing, then Logstash obviously doesn't need to. The Kafka input won't parse the event as JSON on its own (because the payload doesn't have to be JSON), so you have to explicitly ask Logstash to parse it by choosing the json codec.
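For concreteness, here is a minimal pipeline sketch of that idea. The broker address, topic name, and the mutate filter are assumptions for illustration, not the poster's actual configuration; the point is that the json codec decodes each Kafka message into event fields at the input stage, and the filters then run against those decoded fields:

input {
  kafka {
    bootstrap_servers => "localhost:9092"  # assumption: a local broker
    topics => ["app-logs"]                 # assumption: topic name
    # Decode each Kafka message as JSON into event fields.
    # Without this, the default plain codec puts the raw payload
    # into the "message" field as one string.
    codec => json
  }
}

filter {
  # Filters run after the codec, so they see the decoded fields.
  mutate {
    add_field => { "pipeline" => "kafka-json" }  # hypothetical field, for illustration
  }
}

output {
  stdout { codec => rubydebug }  # prints the full event structure
}

So yes, the filters still work when a codec is specified: the codec runs first, turning the raw bytes into a structured event, and the filter stage then operates on that event. Filebeat, by contrast, can do the JSON decoding itself before the data ever reaches Logstash, which is why no codec was needed in that setup.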